mirror of https://github.com/0glabs/0g-storage-node.git
synced 2025-04-04 15:35:18 +00:00

Compare commits: 0cee5df235 ... dff854ae46 (3 commits)

Commits in this range:
- dff854ae46
- 9eea71e97d
- fd9c033176
.gitignore (vendored) | 1

@@ -4,5 +4,6 @@
 /.idea
 tests/**/__pycache__
 tests/tmp/**
+tests/config/zgs
 .vscode/*.json
 /0g-storage-contracts-dev
README.md | 73

@@ -2,69 +2,32 @@
 ## Overview

-0G Storage is the storage layer for the ZeroGravity data availability (DA) system. The 0G Storage layer holds three important features:
-
-* Built-in - It is natively built into the ZeroGravity DA system for data storage and retrieval.
-* General purpose - It is designed to support atomic transactions, mutable kv stores as well as archive log systems to enable a wide range of applications with various data types.
-* Incentive - Instead of being just a decentralized database, 0G Storage introduces the PoRA mining algorithm to incentivize storage network participants.
-
-To dive deep into the technical details, continue reading the [0G Storage Spec](docs/).
-
-## Integration
-
-We provide an [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage in their applications with the following features:
-
-* File Merkle Tree Class
-* Flow Contract Types
-* RPC methods support
-* File upload
-* Support browser environment
-* Tests for different environments (In Progress)
-* File download (In Progress)
-
-## Deployment
-
-Please refer to the [Deployment](docs/run.md) page for detailed steps to compile and start a 0G Storage node.
-
-## Test
-
-### Prerequisites
-
-* Required python version: 3.8, 3.9, 3.10; higher versions are not guaranteed (e.g. `pysha3` may fail to install).
-* Install dependencies under the root folder: `pip3 install -r requirements.txt`
-
-### Dependencies
-
-The Python test framework will launch blockchain fullnodes locally for the storage node to interact with. Two kinds of fullnodes are supported:
-
-* Conflux eSpace node (by default).
-* BSC node (geth).
-
-For the Conflux eSpace node, the test framework automatically compiles the binary at runtime and copies it to the `tests/tmp` folder. For the BSC node, the test framework automatically downloads the latest release binary from [github](https://github.com/bnb-chain/bsc/releases) to the `tests/tmp` folder.
-
-Alternatively, you could also manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note: do **NOT** copy a released conflux binary from github, since the block heights of some CIPs are hardcoded.
-
-Testing also depends on the following repos:
-
-* [0G Storage Contract](https://github.com/0glabs/0g-storage-contracts): It essentially provides two ABI interfaces for the 0G Storage Node to interact with the on-chain contracts.
-  * ZgsFlow: It contains APIs to submit chunk data.
-  * PoraMine: It contains APIs to submit PoRA answers.
-* [0G Storage Client](https://github.com/0glabs/0g-storage-client): It is used to interact with 0G Storage Nodes to upload/download files.
-
-### Run Tests
-
-Go to the `tests` folder and run the following command to run all tests:
-
-```
-python test_all.py
-```
-
-or run any single test, e.g.
-
-```
-python sync_test.py
-```
-
-## Contributing
-
-To make contributions to the project, please follow the guidelines [here](contributing.md).
+0G Storage is a decentralized data storage system designed to address the challenges of high-throughput and low-latency data storage and retrieval, in areas such as AI and gaming.
+
+In addition, it forms the storage layer for the 0G data availability (DA) system, with the cross-layer integration abstracted away from Rollup and AppChain builders.
+
+## System Architecture
+
+0G Storage consists of two main components:
+
+1. **Data Publishing Lane**: Ensures quick data availability and verification through the 0G Consensus network.
+2. **Data Storage Lane**: Manages large data transfers and storage using an erasure-coding mechanism for redundancy and reliability.
+
+Across the two lanes, 0G Storage supports the following features:
+
+* **General Purpose Design**: Supports atomic transactions, mutable key-value stores, and archive log systems, enabling a wide range of applications with various data types.
+* **Incentivized Participation**: Utilizes the PoRA (Proof of Random Access) mining algorithm to incentivize storage network participants.
+
+For in-depth technical details about 0G Storage, please read our [Intro to 0G Storage](https://docs.0g.ai/og-storage).
+
+## Documentation
+
+- If you want to run a node, please refer to the [Running a Node](https://docs.0g.ai/run-a-node/storage-node) guide.
+- If you want to build a project using 0G Storage, please refer to the [0G Storage SDK](https://docs.0g.ai/build-with-0g/storage-sdk) guide.
+
+## Support and Additional Resources
+
+We want to do everything we can to help you be successful while working on your contribution and projects. Here you'll find various resources and communities that may help you complete a project or contribute to 0G.
+
+### Communities
+
+- [0G Telegram](https://t.me/web3_0glabs)
+- [0G Discord](https://discord.com/invite/0glabs)
@@ -112,7 +112,12 @@ impl ClientBuilder {
     pub fn with_rocksdb_store(mut self, config: &StorageConfig) -> Result<Self, String> {
         let executor = require!("sync", self, runtime_context).clone().executor;
         let store = Arc::new(
-            LogManager::rocksdb(config.log_config.clone(), &config.db_dir, executor)
+            LogManager::rocksdb(
+                config.log_config.clone(),
+                config.db_dir.join("flow_db"),
+                config.db_dir.join("data_db"),
+                executor,
+            )
             .map_err(|e| format!("Unable to start RocksDB store: {:?}", e))?,
         );
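The hunk above changes the on-disk layout: instead of opening one RocksDB at `config.db_dir`, the node now opens two databases in the `flow_db` and `data_db` subdirectories. Below is a minimal sketch (not part of the diff) of calling the updated constructor outside `ClientBuilder`; the helper name is hypothetical, and `LogManager`, `LogConfig`, and `task_executor::TaskExecutor` are assumed to be in scope from the storage crates.

```rust
use std::{path::PathBuf, sync::Arc};

// Hypothetical helper, shown only to illustrate the new two-path signature.
fn open_rocksdb_store(
    log_config: LogConfig,
    db_dir: PathBuf,
    executor: task_executor::TaskExecutor,
) -> Result<Arc<LogManager>, String> {
    let store = LogManager::rocksdb(
        log_config,
        db_dir.join("flow_db"), // flow/tx metadata database
        db_dir.join("data_db"), // entry-batch (chunk data) database
        executor,
    )
    .map_err(|e| format!("Unable to start RocksDB store: {:?}", e))?;
    Ok(Arc::new(store))
}
```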
@@ -25,7 +25,12 @@ fn write_performance(c: &mut Criterion) {
     let executor = runtime.task_executor.clone();

     let store: Arc<RwLock<dyn Store>> = Arc::new(RwLock::new(
-        LogManager::rocksdb(LogConfig::default(), "db_write", executor)
+        LogManager::rocksdb(
+            LogConfig::default(),
+            "db_flow_write",
+            "db_data_write",
+            executor,
+        )
         .map_err(|e| format!("Unable to start RocksDB store: {:?}", e))
         .unwrap(),
     ));
@@ -114,7 +119,12 @@ fn read_performance(c: &mut Criterion) {
     let executor = runtime.task_executor.clone();

     let store: Arc<RwLock<dyn Store>> = Arc::new(RwLock::new(
-        LogManager::rocksdb(LogConfig::default(), "db_read", executor)
+        LogManager::rocksdb(
+            LogConfig::default(),
+            "db_flow_read",
+            "db_data_read",
+            executor,
+        )
         .map_err(|e| format!("Unable to start RocksDB store: {:?}", e))
         .unwrap(),
     ));
@@ -63,22 +63,22 @@ impl<T: ?Sized + Configurable> ConfigurableExt for T {}

 impl Configurable for LogManager {
     fn get_config(&self, key: &[u8]) -> Result<Option<Vec<u8>>> {
-        Ok(self.db.get(COL_MISC, key)?)
+        Ok(self.flow_db.get(COL_MISC, key)?)
     }

     fn set_config(&self, key: &[u8], value: &[u8]) -> Result<()> {
-        self.db.put(COL_MISC, key, value)?;
+        self.flow_db.put(COL_MISC, key, value)?;
         Ok(())
     }

     fn remove_config(&self, key: &[u8]) -> Result<()> {
-        Ok(self.db.delete(COL_MISC, key)?)
+        Ok(self.flow_db.delete(COL_MISC, key)?)
     }

     fn exec_configs(&self, tx: ConfigTx) -> Result<()> {
-        let mut db_tx = self.db.transaction();
+        let mut db_tx = self.flow_db.transaction();
         db_tx.ops = tx.ops;
-        self.db.write(db_tx)?;
+        self.flow_db.write(db_tx)?;

         Ok(())
     }
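With this hunk, miscellaneous configuration entries are read from and written to the flow database rather than the former single shared database. A minimal sketch of round-tripping a value through the methods above; the key name and helper are hypothetical, and `LogManager`, the `Configurable` trait, and the crate's `Result` alias are assumed to be in scope.

```rust
// Hypothetical helper: persist and re-read a misc config value.
fn remember_synced_height(store: &LogManager, height: u64) -> Result<()> {
    let bytes = height.to_be_bytes();
    // After this change the value lands in the flow database (COL_MISC column).
    store.set_config(b"synced_height", &bytes)?;
    let read_back = store.get_config(b"synced_height")?;
    assert_eq!(read_back.as_deref(), Some(&bytes[..]));
    Ok(())
}
```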
@@ -25,15 +25,15 @@ use tracing::{debug, error, trace};
 use zgs_spec::{BYTES_PER_SECTOR, SEALS_PER_LOAD, SECTORS_PER_LOAD, SECTORS_PER_SEAL};

 pub struct FlowStore {
-    db: Arc<FlowDBStore>,
+    data_db: Arc<FlowDBStore>,
     seal_manager: SealTaskManager,
     config: FlowConfig,
 }

 impl FlowStore {
-    pub fn new(db: Arc<FlowDBStore>, config: FlowConfig) -> Self {
+    pub fn new(data_db: Arc<FlowDBStore>, config: FlowConfig) -> Self {
         Self {
-            db,
+            data_db,
             seal_manager: Default::default(),
             config,
         }
|
|||||||
subtree_list: Vec<(usize, usize, DataRoot)>,
|
subtree_list: Vec<(usize, usize, DataRoot)>,
|
||||||
) -> Result<()> {
|
) -> Result<()> {
|
||||||
let mut batch = self
|
let mut batch = self
|
||||||
.db
|
.data_db
|
||||||
.get_entry_batch(batch_index as u64)?
|
.get_entry_batch(batch_index as u64)?
|
||||||
.unwrap_or_else(|| EntryBatch::new(batch_index as u64));
|
.unwrap_or_else(|| EntryBatch::new(batch_index as u64));
|
||||||
batch.set_subtree_list(subtree_list);
|
batch.set_subtree_list(subtree_list);
|
||||||
self.db.put_entry_raw(vec![(batch_index as u64, batch)])?;
|
self.data_db
|
||||||
|
.put_entry_raw(vec![(batch_index as u64, batch)])?;
|
||||||
|
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
|
||||||
pub fn gen_proof_in_batch(&self, batch_index: usize, sector_index: usize) -> Result<FlowProof> {
|
pub fn gen_proof_in_batch(&self, batch_index: usize, sector_index: usize) -> Result<FlowProof> {
|
||||||
let batch = self
|
let batch = self
|
||||||
.db
|
.data_db
|
||||||
.get_entry_batch(batch_index as u64)?
|
.get_entry_batch(batch_index as u64)?
|
||||||
.ok_or_else(|| anyhow!("batch missing, index={}", batch_index))?;
|
.ok_or_else(|| anyhow!("batch missing, index={}", batch_index))?;
|
||||||
let merkle = batch.to_merkle_tree(batch_index == 0)?.ok_or_else(|| {
|
let merkle = batch.to_merkle_tree(batch_index == 0)?.ok_or_else(|| {
|
||||||
@ -70,7 +71,7 @@ impl FlowStore {
|
|||||||
|
|
||||||
pub fn delete_batch_list(&self, batch_list: &[u64]) -> Result<()> {
|
pub fn delete_batch_list(&self, batch_list: &[u64]) -> Result<()> {
|
||||||
self.seal_manager.delete_batch_list(batch_list);
|
self.seal_manager.delete_batch_list(batch_list);
|
||||||
self.db.delete_batch_list(batch_list)
|
self.data_db.delete_batch_list(batch_list)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -116,7 +117,7 @@ impl FlowRead for FlowStore {
             length -= 1;
         }

-        let entry_batch = try_option!(self.db.get_entry_batch(chunk_index)?);
+        let entry_batch = try_option!(self.data_db.get_entry_batch(chunk_index)?);
         let mut entry_batch_data =
             try_option!(entry_batch.get_unsealed_data(offset as usize, length as usize));
         data.append(&mut entry_batch_data);
|
|||||||
let chunk_index = start_entry_index / self.config.batch_size as u64;
|
let chunk_index = start_entry_index / self.config.batch_size as u64;
|
||||||
|
|
||||||
if let Some(mut data_list) = self
|
if let Some(mut data_list) = self
|
||||||
.db
|
.data_db
|
||||||
.get_entry_batch(chunk_index)?
|
.get_entry_batch(chunk_index)?
|
||||||
.map(|b| b.into_data_list(start_entry_index))
|
.map(|b| b.into_data_list(start_entry_index))
|
||||||
{
|
{
|
||||||
@ -170,7 +171,7 @@ impl FlowRead for FlowStore {
|
|||||||
}
|
}
|
||||||
|
|
||||||
fn load_sealed_data(&self, chunk_index: u64) -> Result<Option<MineLoadChunk>> {
|
fn load_sealed_data(&self, chunk_index: u64) -> Result<Option<MineLoadChunk>> {
|
||||||
let batch = try_option!(self.db.get_entry_batch(chunk_index)?);
|
let batch = try_option!(self.data_db.get_entry_batch(chunk_index)?);
|
||||||
let mut mine_chunk = MineLoadChunk::default();
|
let mut mine_chunk = MineLoadChunk::default();
|
||||||
for (seal_index, (sealed, validity)) in mine_chunk
|
for (seal_index, (sealed, validity)) in mine_chunk
|
||||||
.loaded_chunk
|
.loaded_chunk
|
||||||
@@ -188,7 +189,7 @@ impl FlowRead for FlowStore {

     fn get_num_entries(&self) -> Result<u64> {
         // This is an over-estimation as it assumes each batch is full.
-        self.db
+        self.data_db
             .kvdb
             .num_keys(COL_ENTRY_BATCH)
             .map(|num_batches| num_batches * PORA_CHUNK_SIZE as u64)
@@ -228,7 +229,7 @@ impl FlowWrite for FlowStore {

         // TODO: Try to avoid loading from db if possible.
         let mut batch = self
-            .db
+            .data_db
             .get_entry_batch(chunk_index)?
             .unwrap_or_else(|| EntryBatch::new(chunk_index));
         let completed_seals = batch.insert_data(
@@ -246,12 +247,12 @@ impl FlowWrite for FlowStore {

             batch_list.push((chunk_index, batch));
         }
-        self.db.put_entry_batch_list(batch_list)
+        self.data_db.put_entry_batch_list(batch_list)
     }

     fn truncate(&self, start_index: u64) -> crate::error::Result<()> {
         let mut to_seal_set = self.seal_manager.to_seal_set.write();
-        let to_reseal = self.db.truncate(start_index, self.config.batch_size)?;
+        let to_reseal = self.data_db.truncate(start_index, self.config.batch_size)?;

         to_seal_set.split_off(&(start_index as usize / SECTORS_PER_SEAL));
         let new_seal_version = self.seal_manager.inc_seal_version();
@@ -281,7 +282,7 @@ impl FlowSeal for FlowStore {
         let mut tasks = Vec::with_capacity(SEALS_PER_LOAD);

         let batch_data = self
-            .db
+            .data_db
             .get_entry_batch((first_index / SEALS_PER_LOAD) as u64)?
             .expect("Lost data chunk in to_seal_set");

@@ -320,7 +321,7 @@ impl FlowSeal for FlowStore {
             .chunk_by(|answer| answer.seal_index / SEALS_PER_LOAD as u64)
         {
             let mut batch_chunk = self
-                .db
+                .data_db
                 .get_entry_batch(load_index)?
                 .expect("Can not find chunk data");
             for answer in answers_in_chunk {
|
|||||||
to_seal_set.remove(&idx);
|
to_seal_set.remove(&idx);
|
||||||
}
|
}
|
||||||
|
|
||||||
self.db.put_entry_raw(updated_chunk)?;
|
self.data_db.put_entry_raw(updated_chunk)?;
|
||||||
|
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
|
@@ -61,7 +61,7 @@ pub struct UpdateFlowMessage {
 }

 pub struct LogManager {
-    pub(crate) db: Arc<dyn ZgsKeyValueDB>,
+    pub(crate) flow_db: Arc<dyn ZgsKeyValueDB>,
     tx_store: TransactionStore,
     flow_store: Arc<FlowStore>,
     merkle: RwLock<MerkleManager>,
|
|||||||
impl LogManager {
|
impl LogManager {
|
||||||
pub fn rocksdb(
|
pub fn rocksdb(
|
||||||
config: LogConfig,
|
config: LogConfig,
|
||||||
path: impl AsRef<Path>,
|
flow_path: impl AsRef<Path>,
|
||||||
|
data_path: impl AsRef<Path>,
|
||||||
executor: task_executor::TaskExecutor,
|
executor: task_executor::TaskExecutor,
|
||||||
) -> Result<Self> {
|
) -> Result<Self> {
|
||||||
let mut db_config = DatabaseConfig::with_columns(COL_NUM);
|
let mut db_config = DatabaseConfig::with_columns(COL_NUM);
|
||||||
db_config.enable_statistics = true;
|
db_config.enable_statistics = true;
|
||||||
let db = Arc::new(Database::open(&db_config, path)?);
|
let flow_db_source = Arc::new(Database::open(&db_config, flow_path)?);
|
||||||
Self::new(db, config, executor)
|
let data_db_source = Arc::new(Database::open(&db_config, data_path)?);
|
||||||
|
Self::new(flow_db_source, data_db_source, config, executor)
|
||||||
}
|
}
|
||||||
|
|
||||||
pub fn memorydb(config: LogConfig, executor: task_executor::TaskExecutor) -> Result<Self> {
|
pub fn memorydb(config: LogConfig, executor: task_executor::TaskExecutor) -> Result<Self> {
|
||||||
let db = Arc::new(kvdb_memorydb::create(COL_NUM));
|
let flow_db = Arc::new(kvdb_memorydb::create(COL_NUM));
|
||||||
Self::new(db, config, executor)
|
let data_db = Arc::new(kvdb_memorydb::create(COL_NUM));
|
||||||
|
Self::new(flow_db, data_db, config, executor)
|
||||||
}
|
}
|
||||||
|
|
||||||
fn new(
|
fn new(
|
||||||
db: Arc<dyn ZgsKeyValueDB>,
|
flow_db_source: Arc<dyn ZgsKeyValueDB>,
|
||||||
|
data_db_source: Arc<dyn ZgsKeyValueDB>,
|
||||||
config: LogConfig,
|
config: LogConfig,
|
||||||
executor: task_executor::TaskExecutor,
|
executor: task_executor::TaskExecutor,
|
||||||
) -> Result<Self> {
|
) -> Result<Self> {
|
||||||
let tx_store = TransactionStore::new(db.clone())?;
|
let tx_store = TransactionStore::new(flow_db_source.clone())?;
|
||||||
let flow_db = Arc::new(FlowDBStore::new(db.clone()));
|
let flow_db = Arc::new(FlowDBStore::new(flow_db_source.clone()));
|
||||||
let flow_store = Arc::new(FlowStore::new(flow_db.clone(), config.flow.clone()));
|
let data_db = Arc::new(FlowDBStore::new(data_db_source.clone()));
|
||||||
|
let flow_store = Arc::new(FlowStore::new(data_db.clone(), config.flow.clone()));
|
||||||
// If the last tx `put_tx` does not complete, we will revert it in `pora_chunks_merkle`
|
// If the last tx `put_tx` does not complete, we will revert it in `pora_chunks_merkle`
|
||||||
// first and call `put_tx` later.
|
// first and call `put_tx` later.
|
||||||
let next_tx_seq = tx_store.next_tx_seq();
|
let next_tx_seq = tx_store.next_tx_seq();
|
||||||
@ -737,7 +742,7 @@ impl LogManager {
|
|||||||
let (sender, receiver) = mpsc::channel();
|
let (sender, receiver) = mpsc::channel();
|
||||||
|
|
||||||
let mut log_manager = Self {
|
let mut log_manager = Self {
|
||||||
db,
|
flow_db: flow_db_source,
|
||||||
tx_store,
|
tx_store,
|
||||||
flow_store,
|
flow_store,
|
||||||
merkle,
|
merkle,
|
||||||
|
@@ -1 +0,0 @@
-enr:-Ly4QJZwz9htAorBIx_otqoaRFPohX7NQJ31iBB6mcEhBiuPWsOnigc1ABQsg6tLU1OirQdLR6aEvv8SlkkfIbV72T8CgmlkgnY0gmlwhH8AAAGQbmV0d29ya19pZGVudGl0eZ8oIwAAAAAAADPyz8cpvYcPpUtQMmYOBrTPKn-UAAIAiXNlY3AyNTZrMaEDeDdgnDgLPkxNxB39jKb9f1Na30t6R9vVolpTk5zu-hODdGNwgir4g3VkcIIq-A

@@ -1 +0,0 @@
-(deleted one-line binary file; contents not recoverable from this page)