Mirror of https://github.com/0glabs/0g-storage-node.git (synced 2025-04-04 15:35:18 +00:00)

Compare commits: 3 commits (0cd07d8903 ... e9a222cc07)

- e9a222cc07
- 2f9960e8e7
- fd9c033176
Cargo.lock (generated): 1 line changed

@@ -7302,6 +7302,7 @@ dependencies = [
  "kvdb-rocksdb",
  "merkle_light",
  "merkle_tree",
+ "once_cell",
  "parking_lot 0.12.3",
  "rand 0.8.5",
  "rayon",
README.md: 73 lines changed

@@ -2,69 +2,32 @@

 ## Overview

-0G Storage is the storage layer for the ZeroGravity data availability (DA) system. The 0G Storage layer holds three important features:
+0G Storage is a decentralized data storage system designed to address the challenges of high-throughput and low-latency data storage and retrieval, in areas such as AI and gaming.

-* Buit-in - It is natively built into the ZeroGravity DA system for data storage and retrieval.
-* General purpose - It is designed to support atomic transactions, mutable kv stores as well as archive log systems to enable wide range of applications with various data types.
-* Incentive - Instead of being just a decentralized database, 0G Storage introduces PoRA mining algorithm to incentivize storage network participants.
+In addition, it forms the storage layer for the 0G data availability (DA) system, with the cross-layer integration abstracted away from Rollup and AppChain builders.

-To dive deep into the technical details, continue reading [0G Storage Spec.](docs/)
+## System Architecture

-## Integration
+0G Storage consists of two main components:

-We provide a [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage in their applications with the following features:
+1. **Data Publishing Lane**: Ensures quick data availability and verification through the 0G Consensus network.
+2. **Data Storage Lane**: Manages large data transfers and storage using an erasure-coding mechanism for redundancy and reliability.

-* File Merkle Tree Class
-* Flow Contract Types
-* RPC methods support
-* File upload
-* Support browser environment
-* Tests for different environments (In Progress)
-* File download (In Progress)
+Across the two lanes, 0G Storage supports the following features:

-## Deployment
+* **General Purpose Design**: Supports atomic transactions, mutable key-value stores, and archive log systems, enabling a wide range of applications with various data types.
+* **Incentivized Participation**: Utilizes the PoRA (Proof of Random Access) mining algorithm to incentivize storage network participants.

-Please refer to [Deployment](docs/run.md) page for detailed steps to compile and start a 0G Storage node.
+For in-depth technical details about 0G Storage, please read our [Intro to 0G Storage](https://docs.0g.ai/og-storage).

-## Test
+## Documentation

-### Prerequisites
+- If you want to run a node, please refer to the [Running a Node](https://docs.0g.ai/run-a-node/storage-node) guide.
+- If you want build a project using 0G storage, please refer to the [0G Storage SDK](https://docs.0g.ai/build-with-0g/storage-sdk) guide.

-* Required python version: 3.8, 3.9, 3.10, higher version is not guaranteed (e.g. failed to install `pysha3`).
-* Install dependencies under root folder: `pip3 install -r requirements.txt`
+## Support and Additional Resources

-### Dependencies
+We want to do everything we can to help you be successful while working on your contribution and projects. Here you'll find various resources and communities that may help you complete a project or contribute to 0G.

-Python test framework will launch blockchain fullnodes at local for storage node to interact with. There are 2 kinds of fullnodes supported:
+### Communities

-* Conflux eSpace node (by default).
-* BSC node (geth).
+- [0G Telegram](https://t.me/web3_0glabs)
+- [0G Discord](https://discord.com/invite/0glabs)

-For Conflux eSpace node, the test framework will automatically compile the binary at runtime, and copy the binary to `tests/tmp` folder. For BSC node, the test framework will automatically download the latest version binary from [github](https://github.com/bnb-chain/bsc/releases) to `tests/tmp` folder.
-
-Alternatively, you could also manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note, do **NOT** copy released conflux binary on github, since block height of some CIPs are hardcoded.
-
-For testing, it's also dependent on the following repos:
-
-* [0G Storage Contract](https://github.com/0glabs/0g-storage-contracts): It essentially provides two abi interfaces for 0G Storage Node to interact with the on-chain contracts.
-  * ZgsFlow: It contains apis to submit chunk data.
-  * PoraMine: It contains apis to submit PoRA answers.
-* [0G Storage Client](https://github.com/0glabs/0g-storage-client): It is used to interact with certain 0G Storage Nodes to upload/download files.
-
-### Run Tests
-
-Go to the `tests` folder and run the following command to run all tests:
-
-```
-python test_all.py
-```
-
-or, run any single test, e.g.
-
-```
-python sync_test.py
-```
-
-## Contributing
-
-To make contributions to the project, please follow the guidelines [here](contributing.md).
@@ -31,6 +31,7 @@ parking_lot = "0.12.3"
 serde_json = "1.0.127"
 tokio = { version = "1.38.0", features = ["full"] }
 task_executor = { path = "../../common/task_executor" }
+once_cell = { version = "1.19.0", features = [] }

 [dev-dependencies]
 rand = "0.8.5"
@@ -1,3 +1,5 @@
+use super::tx_store::BlockHashAndSubmissionIndex;
+use super::{FlowSeal, MineLoadChunk, SealAnswer, SealTask};
 use crate::config::ShardConfig;
 use crate::log_store::flow_store::{batch_iter_sharded, FlowConfig, FlowDBStore, FlowStore};
 use crate::log_store::tx_store::TransactionStore;

@@ -11,6 +13,7 @@ use ethereum_types::H256;
 use kvdb_rocksdb::{Database, DatabaseConfig};
 use merkle_light::merkle::{log2_pow2, MerkleTree};
 use merkle_tree::RawLeafSha3Algorithm;
+use once_cell::sync::Lazy;
 use parking_lot::RwLock;
 use rayon::iter::ParallelIterator;
 use rayon::prelude::ParallelSlice;

@@ -25,9 +28,6 @@ use std::sync::mpsc;
 use std::sync::Arc;
 use tracing::{debug, error, info, instrument, trace, warn};

-use super::tx_store::BlockHashAndSubmissionIndex;
-use super::{FlowSeal, MineLoadChunk, SealAnswer, SealTask};
-
 /// 256 Bytes
 pub const ENTRY_SIZE: usize = 256;
 /// 1024 Entries.
@@ -47,6 +47,14 @@ pub const COL_NUM: u32 = 9;
 // Process at most 1M entries (256MB) pad data at a time.
 const PAD_MAX_SIZE: usize = 1 << 20;

+static PAD_SEGMENT_ROOT: Lazy<H256> = Lazy::new(|| {
+    Merkle::new(
+        data_to_merkle_leaves(&[0; ENTRY_SIZE * PORA_CHUNK_SIZE]).unwrap(),
+        0,
+        None,
+    )
+    .root()
+});
 pub struct UpdateFlowMessage {
     pub root_map: BTreeMap<usize, (H256, usize)>,
     pub pad_data: usize,
@@ -967,12 +975,11 @@ impl LogManager {
         // Pad with more complete chunks.
         let mut start_index = last_chunk_pad / ENTRY_SIZE;
         while pad_data.len() >= (start_index + PORA_CHUNK_SIZE) * ENTRY_SIZE {
-            let data = pad_data[start_index * ENTRY_SIZE
-                ..(start_index + PORA_CHUNK_SIZE) * ENTRY_SIZE]
-                .to_vec();
-            let root = Merkle::new(data_to_merkle_leaves(&data)?, 0, None).root();
-            merkle.pora_chunks_merkle.append(root);
-            root_map.insert(merkle.pora_chunks_merkle.leaves() - 1, (root, 1));
+            merkle.pora_chunks_merkle.append(*PAD_SEGMENT_ROOT);
+            root_map.insert(
+                merkle.pora_chunks_merkle.leaves() - 1,
+                (*PAD_SEGMENT_ROOT, 1),
+            );
             start_index += PORA_CHUNK_SIZE;
         }
         assert_eq!(pad_data.len(), start_index * ENTRY_SIZE);
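The change works because every full pad segment is the same zero-filled byte run, so its Merkle root is a constant that can be computed once and reused, instead of rebuilding an identical tree on every loop iteration. A standalone sketch of that caching pattern, not the project's actual code: it uses std's `OnceLock` (which gives the same compute-once semantics as `once_cell::sync::Lazy`) and a toy hash as a stand-in for the real Merkle root; `toy_root` and `pad_segment_root` are illustrative names.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::OnceLock;

const ENTRY_SIZE: usize = 256; // bytes per entry, as in the diff
const PORA_CHUNK_SIZE: usize = 1024; // entries per chunk, as in the diff

// Stand-in for the expensive Merkle-root computation over one pad segment.
fn toy_root(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// Compute the zero-segment root exactly once; later calls return the cached value.
fn pad_segment_root() -> u64 {
    static ROOT: OnceLock<u64> = OnceLock::new();
    *ROOT.get_or_init(|| toy_root(&vec![0u8; ENTRY_SIZE * PORA_CHUNK_SIZE]))
}

fn main() {
    // A freshly computed root over the same zero segment must match the cache,
    // which is what makes replacing the per-iteration Merkle build safe.
    let fresh = toy_root(&vec![0u8; ENTRY_SIZE * PORA_CHUNK_SIZE]);
    assert_eq!(pad_segment_root(), fresh);
    println!("cached pad segment root: {:#x}", pad_segment_root());
}
```

The caching is only valid because the input is constant; a segment containing real (non-pad) data still needs its root computed from its actual bytes, which is why the diff keeps the per-segment path elsewhere.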