Compare commits

...

3 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| 0xroy | 6c8bce6939 | Merge fd9c033176 into 0c493880ee | 2024-11-09 13:37:38 +08:00 |
| Joel Liu | 0c493880ee | add timeout for rpc connections (#263) | 2024-11-09 13:37:09 +08:00 |
| Roy Lu | fd9c033176 | Updated README | 2024-10-23 08:52:56 -07:00 |
8 changed files with 51 additions and 83 deletions

Cargo.lock (generated)
View File

@@ -4652,12 +4652,14 @@ dependencies = [
"jsonrpsee",
"lazy_static",
"metrics",
"reqwest",
"serde_json",
"shared_types",
"storage",
"task_executor",
"thiserror",
"tokio",
"url",
]
[[package]]

View File

@@ -2,69 +2,32 @@
## Overview
-0G Storage is the storage layer for the ZeroGravity data availability (DA) system. The 0G Storage layer holds three important features:
+0G Storage is a decentralized data storage system designed to address the challenges of high-throughput and low-latency data storage and retrieval, in areas such as AI and gaming.
-* Buit-in - It is natively built into the ZeroGravity DA system for data storage and retrieval.
-* General purpose - It is designed to support atomic transactions, mutable kv stores as well as archive log systems to enable wide range of applications with various data types.
-* Incentive - Instead of being just a decentralized database, 0G Storage introduces PoRA mining algorithm to incentivize storage network participants.
+In addition, it forms the storage layer for the 0G data availability (DA) system, with the cross-layer integration abstracted away from Rollup and AppChain builders.
-To dive deep into the technical details, continue reading [0G Storage Spec.](docs/)
+## System Architecture
-## Integration
+0G Storage consists of two main components:
-We provide a [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage in their applications with the following features:
+1. **Data Publishing Lane**: Ensures quick data availability and verification through the 0G Consensus network.
+2. **Data Storage Lane**: Manages large data transfers and storage using an erasure-coding mechanism for redundancy and reliability.
-* File Merkle Tree Class
-* Flow Contract Types
-* RPC methods support
-* File upload
-* Support browser environment
-* Tests for different environments (In Progress)
-* File download (In Progress)
+Across the two lanes, 0G Storage supports the following features:
-## Deployment
+* **General Purpose Design**: Supports atomic transactions, mutable key-value stores, and archive log systems, enabling a wide range of applications with various data types.
+* **Incentivized Participation**: Utilizes the PoRA (Proof of Random Access) mining algorithm to incentivize storage network participants.
-Please refer to [Deployment](docs/run.md) page for detailed steps to compile and start a 0G Storage node.
+For in-depth technical details about 0G Storage, please read our [Intro to 0G Storage](https://docs.0g.ai/og-storage).
-## Test
+## Documentation
-### Prerequisites
+- If you want to run a node, please refer to the [Running a Node](https://docs.0g.ai/run-a-node/storage-node) guide.
+- If you want build a project using 0G storage, please refer to the [0G Storage SDK](https://docs.0g.ai/build-with-0g/storage-sdk) guide.
-* Required python version: 3.8, 3.9, 3.10, higher version is not guaranteed (e.g. failed to install `pysha3`).
-* Install dependencies under root folder: `pip3 install -r requirements.txt`
+## Support and Additional Resources
+We want to do everything we can to help you be successful while working on your contribution and projects. Here you'll find various resources and communities that may help you complete a project or contribute to 0G.
-### Dependencies
-Python test framework will launch blockchain fullnodes at local for storage node to interact with. There are 2 kinds of fullnodes supported:
-* Conflux eSpace node (by default).
-* BSC node (geth).
-For Conflux eSpace node, the test framework will automatically compile the binary at runtime, and copy the binary to `tests/tmp` folder. For BSC node, the test framework will automatically download the latest version binary from [github](https://github.com/bnb-chain/bsc/releases) to `tests/tmp` folder.
-Alternatively, you could also manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note, do **NOT** copy released conflux binary on github, since block height of some CIPs are hardcoded.
-For testing, it's also dependent on the following repos:
-* [0G Storage Contract](https://github.com/0glabs/0g-storage-contracts): It essentially provides two abi interfaces for 0G Storage Node to interact with the on-chain contracts.
-* ZgsFlow: It contains apis to submit chunk data.
-* PoraMine: It contains apis to submit PoRA answers.
-* [0G Storage Client](https://github.com/0glabs/0g-storage-client): It is used to interact with certain 0G Storage Nodes to upload/download files.
-### Run Tests
-Go to the `tests` folder and run the following command to run all tests:
-```
-python test_all.py
-```
-or, run any single test, e.g.
-```
-python sync_test.py
-```
-## Contributing
-To make contributions to the project, please follow the guidelines [here](contributing.md).
+### Communities
+- [0G Telegram](https://t.me/web3_0glabs)
+- [0G Discord](https://discord.com/invite/0glabs)

View File

@@ -24,3 +24,5 @@ futures-util = "0.3.28"
thiserror = "1.0.44"
lazy_static = "1.4.0"
metrics = { workspace = true }
+reqwest = {version = "0.11", features = ["json"]}
+url = { version = "2.4", default-features = false }

View File

@@ -1,3 +1,5 @@
+use std::time::Duration;
use crate::ContractAddress;
pub struct LogSyncConfig {
@@ -34,6 +36,9 @@ pub struct LogSyncConfig {
pub watch_loop_wait_time_ms: u64,
// force to sync log from start block number
pub force_log_sync_from_start_block_number: bool,
+// the timeout for blockchain rpc connection
+pub blockchain_rpc_timeout: Duration,
}
#[derive(Clone)]
@@ -61,6 +66,7 @@ impl LogSyncConfig {
remove_finalized_block_interval_minutes: u64,
watch_loop_wait_time_ms: u64,
force_log_sync_from_start_block_number: bool,
+blockchain_rpc_timeout: Duration,
) -> Self {
Self {
rpc_endpoint_url,
@@ -77,6 +83,7 @@
remove_finalized_block_interval_minutes,
watch_loop_wait_time_ms,
force_log_sync_from_start_block_number,
+blockchain_rpc_timeout,
}
}
}

View File

@@ -1,6 +1,6 @@
use crate::sync_manager::log_query::LogQuery;
use crate::sync_manager::RETRY_WAIT_MS;
-use crate::ContractAddress;
+use crate::{ContractAddress, LogSyncConfig};
use anyhow::{anyhow, bail, Result};
use append_merkle::{Algorithm, Sha3Algorithm};
use contract_interface::{SubmissionNode, SubmitFilter, ZgsFlow};
@@ -12,7 +12,6 @@ use futures::StreamExt;
use jsonrpsee::tracing::{debug, error, info, warn};
use shared_types::{DataRoot, Transaction};
use std::collections::{BTreeMap, HashMap};
-use std::str::FromStr;
use std::sync::Arc;
use std::time::Duration;
use storage::log_store::{tx_store::BlockHashAndSubmissionIndex, Store};
@@ -34,28 +33,29 @@ pub struct LogEntryFetcher {
}
impl LogEntryFetcher {
-pub async fn new(
-url: &str,
-contract_address: ContractAddress,
-log_page_size: u64,
-confirmation_delay: u64,
-rate_limit_retries: u32,
-timeout_retries: u32,
-initial_backoff: u64,
-) -> Result<Self> {
+pub async fn new(config: &LogSyncConfig) -> Result<Self> {
let provider = Arc::new(Provider::new(
RetryClientBuilder::default()
-.rate_limit_retries(rate_limit_retries)
-.timeout_retries(timeout_retries)
-.initial_backoff(Duration::from_millis(initial_backoff))
-.build(Http::from_str(url)?, Box::new(HttpRateLimitRetryPolicy)),
+.rate_limit_retries(config.rate_limit_retries)
+.timeout_retries(config.timeout_retries)
+.initial_backoff(Duration::from_millis(config.initial_backoff))
+.build(
+Http::new_with_client(
+url::Url::parse(&config.rpc_endpoint_url)?,
+reqwest::Client::builder()
+.timeout(config.blockchain_rpc_timeout)
+.connect_timeout(config.blockchain_rpc_timeout)
+.build()?,
+),
+Box::new(HttpRateLimitRetryPolicy),
+),
));
// TODO: `error` types are removed from the ABI json file.
Ok(Self {
-contract_address,
+contract_address: config.contract_address,
provider,
-log_page_size,
-confirmation_delay,
+log_page_size: config.log_page_size,
+confirmation_delay: config.confirmation_block_count,
})
}
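To see the technique in isolation rather than spread across diff context: the sketch below reproduces the pattern this PR introduces — an ethers-rs `Provider` whose HTTP transport is built from a `reqwest::Client` carrying explicit request and connect timeouts, wrapped in a `RetryClient`. The retry counts and backoff here are placeholder values for illustration, not the node's configured settings.

```
use std::{sync::Arc, time::Duration};

use ethers_providers::{
    Http, HttpRateLimitRetryPolicy, Provider, RetryClient, RetryClientBuilder,
};

/// Build a JSON-RPC provider whose underlying HTTP client enforces timeouts,
/// so a hung RPC endpoint fails fast instead of stalling the sync loop.
fn provider_with_timeout(
    rpc_url: &str,
    timeout: Duration,
) -> anyhow::Result<Arc<Provider<RetryClient<Http>>>> {
    // Set both the total-request timeout and the connect timeout. The
    // replaced `Http::from_str(url)` path used reqwest's default client,
    // which applies no overall request timeout.
    let client = reqwest::Client::builder()
        .timeout(timeout)
        .connect_timeout(timeout)
        .build()?;

    let transport = Http::new_with_client(url::Url::parse(rpc_url)?, client);

    // Placeholder retry settings; the node takes these from LogSyncConfig.
    let retry_client = RetryClientBuilder::default()
        .rate_limit_retries(10)
        .timeout_retries(10)
        .initial_backoff(Duration::from_millis(500))
        .build(transport, Box::new(HttpRateLimitRetryPolicy));

    Ok(Arc::new(Provider::new(retry_client)))
}
```

Because the timeout lives on the shared `reqwest::Client`, every JSON-RPC call made through the provider inherits it, including the retried attempts issued by `RetryClient`.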

View File

@@ -86,16 +86,7 @@ impl LogSyncManager {
.expect("shutdown send error")
},
async move {
-let log_fetcher = LogEntryFetcher::new(
-&config.rpc_endpoint_url,
-config.contract_address,
-config.log_page_size,
-config.confirmation_block_count,
-config.rate_limit_retries,
-config.timeout_retries,
-config.initial_backoff,
-)
-.await?;
+let log_fetcher = LogEntryFetcher::new(&config).await?;
let data_cache = DataCache::new(config.cache_config.clone());
let block_hash_cache = Arc::new(RwLock::new(

View File

@@ -146,6 +146,7 @@ impl ZgsConfig {
self.remove_finalized_block_interval_minutes,
self.watch_loop_wait_time_ms,
self.force_log_sync_from_start_block_number,
+Duration::from_secs(self.blockchain_rpc_timeout_secs),
))
}

View File

@@ -49,6 +49,8 @@ build_config! {
(remove_finalized_block_interval_minutes, (u64), 30)
(watch_loop_wait_time_ms, (u64), 500)
+(blockchain_rpc_timeout_secs, (u64), 120)
// chunk pool
(chunk_pool_write_window_size, (usize), 4)
(chunk_pool_max_cached_chunks_all, (usize), 4*1024*1024) // 1G
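Read together, the last three files wire the new setting end to end: `build_config!` declares `blockchain_rpc_timeout_secs` with a default of 120 seconds, `ZgsConfig` converts it to a `Duration`, and `LogSyncConfig` carries it into `LogEntryFetcher::new`. Below is a condensed sketch of that flow, using stand-in struct types rather than the node's actual generated config:

```
use std::time::Duration;

// Stand-ins for the `build_config!`-generated node config and for
// LogSyncConfig; field names mirror the diff, everything else is elided.
struct NodeConfig {
    blockchain_rpc_timeout_secs: u64, // defaults to 120 per build_config!
}

struct LogSyncConfig {
    blockchain_rpc_timeout: Duration,
}

impl NodeConfig {
    // Mirrors the ZgsConfig hunk above: the single place where the
    // plain-seconds setting becomes a typed Duration.
    fn log_sync_config(&self) -> LogSyncConfig {
        LogSyncConfig {
            blockchain_rpc_timeout: Duration::from_secs(self.blockchain_rpc_timeout_secs),
        }
    }
}

fn main() {
    let cfg = NodeConfig { blockchain_rpc_timeout_secs: 120 };
    assert_eq!(
        cfg.log_sync_config().blockchain_rpc_timeout,
        Duration::from_secs(120)
    );
}
```

Keeping the TOML-facing value as plain seconds and converting to `Duration` at one boundary avoids parsing duration strings in the config layer.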