Compare commits

3 Commits

Author SHA1 Message Date
Artem
15b04aae9b
Merge 112e083ace into 898350e271 2025-02-18 09:47:41 +01:00
Eric Norberg
898350e271
fix: errors in code comments (#333)
* lib.rs

* architecture.md

* chunk_write_control.rs
2025-02-18 16:47:01 +08:00
Artem
112e083ace
add default_executor.rs 2025-02-08 12:31:18 +02:00
4 changed files with 78 additions and 4 deletions

View File

@@ -11,7 +11,7 @@ pub fn unused_tcp_port() -> Result<u16, String> {
unused_port(Transport::Tcp)
}
/// A convenience function for `unused_port(Transport::Tcp)`.
/// A convenience function for `unused_port(Transport::Udp)`.
pub fn unused_udp_port() -> Result<u16, String> {
unused_port(Transport::Udp)
}
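
A minimal, self-contained usage sketch of the helper documented above; `unused_udp_port` is re-declared here as a stand-in (binding to port 0) purely so the example compiles, and is not the crate's actual implementation:

```rust
use std::net::UdpSocket;

// Stand-in for the crate's `unused_udp_port`: ask the OS for an ephemeral port
// by binding to port 0, then report that port. Illustrative only.
fn unused_udp_port() -> Result<u16, String> {
    UdpSocket::bind("127.0.0.1:0")
        .and_then(|s| s.local_addr())
        .map(|addr| addr.port())
        .map_err(|e| e.to_string())
}

fn main() -> Result<(), String> {
    let port = unused_udp_port()?;
    // Bind a real socket on the reported free port.
    let socket = UdpSocket::bind(("127.0.0.1", port)).map_err(|e| e.to_string())?;
    println!("bound UDP socket on {}", socket.local_addr().map_err(|e| e.to_string())?);
    Ok(())
}
```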

View File

@@ -4,7 +4,7 @@
ZeroGravity system consists of a data availability layer (0G DA) on top of a decentralized storage system (0G Storage). There is a separate consensus network that is part of both the 0G DA and the 0G Storage. For 0G Storage, the consensus is responsible for determining the ordering of the uploaded data blocks, realizing the storage mining verification and the corresponding incentive mechanism through smart contracts.
Figure 1 illustrates the architecture of the 0G system. When a data block enters the 0G DA, it is first erasure coded and organized into multiple consecutive chunks through erasure coding. The merkle root as a commitment of the encoded data block is then submitted to the consensus layer to keep the order of the data entering the system. The chunks are then dispersed to different storage nodes in 0G Storage where the data may be further replicated to other nodes depending on the storage fee that the user pays. The storage nodes periodically participate the mining process by interacting with the consensus network to accrue rewards from the system.
Figure 1 illustrates the architecture of the 0G system. When a data block enters the 0G DA, it is first erasure coded and organized into multiple consecutive chunks through erasure coding. The merkle root as a commitment of the encoded data block is then submitted to the consensus layer to keep the order of the data entering the system. The chunks are then dispersed to different storage nodes in 0G Storage where the data may be further replicated to other nodes depending on the storage fee that the user pays. The storage nodes periodically participate in the mining process by interacting with the consensus network to accrue rewards from the system.
<figure><img src="../.gitbook/assets/zg-storage-architecture.png" alt=""><figcaption><p>Figure 1. The Architecture of 0G System</p></figcaption></figure>
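
The write path described in this paragraph, sketched as plain Rust. All type and function names below are hypothetical placeholders, and the erasure coding and merkle commitment are stubbed, so this illustrates the flow rather than the actual 0G Storage implementation:

```rust
struct Chunk(Vec<u8>);
struct MerkleRoot([u8; 32]);

// Split a block into consecutive chunks; a real encoder would also emit parity chunks.
fn erasure_code(block: &[u8], chunk_size: usize) -> Vec<Chunk> {
    block.chunks(chunk_size).map(|c| Chunk(c.to_vec())).collect()
}

// Placeholder commitment; a real system builds a merkle tree over the encoded chunks.
fn commit(chunks: &[Chunk]) -> MerkleRoot {
    let mut root = [0u8; 32];
    for (i, chunk) in chunks.iter().enumerate() {
        for (j, byte) in chunk.0.iter().enumerate() {
            root[(i + j) % 32] ^= *byte;
        }
    }
    MerkleRoot(root)
}

fn submit_to_consensus(_root: &MerkleRoot) {
    // Fixes the order in which the data block entered the system.
}

fn disperse_to_storage_nodes(_chunks: Vec<Chunk>) {
    // Chunks go to different storage nodes; replication depends on the storage fee paid.
}

fn main() {
    let block = vec![0u8; 1024];
    let chunks = erasure_code(&block, 256);
    let root = commit(&chunks);
    submit_to_consensus(&root);
    disperse_to_storage_nodes(chunks);
}
```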

View File

@@ -13,7 +13,7 @@ enum SlotStatus {
}
/// Sliding window is used to control the concurrent uploading process of a file.
/// Bounded window allows segments to be uploaded concurrenly, while having a capacity
/// Bounded window allows segments to be uploaded concurrently, while having a capacity
/// limit on writing threads per file. Meanwhile, the left_boundary field records
/// how many segments have been uploaded.
struct CtrlWindow {
@@ -165,7 +165,7 @@ impl ChunkPoolWriteCtrl {
if file_ctrl.total_segments != total_segments {
bail!(
"file size in segment doesn't match with file size declared in previous segment. Previous total segments:{}, current total segments:{}s",
"file size in segment doesn't match with file size declared in previous segment. Previous total segments:{}, current total segments:{}",
file_ctrl.total_segments,
total_segments
);
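
The bounded sliding window described in the comment above, as a standalone sketch; the field and method names here are illustrative and do not match the actual `CtrlWindow`/`ChunkPoolWriteCtrl` code:

```rust
use std::collections::HashSet;

struct WindowSketch {
    size: usize,              // capacity: max segments written concurrently per file
    left_boundary: usize,     // every segment below this index has been uploaded
    writing: HashSet<usize>,  // segment indices currently being written
    finished: HashSet<usize>, // completed segments not yet absorbed by the boundary
}

impl WindowSketch {
    // A segment may start writing only if it falls inside the window and a slot is free.
    fn try_acquire(&mut self, seg: usize) -> bool {
        let in_window = seg >= self.left_boundary && seg < self.left_boundary + self.size;
        if !in_window || self.writing.len() >= self.size {
            return false;
        }
        self.writing.insert(seg)
    }

    // On completion, slide left_boundary over contiguously finished segments so it
    // always records how many segments have been uploaded.
    fn complete(&mut self, seg: usize) {
        self.writing.remove(&seg);
        self.finished.insert(seg);
        while self.finished.remove(&self.left_boundary) {
            self.left_boundary += 1;
        }
    }
}

fn main() {
    let mut w = WindowSketch {
        size: 4,
        left_boundary: 0,
        writing: HashSet::new(),
        finished: HashSet::new(),
    };
    assert!(w.try_acquire(0));
    assert!(!w.try_acquire(10)); // outside the window
    w.complete(0);
    assert_eq!(w.left_boundary, 1);
}
```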

View File

@@ -0,0 +1,74 @@
//! Demonstrates how to run a basic Discovery v5 Service with the default Tokio executor.
//!
//! Discv5 requires a Tokio executor with all features. If none is passed, it will use the current
//! runtime that built the `Discv5` struct.
//!
//! To run this example simply run:
//! ```
//! $ cargo run --example default_executor <BASE64ENR>
//! ```
use discv5::{enr, enr::CombinedKey, Discv5, Discv5ConfigBuilder, Discv5Event};
use std::net::SocketAddr;
use tokio::runtime::Runtime;
#[tokio::main]
async fn main() {
// allows detailed logging with the RUST_LOG env variable
let filter_layer = tracing_subscriber::EnvFilter::try_from_default_env()
.or_else(|_| tracing_subscriber::EnvFilter::try_new("info"))
.unwrap();
let _ = tracing_subscriber::fmt()
.with_env_filter(filter_layer)
.try_init();
// listening address and port
let listen_addr = "0.0.0.0:9000".parse::<SocketAddr>().unwrap();
let enr_key = CombinedKey::generate_secp256k1();
// construct a local ENR
let enr = enr::EnrBuilder::new("v4").build(&enr_key).unwrap();
// default configuration - uses the current executor
let config = Discv5ConfigBuilder::new().build();
// construct the discv5 server
let mut discv5 = Discv5::new(enr, enr_key, config).unwrap();
// if we know of another peer's ENR, add it to the known peers
if let Some(base64_enr) = std::env::args().nth(1) {
match base64_enr.parse::<enr::Enr<enr::CombinedKey>>() {
Ok(enr) => {
println!(
"ENR Read. ip: {:?}, udp_port {:?}, tcp_port: {:?}",
enr.ip(),
enr.udp(),
enr.tcp()
);
if let Err(e) = discv5.add_enr(enr) {
println!("ENR was not added: {}", e);
}
}
Err(e) => panic!("Decoding ENR failed: {}", e),
}
}
// start the discv5 service
discv5.start(listen_addr).await.unwrap();
println!("Server started");
// get an event stream
let mut event_stream = discv5.event_stream().await.unwrap();
loop {
match event_stream.recv().await {
Some(Discv5Event::SocketUpdated(addr)) => {
println!("Nodes ENR socket address has been updated to: {:?}", addr);
}
Some(Discv5Event::Discovered(enr)) => {
println!("A peer has been discovered: {}", enr.node_id());
}
_ => {}
}
}
}