mirror of https://github.com/0glabs/0g-storage-node.git (synced 2024-11-20 15:05:19 +00:00)
Docs improvements (#82)
* Update README.md
* Update proof-of-random-access.md
* Update architecture.md
* Update introduction.md
* Update log-system.md
* Update run.md
* Update transaction-processing.md
* Update README.md
parent fa2033eb30
commit 0d2caf9b76
@ -42,7 +42,7 @@ Python test framework will launch blockchain fullnodes at local for storage node

For the Conflux eSpace node, the test framework will automatically compile the binary at runtime and copy it to the `tests/tmp` folder. For the BSC node, the test framework will automatically download the latest version binary from [github](https://github.com/bnb-chain/bsc/releases) to the `tests/tmp` folder.

Alternatively, you can also manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note: do **NOT** copy a released conflux binary from github, since the block heights of some CIPs are hardcoded.

For testing, it also depends on the following repos:

@ -53,7 +53,7 @@ For testing, it's also dependent on the following repos:

### Run Tests

Go to the `tests` folder and run the following command to run all tests:

```
python test_all.py
```

@ -10,11 +10,11 @@ Figure 1 illustrates the architecture of the 0G system. When a data block enters

## 0G Storage

0G Storage employs a layered design targeting support for different types of decentralized applications. Figure 2 shows an overview of the full stack layers of 0G Storage.

<figure><img src="../../.gitbook/assets/zg-storage-layer.png" alt=""><figcaption><p>Figure 2. Full Stack Solution of 0G Storage</p></figcaption></figure>

The lowest layer is the log layer, a decentralized system. It consists of multiple storage nodes that form a storage network. The network has a built-in incentive mechanism to reward data storage. The ordering of the uploaded data is guaranteed by a sequencing mechanism that provides log-based semantics and abstraction. This layer is used to store unstructured raw data for permanent persistency.

On top of the log layer, 0G Storage provides a Key-Value store runtime to manage structured data with mutability. Multiple key-value store nodes share the underlying log system. They put the structured key-value data into log entries and append them to the log system, and they replay the log entries in the shared log to construct a consistent state snapshot of the key-value store. The throughput and latency of the key-value store are bounded by the log system, so the efficiency of the log layer is critical to the performance of the entire system. The key-value store can associate access control information with keys to manage update permissions for the data. This enables applications like social networks, e.g., a decentralized Twitter, which require maintaining ownership of the messages created by users.

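To make the append-and-replay idea concrete, below is a minimal sketch (illustrative only; the class names, entry format, and ownership rule are assumptions, not the actual 0G Storage key-value runtime):

```
# Illustrative sketch of key-value nodes sharing an append-only log;
# not the real 0G Storage interfaces.

class SharedLog:
    """A toy append-only log; the real log layer also sequences, stores,
    and incentivizes entries across a network of storage nodes."""
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)
        return len(self.entries) - 1  # position acts as the sequence number


class KVStoreNode:
    """A key-value runtime instance that shares the underlying log."""
    def __init__(self, log):
        self.log = log
        self.state = {}      # materialized snapshot: key -> {value, owner}
        self.replayed = 0    # how many log entries this node has replayed

    def put(self, key, value, owner):
        # Structured data, including access-control info, goes into a log entry.
        self.log.append({"op": "put", "key": key, "value": value, "owner": owner})

    def replay(self):
        # All nodes replay the same ordered entries, so they converge to the
        # same consistent snapshot of the key-value store.
        while self.replayed < len(self.log.entries):
            e = self.log.entries[self.replayed]
            self.replayed += 1
            current = self.state.get(e["key"])
            # Assumed ownership rule: only the recorded owner may update a key.
            if current is None or current["owner"] == e["owner"]:
                self.state[e["key"]] = {"value": e["value"], "owner": e["owner"]}


log = SharedLog()
alice_node, bob_node = KVStoreNode(log), KVStoreNode(log)
alice_node.put("post/1", "hello from alice", owner="alice")
bob_node.put("post/1", "overwrite attempt", owner="bob")  # rejected on replay
alice_node.replay()
bob_node.replay()
assert alice_node.state == bob_node.state  # both snapshots agree
```
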
@ -1,5 +1,5 @@

# Incentive Mechanism

This section describes the incentive mechanism design of 0G Storage, which involves two types of actors: users and miners (a.k.a. storage nodes). Users pay tokens (ZG) to create data entries in the log and add data to the network. Miners provide data service and receive tokens (ZG) as a reward from the network. The payment from users to miners is mediated by the ZeroGravity network, since the service is sustained by the whole network rather than by any specific miner. 0G Storage implements the storage service in a "pay once, storage forever" manner. Users pay a one-shot storage endowment for each created data entry, and thereafter the endowment is used to incentivize miners who maintain that data entry.

The storage endowment is maintained per data entry, and a miner is only eligible for the storage reward from data entries that it has access to. The total storage reward paid for a data entry is independent of the popularity of that data entry. For instance, a popular data entry stored by many miners will be mined frequently, but the reward is amortized among those miners; on the other hand, a less popular data entry is rarely mined, so the storage reward accumulates and hence induces a higher payoff to the miners who store this rare data entry.

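As a back-of-the-envelope illustration of this amortization (the endowment amount and the assumption that every stored replica is equally likely to win a mining query are made up for illustration; the actual reward schedule is defined by the protocol):

```
# Illustrative only: the same one-shot endowment amortized over the miners
# that store a data entry. Numbers are arbitrary, not protocol parameters.

endowment_per_entry = 100.0  # one-shot payment by the user, in ZG

def expected_reward_per_miner(num_miners_storing):
    # The total reward released for an entry does not depend on its popularity,
    # so the miners storing it split roughly the same pot over time.
    return endowment_per_entry / num_miners_storing

print(expected_reward_per_miner(50))  # popular entry:  2.0 ZG per miner
print(expected_reward_per_miner(2))   # rare entry:    50.0 ZG per miner
```
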
@ -1,6 +1,6 @@

# Proof of Random Access

The ZeroGravity network adopts a Proof of Random Access (PoRA) mechanism to incentivize miners to store data. By requiring miners to answer randomly produced queries to archived data chunks, the PoRA mechanism establishes the relation between mining proof generation power and data storage. Miners answer the queries repeatedly and compute an output digest for each loaded chunk until they find a digest that satisfies the mining difficulty (i.e., has enough leading zeros). PoRA stresses the miners' disk I/O and reduces their capability to respond to user queries, so 0G Storage adopts intermittent mining, in which a mining epoch starts with a block generation at a specific block height on the host chain and stops when a valid PoRA is submitted to the 0G Storage contract.

In a strawman design, a PoRA iteration consists of a computing stage and a loading stage. In the computing stage, a miner computes a random recall position (the universal offset in the flow) based on an arbitrarily picked random nonce and a mining status read from the host chain. In the loading stage, the miner loads the archived data chunks at the given recall position and computes an output digest by hashing the tuple of the mining status and the data chunks. If the output digest satisfies the target difficulty, the miner can construct a legitimate PoRA, consisting of the chosen random nonce, the loaded data chunk, and the proof of the data chunk's correctness, and submit it to the mining contract.

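The sketch below illustrates one such strawman iteration (SHA-256 stands in for the real hash, and the chunk size, difficulty encoding, and mining-status contents are placeholders rather than actual protocol parameters):

```
# Illustrative strawman PoRA iteration; hash function, chunk size, and
# difficulty are placeholders, not the real 0G Storage parameters.
import hashlib
import os

CHUNK_SIZE = 4 * 1024                                    # assumed chunk size
flow = [os.urandom(CHUNK_SIZE) for _ in range(64)]       # toy archived data flow
mining_status = b"block-hash-and-difficulty-from-host-chain"  # placeholder
target = 1 << 248                                        # toy difficulty: ~8 leading zero bits

def recall_position(nonce):
    # Computing stage: derive a pseudo-random recall position (offset in the
    # flow) from the picked nonce and the mining status read from the host chain.
    h = hashlib.sha256(mining_status + nonce).digest()
    return int.from_bytes(h, "big") % len(flow)

def pora_iteration():
    nonce = os.urandom(32)                               # arbitrarily picked random nonce
    pos = recall_position(nonce)
    chunk = flow[pos]                                    # loading stage: read the archived chunk
    digest = hashlib.sha256(mining_status + nonce + chunk).digest()
    if int.from_bytes(digest, "big") < target:
        # A real proof would also include a Merkle proof that `chunk` is indeed
        # the data stored at `pos`; omitted here.
        return {"nonce": nonce, "position": pos, "digest": digest}
    return None

proof = None
while proof is None:                                     # miners repeat iterations within an epoch
    proof = pora_iteration()
print("valid PoRA found at recall position", proof["position"])
```
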
@ -14,7 +14,7 @@ The PoRA is designed with the following properties to improve the overall fairne

## Algorithm

Precisely, the mining process has the following steps:

1. Register the miner id on the mining contract
2. For each mining epoch, repeat the following steps:

@ -51,7 +51,7 @@ For testing, it's also dependent on the following repos:

### Run Tests

Go to the `tests` folder and run the following command to run all tests:

```
python test_all.py
```

@ -6,7 +6,7 @@ The log layer of 0G Storage provides decentralized storage service via a permiss

The storage state of the 0G Storage network is maintained in a smart contract deployed on an existing blockchain. The design of the 0G Storage network fully decouples data creation, reward distribution, and token circulation.

The 0G Storage Contract is responsible for data storage request processing, data entry creation, and reward distribution.

- Data storage requests are submitted by users who wish to store data in the 0G Storage network. Each request includes necessary metadata such as the data size and commitments, and it comes along with the payment for the storage service.
- Data entries are created for accepted data requests, to keep a record of the stored data.

@ -15,8 +15,8 @@ The 0G Storage Contract is responsible for data storage requests processing, dat

This embedding design brings significant advantages:

- Simplicity: there is no need to maintain a full-fledged consensus protocol, which reduces complexity and enables 0G Storage to focus on the decentralized storage service.
- Safety: the consensus is outsourced to the host blockchain and hence inherits the security of the host blockchain. Typically, a more developed host blockchain provides a stronger safety guarantee than a newly-built blockchain.
- Accessibility: every smart contract on the host blockchain is able to access the original state of ZeroGravity directly, without relying on some trusted off-chain notary. This difference is essential compared to the projection of an external ledger managed by a third party.
- Composability: 0G tokens can always be transferred directly on the host blockchain, like any other ERC20 token. This is much more convenient than typical layer-2 ledgers, where transactions are first processed by layer-2 validators and then committed to the host chain after a significant latency. This feature gives 0G Storage stronger composability as a new lego in the ecosystem.

## Storage Granularity

@ -70,7 +70,7 @@ Keep contracts addresses

## Run 0G Storage

Update `run/config.toml` as required:

```shell
# p2p port
```

@ -12,4 +12,4 @@ When an application server with the key-value runtime encounters the commit reco

## Concurrent Assumption

This transaction model assumes that the transaction participants are collaborative and will honestly compose the commit record with the correct content. Although this assumption is too strong in a decentralized environment, it is still achievable for specific applications. For example, for an application like Google Docs, a user normally shares access with others who can be trusted. In case this assumption cannot hold, the code of the transaction can be stored in the ZeroGravity log, and some mechanism of verifiable computation, such as zero-knowledge proofs or hardware with a trusted execution environment (TEE), can be employed by the transaction executors to check the validity of the commit record.

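As a toy illustration of that fallback (naive re-execution stands in for zero-knowledge or TEE based verification, and all names below are hypothetical rather than part of 0G Storage):

```
# Illustrative only: executors re-run the transaction code referenced by a
# commit record and compare the result with the claimed outcome. A real
# deployment would rely on a ZK proof or TEE attestation instead.

def apply_transaction(state, tx):
    # `tx` stands in for transaction code stored in the ZeroGravity log.
    return tx(dict(state))

def commit_record_is_valid(prior_state, tx, claimed_state):
    return apply_transaction(prior_state, tx) == claimed_state

transfer_five = lambda s: {**s, "balance": s["balance"] + 5}

print(commit_record_is_valid({"balance": 10}, transfer_five, {"balance": 15}))    # True (honest)
print(commit_record_is_valid({"balance": 10}, transfer_five, {"balance": 1000}))  # False (dishonest)
```
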