mirror of https://github.com/0glabs/0g-storage-node.git
synced 2025-11-04 00:27:39 +00:00

Compare commits

4 Commits: dff854ae46 ... aa16d4335d
| Author | SHA1 | Date |
|---|---|---|
|  | aa16d4335d |  |
|  | bcbd8b3baa |  |
|  | cae5b62440 |  |
|  | fd9c033176 |  |
.gitignore (vendored): 1 change

@@ -4,6 +4,5 @@
 /.idea
 tests/**/__pycache__
 tests/tmp/**
-tests/config/zgs
 .vscode/*.json
 /0g-storage-contracts-dev
README.md: 73 changes

@@ -2,69 +2,32 @@
 
 ## Overview
 
-0G Storage is the storage layer for the ZeroGravity data availability (DA) system. The 0G Storage layer holds three important features:
-
-* Buit-in - It is natively built into the ZeroGravity DA system for data storage and retrieval.
-* General purpose - It is designed to support atomic transactions, mutable kv stores as well as archive log systems to enable wide range of applications with various data types.
-* Incentive - Instead of being just a decentralized database, 0G Storage introduces PoRA mining algorithm to incentivize storage network participants.
-
-To dive deep into the technical details, continue reading [0G Storage Spec.](docs/)
-
-## Integration
-
-We provide a [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage in their applications with the following features:
-
-* File Merkle Tree Class
-* Flow Contract Types
-* RPC methods support
-* File upload
-* Support browser environment
-* Tests for different environments (In Progress)
-* File download (In Progress)
-
-## Deployment
-
-Please refer to [Deployment](docs/run.md) page for detailed steps to compile and start a 0G Storage node.
-
-## Test
-
-### Prerequisites
-
-* Required python version: 3.8, 3.9, 3.10, higher version is not guaranteed (e.g. failed to install `pysha3`).
-* Install dependencies under root folder: `pip3 install -r requirements.txt`
-
-### Dependencies
-
-Python test framework will launch blockchain fullnodes at local for storage node to interact with. There are 2 kinds of fullnodes supported:
-
-* Conflux eSpace node (by default).
-* BSC node (geth).
-
-For Conflux eSpace node, the test framework will automatically compile the binary at runtime, and copy the binary to `tests/tmp` folder. For BSC node, the test framework will automatically download the latest version binary from [github](https://github.com/bnb-chain/bsc/releases) to `tests/tmp` folder.
-
-Alternatively, you could also manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note, do **NOT** copy released conflux binary on github, since block height of some CIPs are hardcoded.
-
-For testing, it's also dependent on the following repos:
-
-* [0G Storage Contract](https://github.com/0glabs/0g-storage-contracts): It essentially provides two abi interfaces for 0G Storage Node to interact with the on-chain contracts.
-  * ZgsFlow: It contains apis to submit chunk data.
-  * PoraMine: It contains apis to submit PoRA answers.
-* [0G Storage Client](https://github.com/0glabs/0g-storage-client): It is used to interact with certain 0G Storage Nodes to upload/download files.
-
-### Run Tests
-
-Go to the `tests` folder and run the following command to run all tests:
-
-```
-python test_all.py
-```
-
-or, run any single test, e.g.
-
-```
-python sync_test.py
-```
-
-## Contributing
-
-To make contributions to the project, please follow the guidelines [here](contributing.md).
+0G Storage is a decentralized data storage system designed to address the challenges of high-throughput and low-latency data storage and retrieval, in areas such as AI and gaming.
+
+In addition, it forms the storage layer for the 0G data availability (DA) system, with the cross-layer integration abstracted away from Rollup and AppChain builders.
+
+## System Architecture
+
+0G Storage consists of two main components:
+
+1. **Data Publishing Lane**: Ensures quick data availability and verification through the 0G Consensus network.
+2. **Data Storage Lane**: Manages large data transfers and storage using an erasure-coding mechanism for redundancy and reliability.
+
+Across the two lanes, 0G Storage supports the following features:
+
+* **General Purpose Design**: Supports atomic transactions, mutable key-value stores, and archive log systems, enabling a wide range of applications with various data types.
+* **Incentivized Participation**: Utilizes the PoRA (Proof of Random Access) mining algorithm to incentivize storage network participants.
+
+For in-depth technical details about 0G Storage, please read our [Intro to 0G Storage](https://docs.0g.ai/og-storage).
+
+## Documentation
+
+- If you want to run a node, please refer to the [Running a Node](https://docs.0g.ai/run-a-node/storage-node) guide.
+- If you want build a project using 0G storage, please refer to the [0G Storage SDK](https://docs.0g.ai/build-with-0g/storage-sdk) guide.
+
+## Support and Additional Resources
+We want to do everything we can to help you be successful while working on your contribution and projects. Here you'll find various resources and communities that may help you complete a project or contribute to 0G.
+
+### Communities
+- [0G Telegram](https://t.me/web3_0glabs)
+- [0G Discord](https://discord.com/invite/0glabs)
@@ -592,7 +592,7 @@ where
                 // peer that originally published the message.
                 match PubsubMessage::decode(&gs_msg.topic, &gs_msg.data) {
                     Err(e) => {
-                        debug!(topic = ?gs_msg.topic, error = ?e, "Could not decode gossipsub message");
+                        debug!(topic = ?gs_msg.topic, %propagation_source, error = ?e, "Could not decode gossipsub message");
                         //reject the message
                         if let Err(e) = self.gossipsub.report_message_validation_result(
                             &id,
@@ -601,6 +601,24 @@ where
                         ) {
                             warn!(message_id = %id, peer_id = %propagation_source, error = ?e, "Failed to report message validation");
                         }
+
+                        self.peer_manager.report_peer(
+                            &propagation_source,
+                            PeerAction::Fatal,
+                            ReportSource::Gossipsub,
+                            None,
+                            "gossipsub message decode error",
+                        );
+
+                        if let Some(source) = &gs_msg.source {
+                            self.peer_manager.report_peer(
+                                source,
+                                PeerAction::Fatal,
+                                ReportSource::Gossipsub,
+                                None,
+                                "gossipsub message decode error",
+                            );
+                        }
                     }
                     Ok(msg) => {
                         // Notify the network
tests/config/zgs/network/enr.dat: 1 line (new file)

@@ -0,0 +1 @@
+enr:-Ly4QJZwz9htAorBIx_otqoaRFPohX7NQJ31iBB6mcEhBiuPWsOnigc1ABQsg6tLU1OirQdLR6aEvv8SlkkfIbV72T8CgmlkgnY0gmlwhH8AAAGQbmV0d29ya19pZGVudGl0eZ8oIwAAAAAAADPyz8cpvYcPpUtQMmYOBrTPKn-UAAIAiXNlY3AyNTZrMaEDeDdgnDgLPkxNxB39jKb9f1Na30t6R9vVolpTk5zu-hODdGNwgir4g3VkcIIq-A
tests/config/zgs/network/key: 1 line (new file)

@@ -0,0 +1 @@
+(binary key material; contents not representable as text in this view)
@@ -40,14 +40,18 @@ class PrunerTest(TestFramework):
         for i in range(len(segments)):
             client_index = i % 2
             self.nodes[client_index].zgs_upload_segment(segments[i])
+        wait_until(lambda: self.nodes[0].zgs_get_file_info(data_root) is not None)
         wait_until(lambda: self.nodes[0].zgs_get_file_info(data_root)["finalized"])
+        wait_until(lambda: self.nodes[1].zgs_get_file_info(data_root) is not None)
         wait_until(lambda: self.nodes[1].zgs_get_file_info(data_root)["finalized"])
 
         self.nodes[2].admin_start_sync_file(0)
         self.nodes[3].admin_start_sync_file(0)
         wait_until(lambda: self.nodes[2].sync_status_is_completed_or_unknown(0))
+        wait_until(lambda: self.nodes[2].zgs_get_file_info(data_root) is not None)
         wait_until(lambda: self.nodes[2].zgs_get_file_info(data_root)["finalized"])
         wait_until(lambda: self.nodes[3].sync_status_is_completed_or_unknown(0))
+        wait_until(lambda: self.nodes[3].zgs_get_file_info(data_root) is not None)
         wait_until(lambda: self.nodes[3].zgs_get_file_info(data_root)["finalized"])
 
         for i in range(len(segments)):
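The four added `is not None` waits in the hunk above close a small race: immediately after upload, `zgs_get_file_info` can still return `None` because the node has not yet indexed the file, and evaluating `None["finalized"]` inside the predicate raises a `TypeError` rather than simply reading as "not finalized yet". A minimal sketch of the resulting two-step wait, using a hypothetical `wait_until_finalized` helper on top of the framework's `wait_until`:

```python
# Sketch only: wait_until and zgs_get_file_info are assumed to behave as in the
# test framework above; wait_until_finalized itself is a hypothetical helper.
def wait_until_finalized(node, data_root, timeout=60):
    # Step 1: wait for the RPC to return any file info at all (it may be None
    # until the node has seen the file's transaction).
    wait_until(lambda: node.zgs_get_file_info(data_root) is not None, timeout=timeout)
    # Step 2: only now is it safe to index the result and wait for finalization.
    wait_until(lambda: node.zgs_get_file_info(data_root)["finalized"], timeout=timeout)
```

With a helper like this, each pair of `wait_until` calls in the test would collapse to one call per node.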
@@ -128,10 +128,14 @@ class TestNode:
         poll_per_s = 4
         for _ in range(poll_per_s * self.rpc_timeout):
             if self.process.poll() is not None:
+                self.stderr.seek(0)
+                self.stdout.seek(0)
                 raise FailedToStartError(
                     self._node_msg(
-                        "exited with status {} during initialization".format(
-                            self.process.returncode
+                        "exited with status {} during initialization \n\nstderr: {}\n\nstdout: {}\n\n".format(
+                            self.process.returncode,
+                            self.stderr.read(),
+                            self.stdout.read(),
                         )
                     )
                 )
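The change above works because the node's stdout and stderr are captured in files whose positions are left at the end of whatever the child wrote; rewinding with `seek(0)` before `read()` is what makes the captured output actually appear in the error message. A self-contained sketch of the same pattern, assuming output is redirected to temporary files (the command and file names here are illustrative, not the framework's):

```python
import subprocess
import tempfile

# Capture the child's output in temp files, as the framework does for test nodes.
stdout = tempfile.NamedTemporaryFile(prefix="node_stdout_", suffix=".log")
stderr = tempfile.NamedTemporaryFile(prefix="node_stderr_", suffix=".log")
process = subprocess.Popen(["false"], stdout=stdout, stderr=stderr)  # "false" exits non-zero

if process.wait() != 0:
    # Rewind before reading; otherwise read() returns nothing and the failure stays opaque.
    stderr.seek(0)
    stdout.seek(0)
    raise RuntimeError(
        "exited with status {} during initialization \n\nstderr: {}\n\nstdout: {}\n\n".format(
            process.returncode, stderr.read(), stdout.read()
        )
    )
```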
@@ -3,14 +3,9 @@ import subprocess
 import tempfile
 
 from test_framework.blockchain_node import BlockChainNodeType, BlockchainNode
-from utility.utils import blockchain_rpc_port, arrange_port
+from utility.utils import blockchain_p2p_port, blockchain_rpc_port, blockchain_ws_port, blockchain_rpc_port_tendermint, pprof_port
 from utility.build_binary import build_zg
 
-ZGNODE_PORT_CATEGORY_WS = 0
-ZGNODE_PORT_CATEGORY_P2P = 1
-ZGNODE_PORT_CATEGORY_RPC = 2
-ZGNODE_PORT_CATEGORY_PPROF = 3
-
 def zg_node_init_genesis(binary: str, root_dir: str, num_nodes: int):
     assert num_nodes > 0, "Invalid number of blockchain nodes: %s" % num_nodes
 
@@ -26,7 +21,7 @@ def zg_node_init_genesis(binary: str, root_dir: str, num_nodes: int):
     os.mkdir(zgchaind_dir)
     
     log_file = tempfile.NamedTemporaryFile(dir=zgchaind_dir, delete=False, prefix="init_genesis_", suffix=".log")
-    p2p_port_start = arrange_port(ZGNODE_PORT_CATEGORY_P2P, 0)
+    p2p_port_start = blockchain_p2p_port(0)
 
     ret = subprocess.run(
         args=["bash", shell_script, zgchaind_dir, str(num_nodes), str(p2p_port_start)],
@@ -71,13 +66,13 @@ class ZGNode(BlockchainNode):
             # overwrite json rpc http port: 8545
             "--json-rpc.address", "127.0.0.1:%s" % blockchain_rpc_port(index),
             # overwrite json rpc ws port: 8546
-            "--json-rpc.ws-address", "127.0.0.1:%s" % arrange_port(ZGNODE_PORT_CATEGORY_WS, index),
+            "--json-rpc.ws-address", "127.0.0.1:%s" % blockchain_ws_port(index),
             # overwrite p2p port: 26656
-            "--p2p.laddr", "tcp://127.0.0.1:%s" % arrange_port(ZGNODE_PORT_CATEGORY_P2P, index),
+            "--p2p.laddr", "tcp://127.0.0.1:%s" % blockchain_p2p_port(index),
             # overwrite rpc port: 26657
-            "--rpc.laddr", "tcp://127.0.0.1:%s" % arrange_port(ZGNODE_PORT_CATEGORY_RPC, index),
+            "--rpc.laddr", "tcp://127.0.0.1:%s" % blockchain_rpc_port_tendermint(index),
             # overwrite pprof port: 6060
-            "--rpc.pprof_laddr", "127.0.0.1:%s" % arrange_port(ZGNODE_PORT_CATEGORY_PPROF, index),
+            "--rpc.pprof_laddr", "127.0.0.1:%s" % pprof_port(index),
             "--log_level", "debug"
         ]
 
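Putting the renamed helpers together, the `ZGNode` flags above map each default port (8545, 8546, 26656, 26657, 6060) to a per-index port drawn from a dedicated band. A rough sketch of what node index 0 ends up with, using the band formulas from `utility.utils` in the next hunk and an assumed base of 11000 standing in for `PortMin.n` (the framework assigns the real base elsewhere):

```python
# Illustrative only: 11000 stands in for PortMin.n; MAX_NODES and
# MAX_BLOCKCHAIN_NODES mirror the values in utility.utils.
BASE, MAX_NODES, MAX_BLOCKCHAIN_NODES = 11000, 100, 50
index = 0

flags = [
    "--json-rpc.address",    "127.0.0.1:%s" % (BASE + MAX_NODES + 2 * MAX_BLOCKCHAIN_NODES + index),  # blockchain_rpc_port -> 11200
    "--json-rpc.ws-address", "127.0.0.1:%s" % (BASE + MAX_NODES + 4 * MAX_BLOCKCHAIN_NODES + index),  # blockchain_ws_port -> 11300
    "--p2p.laddr",     "tcp://127.0.0.1:%s" % (BASE + MAX_NODES + 1 * MAX_BLOCKCHAIN_NODES + index),  # blockchain_p2p_port -> 11150
    "--rpc.laddr",     "tcp://127.0.0.1:%s" % (BASE + MAX_NODES + 5 * MAX_BLOCKCHAIN_NODES + index),  # blockchain_rpc_port_tendermint -> 11350
    "--rpc.pprof_laddr",     "127.0.0.1:%s" % (BASE + MAX_NODES + 6 * MAX_BLOCKCHAIN_NODES + index),  # pprof_port -> 11400
]
print(flags)
```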
@@ -11,6 +11,7 @@ class PortMin:
 
 
 MAX_NODES = 100
+MAX_BLOCKCHAIN_NODES = 50
 
 
 def p2p_port(n):
@@ -23,18 +24,25 @@ def rpc_port(n):
 
 
 def blockchain_p2p_port(n):
-    return PortMin.n + 2 * MAX_NODES + n
+    assert n <= MAX_BLOCKCHAIN_NODES
+    return PortMin.n + MAX_NODES + MAX_BLOCKCHAIN_NODES + n
 
 
 def blockchain_rpc_port(n):
-    return PortMin.n + 3 * MAX_NODES + n
+    return PortMin.n + MAX_NODES + 2 * MAX_BLOCKCHAIN_NODES + n
 
 
 def blockchain_rpc_port_core(n):
-    return PortMin.n + 4 * MAX_NODES + n
+    return PortMin.n + MAX_NODES + 3 * MAX_BLOCKCHAIN_NODES + n
 
-def arrange_port(category: int, node_index: int) -> int:
-    return PortMin.n + (100 + category) * MAX_NODES + node_index
+def blockchain_ws_port(n):
+    return PortMin.n + MAX_NODES + 4 * MAX_BLOCKCHAIN_NODES + n
+
+def blockchain_rpc_port_tendermint(n):
+    return PortMin.n + MAX_NODES + 5 * MAX_BLOCKCHAIN_NODES + n
+
+def pprof_port(n):
+    return PortMin.n + MAX_NODES + 6 * MAX_BLOCKCHAIN_NODES + n
 
 def wait_until(predicate, *, attempts=float("inf"), timeout=float("inf"), lock=None):
     if attempts == float("inf") and timeout == float("inf"):
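The rewritten helpers allocate one `MAX_NODES`-wide band for storage-node ports, then one `MAX_BLOCKCHAIN_NODES`-wide band per blockchain port category, so each category occupies a contiguous 50-port range that cannot overlap its neighbours as long as the node index stays below `MAX_BLOCKCHAIN_NODES`. A quick arithmetic check under an assumed base of 11000 standing in for `PortMin.n`:

```python
# Illustrative only: 11000 is an assumed stand-in for PortMin.n.
MAX_NODES = 100
MAX_BLOCKCHAIN_NODES = 50
BASE = 11000

def blockchain_p2p_port(n):
    return BASE + MAX_NODES + MAX_BLOCKCHAIN_NODES + n      # band 11150..11199

def blockchain_rpc_port(n):
    return BASE + MAX_NODES + 2 * MAX_BLOCKCHAIN_NODES + n  # band 11200..11249

def pprof_port(n):
    return BASE + MAX_NODES + 6 * MAX_BLOCKCHAIN_NODES + n  # band 11400..11449

# The highest p2p port sits just below the lowest rpc port: the bands never overlap.
assert blockchain_p2p_port(MAX_BLOCKCHAIN_NODES - 1) + 1 == blockchain_rpc_port(0)
print(blockchain_p2p_port(0), blockchain_rpc_port(3), pprof_port(0))  # 11150 11203 11400
```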