v1.4.18-patch-1 (#216)

* experiment: verify in channel (#215)

* Change volume mapping so .config folder is created inside node folder and not on root (#214)

* Update main.go to fix Q logo (#213)

The Q logo was not appearing correctly on the terminal while running the node. Added a newline character after "Signature check passed" to fix it.
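The fix amounts to terminating the status line explicitly so the logo starts on a fresh row. A minimal sketch of the idea (hypothetical function name, not the actual main.go code):

```go
package main

import "fmt"

// signatureCheckBanner returns the status message with an explicit trailing
// newline so that whatever is printed next (the large Q logo) starts on a
// fresh line instead of continuing on the same row.
func signatureCheckBanner() string {
	return "Signature check passed\n"
}

func main() {
	fmt.Print(signatureCheckBanner())
}
```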

* switched get node info response to use masterClock frame for maxFrame field (#212)

* fix: keys file remains null (#217)

* Revert "Change volume mapping so .config folder is created inside node folder…" (#218)

This reverts commit 27f50a92c6f5e340fd4106da828c6e8cdc12116b.

* Docker split take 2 (#219)

* split runtime docker files into a docker subfolder

* split DOCKER-README.md

* updated docker instructions

* add restore command

* add image update related tasks

* add command to test if P2P port is visible

* Remove bootstrap peer (#189)

* Change bootstrap servers to DHT-only peers (#187)

* support voucher file-based claims (#183)

* Change bootstrap servers to DHT-only peers

Changing my bootstrap servers to DHT-only peers with somewhat lower
specs. One of the new ones is in the US and the other one is in
Switzerland. Both use reliable providers and have 10Gbps network
interfaces.

---------

Co-authored-by: Cassandra Heart <7929478+CassOnMars@users.noreply.github.com>

* Don't run self-test in DHT-only mode (#186)

* support voucher file-based claims (#183)

* Don't run self-test in DHT-only mode

The node tries to create a self-test when run with the `-dht-only`
flag, but it doesn't load the KZG ceremony data in DHT-only mode,
which leads to a crash.

Don't run self-test when the `-dht-only` flag is set.

I tested by starting a node locally with and without existing
self-test and with the `-dht-only` flag.
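The guard described above reduces to a simple predicate; a minimal sketch under assumed names (the real node wires this into its startup path):

```go
package main

import "fmt"

// shouldRunSelfTest reports whether the node should create a self-test.
// In DHT-only mode the KZG ceremony data is never loaded, so attempting
// the self-test would crash; it must be skipped.
func shouldRunSelfTest(dhtOnly bool) bool {
	return !dhtOnly
}

func main() {
	fmt.Println(shouldRunSelfTest(true))  // -dht-only set: skip self-test
	fmt.Println(shouldRunSelfTest(false)) // normal node: run self-test
}
```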

---------

Co-authored-by: Cassandra Heart <7929478+CassOnMars@users.noreply.github.com>

* Embed json files in binary (#182)

* Embed ceremony.json in binary

* Embed retroactive_peers.json in binary

* Signers build and verification tasks (#181)

* add signers specific Taskfile

* add verify tasks

* move signer task under signer folder

* create docker image specific for signers

* map current user into docker image and container

* ignore node-tmp-*

* add verify:build:internal

* prevent tasks with docker commands from being run inside a container

* rename *:internal to *:container

* add README.md

* add pem files to git

* Updating Q Guide link (#173)

* Update README.md

Updated link to Quilibrium guide to new website

* Update README.md

---------

Co-authored-by: littleblackcloud <163544315+littleblackcloud@users.noreply.github.com>
Co-authored-by: Agost Biro <5764438+agostbiro@users.noreply.github.com>
Co-authored-by: Cassandra Heart <7929478+CassOnMars@users.noreply.github.com>
Co-authored-by: Demipoet <161999657+demipoet@users.noreply.github.com>

* Signer related fixes (#220)

* add pems 16 and 17

* remove .bin extension from generated binaries

* no more json files to copy to docker image

* feat: recalibrate self-test on the fly (#221)
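The recalibration keeps a running difficulty metric and grants the next proof 20% headroom (the 12/10 skew factor used in this patch). A standalone sketch of that arithmetic (helper name is hypothetical):

```go
package main

import "fmt"

// nextSkew returns the time allowance for the next self-test proof: the
// current difficulty metric plus 20% headroom (the 12/10 factor the
// engine applies when recalibrating on the fly).
func nextSkew(difficultyMetric int64) int64 {
	return (difficultyMetric * 12) / 10
}

func main() {
	fmt.Println(nextSkew(100000)) // initial metric used by the engine
	// After each proof reports a new metric, the skew is recomputed from it.
	fmt.Println(nextSkew(80000))
}
```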

* fix: switch RPC for peer and node info (#222)

* replace binaries with patch build

* add digests

* Signatory #13 added

* Signatory #4 added (#223)

* Signatory #14 added

* Signatory #17 added

* Signatory #12 added

* Signatory #3 added

* Signatory #2 added

* Signatory #16 added

* Signatory #1 added

* Signatory #8 added

* remove binaries, release ready

---------

Co-authored-by: AvAcalho <158583728+AvAcalho@users.noreply.github.com>
Co-authored-by: Ravish Ahmad <ravishahmad16@gmail.com>
Co-authored-by: luk <luk@luktech.dev>
Co-authored-by: Marius Scurtescu <marius.scurtescu@gmail.com>
Co-authored-by: littleblackcloud <163544315+littleblackcloud@users.noreply.github.com>
Co-authored-by: Agost Biro <5764438+agostbiro@users.noreply.github.com>
Co-authored-by: Demipoet <161999657+demipoet@users.noreply.github.com>
Co-authored-by: 0xOzgur <29779769+0xOzgur@users.noreply.github.com>
This commit is contained in:
Cassandra Heart, 2024-05-27 00:10:15 -05:00, committed by GitHub
commit 13bac91367, parent 2bbd1e0690
No known key found for this signature in database. GPG Key ID: B5690EEEBB952194
58 changed files with 575 additions and 348 deletions


@@ -1,15 +1,3 @@
 # Use a custom docker image name
 # Default: quilibrium
 QUILIBRIUM_IMAGE_NAME=
-# Use a custom P2P port.
-# Default: 8336
-QUILIBRIUM_P2P_PORT=
-# Use a custom gRPC port.
-# Default: 8337
-QUILIBRIUM_GRPC_PORT=
-# Use a custom REST port.
-# Default: 8338
-QUILIBRIUM_REST_PORT=


@@ -1,81 +1,37 @@
 # Quilibrium Docker Instructions
-## WARNING
-> [!WARNING]
-> The Quilibrium docker container requires host configuration changes.
-There are extreme buffering requirements, especially during sync, and these in turn require `sysctl`
-configuration changes that unfortunately are not supported by Docker. But if these changes are made on
-the host machine, then luckily containers seem to automatically have the larger buffers.
-The buffer related `sysctl` settings are `net.core.rmem_max` and `net.core.wmem_max` and they both
-should be set to `600,000,000` bytes. This value allows pre-buffering of the entire maximum payload
-for sync.
-You can tell that the buffer size is not large enough by noticing this log entry at beginning when
-Quilibrium starts, a few lines below the large logo:
-> failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB).
-> See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
-To read the currently set values:
-```shell
-sysctl -n net.core.rmem_max
-sysctl -n net.core.wmem_max
-```
-To set new values, this is not a persistent change:
-```shell
-sudo sysctl -w net.core.rmem_max=600000000
-sudo sysctl -w net.core.wmem_max=600000000
-```
-To persistently set the new values add a configuration file named `20-quilibrium.conf` to
-`/etc/sysctl.d/`. The file content should be:
-```
-# Quilibrium buffering requirements, especially during sync.
-# The value could be as low as 26214400, but everything would be slower.
-net.core.rmem_max = 600000000
-net.core.wmem_max = 600000000
-```
 ## Build
-The only requirements are `git` (to checkout the repository) and docker (to build the image and run the container).
+The only requirements are `git` (to checkout the repository) and docker (to build the image).
 Golang does not have to be installed, the docker image build process uses a build stage that provides the
 correct Go environment and compiles the node down to one command.
 In the repository root folder, where the [Dockerfile](Dockerfile) file is, build the docker image:
 ```shell
-docker build --build-arg GIT_COMMIT=$(git log -1 --format=%h) -t quilibrium -t quilibrium:1.4.2 .
+docker build --build-arg GIT_COMMIT=$(git log -1 --format=%h) -t quilibrium -t quilibrium:1.4.16 .
 ```
-Use latest version instead of `1.4.2`.
+Use latest version instead of `1.4.16`.
-> [!TIP]
-> You can use the `task build` command instead. See the [Task](#task) section below.
-The image that is built is light and safe. It is based on Alpine Linux with the Quilibrium node binary, not the
+The image that is built is light and safe. It is based on Alpine Linux with the Quilibrium node binary, no
 source code, nor the Go development environment. The image also has the `grpcurl` tool that can be used to
 query the gRPC interface.
 ### Task
-You can also use the [Task](https://taskfile.dev/) tool, it a simple build tool that takes care of extracting
-parameters, building the image and running the container. The tasks are all defined in [Taskfile.yaml](Taskfile.yaml).
+You can also use the [Task](https://taskfile.dev/) tool, it is a simple build tool that takes care of extracting
+parameters and building the image. The tasks are all defined in [Taskfile.yaml](Taskfile.yaml).
 You can optionally create an `.env` file, in the same repository root folder to override specific parameters. Right now
 only one optional env var is supported and that is `QUILIBRIUM_IMAGE_NAME`, if you want to change the default
-image name from `quilibrium` to something else. If you are pushing your images to Github then you have to follow the
-Github naming convention and use a name like `ghcr.io/mscurtescu/ceremonyclient`.
+image name from `quilibrium` to something else. If you are pushing your images to GitHub then you have to follow the
+GitHub naming convention and use a name like `ghcr.io/mscurtescu/ceremonyclient`.
-Bellow there are example interaction with `Task`.
+Bellow there are example interactions with `Task`.
 The node version is extracted from [node/main.go](node/main.go). This version string is used to tag the image. The git
-repo, branch and commit are read throught the `git` command and depend on the current state of your working
-directory (one what branch and at what commit you are). These last three values are used to label the image.
+repo, branch and commit are read through the `git` command and depend on the current state of your working
+directory (on what branch and at what commit you are). These last three values are used to label the image.
 List tasks:
 ```shell
@@ -94,134 +50,4 @@ task build
 ## Run
-You can run Quilibrium on the same machine where you built the image, from the same repository root
-folder where [docker-compose.yml](docker-compose.yml) is.
+In order to run a Quilibrium node using the docker image follow the instructions in the [docker](docker) subfolder.
-You can also copy `docker-compose.yml` to a new folder on a server and run it there. In this case you
-have to have a way to push your image to a Docker image repo and then pull that image on the server.
-Github offers such an image repo and a way to push and pull images using special authentication
-tokens. See
-[Working with the Container registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry).
-Run Quilibrium in a container:
-```shell
-docker compose up -d
-```
-> [!TIP]
-> You can alternatively use the `task up` command. See the [Task](#task-1) section above.
-A `.config/` subfolder will be created under the current folder, this is mapped inside the container.
-Make sure you backup `config.yml` and `keys.yml`.
-### Task
-Similarly to building the image you can also use `Task`.
-Start the container through docker compose:
-```shell
-task up
-```
-Show the logs through docker compose:
-```shell
-task logs
-```
-Drop into a shell inside the running container:
-```shell
-task shell
-```
-Stop the running container(s):
-```shell
-task down
-```
-Backup the critical configuration:
-```shell
-task backup
-```
-The above command will create a `backup.tar.gz` archive in the current folder, you still have to copy this
-file from the server into a safe location. The command adds the `config.yml` and `keys.yml` files from
-the `.config/` subfolder to the archive, with the ownership of the current user.
-### Resource management
-To ensure that your client performs optimally within a specific resource configuration, you can specify
-resource limits and reservations in the node configuration as illustrated below.
-This configuration helps in deploying the client with controlled resource usage, such as CPU and memory,
-to avoid overconsumption of resources in your environment.
-The [docker-compose.yml](docker-compose.yml) file already specifies resources following the currently
-recommended hardware requirements.
-```yaml
-services:
-  node:
-    # Some other configuration sections here
-    deploy:
-      resources:
-        limits:
-          cpus: '4'     # Maximum CPU count that the container can use
-          memory: '16G' # Maximum memory that the container can use
-        reservations:
-          cpus: '2'     # CPU count that the container initially requests
-          memory: '8G'  # Memory that the container initially request
-```
-### Customizing docker-compose.yml
-If you want to change certain parameters in [docker-compose.yml](docker-compose.yml) it is better not
-to edit the file directly as new versions pushed through git would overwrite your changes. A more
-flexible solution is to create another file called `docker-compose.override.yml` right next to it
-and specifying the necessary overriding changes there.
-For example:
-```yaml
-services:
-  node:
-    image: ghcr.io/mscurtescu/ceremonyclient
-    restart: on-failure:7
-```
-The above will override the image name and also the restart policy.
-To check if your overrides are being picked up run the following command:
-```shell
-docker compose config
-```
-This will output the merged and canonical compose file that will be used to run the container(s).
-## Interact with a running container
-Drop into a shell inside a running container:
-```shell
-docker compose exec -it node sh
-```
-Watch the logs:
-```shell
-docker compose logs -f
-```
-Get the node related info (peer id, version, max frame and balance):
-```shell
-docker compose exec node node -node-info
-```
-Run the DB console:
-```shell
-docker compose exec node node -db-console
-```
-Run the Quilibrium client:
-```shell
-docker compose exec node qclient help
-docker compose exec node qclient token help
-docker compose exec node qclient token balance
-```


@@ -36,8 +36,6 @@ LABEL org.opencontainers.image.revision=$GIT_COMMIT
 COPY --from=build /go/bin/node /usr/local/bin
 COPY --from=build /go/bin/grpcurl /usr/local/bin
-COPY --from=build /opt/ceremonyclient/node/ceremony.json /root
-COPY --from=build /opt/ceremonyclient/node/retroactive_peers.json /root
 COPY --from=build /opt/ceremonyclient/client/qclient /usr/local/bin
 WORKDIR /root


@@ -8,8 +8,6 @@ dotenv:
 vars:
   VERSION:
     sh: cat node/config/version.go | grep -A 1 "func GetVersion() \[\]byte {" | grep -Eo '0x[0-9a-fA-F]+' | xargs printf "%d.%d.%d"
-  PROJECT_NAME: quilibrium
-  SERVICE_NAME: node
   GIT_REPO:
     sh: git config --get remote.origin.url | sed 's/\.git$//'
   GIT_BRANCH:
@@ -46,54 +44,6 @@ tasks:
         ${QUILIBRIUM_IMAGE_NAME:-quilibrium}:{{.VERSION}} \
         >/dev/null 2>/dev/null
-  up:
-    desc: Run a new Quilibrium container, through docker compose.
-    cmds:
-      - docker compose up -d
-  down:
-    desc: Take down the Quilibrium container, through docker compose.
-    cmds:
-      - docker compose down
-  shell:
-    desc: Drop into a shell inside the running container.
-    cmds:
-      - docker compose exec -it {{.SERVICE_NAME}} sh
-  logs:
-    desc: Print the logs of the running Quilibrium container.
-    cmds:
-      - docker compose logs -f
-  logs-folder:
-    desc: Show where Docker stores the logs for the Quilibrium node. You need root permissions to access the folder.
-    cmds:
-      - "docker container inspect {{.PROJECT_NAME}}-{{.SERVICE_NAME}}-1 | grep LogPath | cut -d : -f 2 | cut -d '\"' -f 2 | xargs dirname"
-  backup:
-    desc: Create a backup file with the critical configuration files.
-    prompt: You will be prompted for root access. Make sure you verify the generated backup file. Continue?
-    sources:
-      - '.config/config.yml'
-      - '.config/keys.yml'
-    outputs:
-      - 'backup.tar.gz'
-    cmds:
-      - |
-        export TMP_DIR=$(mktemp -d)
-        export TASK_DIR=$(pwd)
-        sudo cp .config/config.yml $TMP_DIR
-        sudo cp .config/keys.yml $TMP_DIR
-        sudo chown $(whoami):$(id -gn) $TMP_DIR/*
-        cd $TMP_DIR
-        tar -czf $TASK_DIR/backup.tar.gz *
-        cd $TASK_DIR
-        sudo rm -rf $TMP_DIR
-        echo "Backup saved to: backup.tar.gz"
-        echo "Do not assume you have a backup unless you verify it!!!"
-    silent: true
   github:login:
     desc: Login to GitHub container registry.
     cmds:

docker/.env.example (new file, 18 lines)

@ -0,0 +1,18 @@
# Use a custom docker image name
# Default: quilibrium
QUILIBRIUM_IMAGE_NAME=
# Use a custom P2P port.
# Default: 8336
QUILIBRIUM_P2P_PORT=
# Use a custom gRPC port.
# Default: 8337
QUILIBRIUM_GRPC_PORT=
# Use a custom REST port.
# Default: 8338
QUILIBRIUM_REST_PORT=
# The public DNS name or IP address for this Quilibrium node.
NODE_PUBLIC_NAME=

docker/README.md (new file, 165 lines)

@ -0,0 +1,165 @@
# Quilibrium Docker Instructions
## Install Docker on a Server
> [!IMPORTANT]
> You have to install Docker Engine on your server; you don't want to install Docker Desktop.
The official Linux installation instructions start here:
https://docs.docker.com/engine/install/
For Ubuntu you can start here:
https://docs.docker.com/engine/install/ubuntu/
While there are several installation methods, you really want to use the apt repository, this way you get
automatic updates.
Make sure you also follow the Linux post-installation steps:
https://docs.docker.com/engine/install/linux-postinstall/
## Install Docker on a Desktop
For a Linux desktop follow the server installation steps above, do not install Docker Desktop for Linux unless
you know what you are doing.
For Mac and Windows follow the corresponding Docker Desktop installation links from the top of:
https://docs.docker.com/engine/install/
## Running a Node
Copy [docker-compose.yml](docker-compose.yml) to a new folder on a server. The official
Docker image provided by Quilibrium Network will be pulled.
A `.config/` subfolder will be created in this folder, this will hold both configuration
and the node storage.
Optionally you can also copy [Taskfile.yaml](Taskfile.yaml) and [.env.example](.env.example) to the
server, if you are planning to use them. See below.
### New Instance
If you are starting a brand new node then simply run Quilibrium in a container with:
```shell
docker compose up -d
```
A `.config/` subfolder will be created under the current folder, this is mapped inside the container.
> [!IMPORTANT]
> Once the node is running (the `-node-info` command shows a balance) make sure you backup
> `config.yml` and `keys.yml`.
### Restore Previous Instance
If you have both `config.yml` and `keys.yml` backed up from a previous instance then follow these
steps to restore them:
1. Create an empty `.config/` subfolder.
2. Copy `config.yml` and `keys.yml` to `.config/`.
3. Start the node with:
```shell
docker compose up -d
```
### Task
You can also use the [Task](https://taskfile.dev/) tool, it is a simple build tool that takes care of running
complex commands and interacting with the container. The tasks are all defined in
[Taskfile.yaml](Taskfile.yaml).
You can optionally create an `.env` file, in the same folder to override specific parameters. Right now
only one optional env var is supported with `Task` and that is `QUILIBRIUM_IMAGE_NAME`, if you want to change the
default image name from `quilibrium` to something else. If you are pushing your images to GitHub, for example, then you
have to follow the GitHub naming convention and use a name like `ghcr.io/mscurtescu/ceremonyclient`. See the
[.env.example](.env.example) sample file, and keep in mind that `.env` is shared with
[docker-compose.yml](docker-compose.yml).
Below there are example interactions with `Task`.
Start the container through docker compose:
```shell
task up
```
Show the logs through docker compose:
```shell
task logs
```
Drop into a shell inside the running container:
```shell
task shell
```
Stop the running container(s):
```shell
task down
```
Backup the critical configuration:
```shell
task backup
```
The above command will create a `backup.tar.gz` archive in the current folder, you still have to copy this
file from the server into a safe location. The command adds the `config.yml` and `keys.yml` files from
the `.config/` subfolder to the archive, with the ownership of the current user.
## Customizing docker-compose.yml
If you want to change certain parameters in [docker-compose.yml](docker-compose.yml) it is better not
to edit the file directly as new versions pushed through git would overwrite your changes. A more
flexible solution is to create another file called `docker-compose.override.yml` right next to it
and specifying the necessary overriding changes there.
For example:
```yaml
services:
node:
image: ghcr.io/mscurtescu/ceremonyclient
restart: on-failure:7
```
The above will override the image name and also the restart policy.
You can optionally create an `.env` file, in the same folder to override specific parameters. See the
[.env.example](.env.example) sample file, and keep in mind that `.env` is shared with
[Taskfile.yaml](Taskfile.yaml). You can customize the image name and port mappings.
To check if your overrides are being picked up run the following command:
```shell
docker compose config
```
This will output the merged and canonical compose file that will be used to run the container(s).
## Interact with a running container
Drop into a shell inside a running container:
```shell
docker compose exec -it node sh
```
Watch the logs:
```shell
docker compose logs -f
```
Get the node related info (peer id, version, max frame and balance):
```shell
docker compose exec node node -node-info
```
Run the DB console:
```shell
docker compose exec node node -db-console
```
Run the Quilibrium client:
```shell
docker compose exec node qclient help
docker compose exec node qclient token help
docker compose exec node qclient token balance
```

docker/Taskfile.yaml (new file, 114 lines)

@@ -0,0 +1,114 @@
# https://taskfile.dev

version: '3'

dotenv:
  - '.env'

vars:
  PROJECT_NAME: quilibrium
  SERVICE_NAME: node

tasks:
  up:
    desc: Run a new Quilibrium and related containers, through docker compose.
    cmds:
      - docker compose up -d

  down:
    desc: Take down the Quilibrium containers, through docker compose.
    cmds:
      - docker compose down

  pull:
    desc: Pull new Docker images corresponding to the Quilibrium containers, through docker compose.
    cmds:
      - docker compose pull

  update:
    desc: Pull new Docker images corresponding to the Quilibrium containers, then restart all containers.
    cmds:
      - task: pull
      - task: down
      - task: up

  shell:
    desc: Drop into a shell inside the running container.
    cmds:
      - docker compose exec -it {{.SERVICE_NAME}} sh

  logs:
    desc: Print the logs of the running Quilibrium container.
    cmds:
      - docker compose logs -f

  logs-folder:
    desc: Show where Docker stores the logs for the Quilibrium node. You need root permissions to access the folder.
    cmds:
      - "docker container inspect {{.PROJECT_NAME}}-{{.SERVICE_NAME}}-1 | grep LogPath | cut -d : -f 2 | cut -d '\"' -f 2 | xargs dirname"

  node-info:
    desc: Displays node related info for a running node.
    cmds:
      - docker compose exec node node -node-info

  backup:
    desc: Create a backup file with the critical configuration files.
    prompt: You will be prompted for root access. Make sure you verify the generated backup file. Continue?
    preconditions:
      - sh: 'test -d .config'
        msg: '.config does not exists!'
      - sh: 'test -f .config/config.yml'
        msg: '.config/config.yml does not exists!'
      - sh: 'test -f .config/keys.yml'
        msg: '.config/keys.yml does not exists!'
      - sh: '! test -f backup.tar.gz'
        msg: 'A previous backup.tar.gz found in the current folder!'
    sources:
      - '.config/config.yml'
      - '.config/keys.yml'
    generates:
      - 'backup.tar.gz'
    cmds:
      - |
        export TMP_DIR=$(mktemp -d)
        export TASK_DIR=$(pwd)
        sudo cp .config/config.yml $TMP_DIR
        sudo cp .config/keys.yml $TMP_DIR
        sudo chown $(whoami):$(id -gn) $TMP_DIR/*
        cd $TMP_DIR
        tar -czf $TASK_DIR/backup.tar.gz *
        cd $TASK_DIR
        sudo rm -rf $TMP_DIR
        echo "Backup saved to: backup.tar.gz"
        echo "Do not assume you have a backup unless you verify it!!!"
    silent: true

  restore:
    desc: Restores a backup file with the critical configuration files.
    preconditions:
      - sh: '! test -d .config'
        msg: '.config already exists, restore cannot be performed safely!'
      - sh: 'test -f backup.tar.gz'
        msg: 'backup.tar.gz not found in the current folder!'
    sources:
      - 'backup.tar.gz'
    generates:
      - '.config/config.yml'
      - '.config/keys.yml'
    cmds:
      - |
        mkdir .config
        tar -xzf backup.tar.gz -C .config
        echo "Backup restored from: backup.tar.gz"
    silent: true

  test:port:
    desc: Test if the P2P port is visible to the world.
    preconditions:
      - sh: 'test -x "$(command -v nc)"'
        msg: 'nc is not installed, install with "sudo apt install netcat"'
      - sh: 'test -n "$NODE_PUBLIC_NAME"'
        msg: 'The public DNS name or IP address of the server must be set in NODE_PUBLIC_NAME.'
    cmds:
      - 'nc -vzu ${NODE_PUBLIC_NAME} ${QUILIBRIUM_P2P_PORT:=8336}'


@@ -6,7 +6,7 @@ import (
 )

 func GetMinimumVersionCutoff() time.Time {
-	return time.Date(2024, time.May, 24, 4, 0, 0, 0, time.UTC)
+	return time.Date(2024, time.May, 28, 3, 0, 0, 0, time.UTC)
 }

 func GetMinimumVersion() []byte {
@@ -27,3 +27,7 @@ func FormatVersion(version []byte) string {
 		version[0], version[1], version[2],
 	)
 }
+
+func GetPatchNumber() byte {
+	return 0x01
+}


@@ -132,6 +132,8 @@ func (e *MasterClockConsensusEngine) handleSelfTestReport(
 		e.logger.Warn(
 			"received invalid proof from peer",
 			zap.String("peer_id", peer.ID(peerID).String()),
+			zap.Int("proof_size", len(report.Proof)),
+			zap.Uint32("cores", report.Cores),
 		)
 		e.pubSub.SetPeerScore(peerID, -1000)
 		return errors.Wrap(errors.New("invalid report"), "handle self test report")
@@ -148,6 +150,7 @@ func (e *MasterClockConsensusEngine) handleSelfTestReport(
 		return nil
 	}

+	info.DifficultyMetric = report.DifficultyMetric
 	info.MasterHeadFrame = report.MasterHeadFrame

 	if info.Bandwidth <= 1048576 {
@@ -169,7 +172,8 @@ func (e *MasterClockConsensusEngine) handleSelfTestReport(
 	timestamp := binary.BigEndian.Uint64(proof[:8])
 	proof = proof[8:]

-	// Ignore outdated reports, give 3 minutes for propagation delay
+	// Ignore outdated reports, give 3 minutes + proof time for propagation
+	// delay
 	if int64(timestamp) < (time.Now().UnixMilli() - (480 * 1000)) {
 		return nil
 	}
@@ -181,25 +185,16 @@ func (e *MasterClockConsensusEngine) handleSelfTestReport(
 	for i := 0; i < len(proofs); i++ {
 		proofs[i] = proof[i*516 : (i+1)*516]
 	}
-	if !e.frameProver.VerifyChallengeProof(
-		challenge,
-		int64(timestamp),
-		report.DifficultyMetric,
-		proofs,
-	) {
-		e.logger.Warn(
-			"received invalid proof from peer",
-			zap.String("peer_id", peer.ID(peerID).String()),
-		)
-		e.pubSub.SetPeerScore(peerID, -1000)
-		return errors.Wrap(
-			errors.New("invalid report"),
-			"handle self test report",
-		)
-	}
+	go func() {
+		e.verifyTestCh <- verifyChallenge{
+			peerID:           peerID,
+			challenge:        challenge,
+			timestamp:        int64(timestamp),
+			difficultyMetric: report.DifficultyMetric,
+			proofs:           proofs,
+		}
+	}()
-	info.LastSeen = time.Now().UnixMilli()

 	return nil
 }
@@ -264,6 +259,7 @@ func (e *MasterClockConsensusEngine) handleSelfTestReport(
 	return nil
 }

+// This does not publish any longer, frames strictly are picked up from sync
 func (e *MasterClockConsensusEngine) publishProof(
 	frame *protobufs.ClockFrame,
 ) error {
@@ -274,17 +270,6 @@ func (e *MasterClockConsensusEngine) publishProof(
 	e.masterTimeReel.Insert(frame, false)

-	peers, err := e.GetMostAheadPeers()
-	if err != nil || len(peers) == 0 {
-		// publish if we don't see anyone (empty peer list) or if we're the most
-		// ahead:
-		e.report.MasterHeadFrame = frame.FrameNumber
-		if err := e.publishMessage(e.filter, e.report); err != nil {
-			e.logger.Debug("error publishing message", zap.Error(err))
-		}
-	}
 	e.state = consensus.EngineStateCollecting

 	return nil
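The outdated-report check above compares the report timestamp against a fixed 480-second window (3 minutes plus proof time). As a standalone predicate (hypothetical name, milliseconds as in the engine):

```go
package main

import "fmt"

// isOutdated reports whether a self-test report timestamp (in milliseconds)
// is older than the 480-second propagation window used by the engine.
func isOutdated(reportMillis, nowMillis int64) bool {
	return reportMillis < nowMillis-(480*1000)
}

func main() {
	now := int64(1_000_000_000)
	fmt.Println(isOutdated(now, now))         // fresh report: not outdated
	fmt.Println(isOutdated(now-480_001, now)) // just past the window: outdated
}
```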


@ -3,6 +3,7 @@ package master
import ( import (
"bytes" "bytes"
"context" "context"
gcrypto "crypto"
"crypto/rand" "crypto/rand"
"encoding/binary" "encoding/binary"
"encoding/hex" "encoding/hex"
@ -11,6 +12,8 @@ import (
"sync" "sync"
"time" "time"
"github.com/iden3/go-iden3-crypto/poseidon"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/mr-tron/base58" "github.com/mr-tron/base58"
"github.com/pkg/errors" "github.com/pkg/errors"
"go.uber.org/zap" "go.uber.org/zap"
@ -62,6 +65,7 @@ type MasterClockConsensusEngine struct {
report *protobufs.SelfTestReport report *protobufs.SelfTestReport
frameValidationCh chan *protobufs.ClockFrame frameValidationCh chan *protobufs.ClockFrame
bandwidthTestCh chan []byte bandwidthTestCh chan []byte
verifyTestCh chan verifyChallenge
currentReceivingSyncPeers int currentReceivingSyncPeers int
currentReceivingSyncPeersMx sync.Mutex currentReceivingSyncPeersMx sync.Mutex
} }
@ -126,6 +130,7 @@ func NewMasterClockConsensusEngine(
report: report, report: report,
frameValidationCh: make(chan *protobufs.ClockFrame), frameValidationCh: make(chan *protobufs.ClockFrame),
bandwidthTestCh: make(chan []byte), bandwidthTestCh: make(chan []byte),
verifyTestCh: make(chan verifyChallenge),
} }
e.addPeerManifestReport(e.pubSub.GetPeerID(), report) e.addPeerManifestReport(e.pubSub.GetPeerID(), report)
@ -134,6 +139,12 @@ func NewMasterClockConsensusEngine(
panic(errors.Wrap(err, "could not parse filter value")) panic(errors.Wrap(err, "could not parse filter value"))
} }
e.getProvingKey(engineConfig)
if err := e.createCommunicationKeys(); err != nil {
panic(err)
}
logger.Info("constructing consensus engine") logger.Info("constructing consensus engine")
return e return e
@ -170,7 +181,8 @@ func (e *MasterClockConsensusEngine) Start() <-chan error {
panic(err) panic(err)
} }
if head.FrameNumber > newFrame.FrameNumber || newFrame.FrameNumber-head.FrameNumber > 128 { if head.FrameNumber > newFrame.FrameNumber ||
newFrame.FrameNumber-head.FrameNumber > 128 {
e.logger.Debug( e.logger.Debug(
"frame out of range, ignoring", "frame out of range, ignoring",
zap.Uint64("number", newFrame.FrameNumber), zap.Uint64("number", newFrame.FrameNumber),
@ -186,6 +198,8 @@ func (e *MasterClockConsensusEngine) Start() <-chan error {
e.masterTimeReel.Insert(newFrame, false) e.masterTimeReel.Insert(newFrame, false)
case peerId := <-e.bandwidthTestCh: case peerId := <-e.bandwidthTestCh:
e.performBandwidthTest(peerId) e.performBandwidthTest(peerId)
case verifyTest := <-e.verifyTestCh:
e.performVerifyTest(verifyTest)
} }
} }
}() }()
@ -225,6 +239,8 @@ func (e *MasterClockConsensusEngine) Start() <-chan error {
go func() { go func() {
// Let it sit until we at least have a few more peers inbound // Let it sit until we at least have a few more peers inbound
time.Sleep(30 * time.Second) time.Sleep(30 * time.Second)
difficultyMetric := int64(100000)
skew := (difficultyMetric * 12) / 10
for { for {
head, err := e.masterTimeReel.Head() head, err := e.masterTimeReel.Head()
@@ -233,15 +249,17 @@ func (e *MasterClockConsensusEngine) Start() <-chan error {
 			}

 			e.report.MasterHeadFrame = head.FrameNumber
+			e.report.DifficultyMetric = difficultyMetric

 			parallelism := e.report.Cores - 1
-			skew := (e.report.DifficultyMetric * 12) / 10
 			challenge := binary.BigEndian.AppendUint64(
 				[]byte{},
 				e.report.MasterHeadFrame,
 			)
 			challenge = append(challenge, e.pubSub.GetPeerID()...)

-			ts, proofs, err := e.frameProver.CalculateChallengeProof(
+			ts, proofs, nextDifficultyMetric, err :=
+				e.frameProver.CalculateChallengeProof(
 					challenge,
 					parallelism,
 					skew,
@@ -249,6 +267,13 @@ func (e *MasterClockConsensusEngine) Start() <-chan error {
 			if err != nil {
 				panic(err)
 			}

+			e.logger.Info(
+				"recalibrating difficulty metric",
+				zap.Int64("previous_difficulty_metric", difficultyMetric),
+				zap.Int64("next_difficulty_metric", nextDifficultyMetric),
+			)
+			difficultyMetric = nextDifficultyMetric
+			skew = (nextDifficultyMetric * 12) / 10
+
 			proof := binary.BigEndian.AppendUint64([]byte{}, uint64(ts))
 			for i := 0; i < len(proofs); i++ {
@@ -355,6 +380,38 @@ func (e *MasterClockConsensusEngine) Stop(force bool) <-chan error {
 	return errChan
 }

+type verifyChallenge struct {
+	peerID           []byte
+	challenge        []byte
+	timestamp        int64
+	difficultyMetric int64
+	proofs           [][]byte
+}
+
+func (e *MasterClockConsensusEngine) performVerifyTest(
+	challenge verifyChallenge,
+) {
+	if !e.frameProver.VerifyChallengeProof(
+		challenge.challenge,
+		challenge.timestamp,
+		challenge.difficultyMetric,
+		challenge.proofs,
+	) {
+		e.logger.Warn(
+			"received invalid proof from peer",
+			zap.String("peer_id", peer.ID(challenge.peerID).String()),
+		)
+		e.pubSub.SetPeerScore(challenge.peerID, -1000)
+	} else {
+		e.logger.Debug(
+			"received valid proof from peer",
+			zap.String("peer_id", peer.ID(challenge.peerID).String()),
+		)
+		info := e.peerInfoManager.GetPeerInfo(challenge.peerID)
+		info.LastSeen = time.Now().UnixMilli()
+	}
+}
 func (e *MasterClockConsensusEngine) performBandwidthTest(peerID []byte) {
 	result := e.pubSub.GetMultiaddrOfPeer(peerID)
 	if result == "" {
@@ -606,3 +663,77 @@ func (e *MasterClockConsensusEngine) addPeerManifestReport(
 	e.peerInfoManager.AddPeerInfo(manifest)
 }
+func (e *MasterClockConsensusEngine) getProvingKey(
+	engineConfig *config.EngineConfig,
+) (gcrypto.Signer, keys.KeyType, []byte, []byte) {
+	provingKey, err := e.keyManager.GetSigningKey(engineConfig.ProvingKeyId)
+	if errors.Is(err, keys.KeyNotFoundErr) {
+		e.logger.Info("could not get proving key, generating")
+		provingKey, err = e.keyManager.CreateSigningKey(
+			engineConfig.ProvingKeyId,
+			keys.KeyTypeEd448,
+		)
+	}
+
+	if err != nil {
+		e.logger.Error("could not get proving key", zap.Error(err))
+		panic(err)
+	}
+
+	rawKey, err := e.keyManager.GetRawKey(engineConfig.ProvingKeyId)
+	if err != nil {
+		e.logger.Error("could not get proving key type", zap.Error(err))
+		panic(err)
+	}
+
+	provingKeyType := rawKey.Type
+
+	h, err := poseidon.HashBytes(rawKey.PublicKey)
+	if err != nil {
+		e.logger.Error("could not hash proving key", zap.Error(err))
+		panic(err)
+	}
+
+	provingKeyAddress := h.Bytes()
+	provingKeyAddress = append(
+		make([]byte, 32-len(provingKeyAddress)),
+		provingKeyAddress...,
+	)
+
+	return provingKey, provingKeyType, rawKey.PublicKey, provingKeyAddress
+}
+
+func (e *MasterClockConsensusEngine) createCommunicationKeys() error {
+	_, err := e.keyManager.GetAgreementKey("q-ratchet-idk")
+	if err != nil {
+		if errors.Is(err, keys.KeyNotFoundErr) {
+			_, err = e.keyManager.CreateAgreementKey(
+				"q-ratchet-idk",
+				keys.KeyTypeX448,
+			)
+			if err != nil {
+				return errors.Wrap(err, "create communication keys")
+			}
+		} else {
+			return errors.Wrap(err, "create communication keys")
+		}
+	}
+
+	_, err = e.keyManager.GetAgreementKey("q-ratchet-spk")
+	if err != nil {
+		if errors.Is(err, keys.KeyNotFoundErr) {
+			_, err = e.keyManager.CreateAgreementKey(
+				"q-ratchet-spk",
+				keys.KeyTypeX448,
+			)
+			if err != nil {
+				return errors.Wrap(err, "create communication keys")
+			}
+		} else {
+			return errors.Wrap(err, "create communication keys")
+		}
+	}
+
+	return nil
+}


@@ -55,7 +55,7 @@ type FrameProver interface {
 		challenge []byte,
 		parallelism uint32,
 		skew int64,
-	) (int64, [][]byte, error)
+	) (int64, [][]byte, int64, error)
 	VerifyChallengeProof(
 		challenge []byte,
 		timestamp int64,


@@ -15,6 +15,7 @@ import (
 	"go.uber.org/zap"
 	"golang.org/x/crypto/sha3"
 	"source.quilibrium.com/quilibrium/monorepo/nekryptology/pkg/vdf"
+	"source.quilibrium.com/quilibrium/monorepo/node/config"
 	"source.quilibrium.com/quilibrium/monorepo/node/keys"
 	"source.quilibrium.com/quilibrium/monorepo/node/protobufs"
 	"source.quilibrium.com/quilibrium/monorepo/node/tries"
@@ -549,7 +550,9 @@ func (w *WesolowskiFrameProver) VerifyWeakRecursiveProof(
 	}

 	filter := proof[:len(frame.Filter)]
-	check := binary.BigEndian.Uint64(proof[len(frame.Filter) : len(frame.Filter)+8])
+	check := binary.BigEndian.Uint64(
+		proof[len(frame.Filter) : len(frame.Filter)+8],
+	)
 	timestamp := binary.BigEndian.Uint64(
 		proof[len(frame.Filter)+8 : len(frame.Filter)+16],
 	)
@@ -600,26 +603,25 @@ func (w *WesolowskiFrameProver) CalculateChallengeProof(
 	challenge []byte,
 	parallelism uint32,
 	skew int64,
-) (int64, [][]byte, error) {
-	now := time.Now().UnixMilli()
-	input := binary.BigEndian.AppendUint64([]byte{}, uint64(now))
+) (int64, [][]byte, int64, error) {
+	now := time.Now()
+	nowMs := now.UnixMilli()
+	input := binary.BigEndian.AppendUint64([]byte{}, uint64(nowMs))
 	input = append(input, challenge...)
 	outputs := make([][]byte, parallelism)

 	wg := sync.WaitGroup{}
 	wg.Add(int(parallelism))

+	// 4.5 minutes = 270 seconds, one increment should be ten seconds
+	proofDuration := 270 * 1000
+	calibratedDifficulty := (int64(proofDuration) * 10000) / skew
+
 	for i := uint32(0); i < parallelism; i++ {
 		i := i
 		go func() {
 			instanceInput := binary.BigEndian.AppendUint32([]byte{}, i)
 			instanceInput = append(instanceInput, input...)
-			b := sha3.Sum256(input)
-
-			// 4.5 minutes = 270 seconds, one increment should be ten seconds
-			proofDuration := 270 * 1000
-			calibratedDifficulty := (int64(proofDuration) / skew) * 10000
+			b := sha3.Sum256(instanceInput)

 			v := vdf.New(uint32(calibratedDifficulty), b)
 			v.Execute()
@@ -632,7 +634,10 @@ func (w *WesolowskiFrameProver) CalculateChallengeProof(
 	}

 	wg.Wait()

-	return now, outputs, nil
+	after := time.Since(now)
+	nextSkew := (skew * after.Milliseconds()) / int64(proofDuration)
+	return nowMs, outputs, nextSkew, nil
 }
 func (w *WesolowskiFrameProver) VerifyChallengeProof(
@@ -644,6 +649,10 @@ func (w *WesolowskiFrameProver) VerifyChallengeProof(
 	input := binary.BigEndian.AppendUint64([]byte{}, uint64(timestamp))
 	input = append(input, challenge...)

+	if assertedDifficulty < 1 {
+		return false
+	}
+
 	for i := uint32(0); i < uint32(len(proof)); i++ {
 		if len(proof[i]) != 516 {
 			return false
@@ -651,18 +660,29 @@ func (w *WesolowskiFrameProver) VerifyChallengeProof(
 		instanceInput := binary.BigEndian.AppendUint32([]byte{}, i)
 		instanceInput = append(instanceInput, input...)
-		b := sha3.Sum256(input)
+		b := sha3.Sum256(instanceInput)

 		// 4.5 minutes = 270 seconds, one increment should be ten seconds
 		proofDuration := 270 * 1000
 		skew := (assertedDifficulty * 12) / 10
-		calibratedDifficulty := (int64(proofDuration) / skew) * 10000
+		calibratedDifficulty := (int64(proofDuration) * 10000) / skew

 		v := vdf.New(uint32(calibratedDifficulty), b)
 		check := v.Verify([516]byte(proof[i]))
+		if !check {
+			// TODO: Remove after 2024-05-28
+			if time.Now().Before(config.GetMinimumVersionCutoff()) {
+				calibratedDifficulty = (int64(proofDuration) / skew) * 10000
+				v = vdf.New(uint32(calibratedDifficulty), sha3.Sum256(input))
+				check = v.Verify([516]byte(proof[i]))
 				if !check {
 					return false
 				}
+			} else {
+				return false
+			}
+		}
 	}

 	return true


@@ -30,7 +30,10 @@ func TestMasterProve(t *testing.T) {
 func TestChallengeProof(t *testing.T) {
 	l, _ := zap.NewProduction()
 	w := crypto.NewWesolowskiFrameProver(l)
-	now, proofs, err := w.CalculateChallengeProof([]byte{0x01, 0x02, 0x03}, 3, 120000)
+	now, proofs, nextSkew, err := w.CalculateChallengeProof([]byte{0x01, 0x02, 0x03}, 3, 120000)
 	assert.NoError(t, err)
 	assert.True(t, w.VerifyChallengeProof([]byte{0x01, 0x02, 0x03}, now, 100000, proofs))
+	now, proofs, _, err = w.CalculateChallengeProof([]byte{0x01, 0x02, 0x03}, 3, nextSkew*12/10)
+	assert.NoError(t, err)
+	assert.True(t, w.VerifyChallengeProof([]byte{0x01, 0x02, 0x03}, now, nextSkew, proofs))
 }


@@ -192,7 +192,7 @@ func main() {
 			os.Exit(1)
 		}

-		fmt.Printf("Signature check passed")
+		fmt.Println("Signature check passed")
 	}
 }
@@ -735,6 +735,11 @@ func printLogo() {
 }

 func printVersion() {
+	patch := config.GetPatchNumber()
+	patchString := ""
+	if patch != 0x00 {
+		patchString = fmt.Sprintf("-p%d", patch)
+	}
 	fmt.Println(" ")
-	fmt.Println(" Quilibrium Node - v" + config.GetVersionString() + " Nebula")
+	fmt.Println(" Quilibrium Node - v" + config.GetVersionString() + patchString + " Nebula")
 }


@@ -1 +1 @@
-SHA3-256(node-1.4.18-darwin-arm64)= aee64d1d18c8e5567016d51460cf882005c4a873dbebcd7d608b8d3d9e74c682
+SHA3-256(node-1.4.18-darwin-arm64)= dc14a02268d88540bb364259775743c536d7541011bf26d4630f7fed425b5986


@@ -1 +1 @@
-SHA3-256(node-1.4.18-linux-amd64)= e24acbaab0dca79a26c1ac80561eb2dc69abf381fff73b5bb4092084143ba2c9
+SHA3-256(node-1.4.18-linux-amd64)= e41bf8538990e201637521b0eb278a1aebfc46e5d8de102f4870de30616b44e6


@@ -1 +1 @@
-SHA3-256(node-1.4.18-linux-arm64)= 888228bb59dc0b7fc103bd886906cfecdf3877dcedc138a1fd4cf694ab527409
+SHA3-256(node-1.4.18-linux-arm64)= de488e85acfd5ced235c8b7994d37cb153251826a52d1b0e964034b80d65b478


@@ -5,6 +5,7 @@ import (
 	"context"
 	"math/big"
 	"net/http"

 	"source.quilibrium.com/quilibrium/monorepo/node/config"

 	"github.com/libp2p/go-libp2p/core/peer"
@@ -154,19 +155,14 @@ func (r *RPCServer) GetNodeInfo(
 	if err != nil {
 		return nil, errors.Wrap(err, "getting id from bytes")
 	}

-	maxFrame := &protobufs.ClockFrame{}
-	for _, e := range r.executionEngines {
-		if frame := e.GetFrame(); frame != nil {
-			if frameNr := frame.GetFrameNumber(); frameNr > maxFrame.GetFrameNumber() {
-				maxFrame = frame
-			}
-		}
-	}

 	peerScore := r.pubSub.GetPeerScore(r.pubSub.GetPeerID())
-	return &protobufs.NodeInfoResponse{PeerId: peerID.String(), MaxFrame: maxFrame.GetFrameNumber(), PeerScore: uint64(peerScore), Version: config.GetVersion()}, nil
+	return &protobufs.NodeInfoResponse{
+		PeerId:    peerID.String(),
+		MaxFrame:  r.masterClock.GetFrame().GetFrameNumber(),
+		PeerScore: uint64(peerScore),
+		Version:   config.GetVersion(),
+	}, nil
 }
 // GetPeerInfo implements protobufs.NodeServiceServer.
@@ -175,13 +171,23 @@ func (r *RPCServer) GetPeerInfo(
 	req *protobufs.GetPeerInfoRequest,
 ) (*protobufs.PeerInfoResponse, error) {
 	resp := &protobufs.PeerInfoResponse{}
-	for _, e := range r.executionEngines {
-		r := e.GetPeerInfo()
-		resp.PeerInfo = append(resp.PeerInfo, r.PeerInfo...)
-		resp.UncooperativePeerInfo = append(
-			resp.UncooperativePeerInfo,
-			r.UncooperativePeerInfo...,
-		)
+	manifests := r.masterClock.GetPeerManifests()
+	for _, m := range manifests.PeerManifests {
+		multiaddr := r.pubSub.GetMultiaddrOfPeer(m.PeerId)
+		addrs := []string{}
+		if multiaddr != "" {
+			addrs = append(addrs, multiaddr)
+		}
+		resp.PeerInfo = append(resp.PeerInfo, &protobufs.PeerInfo{
+			PeerId:     m.PeerId,
+			Multiaddrs: addrs,
+			MaxFrame:   m.MasterHeadFrame,
+			Timestamp:  m.LastSeen,
+			// We can get away with this for this release only, we will want to
+			// add version info in manifests.
+			Version: config.GetVersion(),
+		})
 	}

 	return resp, nil
 }


@@ -83,24 +83,28 @@ tasks:
     sources:
       - '**/*.go'
     generates:
-      - node-{{.VERSION}}-*.bin
+      - node-{{.VERSION}}-darwin-arm64
+      - node-{{.VERSION}}-linux-amd64
+      - node-{{.VERSION}}-linux-arm64
     cmds:
-      - GOOS=darwin go build -ldflags "-s -w" -o node-{{.VERSION}}-darwin-arm64.bin
-      - GOOS=linux GOARCH=amd64 go build -ldflags "-s -w" -o node-{{.VERSION}}-linux-amd64.bin
-      - GOOS=linux GOARCH=arm64 go build -ldflags "-s -w" -o node-{{.VERSION}}-linux-arm64.bin
+      - GOOS=darwin go build -ldflags "-s -w" -o node-{{.VERSION}}-darwin-arm64
+      - GOOS=linux GOARCH=amd64 go build -ldflags "-s -w" -o node-{{.VERSION}}-linux-amd64
+      - GOOS=linux GOARCH=arm64 go build -ldflags "-s -w" -o node-{{.VERSION}}-linux-arm64

   digest:
     desc: Generate digests for node binaries.
     deps: [build]
     dir: ../node
     sources:
-      - node-{{.VERSION}}-*.bin
+      - node-{{.VERSION}}-darwin-arm64
+      - node-{{.VERSION}}-linux-amd64
+      - node-{{.VERSION}}-linux-arm64
     generates:
       - node-{{.VERSION}}-*.dgst
     cmds:
-      - openssl sha3-256 -out node-{{.VERSION}}-darwin-arm64.dgst node-{{.VERSION}}-darwin-arm64.bin
-      - openssl sha3-256 -out node-{{.VERSION}}-linux-amd64.dgst node-{{.VERSION}}-linux-amd64.bin
-      - openssl sha3-256 -out node-{{.VERSION}}-linux-arm64.dgst node-{{.VERSION}}-linux-arm64.bin
+      - openssl sha3-256 -out node-{{.VERSION}}-darwin-arm64.dgst node-{{.VERSION}}-darwin-arm64
+      - openssl sha3-256 -out node-{{.VERSION}}-linux-amd64.dgst node-{{.VERSION}}-linux-amd64
+      - openssl sha3-256 -out node-{{.VERSION}}-linux-arm64.dgst node-{{.VERSION}}-linux-arm64

   sign:
     desc: Generate signatures for node binaries.
@@ -130,9 +134,9 @@ tasks:
       - docker:build_image
     cmds:
       - docker run --name signers --rm -it -v {{.PARENT_FOLDER}}:/home/{{.USER_NAME}}/ceremonyclient -u {{.USER_NAME}} -w /home/{{.USER_NAME}}/ceremonyclient/signers {{.QUILIBRIUM_SIGNERS_IMAGE_NAME}} task verify:build:container
-      - diff node-{{.VERSION}}-darwin-arm64.bin node-tmp-darwin-arm64.bin
-      - diff node-{{.VERSION}}-linux-amd64.bin node-tmp-linux-amd64.bin
-      - diff node-{{.VERSION}}-linux-arm64.bin node-tmp-linux-arm64.bin
+      - diff node-{{.VERSION}}-darwin-arm64 node-tmp-darwin-arm64
+      - diff node-{{.VERSION}}-linux-amd64 node-tmp-linux-amd64
+      - diff node-{{.VERSION}}-linux-arm64 node-tmp-linux-arm64

   verify:build:container:
     desc: Verify that the existing binaries can be rebuilt exactly the same, inside the Docker container.
@@ -140,22 +144,24 @@ tasks:
     sources:
       - '**/*.go'
     generates:
-      - node-tmp-*.bin
+      - node-tmp-darwin-arm64
+      - node-tmp-linux-amd64
+      - node-tmp-linux-arm64
     cmds:
-      - GOOS=darwin go build -ldflags "-s -w" -o node-tmp-darwin-arm64.bin
-      - GOOS=linux GOARCH=amd64 go build -ldflags "-s -w" -o node-tmp-linux-amd64.bin
-      - GOOS=linux GOARCH=arm64 go build -ldflags "-s -w" -o node-tmp-linux-arm64.bin
-      - diff node-{{.VERSION}}-darwin-arm64.bin node-tmp-darwin-arm64.bin
-      - diff node-{{.VERSION}}-linux-amd64.bin node-tmp-linux-amd64.bin
-      - diff node-{{.VERSION}}-linux-arm64.bin node-tmp-linux-arm64.bin
+      - GOOS=darwin go build -ldflags "-s -w" -o node-tmp-darwin-arm64
+      - GOOS=linux GOARCH=amd64 go build -ldflags "-s -w" -o node-tmp-linux-amd64
+      - GOOS=linux GOARCH=arm64 go build -ldflags "-s -w" -o node-tmp-linux-arm64
+      - diff node-{{.VERSION}}-darwin-arm64 node-tmp-darwin-arm64
+      - diff node-{{.VERSION}}-linux-amd64 node-tmp-linux-amd64
+      - diff node-{{.VERSION}}-linux-arm64 node-tmp-linux-arm64

   verify:digest:
     desc: Verify that the existing digests are correct.
     dir: ../node
     cmds:
-      - openssl sha3-256 -out node-tmp-darwin-arm64.dgst node-{{.VERSION}}-darwin-arm64.bin
-      - openssl sha3-256 -out node-tmp-linux-amd64.dgst node-{{.VERSION}}-linux-amd64.bin
-      - openssl sha3-256 -out node-tmp-linux-arm64.dgst node-{{.VERSION}}-linux-arm64.bin
+      - openssl sha3-256 -out node-tmp-darwin-arm64.dgst node-{{.VERSION}}-darwin-arm64
+      - openssl sha3-256 -out node-tmp-linux-amd64.dgst node-{{.VERSION}}-linux-amd64
+      - openssl sha3-256 -out node-tmp-linux-arm64.dgst node-{{.VERSION}}-linux-arm64
       - diff node-{{.VERSION}}-darwin-arm64.dgst node-tmp-darwin-arm64.dgst
       - diff node-{{.VERSION}}-linux-amd64.dgst node-tmp-linux-amd64.dgst
       - diff node-{{.VERSION}}-linux-arm64.dgst node-tmp-linux-arm64.dgst

signers/pems/16.pem (new file)

@@ -0,0 +1,4 @@
+-----BEGIN PUBLIC KEY-----
+MEMwBQYDK2VxAzoAbihy9zxIaMQoa+97/i9UeaQcQvTgdQXvpIg8eVDHQCUuDup4
+7vEMWEsZsdzaAfd2fTE10HwzJEEA
+-----END PUBLIC KEY-----

signers/pems/17.pem (new file)

@@ -0,0 +1,4 @@
+-----BEGIN PUBLIC KEY-----
+MEMwBQYDK2VxAzoAoRSwYfjTXj80l8jEPYO6a0r2eqezm3Q7Gwo18tZhELUFHdPY
+b2m1cSKjW2TmJLgYC+5jthUvzkKA
+-----END PUBLIC KEY-----