The Nimbus book
This book focuses on our consensus layer client. If you're eager to get started, check out our quickstart guide.
Nimbus is a client implementation for both the consensus layer
(eth2) and execution layer
(eth1) that strives to be as lightweight as possible in terms of resources used. This allows it to perform well on embedded systems and resource-restricted devices, including Raspberry Pis and mobile devices.
However, resource-restricted hardware is not the only thing Nimbus is good for. Its low resource consumption makes it easy to run Nimbus together with other workloads on your server (this is especially valuable for stakers looking to lower the cost of their server instances).
"just because it [Nimbus] is optimized to be minimally resource intensive, doesn't mean you can't run it on a server. It means that when you do run it on a server, it is consuming a lot less resources." https://t.co/F2sdZouBtD
— Nimbus (@ethnimbus) March 30, 2021
This book explains the ways in which you can use Nimbus to either monitor the eth2 chain or become a fully-fledged validator.
N.B. The reality is that we are very early in the eth2 validating life cycle. Validating is not for everyone yet, and it comes with both risks and responsibilities. It isn't a particularly easy way to make money. You'll need to put effort into updating your software, researching hard forks, and maintaining a robust setup. As such, you should only stake if you are genuinely interested in securing the protocol.
Helpful resources
Get in touch
Need help with anything? Join us on Status and Discord.
Donate
If you'd like to contribute to Nimbus development, our donation address is 0x70E47C843E0F6ab0991A3189c28F2957eb6d3842
Stay updated
Subscribe to our newsletter here.
Disclaimer
This documentation assumes Nimbus is in its ideal state. The project is still under active development. Please submit a Github issue if you come across a problem.
Design goals
One of our most important design goals is an application architecture that makes it simple to embed Nimbus into other software.
Another is to minimize reliance on third-party software.
A third is for the application binary to be as lightweight as possible in terms of resources used.
Integration with Status
I can't wait to run Nimbus straight from Status Desktop #hyped
— JARRAÐ HOPΞ (@jarradhope) August 12, 2020
As part of our first design goal, our primary objective here is for Nimbus to be tightly integrated into the Status messaging app.
Our dream is for you to be able to run and monitor your validator straight from Status desktop.
System requirements (recommended)
Operating System: Linux 64-bit, Windows 64-bit, macOS 10.14+
Memory: 4GB RAM
Storage: 200GB SSD
Internet: Reliable broadband connection
In order to process incoming validator deposits from the eth1 chain, you will need to run an eth1 client in parallel to your eth2 client. While it is possible to use a third-party service like Infura, if you choose to run your own eth1 client locally, you'll need more memory and storage.
For example, you'll need at least another 1TB SSD to run geth fast sync on mainnet.
To future proof your setup we recommend a 2TB SSD.
Mainnet checklist
Latest software
Please check that you are running the latest stable Nimbus software release.
In order to stay on top of new releases you should subscribe to our mailing list.
More than 15 peers
Please check that your node has at least 15 peers. To monitor your peer count, pay attention to the Slot start messages in your logs. See the networking page for more tips.
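As a sketch, you can pull the peers= field out of a Slot start line with standard shell tools (the log line below is a shortened sample for illustration, not real output):

```shell
# Shortened sample "Slot start" log line (illustrative values)
line='INF 2021-05-24 14:53:59 Slot start topics="beacnde" peers=22 head=eb994064:90753'
# Extract the number after peers=
peers=$(echo "$line" | sed -n 's/.*peers=\([0-9]*\).*/\1/p')
[ "$peers" -ge 15 ] && echo "peer count OK ($peers peers)"
# prints: peer count OK (22 peers)
```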
Validator attached
Please check that your validator is attached to your node.
Systemd
Now that you have Nimbus up and running, we recommend setting up a systemd service with an autorestart on boot (should you experience an unexpected power outage, this will ensure your validator restarts correctly).
Systemd will also ensure your validator keeps running when you exit your ssh session (Ctrl-C) and/or switch off your laptop.
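As a sketch, a minimal unit file for a mainnet node might look like the following (the user name, paths, and web3 URL here are assumptions; adjust them to your own setup):

```
# /etc/systemd/system/nimbus-eth2.service (hypothetical user and paths)
[Unit]
Description=Nimbus beacon node
After=network-online.target

[Service]
User=nimbus
WorkingDirectory=/home/nimbus/nimbus-eth2
ExecStart=/home/nimbus/nimbus-eth2/run-mainnet-beacon-node.sh --web3-url="<YOUR_WEB3_PROVIDER_URL>"
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now nimbus-eth2 so the service starts on boot and restarts automatically on failure.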
Ethereum Foundation's Checklist
As a final check, we recommend you also go through the EF's staker checklist.
Run the beacon node
This page takes you through how to run just the beacon node without a validator attached.
The beacon node connects to the eth2 network, manages the blockchain, and provides APIs to interact with the beacon chain.
Running a beacon node without a validator attached is a worthwhile endeavor.
It's also a necessary step to running a validator (since an active validator requires a synced beacon node).
1. Install
2. Build
Build the beacon node or install a precompiled release from the Nimbus eth2 releases page.
3. Sync
Run a validator
Once your beacon node is running and synced, the next step is to run a validator.
1. Deposit
Make a deposit for your validator
2. Import
Import your validator keys into Nimbus
3. Connect
Connect your validator to eth2
While that's all there is to it, it is essential that you both keep an eye on your validator and keep Nimbus updated regularly 💫
Run Kiln
Kiln is the latest long-running merge testnet. It provides the perfect opportunity to verify your setup works as expected through the proof-of-stake transition and in a post-merge context. If you come across any issues, please report them here.
N.B. Post-merge, node runners will need to run both a consensus layer and an execution layer client.
1. Preparation
1.1 Download configs
To download the merge testnet configurations, run:
git clone https://github.com/eth-clients/merge-testnets.git
cd merge-testnets/kiln
1.2 Generate secret
To generate and write the JWT secret to a file, run:
openssl rand -hex 32 | tr -d "\n" > "/tmp/jwtsecret"
You will need to pass this file to both the Execution Client and the Consensus Client (the JWT secret is an authentication mechanism between CL/EL).
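As a quick sanity check, the secret file should contain exactly 64 hex characters (32 bytes) and no trailing newline:

```shell
# Generate the secret as above, then verify its length: 32 bytes = 64 hex characters
openssl rand -hex 32 | tr -d "\n" > /tmp/jwtsecret
[ "$(wc -c < /tmp/jwtsecret)" -eq 64 ] && echo "jwtsecret looks valid"
```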
2. Execution client
We recommend running either Nethermind or Geth with Nimbus.
Nethermind
2.1N Clone and build
Clone and build the kiln
branch of Nethermind:
git clone --recursive -b kiln https://github.com/NethermindEth/nethermind.git
cd nethermind/src/Nethermind
dotnet build Nethermind.sln -c Release
2.2N Start the client
Start Nethermind:
cd kiln/nethermind/src/Nethermind/Nethermind.Runner
dotnet run -c Release -- --config kiln --JsonRpc.Host=0.0.0.0 --JsonRpc.JwtSecretFile=/tmp/jwtsecret
Geth
2.1G Clone and build
Clone and build the merge-kiln-v2
branch from Marius' fork of Geth:
git clone -b merge-kiln-v2 https://github.com/MariusVanDerWijden/go-ethereum.git
cd go-ethereum
make geth
cd ..
2.2G Start the client
Start Geth:
cd kiln
./go-ethereum/build/bin/geth init genesis.json --datadir "geth-datadir"
./go-ethereum/build/bin/geth --datadir "geth-datadir" --http --http.api="engine,eth,web3,net,debug" --ws --ws.api="engine,eth,web3,net,debug" --http.corsdomain "*" --networkid=1337802 --syncmode=full --authrpc.jwtsecret=/tmp/jwtsecret --bootnodes "enode://c354db99124f0faf677ff0e75c3cbbd568b2febc186af664e0c51ac435609bade[email protected]164.92.130.5:30303" console
3. Nimbus
3.1 Clone and build Nimbus from source
Clone and build Nimbus from source from the kiln-dev-auth
branch:
git clone --branch=kiln-dev-auth https://github.com/status-im/nimbus-eth2.git
cd nimbus-eth2
make update OVERRIDE=1
make nimbus_beacon_node
cd ..
3.2 Start the client
Start Nimbus:
nimbus-eth2/build/nimbus_beacon_node \
--network=merge-testnets/kiln \
--web3-url=ws://127.0.0.1:8551 \
--rest \
--metrics \
--log-level=DEBUG \
--terminal-total-difficulty-override=20000000000000 \
--jwt-secret="/tmp/jwtsecret"
Useful resources
- Kiln landing page: add the network to your browser wallet, view block explorers, request funds from the faucet, and connect to a JSON RPC endpoint.
- Kiln validator launchpad: make a deposit for your validator.
- EF launchpad notes: how to run a node on Kiln.
- Ethereum On Arm Kiln RP4 image: run Nimbus on a Raspberry Pi or using an AWS AMI.
Install dependencies
The Nimbus beacon chain can run on Linux, macOS, Windows, and Android. At the moment, Nimbus has to be built from source, which means you'll need to install some dependencies.
Time
The beacon chain relies on your computer having the correct time set (plus or minus 0.5 seconds).
We recommend you run a high quality time service on your computer such as chrony. Chrony is much more performant than the default NTP server. It's a simple install:
# Debian and Ubuntu
sudo apt-get install -y chrony
# Fedora
dnf install chrony
# Archlinux, using an AUR manager
yourAURmanager chrony
Installing chrony will remove any existing NTP servers. It's available on most package managers, and once installed, the default configuration is good enough.
At a minimum, you should run an NTP client (such as chrony) on the server. Note that most operating systems (including macOS') automatically sync with NTP by default.
If the above sounds like Latin to you, don't worry. You should be fine as long as you haven't messed around with the time and date settings on your computer (they should be set automatically).
External Dependencies
- Developer tools (C compiler, Make, Bash, Git)
Nimbus will build its own local copy of Nim, so Nim is not an external dependency.
Linux
On common Linux distributions the dependencies can be installed with
# Debian and Ubuntu
sudo apt-get install build-essential git
# Fedora
dnf install @development-tools
# Archlinux, using an AUR manager
yourAURmanager -S base-devel
macOS
Assuming you use Homebrew to manage packages
brew install cmake
Windows
To build Nimbus on Windows, the Mingw-w64 build environment is recommended.
Install Mingw-w64 for your architecture using the "MinGW-W64 Online Installer":
- Select your architecture in the setup menu (i686 on 32-bit, x86_64 on 64-bit)
- Set threads to win32
- Set exceptions to "dwarf" on 32-bit and "seh" on 64-bit
- Change the installation directory to C:\mingw-w64 and add it to your system PATH in "My Computer"/"This PC" -> Properties -> Advanced system settings -> Environment Variables -> Path -> Edit -> New -> C:\mingw-w64\mingw64\bin (C:\mingw-w64\mingw32\bin on 32-bit)
Install Git for Windows and use a "Git Bash" shell to clone and build nimbus-eth2.
Note: If the online installer isn't working you can try installing Mingw-w64 through MSYS2.
Android
- Install the Termux app from FDroid or the Google Play store
- Install a PRoot of your choice following the instructions for your preferred distribution. Note, the Ubuntu PRoot is known to contain all Nimbus prerequisites compiled on Arm64 architecture (the most common architecture for Android devices).
Assuming you use Ubuntu PRoot
apt install build-essential git
Build the beacon node
Prerequisites
Before building and running the application, make sure you've installed the required dependencies.
Building the node
1. Clone the nim beacon chain repository
git clone https://github.com/status-im/nimbus-eth2
cd nimbus-eth2
2. Run the beacon node build process
To build the Nimbus beacon node and its dependencies, run:
make nimbus_beacon_node
Sync from scratch
To minimize the amount of downtime, you should ensure that your beacon node is completely synced before submitting your deposit. If it's not fully synced you will miss attestations and proposals until it has finished syncing.
This is particularly important if you are joining a network that's been running for a while since the sync could take some time.
Tip: If you'd like to sync faster and start attesting immediately, we recommend taking a look at trusted node sync
N.B. In order to process incoming validator deposits from the eth1 chain, you'll need to run an eth1 client (web3 provider) in parallel to your eth2 client. See here for instructions on how to do so.
Testnet
To start syncing the prater testnet, from the nimbus-eth2 repository, run:
./run-prater-beacon-node.sh --web3-url="<YOUR_WEB3_PROVIDER_URL>"
Mainnet
To start syncing the eth2 mainnet, run:
./run-mainnet-beacon-node.sh --web3-url="<YOUR_WEB3_PROVIDER_URL>"
You should see the following output:
INF 2020-12-01 11:25:33.487+01:00 Launching beacon node
...
INF 2020-12-01 11:25:34.556+01:00 Loading block dag from database topics="beacnde" tid=19985314 file=nimbus_beacon_node.nim:198 path=build/data/shared_prater_0/db
INF 2020-12-01 11:25:35.921+01:00 Block dag initialized
INF 2020-12-01 11:25:37.073+01:00 Generating new networking key
...
NOT 2020-12-01 11:25:59.512+00:00 Eth1 sync progress topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3836397 depositsProcessed=106147
NOT 2020-12-01 11:26:02.574+00:00 Eth1 sync progress topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3841412 depositsProcessed=106391
...
INF 2020-12-01 11:26:31.000+00:00 Slot start topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:505 lastSlot=96566 scheduledSlot=96567 beaconTime=1w6d9h53m24s944us774ns peers=7 head=b54486c4:96563 headEpoch=3017 finalized=2f5d12e4:96479 finalizedEpoch=3014
INF 2020-12-01 11:26:36.285+00:00 Slot end topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:593 slot=96567 nextSlot=96568 head=b54486c4:96563 headEpoch=3017 finalizedHead=2f5d12e4:96479 finalizedEpoch=3014
...
If you want to put the database somewhere else (e.g. an external SSD), pass the --data-dir=/your/path option. ⚠️ If you do this, remember to pass this flag to all your nimbus calls.
Command line options
You can pass any nimbus_beacon_node options to the prater and mainnet scripts. For example, if you wanted to launch Nimbus on prater with a different base port, say 9100, you would run:
./run-prater-beacon-node.sh --tcp-port=9100 --udp-port=9100
To see a list of the command line options available to you, with descriptions, navigate to the build directory and run:
./nimbus_beacon_node --help
Keep track of your sync progress
See here for how to keep track of your sync progress.
Sync from a trusted node
Note: This feature is available from v1.7.0 onwards.
When you start the beacon node for the first time, it will connect to the beacon chain network and start syncing automatically, a process that can take several days.
Trusted node sync allows you to get started more quickly with Nimbus by fetching a recent checkpoint from a trusted node (we expect this to save you 1 to 2 days of syncing).
To use trusted node sync, you must have access to a node that you trust that exposes the Ethereum Beacon API (for example a locally running backup node).
Should this node, or your connection to it, be compromised, your node will not be able to detect whether or not it is being served false information.
It is possible to use trusted node sync with a third-party API provider -- see here for how to verify that the chain you are given corresponds to the canonical chain at the time.
Perform a trusted node sync
Tip: Make sure to replace http://localhost:5052 in the commands below with the appropriate endpoint for you. http://localhost:5052 is the endpoint exposed by Nimbus, but this is not consistent across all clients. For example, if your trusted node is a Prysm node, it exposes 127.0.0.1:3500 by default, which means you would run the commands below with --trusted-node-url=http://127.0.0.1:3500
Mainnet
To sync Mainnet, from the nimbus-eth2
directory run:
build/nimbus_beacon_node trustedNodeSync --network:mainnet \
--data-dir=build/data/shared_mainnet_0 \
--trusted-node-url=http://localhost:5052
Prater (testnet)
To sync Prater, from the nimbus-eth2
directory run:
build/nimbus_beacon_node trustedNodeSync --network:prater \
--data-dir=build/data/shared_prater_0 \
--trusted-node-url=http://localhost:5052
Note: Because trusted node sync by default copies all blocks via REST, you may hit API limits if you use a third-party service to sync from. If this happens, you may need to use the --backfill option to delay the backfill of the block history.
Verify you synced the correct chain
When performing a trusted node sync, you can manually verify that the correct chain was synced by comparing the head hash with other sources (e.g. your friends, forums, chats and web sites). If you're syncing using your own backup node you can retrieve the current head from the node using:
# Make sure to enable the `--rest` option when running your node:
curl http://localhost:5052/eth/v1/beacon/blocks/head/root
The head root is also printed in the log output at regular intervals.
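The endpoint returns a small JSON object. As a sketch, you can extract the bare root from it like this (the response below is an illustrative sample, and sed stands in for a proper JSON parser such as jq):

```shell
# Illustrative response shape from /eth/v1/beacon/blocks/head/root
response='{"data":{"root":"0x239940f2537f5bbee1a3829f9058f4c04f49897e4d325145153ca89838dfc9e2"}}'
# Pull out the value of the "root" field
echo "$response" | sed -n 's/.*"root":"\([^"]*\)".*/\1/p'
```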
Note: this same Beacon API request should work with any third-party provider.
For example, to test it out with our mainnet testing server, you could run:
curl -X GET http://testing.mainnet.beacon-api.nimbus.team/eth/v1/beacon/blocks/head/root
Advanced
Delay block history backfill
By default, both the state and the full block history will be downloaded from the trusted node.
It is possible to get started more quickly by delaying the backfill of the block history using the --backfill=false
parameter. In this case, the beacon node will first sync to the current head so that it can start performing its duties, then backfill the blocks from the network.
You can also resume the trusted node backfill at any time by simply running the trusted node sync command again.
Warning: While backfilling blocks, your node will not be able to answer historical requests or sync requests. This might lead to you being de-scored, and eventually disconnected, by your peers.
Modify sync point
By default, the node will sync up to the latest finalized checkpoint of the node that you're syncing with. While you can choose a different sync point using a block hash or a slot number, this block must fall on an epoch boundary:
build/nimbus_beacon_node trustedNodeSync --blockId:0x239940f2537f5bbee1a3829f9058f4c04f49897e4d325145153ca89838dfc9e2 ...
Sync from checkpoint files
If you have a state and a block file available, you can start the node using the finalized checkpoint options:
# Obtain a state and a block from a Beacon API - these must be in SSZ format:
curl -o state.32000.ssz -H 'Accept: application/octet-stream' http://localhost:5052/eth/v2/debug/beacon/states/32000
curl -o block.32000.ssz -H 'Accept: application/octet-stream' http://localhost:5052/eth/v2/beacon/blocks/32000
build/nimbus_beacon_node --data-dir:trusted --finalized-checkpoint-block=block.32000.ssz --finalized-checkpoint-state=state.32000.ssz
Recreate historical state access indices
When performing checkpoint sync, the historical state data from the time before the checkpoint is not available. To recreate the indices and caches necessary for historical state access, run trusted node sync with the --reindex
flag - this can be done on an already-synced node as well, in which case the process will simply resume where it left off:
build/nimbus_beacon_node trustedNodeSync --reindex=true
Add a backup web3 provider
It's a good idea to add a backup web3 provider in case your main one goes down. You can do this by simply repeating the --web3-url
parameter on launch.
Warning: As of v1.7.0, Nimbus will no longer automagically rewrite HTTP(S) web3 URLs to their respective WebSocket alternatives.
For example, if your primary EL client is a local Geth, but you want to use Infura as a backup you would run:
./run-mainnet-beacon-node.sh \
--web3-url="ws://127.0.0.1:8546" \
--web3-url="wss://mainnet.infura.io/ws/v3/..."
Make a deposit for your validator
The easiest way to get your deposit in is to follow the Ethereum Foundation's launchpad instructions here:
Prater testnet: https://prater.launchpad.ethereum.org/
Use Prater to stress test / future proof your set up against peak mainnet load. See here for all you need to know
Mainnet: https://launchpad.ethereum.org/
⚠️ If you are making a mainnet deposit make sure you verify that the deposit contract you are interacting with is the correct one.
You should verify that the address is indeed: 0x00000000219ab540356cBB839Cbe05303d7705Fa
You may notice that there have been considerable improvements to the launchpad process since the summer.
In particular, the Key Generation section is now much clearer, and you no longer have to install dependencies to get the command line app working.
We won't elaborate on each individual step here, since they are well explained on the site itself. However, there are two points of note:
1. Eth1 connection
In the Select Client
section you'll first be asked to choose an eth1 client. You need to run an eth1 client in order to process incoming validator deposits from the eth1 chain.
We recommend you choose Go Ethereum (or Geth).
If you've followed the book up to this point, you should already have geth up and running.
2. Block explorer
Once you've sent off your transaction, you should see the following screen.
We recommend you click on Beaconchain
. This will open up a window that allows you to keep track of your validator's status.
It's a good idea to bookmark this page.
Expected waiting time (the queue)
Once you send off your transaction(s), your validator will be put in a queue based on deposit time. Getting through the queue may take a few hours or days (assuming the chain is finalising). No validators are accepted into the validator set while the chain isn't finalising. The Pending Validators metric on beaconcha.in will give you the size of the queue.
Import your validator keys into Nimbus
To import your signing key(s) into Nimbus, copy the validator_keys
directory -- the directory that was created for you when you generated your keys using the command line app -- into nimbus-eth2
. Then run:
Prater
build/nimbus_beacon_node deposits import --data-dir=build/data/shared_prater_0
Mainnet
build/nimbus_beacon_node deposits import --data-dir=build/data/shared_mainnet_0
Note: You can also specify a different path to your validator_keys directory as follows:
Prater
build/nimbus_beacon_node deposits import \
--data-dir=build/data/shared_prater_0 "<YOUR VALIDATOR KEYS DIRECTORY>"
Mainnet
build/nimbus_beacon_node deposits import \
--data-dir=build/data/shared_mainnet_0 "<YOUR VALIDATOR KEYS DIRECTORY>"
Replacing <YOUR VALIDATOR KEYS DIRECTORY> with the full pathname of the validator_keys directory that was created when you generated your keys using the command line app.
Tip: You can run pwd in your validator_keys directory to print the full pathname to the console (if you're on Windows, run cd instead).
You'll be asked to enter the password you created to encrypt your keystore(s).
Don't worry, this is entirely normal. Your validator client needs both your signing keystore(s) and the password encrypting it to import your key (since it needs to decrypt the keystore in order to be able to use it to sign on your behalf).
Note: If you come across an error, it's probably because the wrong permissions have been set on either a folder or file. See here for how to fix this.
Storage
When you import your keys into Nimbus, your validator signing key(s) are stored in the build/data/shared_<prater or mainnet>_0/ folder, under secrets and validators. Make sure you keep these folders backed up somewhere safe.
The secrets
folder contains the common secret that gives you access to all your validator keys.
The validators
folder contains your signing keystore(s) (encrypted keys). Keystores are used by validators as a method for exchanging keys. For more on keys and keystores, see here.
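A minimal backup sketch, assuming the default mainnet data directory described above (adapt the paths to your own setup, and store the archive somewhere safe and offline):

```shell
# Archive the secrets and validators folders from the default mainnet layout
DATA_DIR=build/data/shared_mainnet_0
tar czf validator-backup.tar.gz -C "$DATA_DIR" secrets validators
```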
Note: The Nimbus client will only ever import your signing key. In any case, if you used the deposit launchpad, this is the only key you should have (thanks to the way these keys are derived, it is possible to generate the withdrawal key from your mnemonic when you wish to withdraw).
Export
Todo
Connect your validator to eth2
Prater
To connect your validator to the Prater testnet, from the nimbus-eth2
repository run:
./run-prater-beacon-node.sh
Mainnet
To connect your validator to mainnet, from the nimbus-eth2
repository run:
./run-mainnet-beacon-node.sh
In both cases, you'll be asked to enter your Web3 provider URL again.
Note: If your beacon node is already running, you'll need to shut it down gracefully (Ctrl+c) and re-run the above command.
To ensure your validator is correctly monitoring the eth1 chain, it's important you enter a valid web3 provider URL.
Your beacon node will launch and connect your validator to the eth2 network. To check that this has happened correctly, check your logs for the following:
INF 2020-11-18 11:20:00.181+01:00 Launching beacon node
...
NOT 2020-11-18 11:20:02.091+01:00 Local validator attached
Keep an eye on your validator
The best way to keep track of your validator's status is using the beaconcha.in
explorer (click on the orange magnifying glass at the very top and paste in your validator's public key):
- Testnet: prater.beaconcha.in
- Mainnet: beaconcha.in
If you deposit after the genesis state was decided, your validator(s) will be put in a queue based on deposit time, and will slowly be inducted into the validator set after genesis. Getting through the queue may take a few hours or a day or so.
You can even create an account (testnet link, mainnet link) to add alerts and keep track of your validator's performance (testnet link, mainnet link).
Make sure your validator is attached
On startup, you should see a log message that reads Local validator attached. This has a pubkey field which should match the public key of your validator.
Check your IP address
Check that Nimbus has recognised your external IP properly. To do this, look at the end of the first log line:
Starting discovery node","topics":"discv5","tid":2665484,"file":"protocol.nim:802","node":"b9*ee2235:<IP address>:9000"
<IP address>
should match your external IP (the IP by which you can be reached from the internet).
Note that the port number is displayed directly after the IP -- in the above case 9000
. This is the port that should be opened and mapped.
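As a sketch, you can split the node field into its IP and port with cut (the value below uses a placeholder documentation IP, not real output):

```shell
# node field shape: <node id>:<ip>:<port>; 203.0.113.10 is a placeholder IP
node='b9*ee2235:203.0.113.10:9000'
ip=$(echo "$node" | cut -d: -f2)
port=$(echo "$node" | cut -d: -f3)
echo "external ip=$ip port=$port"
# prints: external ip=203.0.113.10 port=9000
```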
Keep track of your syncing progress
To keep track of your sync progress, pay attention to the Slot start messages in your logs:
INF 2021-05-24 14:53:59.067+02:00 Slot start topics="beacnde" tid=3485464 file=nimbus_beacon_node.nim:968 lastSlot=1253067 wallSlot=1253068 delay=67ms515us0ns peers=22 head=eb994064:90753 headEpoch=2836 finalized=031b9591:90688 finalizedEpoch=2834 sync="PPPPPDDDDP:10:15.4923:7.7398:01d17h43m (90724)"
Where:
- peers tells you how many peers you're currently connected to (in the above case, 22 peers)
- finalized tells you the most recent finalized epoch you've synced to so far (epoch 2834)
- head tells you the most recent slot you've synced to so far (the 2nd slot of epoch 2836)
- sync tells you how fast you're syncing right now (15.4923 blocks per second), your average sync speed since you started (7.7398 blocks per second), the time left until you're fully synced (01d17h43m), and how many blocks you've synced so far (90724), along with information about 10 sync workers linked to the 10 most performant peers you are currently connected to (represented by a string of letters and a number).
The string of letters -- what we call the sync worker map (in the above case PPPPPDDDDP) -- represents the status of the sync workers mentioned above, where:
s - sleeping (idle),
w - waiting for a peer from PeerPool,
R - requesting blocks from peer
D - downloading blocks from peer
P - processing/verifying blocks
U - updating peer's status information
The number following it (in the above case represented by 10
) represents the number of workers that are currently active (i.e not sleeping or waiting for a peer).
Note: You can also use the RPC calls outlined in the API page to retrieve similar information.
Recover lost keys and generate new ones
Your mnemonic can be used to recover lost keys and generate new ones.
Every time you generate a keystore from your mnemonic, that keystore is assigned an index. The first keystore you generate has index 0, the second index 1, etc. You can recover any key using your mnemonic and that key's index. For more on how keys are derived, see this excellent post.
To stay consistent with the rest of the book, we'll take you through how to do this using the deposit-cli's binary executable.
Specifically, we'll be using the existing-mnemonic
command. Here's a description of the command from the deposit-cli's README:
This command is used to re-generate or derive new keys from your existing mnemonic. Use this command, if (i) you have already generated keys with this CLI before, (ii) you want to reuse your mnemonic that you know is secure that you generated elsewhere (e.g. reusing your eth1 mnemonic), or (iii) you lost your keystores and need to recover your keys.
Recover existing key
⚠️ Recovering validator keys from a mnemonic should only be used as a last resort. Exposing your mnemonic to a computer at any time puts it at risk of being compromised. Your mnemonic is not encrypted and if leaked, can be used to steal your funds.
N.B. the commands below assume you are trying to recover your original key, hence --validator_start_index has been set to 0.
Run the following command from the directory which contains the deposit
executable:
Prater
./deposit existing-mnemonic \
--validator_start_index 0 \
--num_validators 1 \
--chain prater
Mainnet
./deposit existing-mnemonic \
--validator_start_index 0 \
--num_validators 1 \
--chain mainnet
You'll be prompted to enter your mnemonic, and a new password for your keystore.
Check that the validator_keys
directory contains your extra keystore.
Copy the validator_keys
directory to nimbus-eth2
and then follow the instructions here. Your key will be added to your node on next restart.
Generate another key
⚠️ If you wish to generate another validator key, you must take great care to not generate a copy of your original key. Running the same key on two different validator clients will likely get you slashed.
N.B. the commands below assume you already have one key and wish to generate a second, hence --validator_start_index has been set to 1 (as 0 would be the original key).
Run the following command from the directory which contains the deposit
executable:
Prater
./deposit existing-mnemonic \
--validator_start_index 1 \
--num_validators 1 \
--chain prater
Mainnet
./deposit existing-mnemonic \
--validator_start_index 1 \
--num_validators 1 \
--chain mainnet
You'll be prompted to enter your mnemonic, and a new password for your keystore.
Check that the validator_keys
directory contains an extra keystore.
Copy the validator_keys
directory to nimbus-eth2
.
Make sure you've made a deposit for your new keystore, and then follow the instructions here. Your key will be added to your node on next restart.
Perform a voluntary exit
⚠️ Voluntary exits are irreversible. You won't be able to validate again with the same key. And you won't be able to withdraw your stake until the Eth1 and Eth2 merge. Note that voluntary exits won't be processed if the chain isn't finalising.
To perform a voluntary exit, make sure your beacon node is running with the --rpc
option enabled (e.g. ./run-mainnet-beacon-node.sh --rpc
), then run:
Prater
build/nimbus_beacon_node deposits exit \
--validator=<VALIDATOR_PUBLIC_KEY> \
--data-dir=build/data/shared_prater_0
Mainnet
build/nimbus_beacon_node deposits exit \
--validator=<VALIDATOR_PUBLIC_KEY> \
--data-dir=build/data/shared_mainnet_0
Note: Make sure your <VALIDATOR_PUBLIC_KEY> is prefixed with 0x. In other words, the public key should look like 0x95e3...
rest-url parameter
As of v1.7.0, the deposits exit command accepts a --rest-url parameter. This means you can issue exits with any REST API compatible beacon node.
Add an additional validator
To add an additional validator, just follow the same steps as you did when you added your first.
You'll have to restart the beacon node for the changes to take effect.
Note that a single Nimbus instance is able to handle multiple validators.
Monitor attestation performance
Use the ncli_db validatorPerf command to create a report for the attestation performance of your validator over time.
Steps
Make sure you're in the nimbus-eth2
repository.
1. Build ncli_db
The first step is to build ncli_db:
make ncli_db
2. View options
To view the options available to you, run:
build/ncli_db --help
At the top you should see
ncli_db [OPTIONS]... command
The following options are available:
--db Directory where `nbc.sqlite` is stored.
--network The Eth2 network preset to use.
Where:
- The `network` can be either `mainnet` or `prater`
- The default location of the `db` is either `build/data/shared_mainnet_0/db` or `build/data/shared_prater_0/db`
Near the bottom, you should see
ncli_db validatorPerf [OPTIONS]...
The following options are available:
--start-slot Starting slot, negative = backwards from head [=-128 * SLOTS_PER_EPOCH.int64].
--slots Number of slots to run benchmark for, 0 = all the way to head [=0].
Use `--start-slot` and `--slots` to restrict the analysis to a specific slot range.
3. Run
To view the performance of all validators on Prater so far across the entire block range stored in your database, run:
build/ncli_db validatorPerf \
--network=prater \
--db=build/data/shared_prater_0/db
You should see output that looks like the following:
validator_index,attestation_hits,attestation_misses,head_attestation_hits,head_attestation_misses,target_attestation_hits,target_attestation_misses,delay_avg,first_slot_head_attester_when_first_slot_empty,first_slot_head_attester_when_first_slot_not_empty
0,128,0,127,1,128,0,1.0078125,0,3
1,128,0,125,3,127,1,1.0078125,0,2
2,128,0,127,1,127,1,1.0078125,0,5
...
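If you'd like to post-process this report, the CSV can be fed to standard tools. As a rough sketch (the `perf.csv` filename is an assumption - save the command's output there first), the following computes each validator's attestation hit rate:

```shell
# Compute per-validator attestation hit rate: hits / (hits + misses).
# Column 2 is attestation_hits and column 3 is attestation_misses,
# per the header row above. perf.csv is a hypothetical filename.
awk -F, 'NR > 1 && $2 + $3 > 0 {
  printf "validator %s: %.1f%% hit rate\n", $1, 100 * $2 / ($2 + $3)
}' perf.csv
```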
4. Adjust to target a specific block range
To restrict the analysis to the performance between slots 0 and 128, say, run:
build/ncli_db validatorPerf \
--network=prater \
--db=build/data/shared_prater_0/db \
--start-slot=0 \
--slots=128
5. Compare my validators to the global average
We'll use Paul Hauner's wonderful workbook as a template. This workbook consists of three inter-related spreadsheets - `Summary`, `My Validators`, and `datasource`.
- Make a copy of the document
- Remove the table entries in `My Validators` and delete everything in the `datasource` sheet
- Import the output from `validatorPerf` to `datasource` - the easiest way to do this is to pipe the output to a `csv`, remove the first few lines, and import the `csv` into `datasource`
- Manually copy over your validator(s) to the `My Validators` sheet - the easiest way to find your validator's `validator_index` is to search for it by its public key on beaconcha.in (for example, this validator's index is 115733)
- Go to the `Summary` page and view your results
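The "pipe the output to a csv" step can be sketched like this (the `datasource.csv` name is illustrative, and the filter assumes data rows start with a digit while everything else is status output):

```shell
# Run the report and keep only the CSV header and numeric data rows,
# discarding any status lines printed before them.
build/ncli_db validatorPerf \
  --network=prater \
  --db=build/data/shared_prater_0/db \
  | grep -E '^(validator_index|[0-9])' > datasource.csv
```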
Resources
The workbook's method is explained here.
Validator monitoring
⚠️ This feature is currently in BETA - implementation details such as metric names and counters may change in response to community feedback.
The validator monitoring feature allows for tracking the life-cycle and performance of one or more validators in detail.
Monitoring can be carried out for any validator, with slightly more detail for validators that are running in the same beacon node.
Every time the validator performs a duty, the duty is recorded and the monitor keeps track of the reward-related events for having performed it. For example:
- When attesting, the attestation is added to an aggregate, then a block, before a reward is applied to the state
- When performing sync committee duties, the sync committee message follows the same path
Validator actions can be traced either through logging, or comprehensive metrics that allow for creating alerts in monitoring tools.
The metrics are broadly compatible with Lighthouse, thus dashboards and alerts can be used with either client with minor adjustments.
Enabling validator monitoring
The monitor can be enabled either for all keys that are used with a particular beacon node, or for a specific list of validators, or both.
# Enable automatic monitoring of all validators used with this beacon node
./run-mainnet-beacon-node.sh --validator-monitor-auto
# Enable monitoring of one or more specific validators
./run-mainnet-beacon-node.sh \
--validator-monitor-pubkey=0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c \
--validator-monitor-pubkey=0xb2ff4716ed345b05dd1dfc6a5a9fa70856d8c75dcc9e881dd2f766d5f891326f0d10e96f3a444ce6c912b69c22c6754d
# Publish metrics as totals for all monitored validators instead of each validator separately - used for limiting the load on metrics when monitoring many validators
./run-mainnet-beacon-node.sh --validator-monitor-totals
Understanding monitoring
When a validator performs a duty, such as signing an attestation or a sync committee message, this is broadcast to the network. Other nodes pick it up and package the message into an aggregate and later a block. The block is included in the canonical chain and a reward is given two epochs (~13 minutes) later.
The monitor tracks these actions and will log each step at the `INF` level. If any step is missed, a `NOT` (notice) log is shown instead.
The typical lifecycle of an attestation might look something like the following:
INF 2021-11-22 11:32:44.228+01:00 Attestation seen topics="val_mon" attestation="(aggregation_bits: 0b0000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, data: (slot: 2656363, index: 11, beacon_block_root: \"bbe7fc25\", source: \"83010:a8a1b125\", target: \"83011:6db281cd\"), signature: \"b88ef2f2\")" src=api epoch=83011 validator=b93c290b
INF 2021-11-22 11:32:51.293+01:00 Attestation included in aggregate topics="val_mon" aggregate="(aggregation_bits: 0b1111111101011111001101111111101100111111110100111011111110110101110111111010111111011101111011101111111111101111100001111111100111, data: (slot: 2656363, index: 11, beacon_block_root: \"bbe7fc25\", source: \"83010:a8a1b125\", target: \"83011:6db281cd\"), signature: \"8576b3fc\")" src=gossip epoch=83011 validator=b93c290b
INF 2021-11-22 11:33:07.193+01:00 Attestation included in block attestation_data="(slot: 2656364, index: 9, beacon_block_root: \"c7761767\", source: \"83010:a8a1b125\", target: \"83011:6db281cd\")" block_slot=2656365 inclusion_lag_slots=0 epoch=83011 validator=b65b6e1b
The lifecycle of a particular message can be traced by following the `epoch=... validator=...` fields in the message.
Failures at any point are recorded at a higher logging level, such as `NOT` (notice):
NOT 2021-11-17 20:53:42.108+01:00 Attestation failed to match head topics="chaindag" epoch=81972 validator=...
Failures are reported with a lag of two epochs (~13 minutes) - to find potential root causes, examine the logs from the epoch given in the failure message.
⚠️ It should be noted that metrics are tracked for the current history - in the case of a reorg on the chain - in particular a deep reorg - no attempt is made to revisit previously reported values. In the case that finality is delayed, the risk of stale metrics increases.
Likewise, many metrics, such as aggregation inclusion, reflect conditions on the network - it may happen that the same message is counted more than once under certain conditions.
Monitoring metrics
The full list of metrics supported by the validator monitoring feature can be seen in the source code or by examining the metrics output:
curl -s localhost:8008/metrics | grep HELP.*validator_
Upgrade / downgrade Nimbus
Make sure you stay on the lookout for any critical updates to Nimbus. The best way to do so is through the announcements channel on our Discord. The release page can be found here.
Note: If your beacon node is already running, you'll need to restart it for the changes to take effect.
To update to the latest version, either download the binary or compile the beacon node release (see below).
Tip: To check which version of Nimbus you're currently running, run
build/nimbus_beacon_node --version
Download the binary
Open the latest Nimbus release and copy the link for the file that works on your system.
wget <insert download link here>
tar -xzf nimbus-eth2_Linux_arm64v8*.tar.gz -C nimbus-eth2
rm nimbus-eth2_Linux_arm64v8*.tar.gz
Compile the beacon node release
Run:
git pull && make update
Followed by:
make nimbus_beacon_node
Now, restart your node.
Tip: In order to minimise downtime, we recommend updating and rebuilding the beacon node before restarting.
Urgency guidelines
As of `v1.4.0`, releases are marked with the following tags:
- `low-urgency`: update at your own convenience, sometime within our normal update cycle of two weeks
- `medium-urgency`: may contain an important stability fix; it is better to update sooner rather than later
- `high-urgency`: update as soon as you can; this is a critical update required for Nimbus to function correctly
Install a specific version
Occasionally you may need to either upgrade or downgrade to a specific version of Nimbus.
To pull a specific version of Nimbus (e.g. `v1.3.0`), run:
git checkout v1.3.0 && make update
Followed by:
make nimbus_beacon_node
Now, restart your node.
Note: Alternatively, you can grab the appropriate binary release - create a backup of your `build` folder, then download the appropriate binary from here: https://github.com/status-im/nimbus-eth2/releases/tag/v1.3.0
Go back to stable
If you need to go back to the latest (stable) version, run:
git checkout stable && make update
Followed by:
make nimbus_beacon_node
Don't forget to restart your node.
Run an Execution layer node
In order to process incoming validator deposits from the Execution layer, you'll need to run an EL client in parallel to your CL client.
On this page we provide instructions for using Geth (however, any reputable EL client should do the trick).
Note: If you have a > 500GB SSD, and your hardware can handle it, we strongly recommend running your own eth1 client. This will help ensure the network stays as decentralised as possible. If you can't, however, the next best option is to set up a third-party provider like Infura.
Nimbus
In parallel to `nimbus-eth2`, we are working hard on our EL client. While this is very much a project in development (i.e. not yet ready for public consumption), we welcome you to experiment with it.
Nethermind
TBC
Geth
1. Install Geth
If you're running MacOS, follow the instructions listed here to install geth. Otherwise see here.
2. Start Geth
Once you have geth installed, use the following command to start your eth1 node:
Testnet
geth --goerli --ws
Mainnet
geth --ws
Note: The `--ws` flag is needed to enable the WebSocket RPC API. This allows Nimbus to query the eth1 chain using Web3 API calls.
3. Leave Geth running
Let it sync - Geth uses a fast sync mode by default. It may take anywhere between a few hours and a couple of days.
N.B. It is safe to run Nimbus and start validating even if Geth hasn't fully synced yet
You'll know Geth has finished syncing when you start seeing logs that look like the following:
INFO [05-29|01:14:53] Imported new chain segment blocks=1 txs=2 mgas=0.043 elapsed=6.573ms mgasps=6.606 number=3785437 hash=f72595…c13f23
INFO [05-29|01:15:08] Imported new chain segment blocks=1 txs=3 mgas=0.067 elapsed=7.639ms mgasps=8.731 number=3785441 hash=be7e55…a8c1c7
INFO [05-29|01:15:25] Imported new chain segment blocks=1 txs=21 mgas=1.084 elapsed=33.610ms mgasps=32.264 number=3785442 hash=fd54be…79b047
INFO [05-29|01:15:42] Imported new chain segment blocks=1 txs=26 mgas=0.900 elapsed=26.209ms mgasps=34.335 number=3785443 hash=2504ff…119622
INFO [05-29|01:15:59] Imported new chain segment blocks=1 txs=12 mgas=1.228 elapsed=22.693ms mgasps=54.122 number=3785444 hash=951dfe…a2a083
INFO [05-29|01:16:05] Imported new chain segment blocks=1 txs=3 mgas=0.065 elapsed=5.885ms mgasps=11.038 number=3785445 hash=553d9e…fc4547
INFO [05-29|01:16:10] Imported new chain segment blocks=1 txs=0 mgas=0.000 elapsed=5.447ms mgasps=0.000 number=3785446 hash=5e3e7d…bd4afd
INFO [05-29|01:16:10] Imported new chain segment blocks=1 txs=1 mgas=0.021 elapsed=7.382ms mgasps=2.845 number=3785447 hash=39986c…dd2a01
INFO [05-29|01:16:14] Imported new chain segment blocks=1 txs=11 mgas=1.135 elapsed=22.281ms mgasps=50.943 number=3785444 hash=277bb9…623d8c
Geth accepts connections from the loopback interface (`127.0.0.1`), with default WebSocket port `8546`. This means that your default Web3 provider URL should be: `ws://127.0.0.1:8546`
Obtain Goerli ETH
To participate in an eth2 testnet, you need to stake 32 testnet ETH. You can request this testnet ETH by joining the ethstaker discord - look for the `#request-goerli-eth` channel.
Set up a systemd service
This page will take you through how to set up a `systemd` service for your beacon node.
Systemd is used to have a command or program run when your device boots (i.e. to add it as a service). Once this is done, you can start, stop, enable, or disable it from the Linux prompt.
`systemd` is a service manager designed specifically for Linux. There is no port to macOS. You can get more information from https://www.raspberrypi.org/documentation/linux/usage/systemd.md or https://fedoramagazine.org/what-is-an-init-system/
1. Create a systemd service
⚠️ If you wish to run the service with metrics enabled, you'll need to replace `--metrics:off` with `--metrics:on` in the service file below. See here for more on metrics.
Create a `systemd` service unit file -- `nimbus-eth2-prater.service` -- and save it in `/lib/systemd/system/` (your Linux distribution might recommend another default - for example archlinux recommends `/etc/systemd/system/`).
The contents of the file should look like this:
[Unit]
Description=Nimbus beacon node
[Service]
WorkingDirectory=<BASE-DIRECTORY>
ExecStart=<BASE-DIRECTORY>/build/nimbus_beacon_node \
--non-interactive \
--network=prater \
--data-dir=build/data/shared_prater_0 \
--web3-url=<WEB3-URL> \
--rpc:on \
--metrics:off
User=<USERNAME>
Group=<USERNAME>
Restart=always
[Install]
WantedBy=default.target
Where you should replace:
- `<BASE-DIRECTORY>` with the location of the `nimbus-eth2` repository on your device.
- `<USERNAME>` with the username of the system user responsible for running the launched processes.
- `<WEB3-URL>` with the WebSocket JSON-RPC URL you are planning to use.
N.B. If you're running Nimbus on a Pi, your `<BASE-DIRECTORY>` is `/home/pi/nimbus-eth2/` and your `<USERNAME>` is `pi`.
If you want to run on mainnet, simply replace all instances of `prater` with `mainnet`.
2. Notify systemd of the newly added service
sudo systemctl daemon-reload
3. Start the service
sudo systemctl enable nimbus-eth2-prater --now
4. Monitor the service
sudo journalctl -u nimbus-eth2-prater.service
This will show you the Nimbus logs at the default setting -- it should include regular "slot start" messages which will show your sync progress.
To rewind logs - by one day, say - run:
sudo journalctl -u nimbus-eth2-prater.service --since yesterday
For more options, see here.
Further examples
- A systemd service file by Pawel Bylica which allows you to start two services at the same time: e.g. `[email protected]` and `[email protected]`.
Log rotation
Nimbus logs are written to `stdout`, and can be redirected to a file. Writing to a file for a long-running process may lead to difficulties when the file grows large. This is typically solved with a log rotator. A log rotator is responsible for switching the written-to file, as well as compressing and removing old logs.
Using logrotate
logrotate provides log rotation and compression. The corresponding package will install its Cron hooks (or systemd timer) -- all you have to do is add a configuration file for Nimbus in `/etc/logrotate.d/nimbus-eth2`:
/var/log/nimbus-eth2/*.log {
compress
missingok
copytruncate
}
The above assumes you've configured Nimbus to write its logs to `/var/log/nimbus-eth2/` (usually by redirecting `stdout` and `stderr` from your init script).
`copytruncate` is required because, when it comes to moving the log file, `logrotate`'s default behaviour requires application support for re-opening that log file at runtime (something which is currently lacking). So, instead of a move, we tell `logrotate` to do a copy and a truncation of the existing file. A few log lines may be lost in the process.
You can control rotation frequency and the maximum number of log files kept by using the global configuration file - `/etc/logrotate.conf`:
# rotate daily
daily
# only keep logs from the last 7 days
rotate 7
Using rotatelogs
rotatelogs captures `stdout` logging and redirects it to a file, rotating and compressing on the fly.
It is available on most servers and can be used with Docker, systemd and manual setups to write rotated log files.
In particular, when `systemd` and its accompanying `journald` log daemon are used, this setup avoids clogging the system log by keeping the Nimbus logs in a separate location.
Compression
`rotatelogs` works by reading `stdin` and redirecting it to a file based on a name pattern. Whenever the log is about to be rotated, the application invokes a shell script with the old and new log files. Our aim is to compress the log file to save space. The `nimbus-eth2` repo provides a helper script that does this:
# Create a rotation script for rotatelogs
# (the heredoc delimiter is quoted so that $2 is written literally
# into the script instead of being expanded here)
cat << 'EOF' > rotatelogs-compress.sh
#!/bin/sh
# Helper script for Apache rotatelogs to compress log files on rotation - $2 contains the old log file name
if [ -f "$2" ]; then
  # "nice" prevents hogging the CPU with this low-priority task
  nice gzip -9 "$2"
fi
EOF
chmod +x rotatelogs-compress.sh
Run
The final step is to redirect logs to rotatelogs
using a pipe when starting Nimbus:
build/nimbus_beacon_node \
--network:prater \
--web3-url="$WEB3URL" \
--data-dir:$DATADIR 2>&1 | rotatelogs -L "$DATADIR/nbc_bn.log" -p "/path/to/rotatelogs-compress.sh" -D -f -c "$DATADIR/log/nbc_bn_%Y%m%d%H%M%S.log" 3600
The options used in this example do the following:
- `-L nbc_bn.log` - symlinks to the latest log file, for use with `tail -F`
- `-p "/path/to/rotatelogs-compress.sh"` - runs `rotatelogs-compress.sh` when rotation is about to happen
- `-D` - creates the `log` directory if needed
- `-f` - opens the log immediately when starting `rotatelogs`
- `-c "$DATADIR/log/nbc_bn_%Y%m%d%H%M%S.log"` - includes the timestamp in the log filename
- `3600` - rotates logs every hour (3600 seconds)
Deleting old logs
`rotatelogs` will not do this for you, so you'll need a Cron script (or systemd timer):
# delete log files older than 7 days
find "$DATADIR/log" -name 'nbc_bn_*.log' -mtime +7 -exec rm '{}' \+
Verify the integrity of Nimbus
We've recently added checksums to the end of our release notes (a practice we will be continuing from now on). Please make sure you get into the habit of verifying these 🙏
For those of you who are unfamiliar, a checksum is a special type of hash used to verify the integrity of a file. Verifying a checksum ensures there was no corruption or manipulation during the download and that the file was downloaded completely and correctly. For a short and simple guide on how to do so, see here.
In the case of the v1.1.0 release for example, the SHA512 checksums are:
# Linux AMD64
8d553ea5422645b5f06001e7f47051706ae5cffd8d88c45e4669939f3abb6caf41a2477431fce3e647265cdb4f8671fa360d392f423ac68ffb9459607eaab462 nimbus_beacon_node
# Linux ARM64
93ffd03a0ce67f7d035e3dc45e97de3c2c9a05a8dd0c6d5f45402ddb04404dc3cf15b80fee972f34152ef171ce97c40f794448bc779ca056081c945f71f19788 nimbus_beacon_node
# Linux ARM
f2e75f3fae2aea0a9f8d45861d52b0e2546c3990f453b509fab538692d18c64e65f58441c5492064fc371e0bc77de6bab970e05394cfd124417601b55cb4a825 nimbus_beacon_node
# Windows AMD64
fd68c8792ea60c2c72e9c2201745f9698bfd1dae4af4fa9e1683f082109045efebd1d80267f13cafeb1cd7414dc0f589a8a73f12161ac2758779369289d5a832 nimbus_beacon_node
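To verify a download against these checksums, you can use `sha512sum` on Linux (`shasum -a 512` on macOS). A minimal sketch - replace the placeholder with the real checksum from the release notes:

```shell
# Write the expected "<checksum>  <filename>" pair to a file, then let
# sha512sum compare it against the downloaded binary.
EXPECTED="<checksum from the release notes>"
printf '%s  nimbus_beacon_node\n' "$EXPECTED" > nimbus.sha512
sha512sum -c nimbus.sha512
```

On success, `sha512sum -c` prints `nimbus_beacon_node: OK` and exits with status 0.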
Back up your database
The best way to do this is to simply copy it over: you'll find it either in `build/data/shared_mainnet_0/db/` (or `shared_prater_0` if you're running Prater), or in the directory you supplied to the `--data-dir` argument when you launched Nimbus.
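A minimal backup sketch, assuming a default mainnet data directory (adjust `DATA_DIR` to your own `--data-dir`). Ideally run it while the beacon node is stopped, so the database isn't mid-write:

```shell
# Copy the beacon chain database into a dated backup directory.
DATA_DIR="build/data/shared_mainnet_0"   # assumption: default mainnet layout
BACKUP_DIR="backups/db-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
cp -a "$DATA_DIR/db/." "$BACKUP_DIR/"
```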
Logging
:warning: The logging options outlined here are based on a preview feature, and are subject to change
Nimbus offers several options for logging - by default, logs are written to stdout using the chronicles `textlines` format, which is convenient to read and can be used with tooling for heroku/logfmt.
Change log level
You can customise Nimbus' verbosity with the `--log-level` option.
For example:
./run-mainnet-beacon-node.sh --log-level=WARN
The default value is `INFO`.
Possible values (in order of decreasing verbosity) are:
TRACE
DEBUG
INFO
NOTICE
WARN
ERROR
FATAL
NONE
Change logging style
Nimbus supports three log formats: `colors`, `nocolors` and `json`. In `auto` mode, logs will be printed using either `colors` or `nocolors`.
You can choose a log format with the `--log-format` option, which also understands `auto` and `none`:
./run-mainnet-beacon-node.sh --log-format=none # disable logging to std out
./run-mainnet-beacon-node.sh --log-format=json # print json logs, one line per item
Logging to a file
To send logs to a file, you can redirect the stdout logs:
# log json to filename.jsonl
./run-mainnet-beacon-node.sh --log-format=json > filename.jsonl
We recommend keeping an eye on the growth of this file with a log rotator. Logs are written in the "JSON Lines" format - one `json` entry per line.
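One advantage of JSON Lines is that the file is easy to filter with standard tools. For example, to pull out warning-and-above entries - note that the `"lvl"` field name and the `WRN`/`ERR`/`FAT` values are assumptions about the chronicles JSON output, so check one of your own log lines first:

```shell
# Show only warning, error and fatal entries from a JSON-lines log.
# "lvl" / "WRN" / "ERR" / "FAT" are assumed chronicles field names.
grep -E '"lvl": *"(WRN|ERR|FAT)"' filename.jsonl
```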
Email notifications
You can create an account on beaconcha.in to set up email notifications in case your validator loses balance (goes offline), or gets slashed.
Tip: If your validator loses balance for two epochs in a row, you may want to investigate. It's a strong signal that it may be offline.
1. Sign up at beaconcha.in/register
2. Type your validator's public key into the searchbar
3. Click on the bookmark icon
4. Tick the boxes and select Add To Watchlist
Graffiti
You can use your node's graffiti flag to make your mark on history and forever engrave some words of your choice into an Ethereum block. You will be able to see it using the block explorer.
To do so on Prater, run:
./run-prater-beacon-node.sh --graffiti="<YOUR_WORDS>"
To do so on Mainnet, run:
./run-mainnet-beacon-node.sh --graffiti="<YOUR_WORDS>"
Optimise for profitability
Key insights:
- Profitability depends heavily on the network and peer quality
- While block proposals are more lucrative than attestations, they are much rarer
Check for next action before restarting
To see when your validator is next due to make an attestation or proposal, pay attention to the `Slot end` messages in your logs:
INF 2021-05-31 17:46:11.094+02:00 Slot end
topics="beacnde" tid=213670 file=nimbus_beacon_node.nim:932
slot=1304329
nextSlot=1304330
head=cffee454:38460
headEpoch=1201
finalizedHead=077da232:38368
finalizedEpoch=1199
nextAttestationSlot=338638
nextProposalSlot=-1
nextActionWait=4m35s874ms405us837ns
Specifically, have a look at the `nextActionWait` time.
If you're concerned about missing an attestation or proposal, wait until `nextActionWait` is greater than 4 minutes or so before restarting Nimbus.
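If you script your restarts, the `nextActionWait` value can be extracted from a `Slot end` line with `sed` (the line below is the sample from above, shortened):

```shell
# Pull the nextActionWait value out of a "Slot end" log line.
log_line='INF 2021-05-31 17:46:11.094+02:00 Slot end slot=1304329 nextActionWait=4m35s874ms405us837ns'
printf '%s\n' "$log_line" | sed -n 's/.*nextActionWait=\([^ ]*\).*/\1/p'
# → 4m35s874ms405us837ns
```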
You can also use the `nimbus-eth2` API. For example, to check if your validator has a next proposal slot assigned, run:
curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_duties_proposer","params":['"${HEAD_EPOCH_NUMBER}"'],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq ".result[]" | grep ${PATTERN_WHICH_MATCHES_VALIDATOR_PUBLIC_KEYS}
Subscribe to all subnets
Launching the beacon node with the `--subscribe-all-subnets` option increases bandwidth and CPU usage, but helps the network and makes the block production algorithm perform slightly better.
To elaborate a little, without this option enabled Nimbus only listens to a subset of the attestation traffic - in particular, Nimbus doesn't listen to all unaggregated traffic but instead relies on peers to aggregate attestations on the subnets it doesn't subscribe to.
With this option enabled, Nimbus listens to all unaggregated channels (subscribes to all subnets). Practically speaking, this means that when producing a block, Nimbus can "top up" the aggregates that other peers have made with its own unaggregated attestations. This can lead to better packing in some cases, which can lead to slightly greater rewards.
Useful resources
Monitor the health of your node
The most important thing for the health, performance and stability of your node and the overall network is the strength of your node's network connectivity / peer count.
See here for our networking related tips and tricks.
Keep track of your attestation effectiveness
Attestation effectiveness is a metric that directly affects your validator rewards. In simple terms, an attestation is more valuable the sooner it is put into a block and included in the chain.
This interval is called the inclusion distance of an attestation. The smaller it is, the more profitable your validator will be. For a deeper understanding we highly recommend reading Attestant's wonderful blog post on the matter.
You can verify your validator's effectiveness on the beaconcha.in website.
Ideally you want to see a value above 80%.
While attestation effectiveness depends on a variety of factors - attestation network propagation, your network connectivity, and the peers you are connected to - your network connectivity is likely the most important factor you can control to improve this metric. Apart from the tips outlined in this guide, you could also experiment with subscribing to all subnets.
Monitor your system's network I/O usage
If you're a Linux user and want to track how much network I/O your system uses over time, you can install a nice utility called `vnstat`.
To install, run:
sudo apt install vnstat
To run it:
TBC - see here for more
Keep an eye on the logs
Keep an eye on the metrics
Grafana
Relevant REST API queries
External tools
beaconchain
Network setup
Nimbus will automatically connect to peers based on the health and quality of peers that it's already connected to. Depending on the network and the number of validators attached to the node, Nimbus may need anywhere from 10 to 60 peers connected to operate well.
In addition to making outgoing connections, the beacon node works best when others can connect to it - this speeds up the process of finding good peers.
To allow incoming connections, the node must be reachable via a public IP address. It must also be aware of this address, so that it can advertise it to its peers.
UPnP
By default, Nimbus uses UPnP to set up port forwarding and detect your external IP address. If you do not have UPnP enabled, you may need to pass additional command-line options to the node, as explained in subsequent sections.
Enabling UPnP is usually as simple as checking a box in your router's configuration. Unless it's a FRITZ!Box router, that is.
With this brand, you will also need to edit individual connections - in "Home Network" -> "Network" -> edit icon -> "Permit independent port sharing for this device". You might also want to enable "Always assign this network device the same IPv4 address", in case the setting is associated with IPs instead of MACs.
Monitor your Peer count
Note: As of `v1.7.0`, peer scoring has been fine-tuned. As such, `--max-peers` should not be set below 70. Note that lowering `max-peers` does not significantly improve bandwidth usage, but does increase the risk of missed attestations.
If your peer count is low (less than `15`) and/or you repeatedly see either of the following warnings:
Peer count low, no new peers discovered...
or
No peers for topic, skipping publish...
It means that Nimbus is unable to find a sufficient number of peers to guarantee stable operation, and you may miss attestations and blocks as a result.
Most commonly, this happens when your computer is not reachable from the outside and therefore won't be able to accept any incoming peer connections.
If you're on a home network, the fix here is to set up port forwarding (this may require you to pass the extip option and set enr-auto-update).
The first step however, is to check for incoming connections.
Check for incoming connections
To check if you have incoming connections set, run:
curl -s http://localhost:8008/metrics | grep libp2p_open_streams
In the output, look for a line that looks like:
libp2p_open_streams{type="ChronosStream",dir="in"}
If there are no `dir="in"` ChronosStreams, incoming connections are not working.
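To turn this check into a number, you can read the metric's value directly (a sketch, assuming the default metrics port `8008` used above):

```shell
# Read the number of incoming ChronosStream connections from the metrics page.
incoming=$(curl -s http://localhost:8008/metrics \
  | grep -F 'ChronosStream' | grep -F 'dir="in"' \
  | awk '{print $2}')
echo "incoming streams: ${incoming:-0}"
```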
N.B. You need to run the client with the `--metrics` option enabled in order for this to work.
Pass the extip option
If you have a static public IP address, use the `--nat:extip:$EXT_IP_ADDRESS` option to pass it to the client, where `$EXT_IP_ADDRESS` is your public IP (see here for how to determine your public IP address). For example, if your public IP address is `1.2.3.4`, you'd run:
./run-prater-beacon-node.sh --nat:extip:1.2.3.4
Note that this should also work with a dynamic IP address, but you will probably also need to pass `enr-auto-update` as an option to the client.
Set ENR auto update
The `--enr-auto-update` feature keeps your external IP address up to date based on information received from other peers on the network. This option is useful with ISPs that assign IP addresses dynamically.
In practice this means relaunching the beacon node with `--enr-auto-update:true` (pass it as an option in the command line).
Set up port forwarding
If you're running on a home network and want to ensure you are able to receive incoming connections you may need to set up port forwarding (though some routers automagically set this up for you).
Note: If you are running your node on a virtual public server (VPS) instance, you can safely ignore this section.
While the specific steps required vary based on your router, they can be summarised as follows:
- Determine your public IP address
- Determine your private IP address
- Browse to the management website for your home router (http://192.168.1.1 for most routers, https://192.168.178.1 for FRITZ!Box)
- Log in as admin
- Find the section to configure port forwarding
- Configure a port forwarding rule with the following values:
  - External port: `9000`
  - Internal port: `9000`
  - Protocol: `TCP`
  - IP Address: Private IP address of the computer running Nimbus
- Configure a second port forwarding rule with the following values:
  - External port: `9000`
  - Internal port: `9000`
  - Protocol: `UDP`
  - IP Address: Private IP address of the computer running Nimbus
Determine your public IP address
To determine your public IP address, visit http://v4.ident.me/ or run this command:
curl v4.ident.me
Determine your private IP address
To determine your private IP address, run the appropriate command for your OS:
Linux:
ip addr show | grep "inet " | grep -v 127.0.0.1
Windows:
ipconfig | findstr /i "IPv4 Address"
macOS:
ifconfig | grep "inet " | grep -v 127.0.0.1
Check open ports on your connection
Use this tool to check your external (public) IP address and detect open ports on your connection (Nimbus TCP and UDP ports are both set to `9000` by default).
Reading the logs
No peers for topic, skipping publish...
This is printed when the client lacks quality peers to publish attestations to - this is the most important indication that the node is having trouble keeping up. If you see this, you are missing attestations.
Peer count low, no new peers discovered...
This is a sign that you may be missing attestations.
No external IP provided for the ENR...
This message means that the software did not manage to find a public IP address (by either looking at your routed interface's IP address, and/or by attempting to get it from your gateway through UPnP or NAT-PMP).
Discovered new external address but ENR auto update is off...
It's possible that your ISP has changed your IP address without you knowing. The first thing to do is to try relaunching the beacon node with `--enr-auto-update:true` (pass it as an option in the command line).
If this doesn't fix the problem, the next thing to do is to check your external (public) IP address and detect open ports on your connection - you can use this site. Note that Nimbus `TCP` and `UDP` ports are both set to `9000` by default. See above for how to set up port forwarding.
Grafana and Prometheus
In this page we'll cover how to use Grafana and Prometheus to help you visualise important real-time metrics concerning your validator and/or beacon node.
Prometheus is an open-source systems monitoring and alerting toolkit. It runs as a service on your computer and its job is to capture metrics. You can find more information about Prometheus here.
Grafana is a tool for beautiful dashboard monitoring that works well with Prometheus. You can learn more about Grafana here.
Simple metrics
Run the beacon node with the --metrics
flag:
./run-prater-beacon-node.sh --metrics
And visit http://127.0.0.1:8008/metrics to see the raw metrics. You should see a plaintext page that looks something like this:
# HELP nim_runtime_info Nim runtime info
# TYPE nim_runtime_info gauge
nim_gc_mem_bytes 6275072.0
nim_gc_mem_occupied_bytes 1881384.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[digest.Eth2Digest, block_pools_types.BlockRef]"} 25165856.0
nim_gc_heap_instance_occupied_bytes{type_name="BlockRef"} 17284608.0
nim_gc_heap_instance_occupied_bytes{type_name="string"} 6264507.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[SelectorKey[asyncdispatch.AsyncData]]"} 409632.0
nim_gc_heap_instance_occupied_bytes{type_name="OrderedKeyValuePairSeq[Labels, seq[Metric]]"} 122720.0
nim_gc_heap_instance_occupied_bytes{type_name="Future[system.void]"} 79848.0
nim_gc_heap_instance_occupied_bytes{type_name="anon ref object from /Users/hackingresearch/nimbus/clone/nim-beacon-chain/vendor/nimbus-build-system/vendor/Nim/lib/pure/asyncmacro.nim(319, 33)"} 65664.0
nim_gc_heap_instance_occupied_bytes{type_name="anon ref object from /Users/hackingresearch/nimbus/clone/nim-beacon-chain/vendor/nimbus-build-system/vendor/Nim/lib/pure/asyncnet.nim(506, 11)"} 43776.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[byte]"} 37236.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[TrustedAttestation]"} 29728.0
...
Note: Metrics are by default only accessible from the machine the beacon node is running on - to fetch metrics from a remote machine, an SSH tunnel is recommended.
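Such a tunnel might look like the sketch below, where user and beacon-host are placeholders for your own SSH credentials:

```shell
# Forward local port 8008 to the metrics port on the beacon node's machine.
# "user" and "beacon-host" are placeholders - substitute your own values.
ssh -N -L 8008:127.0.0.1:8008 user@beacon-host
```

While the tunnel is open, visiting http://127.0.0.1:8008/metrics on your local machine serves the remote node's metrics.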
Unfortunately, this simple method only offers one snapshot in time (you'll need to keep refreshing to see the data update) which means it's impossible to see a useful history of the metrics. In short, it's far from optimal from an information design point of view.
In order to settle on a better solution, we'll need the help of two external projects -- Prometheus and Grafana.
Prometheus and Grafana
The following steps will take you through how to use Prometheus and Grafana to spin up a beautiful and useful monitoring dashboard for your validator and beacon node.
Steps
1. Download Prometheus
Use your favourite package manager to download Prometheus -- for example apt-get install prometheus
on Ubuntu, or brew install prometheus
on MacOS, should do the trick.
If you don't use a package manager, you can download the latest release directly from the Prometheus website. To extract it, run:
tar xvfz prometheus-*.tar.gz
cd prometheus-*
2. Copy the binary
The Prometheus server is a single binary called prometheus (or prometheus.exe on Microsoft Windows). Copy it over to /usr/local/bin
cp prometheus-2.20.1.linux-amd64/prometheus /usr/local/bin/
3. Run Prometheus with the default configuration file
Prometheus relies on a YAML configuration file to let it know where, and how often, to scrape data.
Example config file:
global:
scrape_interval: 12s
scrape_configs:
- job_name: "nimbus"
static_configs:
- targets: ['127.0.0.1:8008']
Save the above as prometheus.yml
in the nimbus-eth2
repo.
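If you prefer to create the file from the command line, a heredoc does the trick (this writes to the current directory, so run it from the nimbus-eth2 repo):

```shell
# Write the example Prometheus config shown above to ./prometheus.yml
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 12s

scrape_configs:
  - job_name: "nimbus"
    static_configs:
      - targets: ['127.0.0.1:8008']
EOF
```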
Then run Prometheus:
prometheus --config.file=./prometheus.yml --storage.tsdb.path=./prometheus
You should see the following confirmation in the logs:
level=info ts=2021-01-22T14:52:10.604Z caller=main.go:673 msg="Server is ready to receive web requests."
4. Download Grafana
Download the latest release of Grafana for your platform. You need version 7.2 or newer.
Note: If you use a package manager, you can also download Grafana that way -- for example
apt-get install grafana
on Ubuntu, or brew install grafana
on MacOS, should do the trick.
5. Install and start Grafana
Follow the instructions for your platform to install and start Grafana.
6. Configure login
Go to http://localhost:3000/, you should see a Grafana login screen that looks like this
Type in admin
for both the username and password. You'll be asked to change the password (we recommend you do so).
7. Add a data source
Hover your mouse over the gear icon in the left menu bar, and click on the Data Sources
option in the sub-menu that pops up.
Now click on the Add Data Source
button in the center of the screen
Select Prometheus
Enter http://localhost:9090
in the URL field
Set the "Scrape interval" field to the same value you used in the Prometheus config ("12s" in our example above).
Scroll to the bottom and click on Save and Test
If everything is working correctly you should see a green Data source is working
box pop up
8. Import a dashboard
Now, let's import a dashboard; hover your mouse over the +
icon in the left menu bar and select import
from the pop-up menu
Click on Upload JSON file
Select the beacon_nodes_Grafana_dashboard.json
from the nimbus-eth2/grafana/
folder and click on Import
You'll be directed to the dashboard where you'll be able to gain insights into the performance of nimbus-eth2
and your validators
Note: the dashboard is very much a work in progress. Some of the highlights right now include received and proposed blocks, received and sent attestations, peers, and memory and CPU usage stats. But keep an eye out for additional metrics in the near future.
And voila! That's all there is to it :)
Community dashboards
Joe Clapis
Joe – who’s done some brilliant work integrating Nimbus with Rocket Pool – has created a wonderful guide where he takes you through how to set up a Grafana server on your Pi – using his dashboard as an example.
In his words:
This captures just about every metric I think I’d like to see at a glance.
Whether or not you're running a Pi, we recommend you check out his guide here.
Metanull
A dashboard aimed primarily at users rather than developers.
Note that this dashboard does rely heavily on three prometheus exporter tools: node_exporter
for system metrics, json_exporter
for ETH price, and blackbox_exporter
for ping times.
The good news is that you don't need to use all these tools, as long as you take care of removing the related panels.
See here for a detailed guide explaining how to use it.
Enabling mobile alerts
Telegram
TODO
Supplying your own Infura endpoint
In a nutshell, Infura is a hosted ethereum node cluster that lets you make requests to the eth1 blockchain without requiring you to set up your own eth1 node.
While we do support Infura to process incoming validator deposits, we recommend running your own eth1 node to avoid relying on a third-party service.
Note: Nimbus currently supports remote Infura nodes and local Geth nodes. In the future, we plan on having our own eth1 client -- Nimbus 1 -- be the recommended default.
1. Visit Infura.io
Go to:
and click on Get Started For Free
2. Sign up
Enter your email address and create a password
3. Verify email address
You should have received an email from Infura in your inbox. Open it up and click on Confirm Email Address
4. Go to dashboard
This will take you to your Infura dashboard (https://infura.io/dashboard/)
5. Create your first project
Click on the first option (create your first project
) under Let's Get Started
Choose a name for your project
You'll be directed to the settings page of your newly created project
6. Select endpoint
⚠️ Warning: if you're connecting to mainnet, you should skip this step
If you're connecting to a testnet, in the KEYS
section, click on the dropdown menu to the right of ENDPOINTS
, and select GÖRLI
7. Copy one of the endpoints
You can use either endpoint, but we recommend you copy the wss (WebSocket) one
8. Run the beacon node
Launch the beacon node on your favourite testnet, passing in your websocket endpoint as the Web3 provider URL.
9. Check stats
Visit your project's stats page to see a summary of your eth1 related activity and method calls
That's all there is to it :)
Migrate from another client
This guide will take you through the basics of how to migrate to Nimbus from another client. See here for advanced options.
The main pain point involves the exporting and importing of the slashing protection database, since each client takes a slightly different approach here.
The most important takeaway is to ensure that two clients never validate with the same keys at the same time. In other words, you must ensure that your original client is stopped, and no longer validating, before importing your keys into Nimbus.
Please take your time to get this right. Don't hesitate to reach out to us in the
#helpdesk
channel of our Discord if you come across a stumbling block. We are more than happy to help guide you through the migration process. Given what's at stake, there is no such thing as a stupid question.
Step 1 - Sync the Nimbus beacon node
No matter which client you are migrating over from, the first step is to sync the Nimbus beacon node.
The easiest way to do this is to follow the beacon node quick start guide. Syncing the beacon node might take up to 30 hours depending on your hardware - you should keep validating using your current setup until it completes.
Once your Nimbus beacon node has synced and you're satisfied that it's working, move to Step 2.
Tip: See here for how to keep track of your syncing progress.
Alternatively, if you run the Nimbus beacon node with the
--rest
option enabled (e.g. ./run-mainnet-beacon-node.sh --rest
), you can obtain your node's syncing status by running:
curl -X GET http://localhost:5052/eth/v1/node/syncing
Look for an
"is_syncing":false
in the response to confirm that your node has synced.
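To check this from a script, a plain grep on the response body is enough. The JSON below is a hard-coded sample of the assumed response shape; in practice you would populate response from the curl call shown above:

```shell
# Sample response body (assumed shape); in practice use:
#   response=$(curl -s http://localhost:5052/eth/v1/node/syncing)
response='{"data":{"head_slot":"12345","sync_distance":"0","is_syncing":false}}'

if echo "$response" | grep -q '"is_syncing":false'; then
  echo "node is synced"
else
  echo "node is still syncing"
fi
```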
Step 2 - Stop your existing client and export your slashing protection history
From Prysm
1. Disable the Prysm validator client
Stop and disable the Prysm validator client (you can also stop the Prysm beacon node if you wish).
If you're using systemd and your service is called prysmvalidator
, run the following commands to stop and disable the service:
sudo systemctl stop prysmvalidator.service
sudo systemctl disable prysmvalidator.service
It's important that you disable the Prysm validator as well as stopping it, to prevent it from starting up again on reboot.
2. Export slashing protection history
Run the following to export your Prysm validator's slashing protection history:
prysm.sh validator slashing-protection-history export \
--datadir=/your/prysm/wallet \
--slashing-protection-export-dir=/path/to/export_dir
You will then find the slashing-protection.json
file in your specified /path/to/export_dir
folder.
Tip: To be extra sure that your validator has stopped, wait a few epochs and confirm that your validator has stopped attesting (check its recent history on beaconcha.in). Then go to step 3.
From Lighthouse
1. Disable the Lighthouse validator client
The validator client needs to be stopped in order to export, to guarantee that the data exported is up to date.
If you're using systemd and your service is called lighthousevalidator
, run the following command to stop and disable the service:
sudo systemctl stop lighthousevalidator
sudo systemctl disable lighthousevalidator
You may also wish to stop the beacon node:
sudo systemctl stop lighthousebeacon
sudo systemctl disable lighthousebeacon
It's important that you disable the service as well as stopping it, to prevent it from starting up again on reboot.
2. Export slashing protection history
You can export Lighthouse's database with this command:
lighthouse account validator slashing-protection export slashing-protection.json
This will export your history in the correct format to slashing-protection.json
.
Tip: To be extra sure that your validator has stopped, wait a few epochs and confirm that your validator has stopped attesting (check its recent history on beaconcha.in). Then go to step 3.
From Teku
1. Disable Teku
If you're using systemd and your service is called teku
, run the following command to stop and disable the service:
sudo systemctl stop teku
sudo systemctl disable teku
It's important that you disable the service as well as stopping it, to prevent it from starting up again on reboot.
2. Export slashing protection history
You can export Teku's database with this command:
teku slashing-protection export --data-path=/home/me/me_node --to=/home/slash/slashing-protection.json
Where:
--data-path
specifies the location of the Teku data directory.--to
specifies the file to export the slashing-protection data to (in this case/home/slash/slashing-protection.json
).
Tip: To be extra sure that your validator has stopped, wait a few epochs and confirm that your validator has stopped attesting (check its recent history on beaconcha.in). Then go to step 3.
From Nimbus
1. Disable the Nimbus validator client
Once your Nimbus beacon node on your new setup has synced and you're satisfied that it's working, stop and disable the Nimbus validator client on your current setup.
If you're using systemd and your service is called nimbus-eth2-mainnet
, run the following commands to stop and disable the service:
sudo systemctl stop nimbus-eth2-mainnet.service
sudo systemctl disable nimbus-eth2-mainnet.service
It's important that you disable the service as well as stopping it, to prevent it from starting up again on reboot.
2. Export slashing protection history
Run the following to export your Nimbus validator's slashing protection history:
build/nimbus_beacon_node slashingdb export slashing-protection.json
This will export your history in the correct format to slashing-protection.json
.
Tip: To be extra sure that your validator has stopped, wait a few epochs and confirm that your validator has stopped attesting (check its recent history on beaconcha.in). Then go to step 3.
Step 3 - Import your validator key(s) into Nimbus
To import your validator key(s), follow the instructions outlined here.
To check that your key(s) have been successfully imported, look for a file named after your public key in
build/data/shared_mainnet_0/secrets/
.
If you run into an error at this stage, it's probably because the wrong permissions have been set on either a folder or file. See here for how to fix this.
Step 4 - Import your slashing protection history
To import the slashing protection history you exported in step 2, from the nimbus-eth2
directory run:
build/nimbus_beacon_node slashingdb import path/to/export_dir/slashing-protection.json
Replacing path/to/export_dir
with the directory you specified when you exported your slashing protection history.
Step 5 - Start the Nimbus validator
Follow the instructions here to start your validator using our pre-built binaries.
If you prefer to use Docker, see here
For a quick guide on how to set up a systemd service, see here
Final thoughts
If you are unsure of the safety of a step, please get in touch with us directly on Discord. Additionally, we recommend testing that the migration works correctly on a testnet before going ahead on mainnet.
Validate with a Raspberry Pi: Guide
I expect the new Raspberry Pi 4 (4GB RAM option, external SSD) to handle an Eth2 validator node without breaking a sweat. That's $100 of hardware running at 10 Watts to support a 32 ETH node (currently ~$10K stake).
— Justin Ðrake (@drakefjustin) June 24, 2019
In addition to this guide, we highly recommend this wonderful and complementary resource by community member Joe Clapis.
Introduction
This page will take you through how to use your laptop to program your Raspberry Pi, get Nimbus running, and connect to the Prater testnet.
One of the most important aspects of the Raspberry Pi experience is trying to make it as easy as possible to get started. As such, we try our best to explain things from first-principles.
Prerequisites
- Raspberry Pi 4 (4GB RAM option)
- 64GB microSD Card
- microSD USB adapter
- 5V 3A USB-C charger
- Reliable Wifi connection
- Laptop
- Basic understanding of the command line
- 160GB SSD
⚠️ You will need an SSD to run Nimbus (without an SSD drive you have absolutely no chance of syncing the Ethereum blockchain). You have two options:
Use a USB portable SSD disk such as the Samsung T5 Portable SSD.
Use a USB 3.0 external hard drive case with an SSD disk. For example, Ethereum on Arm uses an Inateck 2.5 Hard Drive Enclosure FE2011. Make sure to buy a case with a UASP-compliant chip, particularly one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).
In both cases, avoid low-quality SSD disks (the SSD is a key component of your node and can drastically affect both performance and sync time). Keep in mind that you need to plug the disk into a USB 3.0 port (the blue port).
N.B. If you have a Raspberry Pi 4 and are getting bad speeds transferring data to/from USB 3.0 SSDs, please read this recommended fix.
1. Download Raspberry Pi Imager
Raspberry Pi Imager is a new imaging utility that makes it simple to manage your microSD card with Raspbian (the free Pi operating system based on Debian).
You can find the download link for your operating system here: Windows, macOS, Ubuntu.
2. Download the 64-bit Raspberry Pi OS (Beta)
You can find the latest version, here.
3. Plug in SD card
Use your microSD to USB adapter to plug the SD card into your computer.
4. Download Raspberry Pi OS
Open Raspberry Pi Imager and click on CHOOSE OS
Scroll down and click on Use custom
Find the OS you downloaded in step 2
4b. Write to SD card
Click on CHOOSE SD CARD. You should see a menu pop-up with your SD card listed -- Select it
Click on WRITE
Click YES
Make a cup of coffee :)
5. Set up wireless LAN
Since you have loaded Raspberry Pi OS onto a blank SD card, you will have two partitions. The first one, which is the smaller one, is the boot
partition.
Create a wpa_supplicant
configuration file in the boot
partition with the following content:
# wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>
network={
ssid="<Insert your Wifi network's name here>"
psk="<Insert your Wifi network's password here>"
}
Note: Don't forget to replace the placeholder
country
,ssid
, andpsk
values. See Wikipedia for a list of 2 letterISO 3166-1
country codes.
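If you're preparing the card from a terminal, you can generate the file with a heredoc; the COUNTRY, SSID, and PSK values below are placeholders you must replace with your own:

```shell
# Placeholder values -- replace these with your own before use.
COUNTRY="GB"
SSID="MyNetwork"
PSK="MyWifiPassword"

# Write wpa_supplicant.conf (run this from the root of the boot partition).
cat > wpa_supplicant.conf <<EOF
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=${COUNTRY}
network={
    ssid="${SSID}"
    psk="${PSK}"
}
EOF
```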
6. Enable SSH (using Linux or macOS)
You can access the command line of a Raspberry Pi remotely from another computer or device on the same network using SSH.
While SSH is not enabled by default, you can enable it by placing a file named ssh
, without any extension, onto the boot partition of the SD card.
When the Pi boots, it will look for the ssh
file. If it is found, SSH is enabled and the file is deleted. The content of the file does not matter; it can contain text, or nothing at all.
To create an empty ssh
file, from the root of the boot
partition, run:
touch ssh
7. Find your Pi's IP address
Since Raspberry Pi OS supports Multicast_DNS out of the box, you can reach your Raspberry Pi by using its hostname and the .local
suffix.
The default hostname on a fresh Raspberry Pi OS install is raspberrypi
, so any Raspberry Pi running Raspberry Pi OS should respond to:
ping raspberrypi.local
The output should look more or less as follows:
PING raspberrypi.local (195.177.101.93): 56 data bytes
64 bytes from 195.177.101.93: icmp_seq=0 ttl=64 time=13.272 ms
64 bytes from 195.177.101.93: icmp_seq=1 ttl=64 time=16.773 ms
64 bytes from 195.177.101.93: icmp_seq=2 ttl=64 time=10.828 ms
...
Keep note of your Pi's IP address. In the above case, that's 195.177.101.93
8. SSH (using Linux or macOS)
Connect to your Pi by running:
ssh pi@195.177.101.93
You'll be prompted to enter a password:
pi@195.177.101.93's password:
Enter the Pi's default password: raspberry
You should see a message that looks like the following:
Linux raspberrypi 5.4.51-v8+ #1333 SMP PREEMPT Mon Aug 10 16:58:35 BST 2020 aarch64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Aug 20 12:59:01 2020
SSH is enabled and the default password for the 'pi' user has not been changed.
This is a security risk - please login as the 'pi' user and type 'passwd' to set a new password.
Followed by a command-line prompt indicating a successful connection:
pi@raspberrypi:~ $
9. Increase swap size to 2GB
The first step is to increase the swap size to 2GB (2048MB).
Note: Swap acts as a breather for your system when RAM is exhausted. When RAM is exhausted, your Linux system uses part of the disk as memory and allocates it to the running application.
Use the Pi's built-in text editor nano to open up the swap file:
sudo nano /etc/dphys-swapfile
Change the value assigned to CONF_SWAPSIZE
from 100
to 2048
:
...
# set size to absolute value, leaving empty (default) then uses computed value
# you most likely don't want this, unless you have an special disk situation
CONF_SWAPSIZE=2048
...
Save (Ctrl+S
) and exit (Ctrl+X
).
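If you'd rather make the change non-interactively, a sed one-liner can do it. The sketch below works on a local copy of the file so you can inspect the result first; once you're happy, run the same sed command against /etc/dphys-swapfile with sudo:

```shell
# Work on a local copy first (fall back to a minimal sample file if
# /etc/dphys-swapfile doesn't exist on this machine).
cp /etc/dphys-swapfile ./dphys-swapfile 2>/dev/null || echo "CONF_SWAPSIZE=100" > ./dphys-swapfile

# Bump the swap size to 2048 MB and show the resulting line.
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' ./dphys-swapfile
grep '^CONF_SWAPSIZE=' ./dphys-swapfile
```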
10. Reboot
Reboot your Pi to have the above changes take effect:
sudo reboot
This will cause your connection to close. So you'll need to ssh
into your Pi again:
ssh pi@195.177.101.93
Note: Remember to replace
195.177.101.93
with the IP address of your Pi.
10b. Boot from external SSD
Follow this guide to copy the contents of your SD card over to your SSD, and boot your Pi from your SSD.
Tips:
Make sure you connect your SSD to the Pi's USB 3 port (the blue port).
If your Pi is headless (no monitor attached) you can use the
rpi-clone
repository to copy the contents of the SD over to the SSD; in a nutshell, replace steps 14 and 15 of the above guide with the following commands (which you should run from the Pi's home
directory):
git clone https://github.com/billw2/rpi-clone.git
cd rpi-clone
sudo cp rpi-clone rpi-clone-setup /usr/local/sbin
sudo rpi-clone-setup -t testhostname
rpi-clone sda
For more on
raspi-config
, see here.To shutdown your Pi safely, run
sudo shutdown -h now
Once you're done, ssh
back into your Pi.
11. Install the beacon node
Open the Nimbus eth2 releases page and copy the link for the file that starts with nimbus-eth2_Linux_arm64v8
.
Run this in your home directory to download nimbus-eth2:
mkdir nimbus-eth2
wget <insert download link here>
tar -xzf nimbus-eth2_Linux_arm64v8*.tar.gz -C nimbus-eth2
rm nimbus-eth2_Linux_arm64v8*.tar.gz
Now you can find the software in the nimbus-eth2 directory.
12. Copy signing key over to Pi
Note: If you haven't generated your validator key(s) and/or made your deposit yet, follow the instructions on this page before carrying on.
We'll use the scp
command to send files over SSH. It allows you to copy files between computers, say from your Raspberry Pi to your desktop/laptop, or vice-versa.
Copy the folder containing your validator key(s) from your computer to your pi
user's home folder by opening up a new terminal window and running the following command:
scp -r <VALIDATOR_KEYS_DIRECTORY> pi@195.177.101.93:
Note: Don't forget the colon (:) at the end of the command!
As usual, replace 195.177.101.93
with your Pi's IP address, and <VALIDATOR_KEYS_DIRECTORY>
with the full pathname of your validator_keys
directory (if you used the Launchpad command line app this would have been created for you when you generated your keys).
Tip: run
pwd
in yourvalidator_keys
directory to print the full pathname to the console.
13. Import signing key into Nimbus
To import your signing key into Nimbus, from the nimbus-eth2
directory run:
build/nimbus_beacon_node deposits import --data-dir=build/data/shared_prater_0 ../validator_keys
You'll be asked to enter the password you created to encrypt your keystore(s). Don't worry, this is entirely normal. Your validator client needs both your signing keystore(s) and the password encrypting it to import your key (since it needs to decrypt the keystore in order to be able to use it to sign on your behalf).
14. Connect to Prater
We're finally ready to connect to the Prater testnet!
Note: If you haven't already, we recommend registering for, and running, your own eth1 node in parallel. For instruction on how to do so, see this page.
To connect to Prater, run:
./run-prater-beacon-node.sh
You'll be prompted to enter a web3-provider url:
To monitor the Eth1 validator deposit contract, you'll need to pair
the Nimbus beacon node with a Web3 provider capable of serving Eth1
event logs. This could be a locally running Eth1 client such as Geth
or a cloud service such as Infura. For more information please see
our setup guide:
https://status-im.github.io/nimbus-eth2/eth1.html
Please enter a Web3 provider URL:
Enter your web3 endpoint.
15. Check for successful connection
If you look near the top of the logs printed to your console, you should see confirmation that your beacon node has started, with your local validator attached:
INF 2020-12-01 11:25:33.487+01:00 Launching beacon node
...
INF 2020-12-01 11:25:34.556+01:00 Loading block dag from database topics="beacnde" tid=19985314 file=nimbus_beacon_node.nim:198 path=build/data/shared_prater_0/db
INF 2020-12-01 11:25:35.921+01:00 Block dag initialized
INF 2020-12-01 11:25:37.073+01:00 Generating new networking key
...
NOT 2020-12-01 11:25:45.267+00:00 Local validator attached tid=22009 file=validator_pool.nim:33 pubkey=95e3cbe88c71ab2d0e3053b7b12ead329a37e9fb8358bdb4e56251993ab68e46b9f9fa61035fe4cf2abf4c07dfad6c45 validator=95e3cbe8
...
NOT 2020-12-01 11:25:59.512+00:00 Eth1 sync progress topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3836397 depositsProcessed=106147
NOT 2020-12-01 11:26:02.574+00:00 Eth1 sync progress topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3841412 depositsProcessed=106391
...
INF 2020-12-01 11:26:31.000+00:00 Slot start topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:505 lastSlot=96566 scheduledSlot=96567 beaconTime=1w6d9h53m24s944us774ns peers=7 head=b54486c4:96563 headEpoch=3017 finalized=2f5d12e4:96479 finalizedEpoch=3014
INF 2020-12-01 11:26:36.285+00:00 Slot end topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:593 slot=96567 nextSlot=96568 head=b54486c4:96563 headEpoch=3017 finalizedHead=2f5d12e4:96479 finalizedEpoch=3014
To keep track of your syncing progress, have a look at the output at the very bottom of the terminal window in which your validator is running. You should see something like:
peers: 15 ❯ finalized: ada7228a:8765 ❯ head: b2fe11cd:8767:2 ❯ time: 9900:7 (316807) ❯ sync: wPwwwwwDwwDPwPPPwwww:7:1.2313:1.0627:12h01m(280512)
Keep an eye on the number of peers your currently connected to (in the above case that's 15
), as well as your sync progress.
Note: 15 - 20 peers and an average sync speed of 0.5 - 1.0 blocks per second is normal on
Prater
with a Pi. If your sync speed is much slower than this, the root of the problem may be your USB3.0 to SSD adapter. See this post for a recommended workaround.
Mainnet advice
Whether or not your Pi is up to the task will depend on a number of factors such as SSD speed, network connectivity, etc. As such, it's best to verify performance on a testnet first.
The best thing you can do is to set your Pi to run Prater. If you have no trouble syncing and attesting on Prater, your setup should be more than good enough for mainnet as well (Mainnet is expected to use fewer resources).
We've been running lots of PIs and NanoPCs 24/7 for 3 years and never got a hardware fail. It is easy (and cheap) to get redundancy of components (even spare PIs in different locations, more of this to come).
— Ethereum on ARM (@EthereumOnARM) November 28, 2020
Although we don't expect a modern Pi to fail, we recommend buying a spare Pi, and enterprise grade SSD, on the off-chance it does; keep your original SD around, to make it easy for you to copy the image over.
Systemd
Now that you have Nimbus up and running, we recommend setting up a systemd service with an autorestart on boot (should you experience an unexpected power outage, this will ensure your validator restarts correctly).
Systemd will also ensure your validator keeps running when you exit your ssh session (Ctrl-C
) and/or switch off your laptop.
For the details on how to do this, see this page.
Overclocking
While you shouldn't need to, if you're feeling adventurous and want to try and squeeze out some extra performance out of your Pi's CPU, see this guide by Joe Clapis.
Note: we have since improved performance in several ways, which should make a vanilla Pi perform well. However, overclocking may still give some benefits; in particular, it gives you more headroom to deal with anomalies (such as network spam).
Nimbus binaries
We currently have binaries available for Linux AMD64
, ARM
and ARM64
, Windows AMD64
and macOS (AMD64
and ARM64
).
You can find the latest release here: https://github.com/status-im/nimbus-eth2/releases
Scroll to the bottom of the first (non-nightly) release you see, and click on Assets
. You should see a list that looks like the following:
Click on the tar.gz
file that corresponds to your OS and architecture, unpack the archive, read the README and run the binary directly (or through one of our provided wrapper scripts).
We've designed the build process to be reproducible. In practice, this means that anyone can verify that these exact binaries were produced from the corresponding source code commits. For more about the philosophy and importance of this feature see reproducible-builds.org.
For instructions on how to reproduce those binaries, see "README.md" inside the archive.
Docker images
Docker images for end-users are generated and published automatically to Docker Hub from the Nimbus-eth2 CI, by a GitHub action, whenever a new release is tagged in Git.
We have version-specific Docker tags (statusim/nimbus-eth2:amd64-v1.2.3
) and a tag for the latest image (statusim/nimbus-eth2:amd64-latest
).
These images are simply the contents of release tarballs inside a debian:bullseye-slim
image, running under a user imaginatively named user
, with UID:GID of 1000:1000.
The unpacked archive is in /home/user/nimbus-eth2
which is also the default WORKDIR. The default ENTRYPOINT is the binary itself: /home/user/nimbus-eth2/build/nimbus_beacon_node
Usage
You need to create an external data directory and mount it as a volume inside the container, with mounting point: /home/user/nimbus-eth2/build/data
mkdir data
docker run -it --rm -v ${PWD}/data:/home/user/nimbus-eth2/build/data statusim/nimbus-eth2:amd64-latest [nimbus_beacon_node args here]
Wrapper script
If you wish, you can choose to use a wrapper script instead:
mkdir data
docker run -it --rm -v ${PWD}/data:/home/user/nimbus-eth2/build/data -e WEB3_URL="wss://mainnet.infura.io/ws/v3/YOUR_TOKEN" --entrypoint /home/user/nimbus-eth2/run-mainnet-beacon-node.sh statusim/nimbus-eth2:amd64-latest [nimbus_beacon_node args here]
Docker compose
Our preferred setup is using docker-compose
. You can use one of our example configuration files as a base for your own custom configuration:
mkdir data
docker-compose -f docker-compose-example1.yml up --quiet-pull --no-color --detach
Note: The rather voluminous logging is done on
stdout
, so you might want to change the system-wide Docker logging defaults (which dumps everything in/var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log
) to something likesyslog
. We recommend using a log rotation system with appropriate intervals for logs of this size.
JSON-RPC API
The Nimbus JSON-RPC API has been deprecated and is scheduled for removal in version v22.6 of Nimbus (to be released in June 2022). If you are currently relying on the JSON-RPC API, please consider switching to the official REST API.
The JSON-RPC API
is a collection of APIs for querying the state of the application at runtime.
The API is based on an early version of the common beacon APIs with the exception that JSON-RPC
is used instead of http REST
(the method names, parameters and results are all the same except for the encoding / access method).
The JSON-RPC API
should not be exposed to the public internet.
Introduction
The nimbus-eth2 API is implemented using JSON-RPC 2.0. To query it, you can use a JSON-RPC library in the language of your choice, or a tool like curl to access it from the command line. A tool like jq is helpful to pretty-print the responses.
curl -d '{"jsonrpc":"2.0","id":"id","method":"peers","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
Before you can access the API, make sure it's enabled using the RPC flag (nimbus_beacon_node --rpc):
--rpc Enable the JSON-RPC server.
--rpc-port HTTP port for the JSON-RPC service.
--rpc-address Listening address of the RPC server.
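All of the calls below share the same JSON-RPC 2.0 envelope, so a small helper can cut down on the repetition. The rpc_payload function here is a convenience sketch for this guide, not part of Nimbus itself:

```shell
# Build a JSON-RPC 2.0 request body for the Nimbus API.
# $1: method name; $2: optional pre-formatted JSON params (comma-separated).
rpc_payload() {
  printf '{"jsonrpc":"2.0","id":1,"method":"%s","params":[%s]}' "$1" "${2:-}"
}

# Examples:
rpc_payload peers
rpc_payload get_v1_beacon_states_root '"finalized"'
```

With a node running, you would use it as: curl -d "$(rpc_payload peers)" -H 'Content-Type: application/json' localhost:9190 -s | jq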
One difference is that endpoints corresponding to specific ones from the spec currently have unwieldy names - for example, an endpoint such as getGenesis is currently named get_v1_beacon_genesis, which would map 1:1 to the actual REST path in the future - verbose, but unambiguous.
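To make the naming relationship concrete: for parameter-free endpoints, the REST path suffix can be derived mechanically from the JSON-RPC method name. The sketch below is our illustration, not an official tool, and endpoints with path parameters (such as stateId) do not map this simply:

```shell
# Derive the REST path suffix from a JSON-RPC method name:
# drop the leading get_/post_ verb and turn underscores into slashes.
# Prepend /eth/ for the full REST path, e.g. /eth/v1/beacon/genesis.
rpc_to_rest_path() {
  echo "$1" | sed -e 's/^get_//' -e 's/^post_//' -e 's|_|/|g'
}

rpc_to_rest_path get_v1_beacon_genesis
```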
Beacon chain API
get_v1_beacon_genesis
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_genesis","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/genesis -s | jq
get_v1_beacon_states_root
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_root","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/root -s | jq
get_v1_beacon_states_fork
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_fork","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/fork -s | jq
get_v1_beacon_states_finality_checkpoints
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_finality_checkpoints","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/finality_checkpoints -s | jq
get_v1_beacon_states_stateId_validators
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_validators","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/validators -s | jq
get_v1_beacon_states_stateId_validators_validatorId
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_validators_validatorId","params":["finalized", "100167"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/validators/100167 -s | jq
get_v1_beacon_states_stateId_validator_balances
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_validator_balances","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/validator_balances -s | jq
get_v1_beacon_states_stateId_committees_epoch
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_committees_epoch","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/states/finalized/committees -s | jq
get_v1_beacon_headers
get_v1_beacon_headers_blockId
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_headers_blockId","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/headers/finalized -s | jq
post_v1_beacon_blocks
curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_blocks","params":[{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body":{"randao_reveal":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","eth1_data":{"deposit_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","deposit_count":"1","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"graffiti":"string","proposer_slashings":[{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}],"attester_slashings":[{"attestation_1":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb
663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"attestation_2":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}}],"attestations":[{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}],"deposits":[{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc
59509846d6ec05345bd908eda73e670af888da41af171505"}}],"voluntary_exits":[{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST -d '{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body":{"randao_reveal":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","eth1_data":{"deposit_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","deposit_count":"1","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"graffiti":"string","proposer_slashings":[{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}],"attester_slashings":[{"attestation_1":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171
505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"attestation_2":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}}],"attestations":[{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}],"deposits":[{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}]
,"voluntary_exits":[{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}' -H 'Content-Type: application/json' http://localhost:5052/eth/v1/beacon/blocks -s | jq
get_v1_beacon_blocks_blockId
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_blocks_blockId","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v2/beacon/blocks/finalized -s | jq
get_v1_beacon_blocks_blockId_root
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_blocks_blockId_root","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/blocks/finalized/root -s | jq
get_v1_beacon_blocks_blockId_attestations
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_blocks_blockId_attestations","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/blocks/finalized/attestations -s | jq
post_v1_beacon_pool_attestations
curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_attestations","params":[{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST -d '[{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}]' -H 'Content-Type: application/json' http://localhost:5052/eth/v1/beacon/pool/attestations -s | jq
get_v1_beacon_pool_attester_slashings
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_pool_attester_slashings","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/pool/attester_slashings -s | jq
post_v1_beacon_pool_attester_slashings
curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_attester_slashings","params":[{"attestation_1":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"attestation_2":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST -d '{"attestation_1":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"attestation_2":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}}' -H 'Content-Type: application/json' http://localhost:5052/eth/v1/beacon/pool/attester_slashings -s | jq
get_v1_beacon_pool_proposer_slashings
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_pool_proposer_slashings","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/pool/proposer_slashings -s | jq
post_v1_beacon_pool_proposer_slashings
curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_proposer_slashings","params":[{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST -d '{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}' -H 'Content-Type: application/json' http://localhost:5052/eth/v1/beacon/pool/proposer_slashings -s | jq
get_v1_beacon_pool_voluntary_exits
curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_pool_voluntary_exits","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/beacon/pool/voluntary_exits -s | jq
post_v1_beacon_pool_voluntary_exits
curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_voluntary_exits","params":[{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST -d '{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}' -H 'Content-Type: application/json' http://localhost:5052/eth/v1/beacon/pool/voluntary_exits -s | jq
Beacon Node API
get_v1_node_identity
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_identity","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/identity -s | jq
get_v1_node_peers
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_peers","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/peers -s | jq
get_v1_node_peers_peerId
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_peers_peerId","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/peer/QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N -s | jq
get_v1_node_peer_count
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_peer_count","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/peer_count -s | jq
get_v1_node_version
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_version","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/version -s | jq
get_v1_node_syncing
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_syncing","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/syncing -s | jq
get_v1_node_health
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_health","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/node/health -s -w "%{http_code}"
Validator API
get_v1_validator_duties_attester
curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_duties_attester","params":[1,["a7a0502eae26043d1ac39a39457a6cdf68fae2055d89c7dc59092c25911e4ee55c4e7a31ade61c39480110a393be28e8","a1826dd94cd96c48a81102d316a2af4960d19ca0b574ae5695f2d39a88685a43997cef9a5c26ad911847674d20c46b75"]],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST http://localhost:5052/eth/v1/validator/duties/attester/1 -H 'Content-Type: application/json' -d '["a7a0502eae26043d1ac39a39457a6cdf68fae2055d89c7dc59092c25911e4ee55c4e7a31ade61c39480110a393be28e8"]' -s | jq
get_v1_validator_duties_proposer
curl -d '{"jsonrpc":"2.0","id":"id","method":"get_v1_validator_duties_proposer","params":[1] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/validator/duties/proposer/1 -s | jq
get_v1_validator_block
curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_block","params":[1,"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","0x4e696d6275732f76312e302e322d64333032633164382d73746174656f667573"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl "http://localhost:5052/eth/v1/validator/blocks/1?randao_reveal=0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505&graffiti=0x4e696d6275732f76312e302e322d64333032633164382d73746174656f667573" -s | jq
get_v1_validator_attestation_data
curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_attestation_data","params":[1, 1],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl "http://localhost:5052/eth/v1/validator/attestation_data?slot=1&committee_index=1" -s | jq
get_v1_validator_aggregate_attestation
curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_aggregate_attestation","params":[1, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl "http://localhost:5052/eth/v1/validator/aggregate_attestation?slot=1&attestation_data_root=0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2" -s | jq
post_v1_validator_aggregate_and_proofs
curl -d '{"jsonrpc":"2.0","method":"post_v1_validator_aggregate_and_proofs","params":[{"message":{"aggregator_index":"1","aggregate":{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"selection_proof":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST http://localhost:5052/eth/v1/validator/aggregate_and_proofs -H 'Content-Type: application/json' -d '[{"message":{"aggregator_index":"1","aggregate":{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"selection_proof":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]' -s | jq
post_v1_validator_beacon_committee_subscriptions
Config API
get_v1_config_fork_schedule
curl -d '{"jsonrpc":"2.0","method":"get_v1_config_fork_schedule","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/config/fork_schedule -s | jq
get_v1_config_spec
curl -d '{"jsonrpc":"2.0","method":"get_v1_config_spec","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/config/spec -s | jq
get_v1_config_deposit_contract
curl -d '{"jsonrpc":"2.0","method":"get_v1_config_deposit_contract","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/config/deposit_contract -s | jq
Administrative / Debug API
get_v1_debug_beacon_states_stateId
curl -d '{"jsonrpc":"2.0","method":"get_v1_debug_beacon_states_stateId","params":["head"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v2/debug/beacon/states/head -s | jq
get_v1_debug_beacon_heads
curl -d '{"jsonrpc":"2.0","method":"get_v1_debug_beacon_heads","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/eth/v1/debug/beacon/heads -s | jq
Nimbus extensions
getBeaconHead
The latest head slot, as chosen by the latest fork choice.
curl -d '{"jsonrpc":"2.0","id":"id","method":"getBeaconHead","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/beacon/head -s | jq
getChainHead
Show chain head information, including head, justified and finalized checkpoints.
curl -d '{"jsonrpc":"2.0","id":"id","method":"getChainHead","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/chain/head -s | jq
getNodeVersion
curl -d '{"jsonrpc":"2.0","method":"getNodeVersion","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/node/version -s | jq
peers
Show a list of peers in the PeerPool.
curl -d '{"jsonrpc":"2.0","method":"peers","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/network/peers -s | jq
getSyncing
Shows the current state of the forward syncing manager.
curl -d '{"jsonrpc":"2.0","method":"getSyncing","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/syncmanager/status -s | jq
getNetworkPeerId
Shows the current node's libp2p peer identifier (PeerID).
curl -d '{"jsonrpc":"2.0","method":"getNetworkPeerId","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
getNetworkPeers
Shows the list of PeerIDs currently in the PeerPool.
curl -d '{"jsonrpc":"2.0","method":"getNetworkPeers","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/network/peers -s | jq
getNetworkEnr
Shows the node's ENR (Ethereum Node Record).
curl -d '{"jsonrpc":"2.0","method":"getNetworkEnr","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq
setLogLevel
Set the current logging level dynamically: TRACE, DEBUG, INFO, NOTICE, WARN, ERROR or FATAL
curl -d '{"jsonrpc":"2.0","id":"id","method":"setLogLevel","params":["DEBUG; TRACE:discv5,libp2p; REQUIRED:none; DISABLED:none"] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST http://localhost:5052/nimbus/v1/chronicles/settings -d "DEBUG; TRACE:discv5,libp2p; REQUIRED:none; DISABLED:none" -s | jq
setGraffiti
Set the graffiti bytes that will be included in proposed blocks. The graffiti bytes can be specified as a UTF-8 encoded string or as a 0x-prefixed hex string specifying raw bytes.
curl -d '{"jsonrpc":"2.0","id":"id","method":"setGraffiti","params":["Mr F was here"] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
Equivalent call in the official REST API:
curl -X POST http://localhost:5052/nimbus/v1/graffiti -d "Mr F was here" -s | jq
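If you want to supply the graffiti as raw bytes, you first need the 0x-prefixed hex form. A minimal sketch of producing it with standard POSIX tools (the graffiti string is just an example):

```shell
# Hex-encode a UTF-8 graffiti string into the 0x-prefixed form accepted above
graffiti="Mr F was here"
hex="0x$(printf '%s' "$graffiti" | od -An -tx1 | tr -d ' \n')"
echo "$hex"   # 0x4d722046207761732068657265
```

You could then pass the value of $hex in the params array instead of the plain string.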
getEth1Chain
Get the list of Eth1 blocks that the beacon node is currently storing in memory.
curl -d '{"jsonrpc":"2.0","id":"id","method":"getEth1Chain","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result'
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/eth1/chain -s | jq
getEth1ProposalData
Inspect the eth1 data that the beacon node would produce if it were tasked with producing a block for the current slot.
curl -d '{"jsonrpc":"2.0","id":"id","method":"getEth1ProposalData","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result'
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/eth1/proposal_data -s | jq
debug_getChronosFutures
Get the current list of live async futures in the process - compile with -d:chronosFutureTracking
to enable.
curl -d '{"jsonrpc":"2.0","id":"id","method":"debug_getChronosFutures","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result | (.[0] | keys_unsorted) as $keys | $keys, map([.[ $keys[] ]])[] | @csv'
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/debug/chronos/futures -s | jq
debug_getGossipSubPeers
Get the current list of GossipSub peers.
curl -d '{"jsonrpc":"2.0","id":"id","method":"debug_getGossipSubPeers","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result'
Equivalent call in the official REST API:
curl http://localhost:5052/nimbus/v1/debug/gossip/peers -s | jq
REST API
Nimbus exposes an extremely fast implementation of the standard Beacon Node API. The API allows you to use Nimbus together with third-party tooling such as validator clients and block explorers, as well as your own monitoring infrastructure.
The API is a REST interface accessed via HTTP. It should not be exposed to the public Internet unless protected by additional security: it includes multiple endpoints which could open your node to denial-of-service (DoS) attacks.
Warning: If you choose to run a public endpoint, do not use that same node for validation duties -- the load of the public REST endpoint is enough to interfere with your validator duties. Additionally, if you're running validators on your beacon node, and using the same instance for historical data queries (>2 epochs old), this may also interfere with your duties.
Test your tooling against our servers
The API is available from:
http://testing.mainnet.beacon-api.nimbus.team/
http://unstable.mainnet.beacon-api.nimbus.team/
http://unstable.prater.beacon-api.nimbus.team/
You can make requests as follows (here we are requesting the version of the Nimbus software running on the node in question):
Mainnet testing branch
curl -X GET http://testing.mainnet.beacon-api.nimbus.team/eth/v1/node/version
Mainnet unstable branch
curl -X GET http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/node/version
Prater unstable branch
curl -X GET http://unstable.prater.beacon-api.nimbus.team/eth/v1/node/version
The test endpoints are part of pre-release testing and run an unstable version of Nimbus - we welcome reports about any problems you might have with them.
They may also be unresponsive at times - so please do not rely on them for validation. We may also disable them at any time without warning.
Configure your node to run a local REST server
By default, the REST interface is disabled. To enable it, start the beacon node with the --rest
option:
./run-mainnet-beacon-node.sh --rest
Then access the API from http://localhost:5052/
. For example, to get the version of the Nimbus software your node is running:
curl -X GET http://localhost:5052/eth/v1/node/version
By default, only connections from the same machine are accepted. The port and listening address can be further configured through the --rest-port and --rest-address options.
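For example, to keep the API bound to the local machine while serving it on a non-default port, you could start the node like this (a sketch using the documented options; the port value 5053 is an arbitrary example):

```shell
./run-mainnet-beacon-node.sh --rest --rest-port=5053 --rest-address=127.0.0.1
```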
Warning: If you are using a validator client with a Nimbus beacon node, and running a Nimbus version prior to v1.5.5, then you will need to launch the node with the --subscribe-all-subnets option enabled (in addition to the --rest option).
Some useful commands
Standard endpoints
While these are all well documented in the official docs, here are a handful of simple examples to get you started:
Genesis
Retrieve details of the chain's genesis, which can be used to identify the chain.
With our mainnet testing server
curl -X GET http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/beacon/genesis
With your own local server
curl -X GET http://localhost:5052/eth/v1/beacon/genesis
Deposit contract
Get deposit contract address (retrieve Eth1 deposit contract address and chain ID).
With our mainnet testing server
curl -X GET http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/config/deposit_contract
With your own local server
curl -X GET http://localhost:5052/eth/v1/config/deposit_contract
Peer count
Get peer count
With our mainnet testing server
curl -X GET http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/node/peer_count
With your own local server
curl -X GET http://localhost:5052/eth/v1/node/peer_count
Syncing status
Get node syncing status (requests the beacon node to describe if it's currently syncing or not, and if it is, what block it is up to)
With our mainnet testing server
curl -X GET http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/node/syncing
With your own local server
curl -X GET http://localhost:5052/eth/v1/node/syncing
Fork schedule
Get scheduled upcoming forks (retrieve all forks, past, present and future, of which this node is aware)
With our mainnet testing server
curl -X GET http://unstable.mainnet.beacon-api.nimbus.team/eth/v1/config/fork_schedule
With your own local server
curl -X GET http://localhost:5052/eth/v1/config/fork_schedule
Nimbus specific endpoints
In addition to supporting the standard endpoints, Nimbus has a set of specific endpoints which augment the standard API.
Check Graffiti String
With our mainnet testing server
curl -X GET http://testing.mainnet.beacon-api.nimbus.team/nimbus/v1/graffiti
With your own local server
curl -X GET http://localhost:5052/nimbus/v1/graffiti
Set Graffiti String
With your own local server
curl -X POST http://localhost:5052/nimbus/v1/graffiti -H "Content-Type: text/plain" -d "new graffiti"
Set Log Level
TBA
Specification
Keymanager API
⚠️ This feature is currently in BETA - we are still testing it and implementation details may change in response to community feedback. We strongly advise against using it on mainnet - your validators may get slashed.
The standardized Keymanager API can be used to add, remove, or migrate validators on the fly while the beacon node is running.
As of v1.7.0
it supports web3signer
keystores.
Configuration
By default, we disable the Keymanager API. To enable it, start the beacon node with the --keymanager
option enabled:
./run-prater-beacon-node.sh --keymanager
Once the node is running, you'll be able to access the API from http://localhost:5052/
Authorization: Bearer scheme
All requests must be authorized through the Authorization: Bearer
scheme with a token matching the contents of a file provided at the start of the node through the --keymanager-token-file
parameter.
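For instance, a request can build the header value from that token file. A self-contained sketch (the file path, the token value, and the /eth/v1/keystores listing endpoint shown in the comment are illustrative):

```shell
# Hypothetical token file, as would be passed via --keymanager-token-file
printf 'api-token-0x6266' > /tmp/keymanager-token

# Build the Authorization header from the token file's contents
auth="Authorization: Bearer $(cat /tmp/keymanager-token)"
echo "$auth"
# e.g. curl -H "$auth" http://localhost:5052/eth/v1/keystores
```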
Enabling connections from outside machines
By default, only connections from the same machine are entertained. If you wish to change this you can configure the port and listening address with the --keymanager-port
and --keymanager-address
options respectively.
⚠️ The Keymanager API port should only be exposed through a secure channel (e.g. HTTPS, an SSH tunnel, a VPN, etc.)
Specification
The specification is documented here. The README is also extremely useful and is documented here.
Command line options
You can pass any nimbus_beacon_node
options to the prater
and mainnet
scripts. For example, if you want to launch Nimbus on mainnet with different base ports than the default 9000/udp
and 9000/tcp
, say 9100/udp
and 9100/tcp
, run:
./run-mainnet-beacon-node.sh --tcp-port=9100 --udp-port=9100
To see a list of the command line options available to you, with descriptions, run:
build/nimbus_beacon_node --help
You should see the following output:
Usage:
nimbus_beacon_node [OPTIONS]... command
The following options are available:
--config-file Loads the configuration from a TOML file.
--log-level Sets the log level for process and topics (e.g. "DEBUG; TRACE:discv5,libp2p;
REQUIRED:none; DISABLED:none") [=INFO].
--log-file Specifies a path for the written Json log file (deprecated).
--network The Eth2 network to join [=mainnet].
-d, --data-dir The directory where nimbus will store all blockchain data.
--validators-dir A directory containing validator keystores.
--secrets-dir A directory containing validator keystore passwords.
--wallets-dir A directory containing wallet files.
--web3-url One or more Web3 provider URLs used for obtaining deposit contract data.
--non-interactive Do not display interactive prompts. Quit on missing configuration.
--netkey-file Source of network (secp256k1) private key file (random|<path>) [=random].
--insecure-netkey-password Use pre-generated INSECURE password for network private key file [=false].
--agent-string Node agent string which is used as identifier in network [=nimbus].
--subscribe-all-subnets Subscribe to all subnet topics when gossiping [=false].
--num-threads Number of worker threads ("0" = use as many threads as there are CPU cores
available) [=0].
-b, --bootstrap-node Specifies one or more bootstrap nodes to use when connecting to the network.
--bootstrap-file Specifies a line-delimited file of bootstrap Ethereum network addresses.
--listen-address Listening address for the Ethereum LibP2P and Discovery v5 traffic [=0.0.0.0].
--tcp-port Listening TCP port for Ethereum LibP2P traffic [=9000].
--udp-port Listening UDP port for node discovery [=9000].
--max-peers The maximum number of peers to connect to [=160].
--nat Specify method to use for determining public address. Must be one of: any, none,
upnp, pmp, extip:<IP> [=any].
--enr-auto-update Discovery can automatically update its ENR with the IP address and UDP port as
seen by other nodes it communicates with. This option allows to enable/disable
this functionality [=false].
--weak-subjectivity-checkpoint Weak subjectivity checkpoint in the format block_root:epoch_number.
--finalized-checkpoint-state SSZ file specifying a recent finalized state.
--finalized-checkpoint-block SSZ file specifying a recent finalized block.
--node-name A name for this node that will appear in the logs. If you set this to 'auto', a
persistent automatically generated ID will be selected for each --data-dir
folder.
--graffiti The graffiti value that will appear in proposed blocks. You can use a
0x-prefixed hex encoded string to specify raw bytes.
--metrics Enable the metrics server [=false].
--metrics-address Listening address of the metrics server [=127.0.0.1].
--metrics-port Listening HTTP port of the metrics server [=8008].
--status-bar Display a status bar at the bottom of the terminal screen [=true].
--status-bar-contents Textual template for the contents of the status bar.
--rpc Enable the JSON-RPC server (deprecated) [=false].
--rpc-port HTTP port for the JSON-RPC service [=9190].
--rpc-address Listening address of the RPC server [=127.0.0.1].
--rest Enable the REST server [=false].
--rest-port Port for the REST server [=5052].
--rest-address Listening address of the REST server [=127.0.0.1].
--rest-allow-origin Limit the access to the REST API to a particular hostname (for CORS-enabled
clients such as browsers).
--rest-statecache-size The maximum number of recently accessed states that are kept in memory. Speeds
up requests obtaining information for consecutive slots or epochs. [=3].
--rest-statecache-ttl The number of seconds to keep recently accessed states in memory [=60].
--rest-request-timeout The number of seconds to wait until complete REST request will be received
[=infinite].
--rest-max-body-size Maximum size of REST request body (kilobytes) [=16384].
--rest-max-headers-size Maximum size of REST request headers (kilobytes) [=64].
--keymanager Enable the REST keymanager API (BETA version) [=false].
--keymanager-port Listening port for the REST keymanager API [=5052].
--keymanager-address Listening address for the REST keymanager API [=127.0.0.1].
--keymanager-allow-origin Limit the access to the Keymanager API to a particular hostname (for
CORS-enabled clients such as browsers).
--keymanager-token-file A file specifying the authorization token required for accessing the keymanager
API.
--in-process-validators Disable the push model (the beacon node tells a signing process with the private
keys of the validators what to sign and when) and load the validators in the
beacon node itself [=true].
--discv5 Enable Discovery v5 [=true].
--dump Write SSZ dumps of blocks, attestations and states to data dir [=false].
--direct-peer The list of privileged, secure and known peers to connect and maintain the
connection to. This requires a non-random netkey-file. Use the complete
multiaddress format, like: /ip4/<address>/tcp/<port>/p2p/<peerId-public-key>.
Peering agreements are established out of band and must be reciprocal.
--doppelganger-detection If enabled, the beacon node prudently listens for 2 epochs for attestations from
a validator with the same index (a doppelganger), before sending an attestation
itself. This protects against slashing (due to double-voting) but means you will
miss two attestations when restarting. [=true].
--validator-monitor-auto Automatically monitor locally active validators (BETA) [=false].
--validator-monitor-pubkey One or more validators to monitor - works best when --subscribe-all-subnets is
enabled (BETA).
--validator-monitor-totals Publish metrics to single 'totals' label for better collection performance when
monitoring many validators (BETA) [=false].
...
All command line options can also be provided in a TOML
config file specified through the --config-file
flag. Within the config file,
you need to use the long names of all options. Please note that certain options
such as web3-url
, bootstrap-node
, direct-peer
, and validator-monitor-pubkey
can be supplied more than once on the command line - in the TOML file, you need
to supply them as arrays. There are also some minor differences in the parsing
of certain option values in the TOML files in order to conform more closely to
existing TOML standards. For example, you can freely use keywords such as on
,
off
, yes
and no
on the command-line as synonyms for the canonical values
true
and false
which are mandatory to use in TOML. Options affecting Nimbus
sub-commands should appear in a section of the file matching the sub-command name.
Here is an example config file illustrating all of the above:
# nimbus-eth2-config.toml
doppelganger-detection = true
web3-url = ["ws://192.168.1.10:8000"]
num-threads = 0
[trustedNodeSync]
trusted-node-url = "http://192.168.1.20:5052"
Migration options (advanced)
The main migration guide is located here. Here we document a couple of advanced options you can use if you wish to have more fine-grained control.
Export validators
The default command for exporting your slashing protection history is:
build/nimbus_beacon_node slashingdb export database.json
This will export your history in the correct format to database.json
.
On success you will have a message similar to:
Exported slashing protection DB to 'database.json'
Export finished: '$HOME/.cache/nimbus/BeaconNode/validators/slashing_protection.sqlite3' into 'interchange.json'
Export from a specific validators directory
The validator directory contains your validator's setup.
build/nimbus_beacon_node slashingdb export database.json --validators-dir=path/to/validatorsdir/
Export from a specific data directory
The data directory (data-dir
) contains your beacon node setup.
build/nimbus_beacon_node slashingdb export database.json --data-dir=path/to/datadir/
Partial exports
You can perform a partial export by specifying the public key of the relevant validator you wish to export.
build/nimbus_beacon_node slashingdb export database.json --validator=0xb5da853a51d935da6f3bd46934c719fcca1bbf0b493264d3d9e7c35a1023b73c703b56d598edf0239663820af36ec615
If you wish to export multiple validators, you must specify the --validator
option multiple times.
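For example, exporting two validators at once might look like this (a sketch; the <pubkey> placeholders stand for the validators' 0x-prefixed public keys):

```shell
build/nimbus_beacon_node slashingdb export database.json \
  --validator=<pubkey-1> \
  --validator=<pubkey-2>
```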
Import validators
The default command for importing your validator's slashing protection history into the database is:
build/nimbus_beacon_node slashingdb import database.json
Import to a specific validators directory
The validator directory contains your validator's setup.
build/nimbus_beacon_node slashingdb import database.json --validators-dir=path/to/validatorsdir/
Import to a specific data directory
The data directory contains your beacon node's setup.
build/nimbus_beacon_node slashingdb import database.json --data-dir=path/to/datadir/
Troubleshooting
⚠️ The commands on this page refer to the Prater testnet. If you're running mainnet, replace prater with mainnet in the commands below.
As it stands, we are continuously making improvements to both stability and memory usage. So please make sure you keep your client up to date! This means restarting your node and updating your software regularly from the stable
branch. If you can't find a solution to your problem here, feel free to get in touch with us on our discord!
Note: While the stable branch of the nimbus-eth2 repository is more stable, the latest updates happen in the unstable branch, which is (usually) merged into master every week on Tuesday. If you choose to run Nimbus directly from the unstable branch, be prepared for instabilities!
Networking
For more complete advice on fine-tuning your networking setup see here
Low peer count
If you see a message that looks like the following in your logs:
Peer count low, no new peers discovered...
Your node is having trouble finding peers. It's possible that you're behind a firewall. Try restarting your client and passing --nat:extip:$EXT_IP_ADDRESS
as an option to ./run-prater-beacon-node.sh
, where $EXT_IP_ADDRESS
is your real IP. For example, if your real IP address is 1.2.3.4
, you'd run:
./run-prater-beacon-node.sh --nat:extip:1.2.3.4
If this doesn't improve things, you may need to set enr-auto-update and/or set up port forwarding.
No peers for topic
If you see a message that looks like the following in your logs:
No peers for topic, skipping publish...
This means you've missed an attestation because either your peer count is too low, or the quality of your peers is lacking.
There can be several reasons behind why this is the case. The first thing to check is that your max peer count (--max-peers
) hasn't been set too low. In order to ensure your attestations are published correctly, --max-peers
should be set to 70, at the very least.
Note that Nimbus manages peers slightly differently to other clients (we automatically connect to more peers than we actually use, in order not to have to do costly reconnects). As such,
--max-peers
is set to 160 by default.
If this doesn't fix the problem, please double check your node is able to receive incoming connections.
Misc
Console hanging for too long on update
To update and restart, run git pull
, make update
, followed by make nimbus_beacon_node
:
cd nimbus-eth2
git pull
make update # Update dependencies
make nimbus_beacon_node # Rebuild beacon node
./run-prater-beacon-node.sh # Restart using same keys as last run
If you find that make update
causes the console to hang for too long, try running make update V=1
or make update V=2
instead (these will print a more verbose output to the console which may make it easier to diagnose the problem).
Note: rest assured that when you restart the beacon node, the software will resume from where it left off, using the validator keys you have already imported.
Starting over after importing wrong keys
The directory that stores the blockchain data of the testnet is build/data/prater_shared_0
(if you're connecting to another testnet, replace prater
with that testnet's name). If you've imported the wrong keys, and wish to start over, delete this directory.
Sync problems
If you’re experiencing sync problems, we recommend running make clean-prater
to delete the database and restart your sync (make sure you’ve updated to the latest stable
first though).
Warning:
make clean-prater
will erase all of your syncing progress so far, so it should only be used as a last resort -- if your client gets stuck for a long time (because it's unable to find the right chain and/or stay with the same head value) and a normal restart doesn't improve things.
noCommand does not accept arguments
If, on start, you see The command 'noCommand' does not accept arguments, double check that your command line flags are in the correct format, i.e. --foo=bar
, --baz
, or --foo-bar=qux
.
Address already in use error
If you're seeing an error that looks like:
Error: unhandled exception: (98) Address already in use [TransportOsError]
It's probably because you're running multiple validators -- and the default base port 9000
is already in use.
To change the base port, run:
./run-prater-beacon-node.sh --tcp-port=9100 --udp-port=9100
(You can replace 9100
with a port of your choosing)
Catching up on validator duties
If you're being flooded with Catching up on validator duties
messages, then your CPU is probably too slow to run Nimbus. Please check that your setup matches our system requirements.
Local timer is broken error
If you cannot start your validator because you are seeing logs that look like the following:
WRN 2021-01-08 06:32:46.975+00:00 Local timer is broken or peer's status information is invalid topics="beacnde" tid=120491 file=sync_manager.nim:752 wall_clock_slot=271961 remote_head_slot=271962 local_head_slot=269254 peer=16U*mELUgu index=0 tolerance_value=0 peer_speed=2795.0 peer_score=200
This is likely because your local clock is off. To compare your local time with internet time, run:
cat </dev/tcp/time.nist.gov/13 ; date -u
The first line of the output gives you internet time, and the second line gives you the time according to your machine. The two shouldn't be more than a second apart.
Eth1 chain monitor failure
If you see an error that looks like the following:
{"lvl":"ERR","ts":"2021-05-11 09:05:53.547+00:00","msg":"Eth1 chain monitoring failure, restarting","topics":"eth1","tid":1,"file":"eth1_monitor.nim:1158","err":"Trying to access value with err: Failed to setup web3 connection"}
It's because your node can't connect to the web3 provider you have specified. Please double check that you've correctly specified your provider. If you haven't done so already, we recommend adding a backup.
Discovered new external address warning log
WRN 2021-03-11 13:26:25.943-08:00
Discovered new external address but ENR auto update is off
topics="discv5" tid=77655 file=protocol.nim:940 majority=Some("myIPaddressHere":9000) previous=None[Address]
This message is displayed regularly when Nimbus cannot detect your correct IP address. It may be a sign that you have a dynamic IP address that keeps changing, or that Nimbus is unable to discover your IP via UPnP.
The first step is to try relaunching the beacon node with the --enr-auto-update
option.
If that doesn't fix the problem, double check that your ports are open and that you have port forwarding enabled on your gateway (assuming that you are behind a NAT).
See our page on monitoring the health of your node for more.
Raspberry Pi
Trouble transferring data to/from USB3.0 SSDs
We have seen reports of extremely degraded performance when using several types of USB3.0 to SSD adapter or when using native USB3.0 disk drives. This post details why there is a difference in behaviour from models prior to Pi 4 and the recommended workaround.
For Developers
This page contains tips and tricks for developers, further resources, along with information on how to set up your build environment on your platform.
Before building Nimbus for the first time, make sure to install the prerequisites.
Code style
The code follows the Status Nim Style Guide.
Branch lifecycle
The git repository has 3 main branches, stable
, testing
and unstable
as well as feature and bugfix branches.
Unstable
The unstable
branch contains features and bugfixes that are actively being tested and worked on.
- Features and bugfixes are generally pushed to individual branches, each with their own pull request against the unstable branch.
- Once the branch has been reviewed and passed CI, the developer or reviewer merges the branch to unstable.
- The unstable branch is regularly deployed to the Nimbus Prater fleet where additional testing happens.
Testing
The testing
branch contains features and bugfixes that have gone through CI and initial testing on the unstable
branch and are ready to be included in the next release.
- After testing a bugfix or feature on unstable, the features and fixes that are planned for the next release get merged to the testing branch, either by the release manager or team members.
- The testing branch is regularly deployed to the Nimbus Prater fleet as well as a smaller mainnet fleet.
- The branch should remain release-ready at most times.
Stable
The stable
branch tracks the latest released version of Nimbus and is suitable for mainnet staking.
Build system
Windows
mingw32-make # this first invocation will update the Git submodules
You can now follow the instructions in this book by replacing make with mingw32-make (you should use mingw32-make regardless of whether you're running a 32-bit or 64-bit architecture):
mingw32-make test # run the test suite
Linux, macOS
After cloning the repo:
# Build nimbus_beacon_node and all the tools, using 4 parallel Make jobs
make -j4
# Run tests
make test
# Update to latest version
git pull
make update
Environment
Nimbus comes with a build environment similar to Python venv - this helps ensure that the correct version of Nim is used and that all dependencies can be found.
./env.sh bash # start a new interactive shell with the right env vars set
which nim
nim --version # Nimbus is tested and supported on 1.2.12 at the moment
# or without starting a new interactive shell:
./env.sh which nim
./env.sh nim --version
# Start Visual Studio code with environment
./env.sh code
Makefile tips and tricks for developers
- build all those tools known to the Makefile:
# $(nproc) corresponds to the number of cores you have
make -j $(nproc)
- build a specific tool:
make state_sim
- you can control the Makefile's verbosity with the V variable (defaults to 0):
make V=1 # verbose
make V=2 test # even more verbose
- same for the Chronicles log level:
make LOG_LEVEL=DEBUG bench_bls_sig_agggregation # this is the default
make LOG_LEVEL=TRACE nimbus_beacon_node # log everything
- pass arbitrary parameters to the Nim compiler:
make NIMFLAGS="-d:release"
- you can freely combine those variables on the
make
command line:
make -j$(nproc) NIMFLAGS="-d:release" USE_MULTITAIL=yes eth2_network_simulation
make USE_LIBBACKTRACE=0 # expect the resulting binaries to be 2-3 times slower
- disable
-march=native
because you want to run the binary on a different machine than the one you're building it on:
make NIMFLAGS="-d:disableMarchNative" nimbus_beacon_node
- disable link-time optimisation (LTO):
make NIMFLAGS="-d:disableLTO" nimbus_beacon_node
- show C compiler warnings:
make NIMFLAGS="-d:cwarnings" nimbus_beacon_node
- limit stack usage to 1 MiB per C function (static analysis - see the GCC docs; if LTO is enabled, it works without
-d:cwarnings
):
make NIMFLAGS="-d:limitStackUsage" nimbus_beacon_node
- build a static binary:
make NIMFLAGS="--passL:-static" nimbus_beacon_node
- publish a book using mdBook from sources in "docs/" to GitHub pages:
make publish-book
- create a binary distribution:
make dist
Multi-client interop scripts
This repository contains a set of scripts used by the client implementation teams to test interop between the clients (in certain simplified scenarios). It mostly helps us find and debug issues.
Stress-testing the client by limiting the CPU power
make prater CPU_LIMIT=20
The limiting is provided by the cpulimit utility, available on Linux and macOS. The specified value is a percentage of a single CPU core. Usually 1 - 100, but can be higher on multi-core CPUs.
Build and run the local beacon chain simulation
The beacon chain simulation runs several beacon nodes on the local machine, attaches several local validators to each, and builds a beacon chain between them.
To run the simulation:
make update
make eth2_network_simulation
To clean the previous run's data:
make clean_eth2_network_simulation_all
To change the number of validators and nodes:
# Clear data files from your last run and start the simulation with a new genesis block:
make VALIDATORS=192 NODES=6 USER_NODES=1 eth2_network_simulation
If you’d like to see the nodes running in separate sub-terminals inside one big window, install Multitail (if you're on a Mac, follow the instructions here), then:
USE_MULTITAIL="yes" make eth2_network_simulation
You’ll get something like this:
You can find out more about the beacon node simulation here.
Build and run the local state transition simulation
This simulation is primarily designed for researchers, but we'll cover it briefly here in case you're curious :)
The state transition simulation quickly runs the beacon chain state transition function in isolation and outputs JSON snapshots of the state (directly to the nimbus-eth2
directory). It runs without networking and blocks are processed without slot time delays.
# build the state simulator, then display its help ("-d:release" speeds it
# up substantially, allowing the simulation of longer runs in reasonable time)
make NIMFLAGS="-d:release" state_sim
build/state_sim --help
Use the output of the help command to pass desired values to the simulator - experiment with changing the number of slots, validators, etc. to get different results.
The most important options are:
- slots: the number of slots to run the simulation for (default 192)
- validators: the number of validators (default 6400)
- attesterRatio: the expected fraction of attesters that actually do their work for every slot (default 0.73)
- json_interval: how often JSON snapshots of the state are outputted (default every 32 slots -- or once per epoch)
For example, to run the state simulator for 384 slots, with 20,000 validators, and an average of 66% of attesters doing their work every slot, while outputting snapshots of the state twice per epoch, run:
build/state_sim --slots=384 --validators=20000 --attesterRatio=0.66 --json_interval=16
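Once a run completes, you can inspect the JSON snapshots with ordinary command-line tools. The snippet below works on a mock snapshot, since the exact filenames and field layout produced by `state_sim` can vary between versions -- the top-level `slot` field is an assumption, so check a real output file first:

```shell
# Create a mock snapshot for illustration (real files are written by state_sim
# into the nimbus-eth2 directory; a top-level "slot" field is an assumption).
echo '{"slot": 32, "genesis_time": 0}' > /tmp/state-example.json

# Extract the slot number without any extra tooling:
grep -o '"slot": *[0-9]*' /tmp/state-example.json
```

If you have `jq` installed, `jq '.slot' /tmp/state-example.json` does the same job more robustly for nested fields.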
Contribute
Follow these steps to contribute to this book!
We use a utility tool called mdBook to create online books from Markdown files.
Before You Start
- Install mdBook from here.
- Clone the repository: `git clone https://github.com/status-im/nimbus-eth2.git`
- Go to where the Markdown files are located: `cd docs/the_nimbus_book/`
Real-Time Update and Preview Changes
- Run `mdbook serve` in the terminal.
- Preview the book at http://localhost:3000.
Build and Deploy
The first step is to submit a pull request to the unstable branch. Then, after it is merged, do the following under our main repository:
cd nimbus-eth2
git checkout unstable
git pull
make update # update the submodules to the latest version
make publish-book
Troubleshooting
If you see file conflicts in the pull request, this may be because you created your new branch from an old version of the `unstable` branch. Update your new branch using the following commands:
git checkout unstable
git pull
make update
git checkout readme
git merge unstable
# use something like "git mergetool" to resolve conflicts, then read the instructions for completing the merge (usually just a `git commit`)
# check the output of "git diff unstable"
Thank you so much for contributing to the decentralized and open source community. :)
Resources
- ethstaker discord: great place for tips and discussions
- Validator launchpad: to send deposits
- Beacon chain explorer: to monitor network health
- Nimbus discord: best place to ask questions and to stay up-to-date with critical updates
- Ethereum on ARM: Raspberry Pi 4 image + tutorial: turn your Raspberry Pi 4 into an eth1 or eth2 node just by flashing the MicroSD card
Binary distribution internals
Reproducibility
The binaries we build in GitHub Actions and distribute in our releases come from an intricate process meant to ensure reproducibility.
While the ability to produce the same exact binaries from the corresponding Git commits is a good idea for any open source project, it is a requirement for software that deals with digital tokens of significant value.
Docker containers for internal use
The easiest way to guarantee that users are able to replicate
our binaries for themselves is to give them the same software environment we used in CI. Docker
containers fit the bill, so everything starts with the architecture- and
OS-specific containers in docker/dist/base_image/
.
These images contain all the packages we need, are built and published once (to
Docker Hub), and are then reused as the basis for temporary Docker
images where the nimbus-eth2
build is carried out.
These temporary images are controlled by Dockerfiles in docker/dist/
. Since
we're not publishing them anywhere, we can customize them to the system
they run on (we ensure they use the host's UID/GID, the host's QEMU static
binaries, etc); they get access to the source code through the use of external volumes.
Build process
It all starts with the GitHub Actions workflow in `.github/workflows/release.yml`. There is a different job for each supported OS-architecture combination and they all run in parallel (ideally).
Once all those CI jobs complete successfully, a GitHub release draft is created and all the distributable archives are uploaded to it. A list of checksums for the main binaries is inserted in the release description. That draft needs to be manually published.
The build itself is triggered by a Make target. E.g.: make dist-amd64
. This invokes
scripts/make_dist.sh
which builds the corresponding Docker container from
docker/dist/
and runs it with the Git repository's top directory as an external
volume.
The entry point for that container is docker/dist/entry_point.sh
and that's
where you'll find the Make invocations needed to finally build the software and
create distributable tarballs.
Docker images for end users
Configured in .github/workflows/release.yml
(only for Linux AMD64, ARM and
ARM64): we unpack the distribution tarball and copy its content into a third
type of Docker image - meant for end users and defined by
docker/dist/binaries/Dockerfile.amd64
(and related).
We then publish that to Docker Hub.
Prater testnet
`prater` is a testnet that you can use to verify that your setup is ready for mainnet, as well as safely practise node operations such as adding and removing validators, migrating between clients, and performing upgrades and backups.
The `prater` testnet is run by client teams, the Ethereum Foundation and community members.
Connecting to `prater` and setting up a validator follows the same procedure as a normal mainnet node, with the following modifications:
- Validator deposits are done on the `goerli` testnet via the Prater launchpad.
- To run a Prater node after making a deposit, update Nimbus and then execute `./run-prater-beacon-node.sh`, or use the `--network:prater` command line option.
Custom testnets
You can connect to any network provided that you have a configuration and genesis file, using the `--network` option:
build/nimbus_beacon_node --network:path/to/network --data-dir:path/to/data
The network directory must have the same layout as the eth2-networks repository testnets.
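As a rough sketch, a minimal network directory looks like the one below. The file set is modelled on the eth2-networks repository convention -- verify the exact required files against that repository before relying on this:

```shell
# Build a skeleton network directory (file names follow the eth2-networks
# convention as an assumption -- this is a sketch, not a specification).
NET_DIR=/tmp/custom-network
mkdir -p "$NET_DIR"
touch "$NET_DIR/config.yaml"          # runtime spec constants for the network
touch "$NET_DIR/genesis.ssz"          # SSZ-encoded genesis beacon state
touch "$NET_DIR/bootstrap_nodes.txt"  # one bootstrap node ENR per line
ls "$NET_DIR"
```

You would then point the node at it with `--network:/tmp/custom-network` (using real, non-empty files, of course).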
Other testnets
Historical testnets can be found here.
- `pyrmont` - deprecated in favour of `prater` due to its small validator count compared to `mainnet`
- `insecura` - a spin-off of `prater` to demonstrate the weak subjectivity attack
- `medalla` - one of the first multi-client testnets, deprecated in favour of `pyrmont` to capture the latest 1.0 spec changes
Security Audit
Summary
Nimbus has undergone an extensive, multi-vendor (ConsenSys Diligence, NCC Group, and Trail of Bits) security assessment over a period of many months. During that process, we were notified of several issues within the codebase. These issues have been addressed and the overall security of the Nimbus-eth2 software has been drastically improved.
Additionally, as a result of the work done from our security vendors, we are now working to incorporate many new security processes and tooling to improve our ability to find security issues in the future.
For more information on the issues and how they were addressed, see the scoped repositories; all reported issues and their mitigations are open to the public.
History
Back in May of last year (2020), Status and the Nimbus Team posted a Request for Proposal document regarding the security assessment of the nimbus-eth2 repository (formerly `nim-beacon-chain`) and its software dependencies.
After thoroughly vetting and weighing the submitted proposals, 3 security vendors were chosen to review the codebase for a timeline of approximately 3 months.
The kickoff announcement can be read here.
We separated the codebase into sub-topics with various tasks. These tasks were then broken up and assigned to the vendor(s) with the required expertise.
The desired deliverable outcome was GitHub issues in the repositories under review, which is a shift from the standard “assessment report” provided by most security assessments in the space. You can view the issues here.
To be very clear, we did not engage in this security assessment to get a stamp of approval from the security community. All of the effort put into creating this process and engaging the community was in the service of increasing the level of security and code quality of the Nimbus software.
Frequently Asked Questions
General
How do I check which version of Nimbus I'm currently running?
If you've enabled RPC, the version is available via
curl -d '{"jsonrpc":"2.0","method":"get_v1_node_version","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s
You can also run `build/nimbus_beacon_node --version`.
Why are metrics not working?
The metrics server is disabled by default; enable it by passing `--metrics` to the run command:
./run-mainnet-beacon-node.sh --metrics ...
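With metrics enabled, the node exposes a Prometheus-style text endpoint you can query directly. The port below (8008) is an assumption about the default -- check the `--metrics-port` setting in your own configuration:

```shell
# Query the metrics endpoint of a running node started with --metrics.
# Port 8008 is an assumed default -- adjust to match your --metrics-port.
METRICS_PORT=8008
curl -s --max-time 2 "http://127.0.0.1:${METRICS_PORT}/metrics" | head -n 5 || true
# ("|| true" keeps a script alive when the node isn't up yet)
```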
Why does my validator miss two epochs of attestations after restarting?
When a validator is started (or restarted) it prudently listens for 2 epochs for attestations from a validator with the same index (a doppelganger), before sending an attestation itself.
In sum, it's a simple way of handling the case where one validator comes online with the same key as another validator that's already online (i.e. one device was started without switching the other off).
While this strategy requires the client to wait two whole epochs on restart before attesting, a couple of missed attestations is a very minor price to pay in exchange for significantly reducing the risk of an accidental slashing.
You can think of it as a small penalty that you pay only on first launch and restarts. When you take into account the total runtime of your validator, the impact should be minimal.
While we strongly recommend it, you can disable it with an explicit flag (--doppelganger-detection=false
) if you don't plan on moving your setup.
What's the best way to stress test my eth1 + eth2 setup before committing with real ETH?
We recommend running a Nimbus beacon node on Prater and a mainnet eth1 client on the same machine.
To stress test it, add `--subscribe-all-subnets` to the beacon node options. This represents more or less the maximum load you could have on eth2.
How do I add an additional validator?
To add an additional validator, just follow the same steps as you did when you added your first. You'll have to restart the beacon node for the changes to take effect.
Note that a single Nimbus instance is able to handle multiple validators.
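For illustration, after adding a second validator the data directory simply gains another entry under `validators/`; all entries are picked up on the next start. The directory names below are placeholders, not real 0x-prefixed validator public keys:

```shell
# Sketch of a data directory holding two validators (entry names are
# placeholders; real entries are named after validator public keys).
DATA_DIR=/tmp/nimbus-demo
mkdir -p "$DATA_DIR/validators/validator_one" \
         "$DATA_DIR/validators/validator_two" \
         "$DATA_DIR/secrets"
ls "$DATA_DIR/validators"   # one entry per validator -- all load at startup
```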
Networking
How can I improve my peer count?
See here.
How do I fix the discovered new external address warning log?
WRN 2021-03-15 02:23:37.569+00:00 Discovered new external address but ENR auto update is off topics="discv5"...
It's possible that your ISP has changed your dynamic IP address without you knowing.
The first thing to do is to try relaunching the beacon node with `--enr-auto-update` (pass it as an option on the command line).
If this doesn't fix the problem, the next thing to do is to check your external (public) IP address and detect open ports on your connection - you can use https://www.yougetsignal.com/tools/open-ports/. Note that the Nimbus TCP and UDP ports are both set to `9000` by default.
See here for how to set up port forwarding.
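Before reaching for an external port scanner, a quick local sanity check is to confirm the node is actually listening on its default ports. `ss` ships with most modern Linux distributions; substitute `netstat -tuln` elsewhere:

```shell
# List listening sockets and filter for the default Nimbus port (9000).
# The trailing echo keeps the command from failing when nothing matches.
ss -tuln 2>/dev/null | grep 9000 || echo "nothing listening on port 9000"
```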
Folder Permissions
To protect against key loss, Nimbus requires that files and directories be owned by the user running the application. Furthermore, they should not be readable by others.
It may happen that the wrong permissions are applied, particularly when creating the directories manually.
The following errors are a sign of this:
Data folder has insecure ACL
Data directory has insecure permissions
File has insecure permissions
Here is how to fix them.
Linux / BSD / macOS
Run:
# Changing ownership to `user:group` for all files/directories in <data-dir>.
chown user:group -R <data-dir>
# Set permissions to (rwx------ 0700) for all directories starting from <data-dir>
find <data-dir> -type d -exec chmod 700 {} \;
# Set permissions to (rw------- 0600) for all files inside <data-dir>/validators
find <data-dir>/validators -type f -exec chmod 0600 {} \;
# Set permissions to (rw------- 0600) for all files inside <data-dir>/secrets
find <data-dir>/secrets -type f -exec chmod 0600 {} \;
In sum:
- Directories `<data-dir>`, `<data-dir>/validators`, `<data-dir>/secrets` MUST be owned by the user and have `rwx------` (`0700`) permissions set.
- Files stored inside `<data-dir>`, `<data-dir>/validators` and `<data-dir>/secrets` MUST be owned by the user and have `rw-------` (`0600`) permissions set.
Windows
From inside `Git Bash`, run:
# Set permissions for all the directories starting from <data-dir>
find <data-dir> -type d -exec icacls {} /inheritance:r /grant:r $USERDOMAIN\\$USERNAME:\(OI\)\(CI\)\(F\) \;
# Set permissions for all the files inside <data-dir>/validators
find <data-dir>/validators -type f -exec icacls {} /inheritance:r /grant:r $USERDOMAIN\\$USERNAME:\(F\) \;
# Set permissions for all the files inside <data-dir>/secrets
find <data-dir>/secrets -type f -exec icacls {} /inheritance:r /grant:r $USERDOMAIN\\$USERNAME:\(F\) \;
N.B. Make sure you run the above from inside `Git Bash`; these commands will not work from inside the standard Windows Command Prompt. If you don't already have a `Git Bash` shell, you'll need to install Git for Windows.
In sum:
- Directories `<data-dir>`, `<data-dir>/validators`, `<data-dir>/secrets` MUST be owned by the user and have permissions set for the user only (OI)(CI)(F). All inherited permissions should be removed.
- Files stored inside `<data-dir>`, `<data-dir>/validators` and `<data-dir>/secrets` MUST be owned by the user and have permissions set for the user only (F). All inherited permissions should be removed.
Validating
What exactly is a validator?
A validator is an entity that participates in the consensus of the Ethereum 2.0 protocol.
Or, in plain English, a human running a computer process. This process proposes and vouches for new blocks to be added to the blockchain.
In other words, you can think of a validator as a voter for new blocks. The more votes a block gets, the more likely it is to be added to the chain.
Importantly, a validator's vote is weighted by the amount it has at stake.
What is the deposit contract?
You can think of it as a transfer of funds between Ethereum 1.0 accounts and Ethereum 2.0 validators.
It specifies who is staking, who is validating, how much is being staked, and who can withdraw the funds.
Why do validators need to have funds at stake?
Validators need to have funds at stake so they can be penalized for behaving dishonestly.
In other words, to keep them honest, their actions need to have financial consequences.
How much ETH does a validator need to stake?
Before a validator can start to secure the network, he or she needs to stake 32 ETH. This forms the validator's initial balance.
Is there any advantage to having more than 32 ETH at stake?
No. There is no advantage to having more than 32 ETH staked.
Limiting the maximum stake to 32 ETH encourages decentralization of power as it prevents any single validator from having an excessively large vote on the state of the chain.
Remember that a validator’s vote is weighted by the amount it has at stake.
Can I stop my validator for a few days and then start it back up again?
Yes, but under normal conditions you will lose an amount of ETH roughly equivalent to the amount of ETH you would have gained in that period. In other words, if you stood to earn ≈0.01 ETH, you would instead be penalised ≈0.01 ETH.
I want to switch my validator keys to another machine, how long do I need to wait to avoid getting slashed?
We recommend waiting 2 epochs (around 15 minutes), before restarting Nimbus on a different machine.
When should I top up my validator's balance?
The answer to this question very much depends on how much ETH you have at your disposal.
You should certainly top up if your balance is close to 16 ETH: this is to ensure you don't get removed from the validator set (which automatically happens if your balance falls below 16 ETH).
At the other end of the spectrum, if your balance is closer to 31 ETH, it's probably not worth your while adding the extra ETH required to get back to 32.
When can I withdraw my funds, and what's the difference between exiting and withdrawing?
You can signal your intent to stop validating by signing a voluntary exit message with your validator.
However, bear in mind that in Phase 0, once you've exited, there's no going back.
There's no way for you to activate your validator again, and you won't be able to transfer or withdraw your funds until at least Phase 1.5 (which means your funds will remain inaccessible until then).
How are validators incentivized to stay active and honest?
In addition to being penalized for being offline, validators are penalized for behaving maliciously – for example attesting to invalid or contradicting blocks.
On the other hand, they are rewarded for proposing / attesting to blocks that are included in the chain.
The key concept is the following:
- Rewards are given for actions that help the network reach consensus
- Minor penalties are given for inadvertent actions (or inactions) that hinder consensus
- And major penalties -- or slashings -- are given for malicious actions
In other words, validators that maximize their rewards also provide the greatest benefit to the network as a whole.
How are rewards/penalties issued?
Remember that each validator has its own balance -- with the initial balance outlined in the deposit contract.
This balance is updated periodically by the Ethereum network rules as the validator carries (or fails to carry) out his or her responsibilities.
Put another way, rewards and penalties are reflected in the validator's balance over time.
How often are rewards/penalties issued?
Approximately every six and a half minutes -- a period of time known as an epoch.
Every epoch, the network measures the actions of each validator and issues rewards or penalties appropriately.
How large are the rewards/penalties?
There is no easy answer to this question as there are many factors that go into this calculation.
Arguably the most impactful factor on rewards earned for validating is the total amount of stake in the network - in other words, the total number of validators. Depending on this figure, the max annual return rate for a validator can be anywhere between 2 and 20%.
Given a fixed total number of validators, the rewards/penalties predominantly scale with the balance of the validator -- attesting with a higher balance results in larger rewards/penalties whereas attesting with a lower balance results in lower rewards/penalties.
Note however that this scaling mechanism works in a non-obvious way. To understand the precise details of how it works requires understanding a concept called effective balance. If you're not yet familiar with this concept, we recommend you read through this excellent post.
Why do rewards depend on the total number of validators in the network?
Block rewards are calculated using a sliding scale based on the total amount of ETH staked on the network.
In plain English: if the total amount of ETH staked is low, the reward (interest rate) is high, but as the total stake rises, the reward (interest) paid out to each validator starts to fall.
Why a sliding scale? While we won't get into the gory details here, the basic intuition is that there needs to be a minimum number of validators (and hence a minimum amount of ETH staked) for the network to function properly. So, to incentivize more validators to join, it's important that the interest rate remains high until this minimum number is reached.
Afterwards, validators are still encouraged to join (the more validators the more decentralized the network), but it's not absolutely essential that they do so (so the interest rate can fall).
How badly will a validator be penalized for being offline?
It depends. In addition to the impact of effective balance there are two important scenarios to be aware of:
-
Being offline while a supermajority (2/3) of validators is still online leads to relatively small penalties as there are still enough validators online for the chain to finalize. This is the expected scenario.
-
Being offline at the same time as more than 1/3 of the total number of validators leads to harsher penalties, since blocks do not finalize anymore. This scenario is very extreme and unlikely to happen.
Note that in the second (unlikely) scenario, validators stand to progressively lose up to 50% (16 ETH) of their stake over 21 days. After 21 days they are ejected from the validator pool. This ensures that blocks start finalizing again at some point.
How great does an honest validator's uptime need to be for it to be net profitable?
Overall, validators are expected to be net profitable as long as their uptime is greater than 50%.
This means that validators need not go to extreme lengths with backup clients or redundant internet connections as the repercussions of being offline are not so severe.
How much will a validator be penalized for acting maliciously?
Again, it depends. Behaving maliciously – for example attesting to invalid or contradicting blocks, will lead to a validator's stake being slashed.
The minimum amount that can be slashed is 1 ETH, but this number increases if other validators are slashed at the same time.
The idea behind this is to minimize the losses from honest mistakes, but strongly disincentivize coordinated attacks.
What exactly is slashing?
Slashing has two purposes: (1) to make it prohibitively expensive to attack eth2, and (2) to stop validators from being lazy by checking that they actually perform their duties. Slashing a validator is to destroy (a portion of) the validator’s stake if they act in a provably destructive manner.
Validators that are slashed are prevented from participating in the protocol further and are forcibly exited.
What happens if I lose my signing key?
If the signing key is lost, the validator can no longer propose or attest.
Over time, the validator's balance will decrease as he or she is punished for not participating in the consensus process. When the validator's balance reaches 16 ETH, he or she will be automatically exited from the validator pool.
However, all is not lost. Assuming validators derive their keys using EIP2334 (as per the default onboarding flow), they can always recalculate their signing key from their withdrawal key.
The 16 ETH can then be withdrawn -- with the withdrawal key -- after a delay of around a day.
Note that this delay can be longer if many others are exiting or being kicked out at the same time.
What happens if I lose my withdrawal key?
If the withdrawal key is lost, there is no way to obtain access to the funds held by the validator.
As such, it's a good idea to create your keys from mnemonics which act as another backup. This will be the default for validators who join via this site's onboarding process.
What happens if my withdrawal key is stolen?
If the withdrawal key is stolen, the thief can transfer the validator’s balance, but only once the validator has exited.
If the signing key is not under the thief’s control, the thief cannot exit the validator.
The user with the signing key could attempt to quickly exit the validator and then transfer the funds -- with the withdrawal key -- before the thief.
Why two keys instead of one?
In a nutshell, security. The signing key must be available at all times. As such, it will need to be held online. Since anything online is vulnerable to being hacked, it's not a good idea to use the same key for withdrawals.