The Nimbus book

If you're eager to get started, check out our quickstart guide.

Nimbus is a client implementation for both Ethereum 2.0 and Ethereum 1.0 that strives to be as lightweight as possible in terms of resources used. This allows it to perform well on embedded systems and resource-restricted devices, including Raspberry Pis and mobile devices.

However, resource-restricted hardware is not the only thing Nimbus is good for. Its low resource consumption makes it easy to run Nimbus together with other workloads on your server (this is especially valuable for stakers looking to lower the cost of their server instances).



This book explains the ways in which you can use Nimbus to either monitor the eth2 chain or become a fully-fledged validator.

N.B. The reality is that we are very early in the eth2 validating life cycle. Validating is not for everyone yet, and it comes with both risks and responsibilities. It isn't a particularly easy way to make money. You'll need to put effort into updating your software, researching hard-forks, and maintaining a robust setup. As such, you should only stake if you are genuinely interested in securing the protocol.

Helpful resources

Why eth2?

Eth2 is a multi-year plan to improve the scalability, security, and programmability of Ethereum, without compromising on decentralisation.

In contrast to the Ethereum chain, as it currently stands, eth2 uses proof-of-stake (PoS) to secure its network. And while Ethereum as you know and love it will continue to exist as its own independent proof-of-work chain for a little while to come, the transition towards PoS starts now.

In traditional PoW, block proposers are called miners, whereas in PoS, they are called validators. In essence, miners rely on actual hardware (such as some specifically manufactured mining machines), while validators rely on software (such as Nimbus) and a good network connection.

Get in touch

Need help with anything? Join us on Status and Discord.

If you'd like to contribute to Nimbus development, our donation address is 0x70E47C843E0F6ab0991A3189c28F2957eb6d3842

Stay updated

Subscribe to our newsletter here.

Disclaimer

This documentation assumes Nimbus is in its ideal state. The project is still under active development. Please submit a GitHub issue if you come across a problem.

Design goals

One of our most important design goals is an application architecture that makes it simple to embed Nimbus into other software.

Another is to minimize reliance on third-party software.

A third is for the application binary to be as lightweight as possible in terms of resources used.

Integration with Status

As part of our first design goal, our primary objective here is for Nimbus to be tightly integrated into the Status messaging app.

Our dream is for you to be able to run and monitor your validator straight from Status desktop.

System requirements (recommended)


Operating System: Linux 64-bit, Windows 64-bit, macOS 10.14+

Memory: 4GB RAM

Storage: 160GB SSD

Internet: Reliable broadband connection


Note that in order to process incoming validator deposits from the eth1 chain, you will need to run an eth1 client in parallel to your eth2 client. While it is possible to use a third-party service like Infura, if you choose to run your own eth1 client locally, you'll need more memory and storage.

For example, you'll need at least another 290GB SSD to run geth fast sync on mainnet.

Run just the beacon node (quickstart)

This page takes you through how to run just the beacon node without a validator attached.

Running a beacon node without a validator attached can help improve the anonymity properties of the network as a whole.

It's also a necessary step to running a validator (since an active validator requires a synced beacon node).

1. Install dependencies

You'll need to install some packages in order for Nimbus to run correctly.

Linux

On common Linux distributions the dependencies can be installed with

# Debian and Ubuntu
sudo apt-get install build-essential git

# Fedora
dnf install @development-tools

# Archlinux, using an AUR manager
yourAURmanager -S base-devel

macOS

Assuming you use Homebrew to manage packages:

brew install cmake

2. Clone the Nimbus repository

Run the following command to clone the nimbus-eth2 repository:

git clone https://github.com/status-im/nimbus-eth2

3. Build the beacon node

Change into the directory and build the beacon node.

cd nimbus-eth2
make nimbus_beacon_node

Patience... this may take a few minutes.

4. Connect to mainnet

To connect to mainnet, run:

./run-mainnet-beacon-node.sh

You'll be prompted to enter a Web3 provider URL:

To monitor the Eth1 validator deposit contract, you'll need to pair
the Nimbus beacon node with a Web3 provider capable of serving Eth1
event logs. This could be a locally running Eth1 client such as Geth
or a cloud service such as Infura. For more information please see
our setup guide:

https://status-im.github.io/nimbus-eth2/eth1.html

Please enter a Web3 provider URL:

Press enter to skip (this is only important when you're running a validator).

Validating with a Raspberry Pi: Guide

Introduction

This page will take you through how to use your laptop to program your Raspberry Pi, get Nimbus running, and connect to the Pyrmont testnet.

One of the most important aspects of the Raspberry Pi experience is trying to make it as easy as possible to get started. As such, we try our best to explain things from first principles.

Prerequisites

  • Raspberry Pi 4 (4GB RAM option)
  • 64GB microSD Card
  • microSD USB adapter
  • 5V 3A USB-C charger
  • Reliable Wifi connection
  • Laptop
  • Basic understanding of the command line
  • 160GB SSD

⚠️ You will need an SSD to run Nimbus (without one you have no real chance of syncing the Ethereum blockchain). You have two options:

  1. Use a USB portable SSD such as the Samsung T5 Portable SSD.

  2. Use a USB 3.0 external hard drive case with an SSD disk. For example, Ethereum on Arm uses an Inateck 2.5" Hard Drive Enclosure FE2011. Make sure to buy a case with a UASP-compliant chip -- specifically, one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).

In both cases, avoid low-quality SSDs (the SSD is a key component of your node and can drastically affect both performance and sync time). Keep in mind that you need to plug the disk into a USB 3.0 port (the blue port).

1. Download Raspberry Pi Imager

Raspberry Pi Imager is a new imaging utility that makes it simple to manage your microSD card with Raspbian (the free Pi operating system based on Debian).

You can find the download link for your operating system here: Windows, macOS, Ubuntu.

2. Download Raspbian 64-bit OS (Beta)

You can find the latest version, here.

3. Plug in SD card

Use your microSD to USB adapter to plug the SD card into your computer.

4. Download Raspberry Pi OS

Open Raspberry Pi Imager and click on CHOOSE OS

Scroll down and click on Use custom

Find the OS you downloaded in step 2

4b. Write to SD card

Click on CHOOSE SD CARD. You should see a menu pop up with your SD card listed -- select it.

Click on WRITE

Click YES

Make a cup of coffee :)

5. Set up wireless LAN

Since you have loaded Raspberry Pi OS onto a blank SD card, you will have two partitions. The first one, which is the smaller one, is the boot partition.

Create a wpa_supplicant configuration file in the boot partition with the following content:

# wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>

network={
    ssid="<Insert your Wifi network's name here>"
    psk="<Insert your Wifi network's password here>"
}

Note: Don't forget to replace the placeholder country, ssid, and psk values. See Wikipedia for a list of 2 letter ISO 3166-1 country codes.

6. Enable SSH (using Linux or macOS)

You can access the command line of a Raspberry Pi remotely from another computer or device on the same network using SSH.

While SSH is not enabled by default, you can enable it by placing a file named ssh, without any extension, onto the boot partition of the SD card.

When the Pi boots, it will look for the ssh file. If it is found, SSH is enabled and the file is deleted. The content of the file does not matter; it can contain text, or nothing at all.

To create an empty ssh file, from the root directory of the boot partition, run:

touch ssh
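Steps 5 and 6 can also be scripted in one go. The sketch below writes both files into a scratch directory that stands in for the boot partition -- the real mount point varies by OS (for example /media/$USER/boot on Ubuntu or /Volumes/boot on macOS), and the country, ssid, and psk values are placeholders:

```shell
# Scratch directory standing in for the boot partition (hypothetical path;
# replace it with your boot partition's actual mount point)
BOOT=/tmp/boot-demo
mkdir -p "$BOOT"

# Step 5: wireless LAN configuration (placeholder credentials)
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=GB

network={
    ssid="MyNetwork"
    psk="MyPassword"
}
EOF

# Step 6: empty `ssh` file so SSH is enabled on first boot
touch "$BOOT/ssh"
```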

7. Find your Pi's IP address

Since Raspberry Pi OS supports multicast DNS out of the box, you can reach your Raspberry Pi using its hostname with the .local suffix.

The default hostname on a fresh Raspberry Pi OS install is raspberrypi, so any Raspberry Pi running Raspberry Pi OS should respond to:

ping raspberrypi.local

The output should look more or less as follows:

PING raspberrypi.local (195.177.101.93): 56 data bytes
64 bytes from 195.177.101.93: icmp_seq=0 ttl=64 time=13.272 ms
64 bytes from 195.177.101.93: icmp_seq=1 ttl=64 time=16.773 ms
64 bytes from 195.177.101.93: icmp_seq=2 ttl=64 time=10.828 ms
...

Take note of your Pi's IP address. In the above case, that's 195.177.101.93.

8. SSH (using Linux or macOS)

Connect to your Pi by running:

ssh pi@195.177.101.93

You'll be prompted to enter a password:

pi@195.177.101.93's password:

Enter the Pi's default password: raspberry

You should see a message that looks like the following:

Linux raspberrypi 5.4.51-v8+ #1333 SMP PREEMPT Mon Aug 10 16:58:35 BST 2020 aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Aug 20 12:59:01 2020

SSH is enabled and the default password for the 'pi' user has not been changed.
This is a security risk - please login as the 'pi' user and type 'passwd' to set a new password.

Followed by a command-line prompt indicating a successful connection:

pi@raspberrypi:~ $

9. Increase swap size to 2GB

The first step is to increase the swap size to 2GB (2048MB).

Note: Swap gives your system breathing room when its RAM is exhausted. When that happens, your Linux system uses part of the disk as overflow memory for running applications.

Use the Pi's built-in text editor nano to open up the swap file:

sudo nano /etc/dphys-swapfile

Change the value assigned to CONF_SWAPSIZE from 100 to 2048:

...

# set size to absolute value, leaving empty (default) then uses computed value
#   you most likely don't want this, unless you have an special disk situation
CONF_SWAPSIZE=2048

...

Save (Ctrl+S) and exit (Ctrl+X).
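If you'd rather make the change non-interactively, a sed one-liner does the same edit. The sketch below demonstrates it on a scratch copy; on the Pi you would run the sed command against /etc/dphys-swapfile with sudo:

```shell
# Make a scratch copy standing in for /etc/dphys-swapfile
demo=/tmp/dphys-swapfile.demo
printf '%s\n' '# demo copy' 'CONF_SWAPSIZE=100' > "$demo"

# Rewrite the CONF_SWAPSIZE line in place
# (on the Pi: sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile)
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' "$demo"

grep '^CONF_SWAPSIZE' "$demo"   # CONF_SWAPSIZE=2048
```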

10. Reboot

Reboot your Pi to have the above changes take effect:

sudo reboot

This will cause your connection to close. So you'll need to ssh into your Pi again:

ssh pi@195.177.101.93

Note: Remember to replace 195.177.101.93 with the IP address of your Pi.

10b. Boot from external SSD

Follow this guide to copy the contents of your SD card over to your SSD, and boot your Pi from your SSD.

Tips:

Make sure you connect your SSD to the Pi's USB 3 port (the blue port).

If your Pi is headless (no monitor attached) you can use the rpi-clone repository to copy the contents of the SD over to the SSD; in a nutshell, replace steps 14 and 15 of the above guide with the following commands (which you should run from the Pi's home directory):

git clone https://github.com/billw2/rpi-clone.git 
cd rpi-clone
sudo cp rpi-clone rpi-clone-setup /usr/local/sbin
sudo rpi-clone-setup -t testhostname
rpi-clone sda

For more on raspi-config, see here.

To shutdown your Pi safely, run sudo shutdown -h now

Once you're done, ssh back into your Pi.

11. Install Nimbus dependencies

You'll need to install some packages (git) in order for Nimbus to run correctly.

To do so, run:

sudo apt-get install git

12. Install Screen

screen is a tool that lets you safely detach from an SSH session without exiting the remote job. In other words, screen allows the commands you run on your Pi from your laptop to keep running after you've logged out.

Run the following command to install screen:

sudo apt-get install screen

13. Clone the Nimbus repository

Run the following command to clone the nimbus-eth2 repository:

git clone https://github.com/status-im/nimbus-eth2

14. Build the beacon node

Change into the directory and build the beacon node.

cd nimbus-eth2
make nimbus_beacon_node

Patience... this may take a few minutes.

15. Copy signing key over to Pi

Note: If you haven't generated your validator key(s) and/or made your deposit yet, follow the instructions on this page before carrying on.

We'll use the scp command to send files over SSH. It allows you to copy files between computers, say from your Raspberry Pi to your desktop/laptop, or vice-versa.

Copy the folder containing your validator key(s) from your computer to your Pi's home folder by opening a new terminal window and running the following command:

scp -r <VALIDATOR_KEYS_DIRECTORY> pi@195.177.101.93:

Note: Don't forget the colon (:) at the end of the command!

As usual, replace 195.177.101.93 with your Pi's IP address, and <VALIDATOR_KEYS_DIRECTORY> with the full pathname of your validator_keys directory (if you used the Launchpad command line app this would have been created for you when you generated your keys).

Tip: run pwd in your validator_keys directory to print the full pathname to the console.

16. Import signing key into Nimbus

To import your signing key into Nimbus, from the nimbus-eth2 directory run:

build/nimbus_beacon_node deposits import --data-dir=build/data/shared_pyrmont_0 ../validator_keys

You'll be asked to enter the password you created to encrypt your keystore(s). Don't worry, this is entirely normal. Your validator client needs both your signing keystore(s) and the password encrypting it to import your key (since it needs to decrypt the keystore in order to be able to use it to sign on your behalf).

17. Run Screen

From the nimbus-eth2 directory, run:

screen

You should see output that looks like the following:

GNU Screen version 4.06.02 (GNU) 23-Oct-17

Copyright (c) 2015-2017 Juergen Weigert, Alexander Naumov, Amadeusz Slawinski
Copyright (c) 2010-2014 Juergen Weigert, Sadrul Habib Chowdhury
Copyright (c) 2008-2009 Juergen Weigert, Michael Schroeder, Micah Cowan, Sadrul Habib Chowdhury
Copyright (c) 1993-2007 Juergen Weigert, Michael Schroeder
Copyright (c) 1987 Oliver Laumann

This program is free software; you can redistribute it and/or modify it under the terms of the GNU
General Public License as published by the Free Software Foundation; either version 3, or (at your
option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
License for more details.

You should have received a copy of the GNU General Public License along with this program (see the file
COPYING); if not, see http://www.gnu.org/licenses/, or contact Free Software Foundation, Inc., 51
Franklin Street, Fifth Floor, Boston, MA  02111-1301  USA.

Send bugreports, fixes, enhancements, t-shirts, money, beer & pizza to [email protected]


Capabilities:
+copy +remote-detach +power-detach +multi-attach +multi-user +font +color-256 +utf8 +rxvt
+builtin-telnet

Press Enter or Space.

18. Connect to Pyrmont

We're finally ready to connect to Pyrmont!

Note: If you haven't already, we recommend registering for, and running, your own Infura endpoint to connect to eth1. For instructions on how to do so, see this page.

To connect to pyrmont, run:

./run-pyrmont-beacon-node.sh

You'll be prompted to enter a Web3 provider URL:

To monitor the Eth1 validator deposit contract, you'll need to pair
the Nimbus beacon node with a Web3 provider capable of serving Eth1
event logs. This could be a locally running Eth1 client such as Geth
or a cloud service such as Infura. For more information please see
our setup guide:

https://status-im.github.io/nimbus-eth2/eth1.html

Please enter a Web3 provider URL:

Enter your own secure websocket (wss) endpoint.

19. Check for successful connection

If you look near the top of the logs printed to your console, you should see confirmation that your beacon node has started, with your local validator attached:

INF 2020-12-01 11:25:33.487+01:00 Launching beacon node
...
INF 2020-12-01 11:25:34.556+01:00 Loading block dag from database            topics="beacnde" tid=19985314 file=nimbus_beacon_node.nim:198 path=build/data/shared_pyrmont_0/db
INF 2020-12-01 11:25:35.921+01:00 Block dag initialized
INF 2020-12-01 11:25:37.073+01:00 Generating new networking key
...
NOT 2020-12-01 11:25:45.267+00:00 Local validator attached                   tid=22009 file=validator_pool.nim:33 pubKey=95e3cbe88c71ab2d0e3053b7b12ead329a37e9fb8358bdb4e56251993ab68e46b9f9fa61035fe4cf2abf4c07dfad6c45 validator=95e3cbe8
...
NOT 2020-12-01 11:25:59.512+00:00 Eth1 sync progress                         topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3836397 depositsProcessed=106147
NOT 2020-12-01 11:26:02.574+00:00 Eth1 sync progress                         topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3841412 depositsProcessed=106391
...
INF 2020-12-01 11:26:31.000+00:00 Slot start                                 topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:505 lastSlot=96566 scheduledSlot=96567 beaconTime=1w6d9h53m24s944us774ns peers=7 head=b54486c4:96563 headEpoch=3017 finalized=2f5d12e4:96479 finalizedEpoch=3014
INF 2020-12-01 11:26:36.285+00:00 Slot end                                   topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:593 slot=96567 nextSlot=96568 head=b54486c4:96563 headEpoch=3017 finalizedHead=2f5d12e4:96479 finalizedEpoch=3014

To keep track of your syncing progress, have a look at the output at the very bottom of the terminal window in which your validator is running. You should see something like:

peers: 15 ❯ finalized: ada7228a:8765 ❯ head: b2fe11cd:8767:2 ❯ time: 9900:7 (316807) ❯ sync: wPwwwwwDwwDPwPPPwwww:7:1.2313:1.0627:12h01m(280512)

Keep an eye on the number of peers you're currently connected to (in the above case that's 15), as well as your sync progress.

Note: 15 - 20 peers and an average sync speed of 0.5 - 1.0 blocks per second is normal on Pyrmont with a Pi. If your sync speed is much slower than this, the root of the problem may be your USB 3.0 to SSD adapter. See this post for a recommended workaround.
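If you want to pull a single figure out of that status line (say, for a simple monitoring script), standard text tools are enough. A minimal sketch using the example line from above -- note that the exact format of this line may change between Nimbus versions:

```shell
# The example status line from above, captured in a variable for illustration
STATUS='peers: 15 ❯ finalized: ada7228a:8765 ❯ head: b2fe11cd:8767:2 ❯ time: 9900:7 (316807)'

# The peer count is the second whitespace-separated field
echo "$STATUS" | awk '{print $2}'   # prints: 15
```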

20. End ssh session and logout

To detach your screen session but leave your processes running, press Ctrl-A followed by Ctrl-D. You can now end your ssh session (type exit or press Ctrl-D) and switch off your laptop.

Verifying your progress is as simple as sshing back into your Pi and typing screen -r. This will resume your screen session (and you will be able to see your node's entire output since you logged out).

Professional setup advice

While screen is a nice tool for testing, it's not really a good idea to rely on it for serious use. For a more professional setup, we recommend setting up a systemd service with an autorestart on boot (should you experience an unexpected power outage, this will ensure your validator restarts correctly).

For the details on how to do this, see this page.
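As a rough sketch of what such a service might look like -- the unit name, user, and paths below are assumptions, so adapt them to your own setup and defer to the linked page for the details:

```ini
# /etc/systemd/system/nimbus-pyrmont.service -- hypothetical example
[Unit]
Description=Nimbus beacon node (Pyrmont)
After=network-online.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/nimbus-eth2
ExecStart=/home/pi/nimbus-eth2/run-pyrmont-beacon-node.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

You would then enable it with sudo systemctl enable --now nimbus-pyrmont (again, the unit name here is illustrative).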

Validating with a Raspberry Pi: Mainnet advice

Whether or not your Pi is up to the task will depend on a number of factors such as SSD speed, network connectivity, etc. As such, it's best to verify performance on a testnet first.

The best thing you can do is to set your Pi to run Pyrmont. If you have no trouble syncing and attesting on Pyrmont, your setup should be more than good enough for mainnet as well (Mainnet is expected to use fewer resources).

Although we don't expect a modern Pi to fail, we recommend buying a spare Pi and an enterprise-grade SSD on the off-chance it does; keep your original SD card around to make it easy to copy the image over.

Finally, to make sure your Pi restarts automatically on boot, we recommend setting up a systemd service. For the details on how to do this, see this page.

Rocket Pool: Introductory guide

This guide offers a bare-bones introduction to getting up and running with Nimbus and Rocket Pool -- a trustless staking pool which matches those who wish to stake some ETH with those who wish to operate a node.

Nota Bene: Rocket Pool is not only for node operators. Staking in Rocket Pool as a regular user is as easy as navigating to the Rocket Pool website, entering an amount of ETH to stake, and clicking Start! When you stake, you will immediately receive an amount of rETH with equivalent value to the ETH you deposit. This allows anyone, no matter how technical or wealthy, to help contribute to the decentralisation of the network.

It assumes you are familiar with the basics of how Rocket Pool works. If that's not the case, we recommend reading through the following resources first:

If you're a Raspberry Pi user, we highly recommend this wonderful and complementary resource by community member Joe Clapis.

Note: Rocket Pool is currently running their Beta Finale on Pyrmont testnet, so this is the perfect time to get up to speed and play around with their stack.

1. Install Docker + Compose

If you're using Ubuntu, Debian, CentOS or Fedora, please skip this step.

To install Docker and Compose follow the instructions here and here.

Note that Docker Desktop for Mac and Windows already include Compose, which means that if you're using a Mac or Windows device you can ignore the second link.

2. Install smart node client

Background: The Rocket Pool smart node software stack provides all of the necessary infrastructure for running a node in the Rocket Pool network. It contains a smart node client, which provides a command-line interface for managing a smart node either locally or remotely (over SSH), and a smart node service, which provides an API for client communication and performs background node tasks (such as validator duties).

You can install the smart node client with either curl or wget.

To see which tool you have available, run:

curl --version
wget --version

Once you know whether you have curl or wget available, you can find the relevant command for your operating system here.

For example, if you're running macOS with curl installed, you should run:

curl -L https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-darwin-amd64 -o /usr/local/bin/rocketpool && chmod +x /usr/local/bin/rocketpool

3. Install smart node service

To install the smart node service, run:

rocketpool service install

Note: If you’re using Ubuntu, Debian, CentOS or Fedora, the above will automatically install docker engine and docker-compose on your system. If automatic dependency installation is not supported on your platform (this is the case for macOS, for example), run rocketpool service install -d instead.

4. Configure smart node client

Now you're ready to configure the smart node client:

rocketpool service config

You’ll be prompted to select an eth1 and eth2 client to run. If you like, you can use Infura instead of running an eth1 client.

The default is to select a random client for you, so make sure you select Nimbus!

5. Start Rocket Pool

To start Rocket Pool, open a new shell session and run:

rocketpool service start

You should see the following:

Starting rocketpool_eth1 ... done
Starting rocketpool_api  ... done
Starting rocketpool_eth2 ... done
Starting rocketpool_watchtower ... done
Starting rocketpool_node       ... done
Starting rocketpool_validator  ... done

Note: Docker will make sure that Rocket Pool keeps running, even if Nimbus crashes or you restart your computer.

6. Check Nimbus is running correctly

To ensure Nimbus is running correctly, run:

rocketpool service logs eth2

Nimbus will print lines that look like this:

eth2_1        | INF 2021-02-21 06:35:43.302+00:00 Slot start                                 topics="beacnde" tid=1 file=nimbus_beacon_node.nim:940 lastSlot=682377 scheduledSlot=682378 delay=302ms641us581ns peers=47 head=f752f69a:745 headEpoch=23 finalized=2717f624:672 finalizedEpoch=21 sync="PPUPPPDDDD:10:2.0208:1.5333:01d20h29m (736)"
eth2_1        | INF 2021-02-21 06:35:43.568+00:00 Slot end

The time towards the end (01d20h29m) tells you how long Nimbus thinks it will be until you're fully synced.

7. Create a Rocket Pool wallet

Now that Nimbus is syncing, you're ready to create a Rocket Pool wallet to create and hold your validator keys:

rocketpool wallet init

8. Find your node address

You'll need to find your node address in order to be able to request Goerli ETH:

rocketpool node status

9. Request Goerli ETH

Request 35 Goerli ETH from the faucet to the address you found in the previous step.

Note: you'll need slightly more than 32 ETH since you'll also need to interact with the Rocket Pool smart contracts to request RPL.

10. Request Goerli RPL

You'll also need some RPL. To request RPL directly from the Rocket Pool faucet, run:

rocketpool faucet withdraw-rpl

11. Register your node

Now you're finally ready to register your node with Rocket Pool:

rocketpool node register

12. Make a deposit

The final step is to deposit 32 ETH to initialise your validator (don't worry, you'll get half of it back):

rocketpool node deposit

Note: You’ll see a prompt that will ask you to select the amount of ETH you wish to deposit. Select 32 ETH to ensure you can start staking ASAP. At some point (shouldn't take more than 24 hours) you'll be assigned an additional 16 ETH to manage from Rocket Pool stakers: at this stage you'll be able to ask for a 16 ETH refund using rocketpool minipool refund.

That’s it! You’re officially part of the Rocket Pool network!

Tip: Once Nimbus is synced, you'll be able to check on the status of your minipool by running:

rocketpool minipool status

Key resources / further reading

Prater testnet: what you should know








The latest Eth2 testnet, Prater, is now open to the public.

Prater's objective is to ensure that the network remains stable under a higher load than we've seen so far on mainnet -- the genesis validator count for Prater was 210k (almost double that of the mainnet Beacon Chain).

To elaborate a little, we want to make sure that the network is able to function properly with considerably more validators: increasing the number of validators increases the state size, increases the amount of work done to process that state, and increases the number of messages being gossipped on the network; blocks also become fuller, which explores a new kind of constraint as clients need to optimise better for attestation inclusion.

Both Pyrmont and Prater will co-exist for the foreseeable future (we will be testing the Altair fork on Pyrmont, for example). However, in the medium term we expect Prater to replace Pyrmont.

If you're already validating with Nimbus, you should start thinking about transitioning from Pyrmont to Prater at some point over the next few weeks. However, there is no immediate rush, so please do so at your own convenience. If you're new to Nimbus then you could try starting directly with Prater.

Install dependencies

The Nimbus beacon chain can run on Linux, macOS, Windows, and Android. At the moment, Nimbus has to be built from source, which means you'll need to install some dependencies.

Time

The beacon chain relies on your computer having the correct time set (plus or minus 0.5 seconds).

We recommend you run a high quality time service on your computer. At a minimum, you should run an NTP client on the server.

Note: Most operating systems (including macOS) automatically sync with NTP by default.

If the above sounds like Latin to you, don't worry. You should be fine as long as you haven't messed around with the time and date settings on your computer (they should be set automatically).

External Dependencies

  • Developer tools (C compiler, Make, Bash, Git)

Nimbus will build its own local copy of Nim, so Nim is not an external dependency.

Linux

On common Linux distributions the dependencies can be installed with

# Debian and Ubuntu
sudo apt-get install build-essential git

# Fedora
dnf install @development-tools

# Archlinux, using an AUR manager
yourAURmanager -S base-devel

macOS

Assuming you use Homebrew to manage packages

brew install cmake

Windows

To build Nimbus on Windows, the Mingw-w64 build environment is recommended.

Install Mingw-w64 for your architecture using the "MinGW-W64 Online Installer":

  • select your architecture in the setup menu (i686 on 32-bit, x86_64 on 64-bit)
  • set threads to win32
  • set exceptions to "dwarf" on 32-bit and "seh" on 64-bit.
  • Change the installation directory to C:\mingw-w64 and add it to your system PATH in "My Computer"/"This PC" -> Properties -> Advanced system settings -> Environment Variables -> Path -> Edit -> New -> C:\mingw-w64\mingw64\bin (C:\mingw-w64\mingw32\bin on 32-bit)

Install Git for Windows and use a "Git Bash" shell to clone and build nimbus-eth2.

Android

  • Install the Termux app from FDroid or the Google Play store
  • Install a PRoot of your choice following the instructions for your preferred distribution. Note, the Ubuntu PRoot is known to contain all Nimbus prerequisites compiled on Arm64 architecture (the most common architecture for Android devices).

Assuming you use Ubuntu PRoot

apt install build-essential git

Build the beacon node

The beacon node connects to the eth2 network, manages the blockchain, and provides APIs to interact with the beacon chain.

Importantly, you need to have built the beacon node in order to be able to import your keys.

Todo: explain relationship between beacon node and validator client

Prerequisites

Before building and running the application, make sure you've installed the required dependencies.

Building the node

1. Clone the nimbus-eth2 repository

git clone https://github.com/status-im/nimbus-eth2
cd nimbus-eth2

2. Run the beacon node build process

To build the Nimbus beacon node and its dependencies, run:

make nimbus_beacon_node

Updating the node

Make sure you stay on the lookout for any critical updates to Nimbus and keep your node updated.

Run an eth1 node

In order to process incoming validator deposits from the eth1 chain, you'll need to run an eth1 client in parallel to your eth2 client.

Validators are responsible for including new deposits when they propose blocks. And an eth1 client is needed to ensure your validator performs this task correctly.

On this page we provide instructions for using Geth (however, any reputable eth1 client should do the trick).

Note: If you're running on a resource-restricted device like a Raspberry Pi, we recommend setting up a personal Infura endpoint instead as a stop-gap solution. As it stands it may be a little complicated to run a full Geth node on a Pi (and light mode doesn't give you the deposit data you need).

In the medium term (3-6 months), we expect someone (perhaps us) will build a thin layer on top of plain Eth1 header-syncing light clients to address this issue. Specifically, what's missing is a gossip network broadcasting deposit proofs (i.e. deposits and corresponding Merkle proofs rooted in Eth1 headers). When that happens, you should be able to swap out Infura.

However, if you have a > 500GB SSD, and your hardware can handle it, we strongly recommend running your own eth1 client. This will help ensure the network stays as decentralised as possible.

1. Install Geth

If you're running macOS, follow the instructions listed here to install geth. Otherwise see here.

2. Start Geth

Once you have geth installed, use the following command to start your eth1 node:

Testnet

geth --goerli --ws

Mainnet

geth --ws

Note: The --ws flag is needed to enable the websocket RPC API. This allows Nimbus to query the eth1 chain using Web3 API calls.
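If you want to confirm that the WebSocket endpoint is actually reachable before pointing Nimbus at it, one option is to attach Geth's console to it (a quick sanity check, assuming the default local port):

```shell
# Attach to the local WebSocket endpoint and print the sync status;
# eth.syncing returns false once Geth is fully synced
geth attach ws://127.0.0.1:8546 --exec 'eth.syncing'
```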

3. Leave Geth running

Let it sync - Geth uses a fast sync mode by default. It may take anywhere between a few hours and a couple of days.

N.B. It is safe to run Nimbus and start validating even if Geth hasn't fully synced yet.

You'll know Geth has finished syncing when you start seeing logs that look like the following:

INFO [05-29|01:14:53] Imported new chain segment               blocks=1 txs=2   mgas=0.043  elapsed=6.573ms   mgasps=6.606   number=3785437 hash=f72595…c13f23
INFO [05-29|01:15:08] Imported new chain segment               blocks=1 txs=3   mgas=0.067  elapsed=7.639ms   mgasps=8.731   number=3785441 hash=be7e55…a8c1c7
INFO [05-29|01:15:25] Imported new chain segment               blocks=1 txs=21  mgas=1.084  elapsed=33.610ms  mgasps=32.264  number=3785442 hash=fd54be…79b047
INFO [05-29|01:15:42] Imported new chain segment               blocks=1 txs=26  mgas=0.900  elapsed=26.209ms  mgasps=34.335  number=3785443 hash=2504ff…119622
INFO [05-29|01:15:59] Imported new chain segment               blocks=1 txs=12  mgas=1.228  elapsed=22.693ms  mgasps=54.122  number=3785444 hash=951dfe…a2a083
INFO [05-29|01:16:05] Imported new chain segment               blocks=1 txs=3   mgas=0.065  elapsed=5.885ms   mgasps=11.038  number=3785445 hash=553d9e…fc4547
INFO [05-29|01:16:10] Imported new chain segment               blocks=1 txs=0   mgas=0.000  elapsed=5.447ms   mgasps=0.000   number=3785446 hash=5e3e7d…bd4afd
INFO [05-29|01:16:10] Imported new chain segment               blocks=1 txs=1   mgas=0.021  elapsed=7.382ms   mgasps=2.845   number=3785447 hash=39986c…dd2a01
INFO [05-29|01:16:14] Imported new chain segment               blocks=1 txs=11  mgas=1.135  elapsed=22.281ms  mgasps=50.943  number=3785444 hash=277bb9…623d8c

Start syncing

If you're joining a network that has already launched, you need to ensure that your beacon node is completely synced before submitting your deposit.

This is particularly important if you are joining a network that's been running for a while.

Testnet

To start syncing the pyrmont testnet, from the nimbus-eth2 repository, run:

 ./run-pyrmont-beacon-node.sh 

Mainnet

To start syncing the eth2 mainnet, while monitoring the eth1 mainnet chain for deposits, run:

 ./run-mainnet-beacon-node.sh --web3-url="ws://127.0.0.1:8546"

Note, the above command assumes you are running a local geth instance. Geth accepts connections from the loopback interface (127.0.0.1), with default WebSocket port 8546. This means that your default Web3 provider URL should be: ws://127.0.0.1:8546

N.B. If you're using your own Infura endpoint, you should enter that instead.

You should see the following output:

INF 2020-12-01 11:25:33.487+01:00 Launching beacon node
...
INF 2020-12-01 11:25:34.556+01:00 Loading block dag from database            topics="beacnde" tid=19985314 file=nimbus_beacon_node.nim:198 path=build/data/shared_pyrmont_0/db
INF 2020-12-01 11:25:35.921+01:00 Block dag initialized
INF 2020-12-01 11:25:37.073+01:00 Generating new networking key
...
NOT 2020-12-01 11:25:59.512+00:00 Eth1 sync progress                         topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3836397 depositsProcessed=106147
NOT 2020-12-01 11:26:02.574+00:00 Eth1 sync progress                         topics="eth1" tid=21914 file=eth1_monitor.nim:705 blockNumber=3841412 depositsProcessed=106391
...
INF 2020-12-01 11:26:31.000+00:00 Slot start                                 topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:505 lastSlot=96566 scheduledSlot=96567 beaconTime=1w6d9h53m24s944us774ns peers=7 head=b54486c4:96563 headEpoch=3017 finalized=2f5d12e4:96479 finalizedEpoch=3014
INF 2020-12-01 11:26:36.285+00:00 Slot end                                   topics="beacnde" tid=21815 file=nimbus_beacon_node.nim:593 slot=96567 nextSlot=96568 head=b54486c4:96563 headEpoch=3017 finalizedHead=2f5d12e4:96479 finalizedEpoch=3014
...

Command line options

You can pass any nimbus_beacon_node options to the pyrmont and mainnet scripts. For example, if you wanted to launch Nimbus on pyrmont with a different base port, say 9100, you would run:

./run-pyrmont-beacon-node.sh --tcp-port=9100 --udp-port=9100

To see a list of the command line options available to you, with descriptions, navigate to the build directory and run:

./nimbus_beacon_node --help

Make a deposit for your validator

The easiest way to get your deposit in is to follow the Ethereum Foundation's launchpad instructions here:

Testnet: https://pyrmont.launchpad.ethereum.org/

Mainnet: https://launchpad.ethereum.org/

⚠️ If you are making a mainnet deposit make sure you verify that the deposit contract you are interacting with is the correct one.

You should verify that the address is indeed: 0x00000000219ab540356cBB839Cbe05303d7705Fa

You may notice that there have been considerable improvements to the launchpad process since the summer.

In particular, the Key Generation section is now much clearer, and you no longer have to install dependencies to get the command line app working.

We won't elaborate on each individual step here, since they are well explained on the site itself. However, there are two points of note:

1. Eth1 connection

In the Select Client section you'll first be asked to choose an eth1 client. You need to run an eth1 client in order to process incoming validator deposits from the eth1 chain.

We recommend you choose Go Ethereum (or Geth).

If you've followed the book up to this point, you should already have geth up and running.

2. Block explorer

Once you've sent off your transaction, you should see the following screen.

We recommend you click on Beaconchain. This will open up a window that allows you to keep track of your validator's status.

It's a good idea to bookmark this page.

Expected waiting time (the queue)

Once you send off your transaction(s), your validator will be put in a queue based on deposit time. Getting through the queue may take a few hours or days (assuming the chain is finalising). No validators are accepted into the validator set while the chain isn't finalising.

Import your validator keys

To import your signing key(s) into Nimbus, copy the validator_keys directory -- the directory that was created for you when you generated your keys -- into nimbus-eth2. Then run:

Pyrmont

build/nimbus_beacon_node deposits import --data-dir=build/data/shared_pyrmont_0

Mainnet

build/nimbus_beacon_node deposits import --data-dir=build/data/shared_mainnet_0

Note: You can also specify a different path to your validator_keys directory as follows:

Pyrmont

build/nimbus_beacon_node deposits import  --data-dir=build/data/shared_pyrmont_0 "<YOUR VALIDATOR KEYS DIRECTORY>"

Mainnet

build/nimbus_beacon_node deposits import  --data-dir=build/data/shared_mainnet_0 "<YOUR VALIDATOR KEYS DIRECTORY>"

Replace <YOUR VALIDATOR KEYS DIRECTORY> with the full pathname of the validator_keys directory that was created when you generated your keys using the command line app.

Tip: run pwd in your validator_keys directory to print the full pathname to the console (if you're on Windows, run cd instead).

You'll be asked to enter the password you created to encrypt your keystore(s).

Don't worry, this is entirely normal. Your validator client needs both your signing keystore(s) and the password encrypting it to import your key (since it needs to decrypt the keystore in order to be able to use it to sign on your behalf).

Storage

When you import your keys into Nimbus, your validator signing key(s) are stored in the build/data/shared_<TESTNET OR MAINNET>_0/ folder, under secrets and validators - make sure you keep these folders backed up somewhere safe.

The secrets folder contains the common secret that gives you access to all your validator keys.

The validators folder contains your signing keystore(s) (encrypted keys). Keystores are used by validators as a method for exchanging keys. For more on keys and keystores, see here.
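As a concrete example of backing these folders up (a sketch, assuming the default Pyrmont data directory -- adjust DATA_DIR for your network):

```shell
# Archive the secrets and validators folders to a single backup file
DATA_DIR=build/data/shared_pyrmont_0
tar czf nimbus-keys-backup.tar.gz -C "$DATA_DIR" secrets validators
```

Store the resulting archive somewhere safe and offline.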

Note: The Nimbus client will only ever import your signing key -- in any case, if you used the deposit launchpad, this is the only key you should have (thanks to the way these keys are derived, you can generate the withdrawal key from your mnemonic whenever you wish to withdraw).

Export

Todo

Connect your validator to eth2

Pyrmont

To connect your validator to the Pyrmont testnet, from the nimbus-eth2 repository run:

 ./run-pyrmont-beacon-node.sh

Mainnet

To connect your validator to mainnet, from the nimbus-eth2 repository run:

./run-mainnet-beacon-node.sh

In both cases, you'll be asked to enter your Web3 provider URL again.

Note: If your beacon node is already running, you'll need to shut it down gracefully (Ctrl+c) and re-run the above command.

Your beacon node will launch and connect your validator to the eth2 network. To check that this has happened correctly, check your logs for the following:

INF 2020-11-18 11:20:00.181+01:00 Launching beacon node 
...
NOT 2020-11-18 11:20:02.091+01:00 Local validator attached

Keep an eye on your validator

The best way to keep track of your validator's status is using the beaconcha.in explorer (click on the orange magnifying glass at the very top and paste in your validator's public key):

If you deposit after the genesis state was decided, your validator(s) will be put in a queue based on deposit time, and will slowly be inducted into the validator set after genesis. Getting through the queue may take a few hours or a day or so.

You can even create an account (testnet link, mainnet link) to add alerts and keep track of your validator's performance (testnet link, mainnet link).


Make sure your validator is attached

On startup, you should see a log message that reads Local validator attached. This has a pubKey field which should match the public key of your validator.

Check your IP address

Check that Nimbus has recognised your external IP properly. To do this, look at the end of the first log line:

Starting discovery node","topics":"discv5","tid":2665484,"file":"protocol.nim:802","node":"b9*ee2235:<IP address>:9000"

<IP address> should match your external IP (the IP by which you can be reached from the internet).

Note that the port number is displayed directly after the IP -- in the above case 9000. This is the port that should be opened and mapped.

Keep track of your syncing progress

To keep track of your syncing progress, have a look at the output at the very bottom of the terminal window in which your validator is running. You should see something like:

peers: 35 ❯ finalized: ada7228a:8765 ❯ head: b2fe11cd:8767:2 ❯ time: 9900:7 (316807) ❯ sync: wPwwwwwDwwDPwPPPwwww:7:4.2313:4.0627:03h01m(280512)

Where:

  • peers tells you how many peers you're currently connected to (in the above case, 35 peers)
  • finalized tells you the most recent finalized epoch you've synced to so far (the 8765th epoch)
  • head tells you the most recent slot you've synced to so far (the 2nd slot of the 8767th epoch)
  • time tells you the current time since Genesis (the 7th slot of the 9900th epoch -- or equivalently, the 316,807th slot)
  • sync tells you how fast you're syncing right now (4.2313 blocks per second), your average sync speed since you started (4.0627 blocks per second), the time left until you're fully synced (3 hours and 1 minute), and how many blocks you've synced so far (280,512), along with information about 20 sync workers linked to the 20 most performant peers you are currently connected to (represented by a string of letters and a number).

The string of letters -- what we call the sync worker map (in the above case wPwwwwwDwwDPwPPPwwww) -- shows the status of each of the sync workers mentioned above, where:

    s - sleeping (idle),
    w - waiting for a peer from PeerPool,
    R - requesting blocks from peer
    D - downloading blocks from peer
    P - processing/verifying blocks
    U - updating peer's status information

The number following it (7 in the above case) represents the number of workers that are currently active (i.e. not sleeping or waiting for a peer).

Note: If you're running Nimbus as a service, the above status bar won't be visible to you. You can use the RPC calls outlined in the API page to retrieve similar information.

Keep Nimbus updated

Make sure you stay on the lookout for any critical updates to Nimbus. The best way to do so is through the announcements channel on our discord.

To update to the latest version, run:

git pull && make update

Followed by:

make nimbus_beacon_node

to rebuild the beacon node.

⚠️ In order to minimise downtime, we recommend updating and rebuilding the beacon node before restarting.

Note: If your beacon node is already running, you'll need to disconnect and reconnect for the changes to take effect.

Prepare for Mainnet

Latest software

Please check that you are running the latest stable Nimbus software release.

Note: If you are setting up your client before launch, it is your responsibility to check for any new software releases in the run up to launch. At the minimum you should check the release page weekly.

More than 20 peers

Please check that your node has at least 15 peers. See the footer at the bottom of the terminal window for your peer count.

Validator attached

Please check that your validator is attached to your node.

VPN

To avoid exposing your validator identity to the network, we recommend you use a trustworthy VPN such as ProtonVPN. This helps reduce the risk of revealing your IP address to the network.

Ethereum Foundation's Checklist

As a final check, we recommend you also go through the EF's staker checklist.

Email notifications

You can create an account on beaconcha.in to set up email notifications in case your validator loses balance (goes offline), or gets slashed.

Tip: If your validator loses balance for two epochs in a row, you may want to investigate. It's a strong signal that it may be offline.

1. Sign up at beaconcha.in/register

2. Click on the bookmark icon

3. Tick the boxes and select Add To Watchlist

Graffiti

You can use your node's graffiti flag to make your mark on history and forever engrave some words of your choice into an Ethereum block. You will be able to see it using the block explorer.

To do so on Pyrmont, run:

./run-pyrmont-beacon-node.sh --graffiti="<YOUR_WORDS>"

To do so on Mainnet, run:

./run-mainnet-beacon-node.sh --graffiti="<YOUR_WORDS>"

Nimbus binaries

Nimbus binaries are available for Linux AMD64, ARM32 and ARM64, and Windows -- MacOS binaries will be added in the future.

You can find the latest release here: https://github.com/status-im/nimbus-eth2/releases

Scroll to the bottom of the first release you see, and click on Assets. You should see a list that looks like the following:

Click on the tar.gz file that corresponds to your OS and architecture, unpack the archive, read the README and run the binary directly or through some provided wrapper script.

We've designed the build process to be reproducible. In practice, this means that anyone can verify that these exact binaries were produced from the corresponding source code commits. For more about the philosophy and importance of this feature see reproducible-builds.org.

For instructions on how to reproduce the build, see here.

Docker images

Docker images for end-users are generated and published automatically to Docker Hub from the Nimbus-eth2 CI, by a GitHub action, whenever a new release is tagged in Git.

We have version-specific Docker tags (statusim/nimbus-eth2:amd64-v1.2.3) and a tag for the latest image (statusim/nimbus-eth2:amd64-latest).

These images are simply the contents of release tarballs inside a "debian:bullseye-slim" image, running under a user imaginatively named "user", with UID:GID of 1000:1000.

The unpacked archive is in "/home/user/nimbus-eth2" which is also the default WORKDIR. The default ENTRYPOINT is the binary itself: "/home/user/nimbus-eth2/build/nimbus_beacon_node".

Usage

You need to create an external data directory and mount it as a volume inside the container, with the mounting point being "/home/user/nimbus-eth2/build/data".

mkdir data
docker run -it --rm -v ${PWD}/data:/home/user/nimbus-eth2/build/data statusim/nimbus-eth2:amd64-latest [nimbus_beacon_node args here]

Or you can use a wrapper script instead:

mkdir data
docker run -it --rm -v ${PWD}/data:/home/user/nimbus-eth2/build/data -e WEB3_URL="wss://mainnet.infura.io/ws/v3/YOUR_TOKEN" --entrypoint /home/user/nimbus-eth2/run-mainnet-beacon-node.sh statusim/nimbus-eth2:amd64-latest [nimbus_beacon_node args here]

Better yet, use docker-compose, with one of the example configuration files as a base for your custom configuration:

mkdir data
docker-compose -f docker-compose-example1.yml up --quiet-pull --no-color --detach

The rather voluminous logging is done on stdout, so you might want to change the system-wide Docker logging defaults (dumping everything in "/var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log") to something like "syslog". Make sure there's some log rotation system in use and the intervals make sense for these large logs.
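For example, switching the system-wide default to the syslog driver can be done in /etc/docker/daemon.json (a minimal sketch; restart the Docker daemon afterwards for it to take effect):

```json
{
  "log-driver": "syslog"
}
```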

Troubleshooting

⚠️ The commands on this page refer to the Pyrmont testnet. If you're running mainnet, replace pyrmont with mainnet in the commands below.

As it stands, we are continuously making improvements to both stability and memory usage. So please make sure you keep your client up to date! This means restarting your node and updating your software regularly from the master branch. If you can't find a solution to your problem here, feel free to hit us up on our discord!

Note: While the stable branch of the nimbus-eth2 repository is more stable, the latest updates happen in the unstable branch which is (usually) merged into master every week on Tuesday. If you choose to run Nimbus directly from the unstable branch, be prepared for instabilities!

To update and restart, run git pull, make update, followed by make nimbus_beacon_node:

cd nimbus-eth2
git pull
make update # Update dependencies
make nimbus_beacon_node # Rebuild beacon node
./run-pyrmont-beacon-node.sh # Restart using same keys as last run

If you find that make update causes the console to hang for too long, try running make update V=1 or make update V=2 instead (these will print a more verbose output to the console which may make it easier to diagnose the problem).

Note: rest assured that when you restart the beacon node, the software will resume from where it left off, using the validator keys you have already imported.

Starting over

The directory that stores the blockchain data of the testnet is build/data/pyrmont_shared_0 (if you're connecting to another testnet, replace pyrmont with that testnet's name). If you've imported the wrong keys, and wish to start over, delete this directory.

Syncing

If you’re experiencing sync problems, we recommend running make clean-pyrmont to delete the database and restart your sync (make sure you’ve updated to the latest master first though).

Warning: make clean-pyrmont will erase all of your syncing progress so far, so it should only be used as a last resort -- if your client gets stuck for a long time (because it's unable to find the right chain and/or stay with the same head value) and a normal restart doesn't improve things.

Pruning the database

If you're running out of storage, you can prune the database of unnecessary blocks and states by running:

make ncli_db
build/ncli_db pruneDatabase --db=build/data/shared_pyrmont_0/db --verbose=true

This will create nbc_pruned.sqlite3 files in build/data/shared_pyrmont_0/db, which you can use in place of the original nbc.sqlite3 files. We recommend you hold onto the originals until you've verified that your validator is behaving as expected with the pruned files.

Options:

  • --keepOldStates (boolean): Keep pre-finalisation states; defaults to true.
  • --verbose (boolean): Print a more verbose output to the console; defaults to false.

Low peer counts

If you're experiencing a low peer count, you may be behind a firewall. Try restarting your client and passing --nat:extip:$EXT_IP_ADDRESS as an option to ./run-pyrmont-beacon-node.sh, where $EXT_IP_ADDRESS is your real IP. For example, if your real IP address is 35.124.65.104, you'd run:

./run-pyrmont-beacon-node.sh --nat:extip:35.124.65.104

Address already in use error

If you're seeing an error that looks like:

Error: unhandled exception: (98) Address already in use [TransportOsError]

It's probably because you're running multiple beacon node instances -- and the default base port 9000 is already in use.

To change the base port, run:

./run-pyrmont-beacon-node.sh --tcp-port=9100 --udp-port=9100

(You can replace 9100 with a port of your choosing)
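Before picking a new port, you can also check which process is bound to the old one (a sketch, assuming the `ss` utility from iproute2 is available):

```shell
# List TCP/UDP listeners on port 9000; the process column shows the culprit
ss -tulpn | grep ':9000' || echo "port 9000 is free"
```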

Catching up on validator duties

If you're being flooded with Catching up on validator duties messages, then your CPU is probably too slow to run Nimbus. Please check that your setup matches our system requirements.

Local timer is broken error

If you cannot start your validator because you are seeing logs that look like the following:

WRN 2021-01-08 06:32:46.975+00:00 Local timer is broken or peer's status information is invalid topics="beacnde" tid=120491 file=sync_manager.nim:752 wall_clock_slot=271961 remote_head_slot=271962 local_head_slot=269254 peer=16U*mELUgu index=0 tolerance_value=0 peer_speed=2795.0 peer_score=200

This is likely due to the fact that your local clock is off. To compare your local time with internet time, run:

cat </dev/tcp/time.nist.gov/13 ; date -u 

The first line of the output gives you internet time, and the second line gives you the time according to your machine. These shouldn't be more than a second apart.
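If your clock is indeed off, enabling automatic time synchronisation usually fixes it. On a systemd-based Linux, for example (a sketch; other init systems differ):

```shell
# Enable NTP synchronisation via systemd-timesyncd
sudo timedatectl set-ntp true
# Confirm: look for "System clock synchronized: yes" in the output
timedatectl status
```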

Eth1 chain monitor failure

todo

Recover lost keys and generate new ones

Your mnemonic can be used to recover lost keys and generate new ones.

Every time you generate a keystore from your mnemonic, that keystore is assigned an index. The first keystore you generate has index 0, the second index 1, etc. You can recover any key using your mnemonic and that key's index. For more on how keys are derived, see this excellent post.

To stay consistent with the rest of the book, we'll take you through how to do this using the deposit-cli's binary executable.

Specifically, we'll be using the existing-mnemonic command. Here's a description of the command from the deposit-cli's README:

This command is used to re-generate or derive new keys from your existing mnemonic. Use this command, if (i) you have already generated keys with this CLI before, (ii) you want to reuse your mnemonic that you know is secure that you generated elsewhere (reusing your eth1 mnemonic .etc), or (iii) you lost your keystores and need to recover your keys.

Recover existing key

⚠️ Recovering validator keys from a mnemonic should only be used as a last resort. Exposing your mnemonic to a computer at any time puts it at risk of being compromised. Your mnemonic is not encrypted and if leaked, can be used to steal your funds.

N.B. the commands below assume you are trying to recover your original key, hence --validator_start_index has been set to 0.

Run the following command from the directory which contains the deposit executable:

Pyrmont

./deposit existing-mnemonic --validator_start_index 0 --num_validators 1 --chain pyrmont

Mainnet

./deposit existing-mnemonic --validator_start_index 0 --num_validators 1 --chain mainnet

You'll be prompted to enter your mnemonic, and a new password for your keystore.

Check that the validator_keys directory contains your extra keystore.

Copy the validator_keys directory to nimbus-eth2 and then follow the instructions here. Your key will be added to your node on next restart.

Generate another key

⚠️ If you wish to generate another validator key, you must take great care to not generate a copy of your original key. Running the same key on two different validator clients will likely get you slashed.

N.B. the commands below assume you already have one key and wish to generate a second, hence --validator_start_index has been set to 1.

Run the following command from the directory which contains the deposit executable:

Pyrmont

./deposit existing-mnemonic --validator_start_index 1 --num_validators 1 --chain pyrmont

Mainnet

./deposit existing-mnemonic --validator_start_index 1 --num_validators 1 --chain mainnet

You'll be prompted to enter your mnemonic, and a new password for your keystore.

Check that the validator_keys directory contains an extra keystore.

Copy the validator_keys directory to nimbus-eth2.

Make sure you've made a deposit for your new keystore, and then follow the instructions here. Your key will be added to your node on next restart.

Perform a voluntary exit

⚠️ Voluntary exits are irreversible. You won't be able to validate again with the same key. And you won't be able to withdraw your stake until the Eth1 and Eth2 merge. Note that voluntary exits won't be processed if the chain isn't finalising.

To perform a voluntary exit, with your beacon node running, run:

Pyrmont

build/nimbus_beacon_node deposits exit --validator=<VALIDATOR_PUBLIC_KEY> --data-dir=build/data/shared_pyrmont_0

Mainnet

build/nimbus_beacon_node deposits exit --validator=<VALIDATOR_PUBLIC_KEY> --data-dir=build/data/shared_mainnet_0

Note: Make sure your <VALIDATOR_PUBLIC_KEY> is prefixed with 0x. In other words the public key should look like 0x95e3...

Set up a systemd service

This page will take you through how to set up a systemd service for your beacon node.

Systemd is used in order to have a command or program run when your device boots (i.e. to add it as a service). Once this is done, you can start, stop, enable, or disable it from the Linux prompt.

systemd is a service manager designed specifically for Linux. There is no port to Mac OS. You can get more information from https://www.raspberrypi.org/documentation/linux/usage/systemd.md or https://fedoramagazine.org/what-is-an-init-system/

1. Create a systemd service

⚠️ If you wish to run the service with metrics enabled, you'll need to replace --metrics:off with --metrics:on in the service file below. See here for more on metrics.

Create a systemd service unit file -- nimbus-eth2-pyrmont.service -- and save it in /etc/systemd/system/.

[Unit]
Description=Nimbus beacon node

[Service]
WorkingDirectory=<BASE-DIRECTORY>
ExecStart=<BASE-DIRECTORY>/build/nimbus_beacon_node \
  --non-interactive \
  --network=pyrmont \
  --data-dir=build/data/shared_pyrmont_0 \
  --web3-url=<WEB3-URL> \
  --rpc:on \
  --metrics:off
User=<USERNAME>
Group=<USERNAME>
Restart=always

[Install]
WantedBy=default.target

Replace:

<BASE-DIRECTORY> with the location of the repository in which you performed the git clone command in step 1.

<USERNAME> with the username of the system user responsible for running the launched processes.

<WEB3-URL> with the WebSocket JSON-RPC URL that you are planning to use.

2. Notify systemd of the newly added service

sudo systemctl daemon-reload

3. Start the service

sudo systemctl enable nimbus-eth2-pyrmont --now
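To check that the service came up cleanly, and to follow its logs, you can use the standard systemd tools (the unit name below matches the service file created above):

```shell
# Show the current state of the service
sudo systemctl status nimbus-eth2-pyrmont
# Follow the node's logs as they're written
sudo journalctl -u nimbus-eth2-pyrmont -f
```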

Log rotation

Nimbus logs are written to stdout, and optionally to a file. Writing to a file for a long-running process may lead to difficulties when the file grows large. This is typically solved with a log rotator. A log rotator is responsible for switching the written-to file, as well as compressing and removing old logs.

Using "logrotate"

logrotate provides log rotation and compression. The corresponding package will install its Cron hooks (or Systemd timer) -- all you have to do is add a configuration file for Nimbus-eth2 in "/etc/logrotate.d/nimbus-eth2":

/var/log/nimbus-eth2/*.log {
	compress
	missingok
	copytruncate
}

The above assumes you've configured Nimbus-eth2 to write its logs to "/var/log/nimbus-eth2/" (usually by redirecting stdout and stderr from your init script).

"copytruncate" is required because, when it comes to moving the log file, logrotate's default behaviour requires application support for re-opening that log file at runtime (something which is currently lacking). So, instead of a move, we tell logrotate to do a copy and a truncation of the existing file. A few log lines may be lost in the process.

You can control rotation frequency and the maximum number of log files kept by using the global configuration file - "/etc/logrotate.conf":

# rotate daily
daily
# only keep logs from the last 7 days
rotate 7
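You can dry-run the configuration to see what logrotate would do, without actually rotating anything:

```shell
# -d (debug) prints the actions logrotate would take, without performing them
logrotate -d /etc/logrotate.d/nimbus-eth2
```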

Using "rotatelogs"

rotatelogs is available on most servers and can be used with Docker, Systemd and manual setups to write rotated logs files.

In particular, when Systemd and its accompanying Journald log daemon are used, this setup avoids clogging the system log by keeping the Nimbus logs in a separate location.

Compression

rotatelogs works by reading stdin and redirecting it to a file based on a name pattern. Whenever the log is about to be rotated, the application invokes a shell script with the old and new log files. Our aim is to compress the log file to save space. The Nimbus-eth2 repo provides a helper script that does this:

# Create a rotation script for rotatelogs (the quoted 'EOF' delimiter stops
# the current shell from expanding "$2" while writing the script)
cat << 'EOF' > rotatelogs-compress.sh
#!/bin/sh

# Helper script for Apache rotatelogs to compress log files on rotation -
# "$2" contains the old log file name

if [ -f "$2" ]; then
    # "nice" prevents hogging the CPU with this low-priority task
    nice gzip -9 "$2"
fi
EOF

chmod +x rotatelogs-compress.sh

Build

Logs in files generally don't benefit from colors. To avoid colors being written to the file, additional flags can be added to the Nimbus build process -- these flags are best saved in a build script to which one can add more options. Future versions of Nimbus will support disabling colors at runtime.


# Build nimbus with colors disabled
cat << EOF > build-nbc-text.sh
#!/bin/bash
make NIMFLAGS="-d:chronicles_colors=off -d:chronicles_sinks=textlines" nimbus_beacon_node
EOF

Run

The final step is to redirect logs to rotatelogs using a pipe when starting Nimbus:

build/nimbus_beacon_node \
  --network:pyrmont \
  --web3-url="$WEB3URL" \
  --data-dir:$DATADIR 2>&1 | rotatelogs -L "$DATADIR/nbc_bn.log" -p "/path/to/rotatelogs-compress.sh" -D -f -c "$DATADIR/log/nbc_bn_%Y%m%d%H%M%S.log" 3600

The options used in this example do the following:

  • -L nbc_bn.log - symlinks to the latest log file, for use with tail -F
  • -p "/path/to/rotatelogs-compress.sh" - runs rotatelogs-compress.sh when rotation is about to happen
  • -D - creates the log directory if needed
  • -f - opens the log immediately when starting rotatelogs
  • -c "$DATADIR/log/nbc_bn_%Y%m%d%H%M%S.log" - includes timestamp in log filename
  • 3600 - rotates logs every hour (3600 seconds)

Deleting old logs

rotatelogs will not do this for you, so you'll need a Cron script (or Systemd timer):

# delete log files older than 7 days
find "$DATADIR/log" -name 'nbc_bn_*.log' -mtime +7 -exec rm '{}' \+

Verify the integrity of Nimbus

We've recently added checksums to the end of our release notes (a practice we will be continuing from now on). Please make sure you get into the habit of verifying these 🙏

For those of you who are unfamiliar, a checksum is a special type of hash used to verify the integrity of a file. Verifying a checksum ensures there was no corruption or manipulation during the download and that the file was downloaded completely and correctly. For a short and simple guide on how to do so, see here.

In the case of the v1.1.0 release for example, the SHA512 checksums are:

# Linux AMD64
8d553ea5422645b5f06001e7f47051706ae5cffd8d88c45e4669939f3abb6caf41a2477431fce3e647265cdb4f8671fa360d392f423ac68ffb9459607eaab462  nimbus_beacon_node
# Linux ARM64
93ffd03a0ce67f7d035e3dc45e97de3c2c9a05a8dd0c6d5f45402ddb04404dc3cf15b80fee972f34152ef171ce97c40f794448bc779ca056081c945f71f19788  nimbus_beacon_node
# Linux ARM
f2e75f3fae2aea0a9f8d45861d52b0e2546c3990f453b509fab538692d18c64e65f58441c5492064fc371e0bc77de6bab970e05394cfd124417601b55cb4a825  nimbus_beacon_node
# Windows AMD64
fd68c8792ea60c2c72e9c2201745f9698bfd1dae4af4fa9e1683f082109045efebd1d80267f13cafeb1cd7414dc0f589a8a73f12161ac2758779369289d5a832  nimbus_beacon_node
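To check a binary, run sha512sum against the downloaded file and compare the output with the hash above for your platform. The sketch below is self-contained, so it derives the expected hash from a stand-in file; in practice you would paste the hash from the release notes:

```shell
# Minimal checksum-verification sketch. In practice, EXPECTED is the
# hash published in the release notes and the file is the downloaded
# nimbus_beacon_node binary; here both are stand-ins.
set -e
printf 'stand-in for the real binary' > nimbus_beacon_node
EXPECTED=$(sha512sum nimbus_beacon_node | awk '{print $1}')
# The actual verification step (note: two spaces between hash and name):
echo "$EXPECTED  nimbus_beacon_node" | sha512sum --check -
```

If the file matches, sha512sum reports OK; any other output means the download should not be trusted.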

Back up your database

The best way to do this is to simply copy it over: you'll find it either in build/data/shared_mainnet_0/db/ (shared_pyrmont_0 if you're running Pyrmont) or in the directory you supplied to the --data-dir argument when you launched Nimbus.
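As a sketch (the directories and the database filename below are placeholders created on the fly; substitute your real data directory, and stop the node first so you don't copy the database mid-write):

```shell
# Database backup sketch. On a real node you would copy
# build/data/shared_mainnet_0/db (or $DATADIR/db); everything here is a
# placeholder so the example is self-contained.
set -e
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/db"
printf 'placeholder' > "$DATADIR/db/example.sqlite3"
BACKUPDIR=$(mktemp -d)
cp -r "$DATADIR/db" "$BACKUPDIR/db-backup-$(date +%Y%m%d)"
ls "$BACKUPDIR"
```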

Add a backup web3 provider

It's a good idea to add a backup web3 provider in case your main one goes down. You can do this by simply repeating the --web3-url parameter on launch.

For example, if your primary eth1 node is a local Geth, but you want to use Infura as a backup you would run:

./run-mainnet-beacon-node.sh  --web3-url="ws://127.0.0.1:8546" --web3-url="wss://mainnet.infura.io/ws/v3/..."

Add an additional validator

To add an additional validator, just follow the same steps as you did when you added your first.

You'll have to restart the beacon node for the changes to take effect.

Note that a single Nimbus instance is able to handle multiple validators.

Grafana and Prometheus

On this page we'll cover how to use Grafana and Prometheus to help you visualise important real-time metrics concerning your validator and/or beacon node.

Prometheus is an open-source systems monitoring and alerting toolkit. It runs as a service on your computer and its job is to capture metrics. You can find more information about Prometheus here.

Grafana is a tool for beautiful dashboard monitoring that works well with Prometheus. You can learn more about Grafana here.

Simple metrics

Run the beacon node with the --metrics flag:

./run-pyrmont-beacon-node.sh --metrics

And visit http://127.0.0.1:8008/metrics to see the raw metrics. You should see a plaintext page that looks something like this:

# HELP nim_runtime_info Nim runtime info
# TYPE nim_runtime_info gauge
nim_gc_mem_bytes 6275072.0
nim_gc_mem_occupied_bytes 1881384.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[digest.Eth2Digest, block_pools_types.BlockRef]"} 25165856.0
nim_gc_heap_instance_occupied_bytes{type_name="BlockRef"} 17284608.0
nim_gc_heap_instance_occupied_bytes{type_name="string"} 6264507.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[SelectorKey[asyncdispatch.AsyncData]]"} 409632.0
nim_gc_heap_instance_occupied_bytes{type_name="OrderedKeyValuePairSeq[Labels, seq[Metric]]"} 122720.0
nim_gc_heap_instance_occupied_bytes{type_name="Future[system.void]"} 79848.0
nim_gc_heap_instance_occupied_bytes{type_name="anon ref object from /Users/hackingresearch/nimbus/clone/nim-beacon-chain/vendor/nimbus-build-system/vendor/Nim/lib/pure/asyncmacro.nim(319, 33)"} 65664.0
nim_gc_heap_instance_occupied_bytes{type_name="anon ref object from /Users/hackingresearch/nimbus/clone/nim-beacon-chain/vendor/nimbus-build-system/vendor/Nim/lib/pure/asyncnet.nim(506, 11)"} 43776.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[byte]"} 37236.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[TrustedAttestation]"} 29728.0

...

Note: Metrics are by default only accessible from the machine the beacon node is running on - to fetch metrics from a remote machine, an SSH tunnel is recommended.
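To pull out a single metric rather than eyeballing the whole page, you can filter with grep. The sample file below stands in for the live page; against a running node you would pipe curl -s http://127.0.0.1:8008/metrics into grep instead:

```shell
# Filter the metrics page for a single metric. A saved sample stands in
# for the live endpoint so the example is self-contained.
cat > metrics_sample.txt <<'EOF'
nim_gc_mem_bytes 6275072.0
nim_gc_mem_occupied_bytes 1881384.0
EOF
grep '^nim_gc_mem_bytes' metrics_sample.txt
```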

Unfortunately, this simple method only offers one snapshot in time (you'll need to keep refreshing to see the data update), which means it's impossible to see a useful history of the metrics. In short, it's far from optimal from an information design point of view.

For a better solution, we'll need the help of two external projects -- Prometheus and Grafana.

Prometheus and Grafana

The following steps will take you through how to use Prometheus and Grafana to spin up a beautiful and useful monitoring dashboard for your validator and beacon node.

Steps

1. Download Prometheus

Use your favourite package manager to download Prometheus -- for example apt-get install prometheus on Ubuntu, or brew install prometheus on MacOS, should do the trick.

If you don't use a package manager, you can download the latest release directly from the Prometheus website. To extract it, run:

tar xvfz prometheus-*.tar.gz
cd prometheus-*

2. Copy the binary

The Prometheus server is a single binary called prometheus (or prometheus.exe on Microsoft Windows). Copy it over to /usr/local/bin:

cp prometheus-2.20.1.linux-amd64/prometheus /usr/local/bin/

3. Run Prometheus with the default configuration file

Prometheus relies on a YAML configuration file to let it know where, and how often, to scrape data.

Example config file:

global:
  scrape_interval: 12s

scrape_configs:
  - job_name: "nimbus"
    static_configs:
      - targets: ['127.0.0.1:8008']

Save the above as prometheus.yml in the nimbus-eth2 repo.

Then run Prometheus:

prometheus --config.file=./prometheus.yml --storage.tsdb.path=./prometheus

You should see the following confirmation in the logs:

level=info ts=2021-01-22T14:52:10.604Z caller=main.go:673 msg="Server is ready to receive web requests."

4. Download Grafana

Download the latest release of Grafana for your platform. You need version 7.2 or newer.

Note: If you use a package manager, you can also download Grafana that way -- for example apt-get install grafana on Ubuntu, or brew install grafana on MacOS, should do the trick.

5. Install and start Grafana

Follow the instructions for your platform to install and start Grafana.

6. Configure login

Go to http://localhost:3000/; you should see the Grafana login screen.

Type in admin for both the username and password. You'll be asked to change the password (we recommend you do so).

7. Add a data source

Hover your mouse over the gear icon in the left menu bar, and click on the Data Sources option in the sub-menu that pops up.

Now click on the Add Data Source button in the center of the screen

Select Prometheus

Enter http://localhost:9090 in the URL field

Set the "Scrape interval" field to the same value you used in the Prometheus config (12s in our example above).

Scroll to the bottom and click on Save and Test

If everything is working correctly you should see a green Data source is working box pop up

8. Import a dashboard

Now, let's import a dashboard; hover your mouse over the + icon in the left menu bar and select import from the pop-up menu

Click on Upload JSON file

Select the beacon_nodes_Grafana_dashboard.json from the nimbus-eth2/grafana/ folder and click on Import

You'll be directed to the dashboard where you'll be able to gain insights into the performance of nimbus-eth2 and your validators

Note: the dashboard is very much a work in progress. Some of the highlights right now include received and proposed blocks, received and sent attestations, peers, memory and cpu usage stats. But keep an eye out for additional metrics in the near future.

And voila! That's all there is to it :)

Joe Clapis

Joe – who’s done some brilliant work integrating Nimbus with Rocket Pool – has created a wonderful guide where he takes you through how to set up a Grafana server on your Pi – using his dashboard as an example.

In his words:

This captures just about every metric I think I’d like to see at a glance.

Whether or not you're running a Pi, we recommend you check out his guide here.

Metanull

A dashboard aimed primarily at users rather than developers.

Note that this dashboard does rely heavily on three Prometheus exporter tools: node_exporter for system metrics, json_exporter for ETH price, and blackbox_exporter for ping times.

The good news is that you don't need to use all these tools, as long as you take care of removing the related panels.

See here for a detailed guide explaining how to use it.

Enabling mobile alerts

Telegram

TODO

Supplying your own Infura endpoint

In a nutshell, Infura is a hosted ethereum node cluster that lets you make requests to the eth1 blockchain without requiring you to set up your own eth1 node.

While we do support Infura to process incoming validator deposits, we recommend running your own eth1 node to avoid relying on a third-party-service.

Note: Nimbus currently supports remote Infura nodes and local Geth nodes. In the future, we plan on having our own eth1 client -- Nimbus 1 -- be the recommended default.

1. Visit Infura.io

Go to:

https://infura.io/

and click on Get Started For Free

2. Sign up

Enter your email address and create a password

3. Verify email address

You should have received an email from Infura in your inbox. Open it up and click on Confirm Email Address

4. Go to dashboard

This will take you to your Infura dashboard (https://infura.io/dashboard/)

5. Create your first project

Click on the first option (create your first project) under Let's Get Started

Choose a name for your project

You'll be directed to the settings page of your newly created project

6. Select endpoint

⚠️ Warning: if you're connecting to mainnet, you should skip this step

If you're connecting to a testnet, in the KEYS section, click on the dropdown menu to the right of ENDPOINTS, and select GÖRLI

7. Copy the websocket endpoint

Copy the address that starts with wss://

⚠️ Warning: make sure you've copied the endpoint that starts with wss (websocket), and not the https endpoint. If you're connecting to mainnet this will read wss://mainnet.infura.io/ws/...

8. Run the beacon node

Launch the beacon node on your favourite testnet, passing in your websocket endpoint as the Web3 provider URL.

9. Check stats

Visit your project's stats page to see a summary of your eth1 related activity and method calls

That's all there is to it :)

Network stats and monitoring

⚠️ This page concerns the Pyrmont testnet. eth2stats is a debugging / developer tool that's suitable for testnets. For privacy reasons, we do not recommend using it for mainnet. For a mainnet alternative, see this guide.

eth2stats is a network monitoring suite for your beacon node + validator client.

It consists of a command-line interface (to help you query your node's API), and an associated website (which allows you to monitor your node from anywhere).

In this guide we'll take you through how to get eth2stats running on your local machine, and how to hook your node up to the website.

Prerequisites

Knowledge of both git and command line basics, and a working Golang environment.

Guide

1. Clone the eth2stats repository

git clone https://github.com/Alethio/eth2stats-client.git

2. Move into the repository

cd eth2stats-client

3. Build the executable

make build

4. Add your node

Go to https://pyrmont.eth2.wtf/

1. Click on add node

2. Configure name and client type

3. Copy the command

Click on Compile from source and copy the command at the bottom.

5. Build and run your node with metrics enabled

From your nimbus-eth2 repository, run:

make nimbus_beacon_node

Followed by:

./run-pyrmont-beacon-node.sh --metrics

6. Run eth2stats

From your eth2stats-client repository, run the command you copied in step 4.3:

./eth2stats-client run \
--eth2stats.node-name="roger" \
--data.folder ~/.eth2stats/data \
--eth2stats.addr="grpc.pyrmont.eth2.wtf:8080" --eth2stats.tls=false \
--beacon.type="nimbus" \
--beacon.addr="http://localhost:9190" \
--beacon.metrics-addr="http://localhost:8008/metrics"

Your node should now be displayed on https://pyrmont.eth2.wtf/ :)

APIs

nimbus-eth2 exposes a collection of APIs for querying the state of the application at runtime.

Note: Where applicable, these APIs mimic the eth2 APIs, with the exception that JSON-RPC is used instead of HTTP REST (the method names, parameters and results are all the same except for the encoding / access method).

Introduction

The nimbus-eth2 API is implemented using JSON-RPC 2.0. To query it, you can use a JSON-RPC library in the language of your choice, or a tool like curl to access it from the command line. A tool like jq is helpful to pretty-print the responses.

curl -d '{"jsonrpc":"2.0","id":"id","method":"peers","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
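For illustration, this is the general shape of a JSON-RPC 2.0 response, and how jq pretty-prints it and extracts individual fields (the result payload below is made up):

```shell
# A JSON-RPC 2.0 response carries jsonrpc, id, and either a result or
# an error member; the result value here is invented for the example.
RESPONSE='{"jsonrpc":"2.0","id":"id","result":{"connected":42}}'
echo "$RESPONSE" | jq .                        # pretty-print the whole response
echo "$RESPONSE" | jq -r '.result.connected'   # extract a single field
```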

Before you can access the API, make sure it's enabled using the RPC flag (nimbus_beacon_node --rpc):

     --rpc                     Enable the JSON-RPC server.
     --rpc-port                HTTP port for the JSON-RPC service.
     --rpc-address             Listening address of the RPC server.

One difference is that endpoints corresponding to specific ones from the spec currently have unwieldy names - for example, an endpoint such as getGenesis is currently named get_v1_beacon_genesis, which would map 1:1 to the actual REST path in the future - verbose, but unambiguous.

Beacon chain API

get_v1_beacon_genesis

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_genesis","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_root

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_root","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_fork

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_fork","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_finality_checkpoints

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_finality_checkpoints","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_stateId_validators

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_validators","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_stateId_validators_validatorId

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_validators_validatorId","params":["finalized", "100167"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_stateId_validator_balances

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_validator_balances","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_states_stateId_committees_epoch

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_states_stateId_committees_epoch","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_headers

get_v1_beacon_headers_blockId

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_headers_blockId","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_beacon_blocks

curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_blocks","params":[{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body":{"randao_reveal":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","eth1_data":{"deposit_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","deposit_count":"1","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"graffiti":"string","proposer_slashings":[{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}],"attester_slashings":[{"attestation_1":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb
663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"attestation_2":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}}],"attestations":[{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}],"deposits":[{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc
59509846d6ec05345bd908eda73e670af888da41af171505"}}],"voluntary_exits":[{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_blocks_blockId

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_blocks_blockId","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_blocks_blockId_root

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_blocks_blockId_root","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_blocks_blockId_attestations

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_blocks_blockId_attestations","params":["finalized"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_beacon_pool_attestations

curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_attestations","params":[{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_pool_attester_slashings

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_pool_attester_slashings","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_beacon_pool_attester_slashings

curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_attester_slashings","params":[{"attestation_1":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"attestation_2":{"attesting_indices":["1"],"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}}}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_pool_proposer_slashings

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_pool_proposer_slashings","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_beacon_pool_proposer_slashings

curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_proposer_slashings","params":[{"signed_header_1":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signed_header_2":{"message":{"slot":"1","proposer_index":"1","parent_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","body_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_beacon_pool_voluntary_exits

curl -d '{"jsonrpc":"2.0","method":"get_v1_beacon_pool_voluntary_exits","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_beacon_pool_voluntary_exits

curl -d '{"jsonrpc":"2.0","method":"post_v1_beacon_pool_voluntary_exits","params":[{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

Beacon Node API

get_v1_node_identity

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_identity","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_node_peers

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_peers","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_node_peers_peerId

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_peers_peerId","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_node_peer_count

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_peer_count","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_node_version

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_version","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_node_syncing

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_syncing","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_node_health

curl -d '{"jsonrpc":"2.0","method":"get_v1_node_health","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

Validator API

get_v1_validator_duties_attester

curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_duties_attester","params":[1,["a7a0502eae26043d1ac39a39457a6cdf68fae2055d89c7dc59092c25911e4ee55c4e7a31ade61c39480110a393be28e8","a1826dd94cd96c48a81102d316a2af4960d19ca0b574ae5695f2d39a88685a43997cef9a5c26ad911847674d20c46b75"]],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_validator_duties_proposer

curl -d '{"jsonrpc":"2.0","id":"id","method":"get_v1_validator_duties_proposer","params":[1] }' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_validator_block

curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_block","params":["1","0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","0x4e696d6275732f76312e302e322d64333032633164382d73746174656f667573"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_validator_attestation_data

curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_attestation_data","params":[1, 1],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_validator_aggregate_attestation

curl -d '{"jsonrpc":"2.0","method":"get_v1_validator_aggregate_attestation","params":[1, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_validator_aggregate_and_proofs

curl -d '{"jsonrpc":"2.0","method":"post_v1_validator_aggregate_and_proofs","params":[{"message":{"aggregator_index":"1","aggregate":{"aggregation_bits":"0x01","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}}},"selection_proof":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

post_v1_validator_beacon_committee_subscriptions

Config API

get_v1_config_fork_schedule

curl -d '{"jsonrpc":"2.0","method":"get_v1_config_fork_schedule","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_config_spec

curl -d '{"jsonrpc":"2.0","method":"get_v1_config_spec","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_config_deposit_contract

curl -d '{"jsonrpc":"2.0","method":"get_v1_config_deposit_contract","params":[],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

Administrative / Debug API

get_v1_debug_beacon_states_stateId

curl -d '{"jsonrpc":"2.0","method":"get_v1_debug_beacon_states_stateId","params":["head"],"id":1}' -H 'Content-Type: application/json' localhost:9190 -s | jq

get_v1_debug_beacon_heads

Nimbus extensions

getBeaconHead

The latest head slot, as chosen by the latest fork choice.

curl -d '{"jsonrpc":"2.0","id":"id","method":"getBeaconHead","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq

getChainHead

Show chain head information, including head, justified and finalized checkpoints.

curl -d '{"jsonrpc":"2.0","id":"id","method":"getChainHead","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq

getNodeVersion

 curl -d '{"jsonrpc":"2.0","method":"getNodeVersion","params":[],"id":1}' -H 'Content-Type: application/json' localhost:7001 -s | jq

getSpecPreset

 curl -d '{"jsonrpc":"2.0","method":"getSpecPreset","params":[],"id":1}' -H 'Content-Type: application/json' localhost:7001 -s | jq

peers

Show a list of peers in PeerPool.

 curl -d '{"jsonrpc":"2.0","method":"peers","params":[],"id":1}' -H 'Content-Type: application/json' localhost:7001 -s | jq

getSyncing

Shows the current state of the forward sync manager.

 curl -d '{"jsonrpc":"2.0","method":"getSyncing","params":[],"id":1}' -H 'Content-Type: application/json' localhost:7001 -s | jq

getNetworkPeerId

Shows the current node's libp2p peer identifier (PeerID).

 curl -d '{"jsonrpc":"2.0","method":"getNetworkPeerId","params":[],"id":1}' -H 'Content-Type: application/json' localhost:7001 -s | jq

getNetworkPeers

Shows the list of available PeerIDs in PeerPool.

 curl -d '{"jsonrpc":"2.0","method":"getNetworkPeers","params":[],"id":1}' -H 'Content-Type: application/json' localhost:7001 -s | jq

getNetworkEnr

setLogLevel

Set the current logging level dynamically: TRACE, DEBUG, INFO, NOTICE, WARN, ERROR or FATAL

curl -d '{"jsonrpc":"2.0","id":"id","method":"setLogLevel","params":["DEBUG; TRACE:discv5,libp2p; REQUIRED:none; DISABLED:none"] }' -H 'Content-Type: application/json' localhost:9190 -s | jq

setGraffiti

Set the graffiti bytes that will be included in proposed blocks. The graffiti bytes can be specified as a UTF-8 encoded string or as a 0x-prefixed hex string specifying raw bytes.

curl -d '{"jsonrpc":"2.0","id":"id","method":"setGraffiti","params":["Mr F was here"] }' -H 'Content-Type: application/json' localhost:9190 -s | jq
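For the raw-bytes form, you can derive the hex encoding of the same string with a tool like xxd (shown here purely for illustration):

```shell
# Hex-encode the UTF-8 bytes of a graffiti string; prefix the output
# with 0x when passing it as the raw-bytes form of the parameter.
printf 'Mr F was here' | xxd -p
# → 4d722046207761732068657265
```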

getEth1Chain

Get the list of Eth1 blocks that the beacon node is currently storing in memory.

curl -d '{"jsonrpc":"2.0","id":"id","method":"getEth1Chain","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result'

getEth1ProposalData

Inspect the eth1 data that the beacon node would produce if it was tasked to produce a block for the current slot.

curl -d '{"jsonrpc":"2.0","id":"id","method":"getEth1ProposalData","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result'

getChronosFutures

Get the current list of live async futures in the process - compile with -d:chronosFutureTracking to enable.

curl -d '{"jsonrpc":"2.0","id":"id","method":"getChronosFutures","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result | (.[0] | keys_unsorted) as $keys | $keys, map([.[ $keys[] ]])[] | @csv'
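The jq filter above flattens the result array into CSV: the first object's keys become the header row and each object becomes one row. A stand-alone demo on a made-up payload (note the -r flag for raw, unescaped CSV lines):

```shell
# Demo of the CSV filter on an invented payload: header row from the
# first object's keys, then one CSV row per object in the array.
echo '{"result":[{"id":1,"state":"pending"},{"id":2,"state":"done"}]}' |
  jq -r '.result | (.[0] | keys_unsorted) as $keys | $keys, map([.[ $keys[] ]])[] | @csv'
```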

getGossipSubPeers

Get the current list of GossipSub peers.

curl -d '{"jsonrpc":"2.0","id":"id","method":"getGossipSubPeers","params":[] }' -H 'Content-Type: application/json' localhost:9190 -s | jq '.result'
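
All of the calls above share the same request shape, so they are easy to generate uniformly. A sketch of a small helper (rpc_body is a hypothetical name, not part of Nimbus) that builds the request body for parameterless methods:

```shell
# Build a JSON-RPC 2.0 request body for a method that takes no parameters
rpc_body() {
  printf '{"jsonrpc":"2.0","id":1,"method":"%s","params":[]}' "$1"
}

rpc_body getSyncing   # {"jsonrpc":"2.0","id":1,"method":"getSyncing","params":[]}
```

You can then pipe it into curl, e.g. curl -d "$(rpc_body getSyncing)" -H 'Content-Type: application/json' localhost:9190 -s | jq.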

Command line options

You can pass any nimbus_beacon_node options to the pyrmont and mainnet scripts. For example, if you wanted to launch Nimbus on mainnet with different base ports than the default 9000/udp and 9000/tcp, say 9100/udp and 9100/tcp, you would run:

./run-mainnet-beacon-node.sh --tcp-port=9100 --udp-port=9100

To see a list of the command line options available to you, with descriptions, navigate to the build directory and run:

./nimbus_beacon_node --help

You should see the following output:

Usage:

nimbus_beacon_node [OPTIONS]... command

The following options are available:

     --log-level               Sets the log level for process and topics (e.g. "DEBUG;
                               TRACE:discv5,libp2p; REQUIRED:none; DISABLED:none") [=INFO].
     --log-file                Specifies a path for the written Json log file.
     --network                 The Eth2 network to join [=mainnet].
 -d, --data-dir                The directory where nimbus will store all blockchain data.
     --validators-dir          A directory containing validator keystores.
     --secrets-dir             A directory containing validator keystore passwords.
     --wallets-dir             A directory containing wallet files.
     --web3-url                URL of the Web3 server to observe Eth1.
     --non-interactive         Do not display interactive prompts. Quit on missing
                               configuration.
     --netkey-file             Source of network (secp256k1) private key file
                               (random|<path>) [=random].
     --insecure-netkey-password  Use pre-generated INSECURE password for network private key
                               file [=false].
     --agent-string            Node agent string which is used as identifier in network.
     --subscribe-all-subnets   Subscribe to all attestation subnet topics when gossiping.
 -b, --bootstrap-node          Specifies one or more bootstrap nodes to use when connecting
                               to the network.
     --bootstrap-file          Specifies a line-delimited file of bootstrap Ethereum network
                               addresses.
     --listen-address          Listening address for the Ethereum LibP2P and Discovery v5
                               traffic [=0.0.0.0].
     --tcp-port                Listening TCP port for Ethereum LibP2P traffic [=9000].
     --udp-port                Listening UDP port for node discovery [=9000].
     --max-peers               The maximum number of peers to connect to [=160].
     --nat                     Specify method to use for determining public address. Must be
                               one of: any, none, upnp, pmp, extip:<IP>.
     --enr-auto-update         Discovery can automatically update its ENR with the IP
                               address and UDP port as seen by other nodes it communicates
                               with. This option enables/disables this
                               functionality.
     --weak-subjectivity-checkpoint  Weak subjectivity checkpoint in the format
                               block_root:epoch_number.
     --finalized-checkpoint-state  SSZ file specifying a recent finalized state.
     --finalized-checkpoint-block  SSZ file specifying a recent finalized block.
     --node-name               A name for this node that will appear in the logs. If you set
                               this to 'auto', a persistent automatically generated ID will
                               be selected for each --data-dir folder.
     --graffiti                The graffiti value that will appear in proposed blocks. You
                               can use a 0x-prefixed hex encoded string to specify raw
                               bytes.
     --verify-finalization     Specify whether to verify finalization occurs on schedule,
                               for testing.
     --stop-at-epoch           A positive epoch selects the epoch at which to stop.
     --metrics                 Enable the metrics server [=false].
     --metrics-address         Listening address of the metrics server [=127.0.0.1].
     --metrics-port            Listening HTTP port of the metrics server [=8008].
     --status-bar              Display a status bar at the bottom of the terminal screen.
     --status-bar-contents     Textual template for the contents of the status bar.
     --rpc                     Enable the JSON-RPC server [=false].
     --rpc-port                HTTP port for the JSON-RPC service [=9190].
     --rpc-address             Listening address of the RPC server [=127.0.0.1].
     --in-process-validators   Disable the push model (the beacon node tells a signing
                               process with the private keys of the validators what to sign
                               and when) and load the validators in the beacon node itself.
     --discv5                  Enable Discovery v5 [=true].
     --dump                    Write SSZ dumps of blocks, attestations and states to data
                               dir [=false].
     --direct-peer             The list of privileged, secure and known peers to connect
                               to and maintain the connection with; this requires a
                               non-random netkey-file. Use the complete multiaddress
                               format: /ip4/<address>/tcp/<port>/p2p/<peerId-public-key>.
                               Peering agreements are established out of band and must be
                               reciprocal.
     --doppelganger-detection  Whether to detect whether another validator is running the
                               same validator keys [=true].

...

For Developers

This page contains tips and tricks for developers and further resources, along with information on how to set up your build environment on your platform.

Before building Nimbus for the first time, make sure to install the prerequisites.

Code style

The code follows the Status Nim Style Guide.

Branch lifecycle

The git repository has 3 main branches, stable, testing and unstable as well as feature and bugfix branches.

Unstable

The unstable branch contains features and bugfixes that are actively being tested and worked on.

  • Features and bugfixes are generally pushed to individual branches, each with their own pull request against the unstable branch.
  • Once the branch has been reviewed and passed CI, the developer or reviewer merges the branch to unstable.
  • The unstable branch is regularly deployed to the Nimbus pyrmont fleet where additional testing happens.

Testing

The testing branch contains features and bugfixes that have gone through CI and initial testing on the unstable branch and are ready to be included in the next release.

  • After testing a bugfix or feature on unstable, the features and fixes that are planned for the next release get merged to the testing branch either by the release manager or team members.
  • The testing branch is regularly deployed to the Nimbus pyrmont fleet as well as a smaller mainnet fleet.
  • The branch should remain release-ready at most times.

Stable

The stable branch tracks the latest released version of Nimbus and is suitable for mainnet staking.

Build system

Windows

mingw32-make # this first invocation will update the Git submodules

You can now follow the instructions in this book by replacing make with mingw32-make (use mingw32-make regardless of whether you're on a 32-bit or 64-bit architecture):

mingw32-make test # run the test suite

Linux, macOS

After cloning the repo:

# Build nimbus_beacon_node and all the tools, using 4 parallel Make jobs
make -j4

# Run tests
make test

# Update to latest version
git pull
make update

Environment

Nimbus comes with a build environment similar to Python venv - this helps ensure that the correct version of Nim is used and that all dependencies can be found.

./env.sh bash # start a new interactive shell with the right env vars set
which nim
nim --version # Nimbus is tested and supported on 1.0.2 at the moment

# or without starting a new interactive shell:
./env.sh which nim
./env.sh nim --version

# Start Visual Studio code with environment
./env.sh code

Makefile tips and tricks for developers

  • build all those tools known to the Makefile:
# $(nproc) corresponds to the number of cores you have
make -j $(nproc)
  • build a specific tool:
make state_sim
  • you can control the Makefile's verbosity with the V variable (defaults to 0):
make V=1 # verbose
make V=2 test # even more verbose
make LOG_LEVEL=DEBUG bench_bls_sig_agggregation # this is the default
make LOG_LEVEL=TRACE nimbus_beacon_node # log everything
  • pass arbitrary parameters to the Nim compiler:
make NIMFLAGS="-d:release"
  • you can freely combine those variables on the make command line:
make -j$(nproc) NIMFLAGS="-d:release" USE_MULTITAIL=yes eth2_network_simulation
make USE_LIBBACKTRACE=0 # expect the resulting binaries to be 2-3 times slower
  • disable -march=native because you want to run the binary on a different machine than the one you're building it on:
make NIMFLAGS="-d:disableMarchNative" nimbus_beacon_node
  • disable link-time optimisation (LTO):
make NIMFLAGS="-d:disableLTO" nimbus_beacon_node
  • build a static binary
make NIMFLAGS="--passL:-static" nimbus_beacon_node
  • publish a book using mdBook from sources in "docs/" to GitHub pages:
make publish-book
  • create a binary distribution
make dist
  • test the binaries
make dist-test

Multi-client interop scripts

This repository contains a set of scripts used by the client implementation teams to test interop between the clients (in certain simplified scenarios). It mostly helps us find and debug issues.

Stress-testing the client by limiting the CPU power

make pyrmont CPU_LIMIT=20

The limiting is provided by the cpulimit utility, available on Linux and macOS. The specified value is a percentage of a single CPU core. Usually 1 - 100, but can be higher on multi-core CPUs.

Build and run the local beacon chain simulation

The beacon chain simulation runs several beacon nodes on the local machine, attaches several local validators to each, and builds a beacon chain between them.

To run the simulation:

make update
make eth2_network_simulation

To clean the previous run's data:

make clean_eth2_network_simulation_all

To change the number of validators and nodes:

# Clear data files from your last run and start the simulation with a new genesis block:
make VALIDATORS=192 NODES=6 USER_NODES=1 eth2_network_simulation

If you’d like to see the nodes running on separated sub-terminals inside one big window, install Multitail (if you're on a Mac, follow the instructions here), then:

USE_MULTITAIL="yes" make eth2_network_simulation

You’ll get something like this:

You can find out more about the beacon node simulation here.

Build and run the local state transition simulation

This simulation is primarily designed for researchers, but we'll cover it briefly here in case you're curious :)

The state transition simulation quickly runs the beacon chain state transition function in isolation and outputs JSON snapshots of the state (directly to the nimbus-eth2 directory). It runs without networking and blocks are processed without slot time delays.

# build the state simulator, then display its help ("-d:release" speeds it
# up substantially, allowing the simulation of longer runs in reasonable time)
make NIMFLAGS="-d:release" state_sim
build/state_sim --help

Use the output of the help command to pass desired values to the simulator - experiment with changing the number of slots, validators, etc. to get different results.

The most important options are:

  • slots : the number of slots to run the simulation for (default 192)
  • validators: the number of validators (default 6400)
  • attesterRatio: the expected fraction of attesters that actually do their work for every slot (default 0.73)
  • json_interval: how often JSON snapshots of the state are outputted (default every 32 slots -- or once per epoch)

For example, to run the state simulator for 384 slots, with 20,000 validators, and an average of 66% of attesters doing their work every slot, while outputting snapshots of the state twice per epoch, run:

build/state_sim --slots=384 --validators=20000 --attesterRatio=0.66 --json_interval=16

Frequently Asked Questions

Nimbus

Why are metrics not working?

The metrics server is disabled by default, enable it by passing --metrics to the run command:

./run-mainnet-beacon-node.sh --metrics ...

Validating

What exactly is a validator?

A validator is an entity that participates in the consensus of the Ethereum 2.0 protocol.

Or in plain English, a human running a computer process. This process proposes and vouches for new blocks to be added to the blockchain.

In other words, you can think of a validator as a voter for new blocks. The more votes a block gets, the more likely it is to be added to the chain.

Importantly, a validator's vote is weighted by the amount it has at stake.

What is the deposit contract?

You can think of it as a transfer of funds between Ethereum 1.0 accounts and Ethereum 2.0 validators.

It specifies who is staking, who is validating, how much is being staked, and who can withdraw the funds.

Why do validators need to have funds at stake?

Validators need to have funds at stake so they can be penalized for behaving dishonestly.

In other words, to keep them honest, their actions need to have financial consequences.

How much ETH does a validator need to stake?

Before a validator can start to secure the network, he or she needs to stake 32 ETH. This forms the validator's initial balance.

Is there any advantage to having more than 32 ETH at stake?

No. There is no advantage to having more than 32 ETH staked.

Limiting the maximum stake to 32 ETH encourages decentralization of power as it prevents any single validator from having an excessively large vote on the state of the chain.

Remember that a validator’s vote is weighted by the amount it has at stake.

Can I stop my validator for a few days and then start it back up again?

Yes, but under normal conditions you will lose an amount of ETH roughly equivalent to the amount you would have gained in that period. In other words, if you stood to earn ≈0.01 ETH, you would instead be penalised ≈0.01 ETH.

I want to switch my validator keys to another machine, how long do I need to wait to avoid getting slashed?

We recommend waiting 2 epochs (around 15 minutes), before restarting Nimbus on a different machine.

When should I top up my validator's balance?

The answer to this question very much depends on how much ETH you have at your disposal.

You should certainly top up if your balance is close to 16 ETH: this is to ensure you don't get removed from the validator set (which automatically happens if your balance falls below 16 ETH).

At the other end of the spectrum, if your balance is closer to 31 ETH, it's probably not worth your while adding the extra ETH required to get back to 32.

When can I withdraw my funds, and what's the difference between exiting and withdrawing?

You can signal your intent to stop validating by signing a voluntary exit message with your validator.

However, bear in mind that in Phase 0, once you've exited, there's no going back.

There's no way for you to activate your validator again, and you won't be able to transfer or withdraw your funds until at least Phase 1.5 (which means your funds will remain inaccessible until then).

How are validators incentivized to stay active and honest?

In addition to being penalized for being offline, validators are penalized for behaving maliciously – for example attesting to invalid or contradicting blocks.

On the other hand, they are rewarded for proposing / attesting to blocks that are included in the chain.

The key concept is the following:

  • Rewards are given for actions that help the network reach consensus
  • Minor penalties are given for inadvertent actions (or inactions) that hinder consensus
  • And major penalties -- or slashings -- are given for malicious actions

In other words, validators that maximize their rewards also provide the greatest benefit to the network as a whole.

How are rewards/penalties issued?

Remember that each validator has its own balance -- with the initial balance outlined in the deposit contract.

This balance is updated periodically by the Ethereum network rules as the validator carries (or fails to carry) out his or her responsibilities.

Put another way, rewards and penalties are reflected in the validator's balance over time.

How often are rewards/penalties issued?

Approximately every six and a half minutes -- a period of time known as an epoch.

Every epoch, the network measures the actions of each validator and issues rewards or penalties appropriately.
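
The "six and a half minutes" follows from the protocol constants: an epoch is 32 slots of 12 seconds each:

```shell
# Seconds per epoch = 32 slots x 12 seconds per slot
echo $(( 32 * 12 ))   # 384 seconds, i.e. 6.4 minutes
```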

How large are the rewards/penalties?

There is no easy answer to this question as there are many factors that go into this calculation.

Arguably the most impactful factor on rewards earned for validating is the total amount of stake in the network -- in other words, the total number of validators. Depending on this figure, the maximum annual return rate for a validator can be anywhere between 2 and 20%.

Given a fixed total number of validators, the rewards/penalties predominantly scale with the balance of the validator -- attesting with a higher balance results in larger rewards/penalties whereas attesting with a lower balance results in lower rewards/penalties.

Note however that this scaling mechanism works in a non-obvious way. To understand the precise details of how it works requires understanding a concept called effective balance. If you're not yet familiar with this concept, we recommend you read through this excellent post.
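
For a concrete feel of the numbers, here is a rough sketch of the phase 0 base-reward formula, base_reward = effective_balance * BASE_REWARD_FACTOR / sqrt(total_balance) / BASE_REWARDS_PER_EPOCH, with the spec constants BASE_REWARD_FACTOR = 64 and BASE_REWARDS_PER_EPOCH = 4 (the stake figures below are made up for illustration, and real clients use integer arithmetic throughout):

```shell
# Per-epoch base reward in Gwei for one validator (illustrative numbers)
awk 'BEGIN {
  eb = 32000000000        # effective balance: 32 ETH in Gwei
  tb = 5000000000000000   # total staked: 5,000,000 ETH in Gwei
  print int(eb * 64 / sqrt(tb) / 4)
}'
```

Because of the sqrt(total_balance) term, quadrupling the total stake halves the per-epoch base reward -- the sliding scale discussed in the next question.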

Why do rewards depend on the total number of validators in the network?

Block rewards are calculated using a sliding scale based on the total amount of ETH staked on the network.

In plain English: if the total amount of ETH staked is low, the reward (interest rate) is high, but as the total stake rises, the reward (interest) paid out to each validator starts to fall.

Why a sliding scale? While we won't get into the gory details here, the basic intuition is that there needs to be a minimum number of validators (and hence a minimum amount of ETH staked) for the network to function properly. So, to incentivize more validators to join, it's important that the interest rate remains high until this minimum number is reached.

Afterwards, validators are still encouraged to join (the more validators the more decentralized the network), but it's not absolutely essential that they do so (so the interest rate can fall).

How badly will a validator be penalized for being offline?

It depends. In addition to the impact of effective balance there are two important scenarios to be aware of:

  1. Being offline while a supermajority (2/3) of validators is still online leads to relatively small penalties as there are still enough validators online for the chain to finalize. This is the expected scenario.

  2. Being offline at the same time as more than 1/3 of the total number of validators leads to harsher penalties, since blocks do not finalize anymore. This scenario is very extreme and unlikely to happen.

Note that in the second (unlikely) scenario, validators stand to progressively lose up to 50% (16 ETH) of their stake over 21 days. After 21 days they are ejected from the validator pool. This ensures that blocks start finalizing again at some point.

How great does an honest validator's uptime need to be for it to be net profitable?

Overall, validators are expected to be net profitable as long as their uptime is greater than 50%.

This means that validators need not go to extreme lengths with backup clients or redundant internet connections as the repercussions of being offline are not so severe.
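
The 50% figure follows from the rough symmetry noted earlier: missing a duty costs about as much as performing it earns, so expected net income scales with uptime minus downtime. A toy illustration under that simplifying assumption (not the exact protocol math):

```shell
# Net reward factor = uptime - (1 - uptime) = 2*uptime - 1, under the
# simplifying assumption that a missed duty's penalty equals its reward
awk 'BEGIN {
  n = split("0.40 0.50 0.60 0.99", uptimes, " ")
  for (i = 1; i <= n; i++)
    printf "uptime %.0f%% -> net factor %+.2f\n", uptimes[i] * 100, 2 * uptimes[i] - 1
}'
```

Anything above 50% uptime comes out net positive, which is why elaborate redundancy is usually unnecessary.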

How much will a validator be penalized for acting maliciously?

Again, it depends. Behaving maliciously – for example attesting to invalid or contradicting blocks, will lead to a validator's stake being slashed.

The minimum amount that can be slashed is 1 ETH, but this number increases if other validators are slashed at the same time.

The idea behind this is to minimize the losses from honest mistakes, but strongly disincentivize coordinated attacks.

What exactly is slashing?

Slashing has two purposes: (1) to make it prohibitively expensive to attack eth2, and (2) to stop validators from being lazy by checking that they actually perform their duties. Slashing a validator is to destroy (a portion of) the validator’s stake if they act in a provably destructive manner.

Validators that are slashed are prevented from participating in the protocol further and are forcibly exited.

What happens if I lose my signing key?

If the signing key is lost, the validator can no longer propose or attest.

Over time, the validator's balance will decrease as he or she is punished for not participating in the consensus process. When the validator's balance reaches 16 ETH, he or she will be automatically exited from the validator pool.

However, all is not lost. Assuming validators derive their keys using EIP2334 (as per the default onboarding flow), they can always recalculate their signing key from their withdrawal key.

The 16 ETH can then be withdrawn -- with the withdrawal key -- after a delay of around a day.

Note that this delay can be longer if many others are exiting or being kicked out at the same time.

What happens if I lose my withdrawal key?

If the withdrawal key is lost, there is no way to obtain access to the funds held by the validator.

As such, it's a good idea to create your keys from mnemonics which act as another backup. This will be the default for validators who join via this site's onboarding process.

What happens if my withdrawal key is stolen?

If the withdrawal key is stolen, the thief can transfer the validator’s balance, but only once the validator has exited.

If the signing key is not under the thief’s control, the thief cannot exit the validator.

The user with the signing key could attempt to quickly exit the validator and then transfer the funds -- with the withdrawal key -- before the thief.

Why two keys instead of one?

In a nutshell, security. The signing key must be available at all times. As such, it will need to be held online. Since anything online is vulnerable to being hacked, it's not a good idea to use the same key for withdrawals.

Contribute

Follow these steps to contribute to this book!

We use a utility called mdBook to create online books from Markdown files.

Before You Start

  1. Install mdBook from here.
  2. Clone the repository: git clone https://github.com/status-im/nimbus-eth2.git.
  3. Navigate to the Markdown files: cd docs.

Real-Time Update and Preview Changes

  1. Run mdbook serve in the terminal.
  2. Preview the book at http://localhost:3000.

Build and Deploy

The first step is to submit a pull request to the unstable branch. Then, after it is merged, do the following under our main repository:

  1. cd nimbus-eth2
  2. git checkout unstable
  3. git pull
  4. make update (This is to update the submodules to the latest version)
  5. make publish-book

Troubleshooting

If you see file conflicts in the pull request, it may be because you created your branch from an old version of the unstable branch. Update your branch with the following commands:

git checkout unstable
git pull
make update
git checkout readme
git merge unstable
# use something like "git mergetool" to resolve conflicts, then read the instructions for completing the merge (usually just a `git commit`)
# check the output of "git diff unstable"

Thank you for contributing to the decentralized and open-source community. :)

Resources

Binary distribution internals

Reproducibility

The binaries we build in GitHub Actions and distribute in our releases come from an intricate process meant to ensure reproducibility.

While the ability to produce the same exact binaries from the corresponding Git commits is a good idea for any open source project, it is a requirement for software that deals with digital tokens of significant value.

Docker containers for internal use

The easiest way to guarantee that users are able to replicate our binaries for themselves is to give them the same software environment we used in CI. Docker containers fit the bill, so everything starts with the architecture- and OS-specific containers in docker/dist/base_image/.

These images contain all the packages we need, are built and published once (to Docker Hub), and are then reused as the basis for temporary Docker images where the nimbus-eth2 build is carried out.

These temporary images are controlled by Dockerfiles in docker/dist/. Since we're not publishing them anywhere, we can customize them to the system they run on (we ensure they use the host's UID/GID, the host's QEMU static binaries, etc); they get access to the source code through the use of external volumes.

Build process

It all starts from the GitHub Actions workflows in .github/workflows/release.yml. There is a different job for each supported OS-architecture combination, and they all run in parallel (ideally).

The build-amd64 CI job is special because it creates a new GitHub release draft as soon as possible. All the other jobs upload their binary distribution tarballs to this draft release; since it's not feasible to communicate between CI jobs, as their last step they simply use the GitHub API to find the latest release, check that it has the right Git tag, and upload to it.

The build itself is triggered by a Make target: make dist-amd64. This invokes scripts/make_dist.sh which builds the corresponding Docker container from docker/dist/ and runs it with the Git repository's top directory as an external volume.

The entry point for that container is docker/dist/entry_point.sh and that's where you'll find the Make invocations needed to finally build the software and create distributable tarballs.

Docker images for end users

Configured in .github/workflows/release.yml (exclusively for the build-amd64 job): we unpack the distribution tarball and copy its content into a third type of Docker image - this one meant for end users and defined by docker/dist/binaries/Dockerfile.amd64.

We then publish that to Docker Hub.