Execution & Consensus Client Bootnodes

It is a bare-metal dedicated machine running in northeast North America. It runs on an unmetered network, hosted by a reliable business, and has more than enough resources to run a full Ethereum node. I'm happy to share more details in private.

2 Likes

The consensus client teams are all running consensus bootnodes for discv5; you can find the list of ENRs here: lighthouse/boot_enr.yaml at a53830fd60a119bf3f659b253360af8027128e83 · sigp/lighthouse · GitHub

You can extract the IPs using enr-cli read <enr-string>.
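If you'd rather do the extraction programmatically, here is a rough Python sketch of the same thing (the enr_ip helper is hypothetical; it assumes the third-party rlp package, and relies on the EIP-778 encoding, where the payload is an RLP list [signature, seq, k1, v1, k2, v2, ...]):

```python
import base64
import ipaddress

import rlp  # pip install rlp

def enr_ip(enr: str) -> str:
    """Pull the IPv4 address out of a text-encoded ENR."""
    payload = enr.removeprefix("enr:")
    # ENRs use unpadded base64url; restore the padding before decoding.
    raw = base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4))
    items = rlp.decode(raw)  # [signature, seq, k1, v1, k2, v2, ...]
    kv = dict(zip(items[2::2], items[3::2]))
    # Raises KeyError if the record does not advertise an IPv4 address.
    return str(ipaddress.IPv4Address(kv[b"ip"]))
```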

The Lighthouse (Sigma Prime) bootnodes are currently hosted on Linode in Australia, so we're contributing a little bit of hosting + geographic diversity. I imagine hosting on bare metal servers operated entirely by our team would be infeasible, but we could think about it. Running an execution bootnode may also be an option for us, if that would be deemed useful.

5 Likes

@michaelsproul thanks a lot for the insights. Would it be possible to add the hosting location of each consensus bootnode to the boot_enr.yaml file? That would already increase transparency a lot.

At first glance, Linode Australia seems like a good solution, but after a little research I found that Linode was acquired by Akamai Technologies Inc., which operates under US law. So the advantage of geographic diversity is largely gone.

I think this makes perfect sense. Tbh, I would prefer the following solution: each EL/CL client team runs at least 2 EL and 2 CL bootnodes. One of each pair can (but does not have to) be cloud-based, and the other EL and CL bootnodes must each be on bare metal (preferably outside the US; e.g. Switzerland or Sweden might be good countries). So we have the following distribution:

  • The 4 EL client teams run 2 EL and 2 CL bootnodes each, 50% of which should run on bare metal outside the US. That gives a total of 4 × 2 × 2 = 16 bootnodes (at least, of which 8 are on bare metal).
  • The same logic applies to the 5 CL client teams: 5 × 2 × 2 = 20 bootnodes (at least, of which 10 are on bare metal); a quick tally is sketched after this list.
  • All additional bootnodes (whether EL or CL) help, of course.
  • In addition, there will be community-based bootnodes (EL & CL) (like EthStaker or Lido?) that will be carefully vetted and will enrich this list. If anyone has a contact at Lido, I would appreciate it if this thread could be forwarded.
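As a quick sanity check of the counts above (a trivial Python sketch; the team numbers are simply the ones assumed in this proposal):

```python
# Proposed minimum bootnode counts (team numbers as assumed above).
el_teams, cl_teams = 4, 5
per_team = 2 + 2              # 2 EL + 2 CL bootnodes per team
bare_metal_share = 0.5        # at least half on bare metal, outside the US

for label, teams in (("EL", el_teams), ("CL", cl_teams)):
    total = teams * per_team
    print(f"{label} teams: {total} bootnodes, "
          f"of which {int(total * bare_metal_share)} on bare metal")
```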

Happy to hear any feedback or better ideas.

@holiman what are the thoughts from the Geth team on the above suggestion?

I think I already voiced my opinion (not speaking on behalf of the geth-team, just myself):

In general though: If EF-controlled bootnodes are seen as 'critical infrastructure' then we should remove them, because the network needs to get by without central points of failure.

1 Like

UPDATE (21 April 2023)

Since I cannot edit my older posts, I will add a new comment here that now contains the full list of execution and consensus bootnodes, including links.

Overview Execution Clients

Go-Ethereum

  • Mainnet bootnodes: go-ethereum/bootnodes.go.
  • 4 bootnodes running: 2 hosted on AWS and 2 on Hetzner (these are the same Geth bootnodes referenced in the Nethermind and Besu lists below).

Nethermind

  • Mainnet bootnodes: nethermind/foundation.json.
  • 32 bootnodes running. 4 of the 32 bootnodes are the Geth bootnodes running on AWS (2 of 4) and Hetzner (2 of 4).
  • For the remaining 28 bootnodes, I still couldn't find the hosting locations. However, they are the same bootnodes as in the original Parity client: trinity/constants.py. Likewise, all without information on where they are hosted.
  • The 4 Azure bootnodes were removed in the commit Remove deprecated EF bootnodes (#5408).

Erigon

Besu

  • Mainnet bootnodes: besu/mainnet.json.
  • 10 bootnodes running. 4 of the 10 bootnodes are the Geth bootnodes running on AWS (2 of 4) and Hetzner (2 of 4). Additionally, 5 legacy Geth bootnodes and 1 C++ bootnode are listed, all without information on where they are hosted.
  • The 4 Azure bootnodes were removed in the commit Remove deprecated EF bootnodes (#5194).

Overview Consensus Clients

Lighthouse

  • Mainnet bootnodes: lighthouse/boot_enr.yaml.
  • 13 bootnodes running. The 2 Lighthouse (Sigma Prime) bootnodes are currently hosted on Linode in Australia (information via this comment). Additionally, 4 EF, 2 Teku, 3 Prysm, and 2 Nimbus bootnodes are listed, all without information on where they are hosted.

Lodestar

  • Mainnet bootnodes: lodestar/mainnet.ts.
  • 13 bootnodes running. The 2 Lighthouse (Sigma Prime) bootnodes are currently hosted on Linode in Australia (information via this comment). Additionally, 4 EF, 2 Teku, 3 Prysm, and 2 Nimbus bootnodes are listed, all without information on where they are hosted.

Nimbus

  • Mainnet bootnodes (pulled via submodule): eth2-networks/bootstrap_nodes.txt.
  • 13 bootnodes running. The 2 Lighthouse (Sigma Prime) bootnodes are currently hosted on Linode in Australia (information via this comment). Additionally, 4 EF, 2 Teku, 3 Prysm, and 2 Nimbus bootnodes are listed, all without information on where they are hosted.

Prysm

  • Mainnet bootnodes: prysm/mainnet_config.go.
  • 13 bootnodes running. The 2 Lighthouse (Sigma Prime) bootnodes are currently hosted on Linode in Australia (information via this comment). Additionally, 4 EF, 2 Teku, 3 Prysm, and 2 Nimbus bootnodes are listed, all without information on where they are hosted.

Teku

  • Mainnet bootnodes: teku/Eth2NetworkConfiguration.java.
  • 13 bootnodes running. The 2 Lighthouse (Sigma Prime) bootnodes are currently hosted on Linode in Australia (information via this comment). Additionally, 4 EF, 2 Teku, 3 Prysm, and 2 Nimbus bootnodes are listed, all without information on where they are hosted.
1 Like

For the sake of transparency, I made a post in the Lido forum here in order to get them involved in this discussion as well. I would like to thank @remyroy for making the connection to Lido.

2 Likes

We probably don't need anyone to volunteer this information. The cloud provider for an IP can be determined fairly easily with a reverse DNS lookup, e.g. dig -x A.B.C.D. The IPs can be fetched from the ENRs using enr-cli read, as I previously mentioned. Maybe you could whip up a script to do it, @pcaversaccio?
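For example, a minimal sketch of such a script (the hard-coded IPs are purely illustrative; in practice you would feed in whatever enr-cli produces):

```python
import socket

# IPs previously extracted from the bootnode ENRs, e.g. with `enr-cli read`.
bootnode_ips = [
    "3.19.194.157",    # Teku
    "18.223.219.100",  # Prysm
    "172.105.173.25",  # Lighthouse
]

for ip in bootnode_ips:
    try:
        # Reverse DNS (PTR) lookup, the programmatic equivalent of `dig -x`.
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        hostname = "<no PTR record>"
    # Cloud providers usually encode the region in the PTR name, e.g.
    # ec2-3-19-194-157.us-east-2.compute.amazonaws.com
    print(f"{ip:<16} {hostname}")
```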

1 Like

@michaelsproul done 🙂 - the summary below (on a per-ENR basis) can also be found here. It seems AWS (mostly US) is currently ensuring that no liveness failure happens on Ethereum ;). I'm pretty sure we can all do a better job here when it comes to greater geographic and provider diversity.

IPs and Locations

Teku team's bootnodes

  • 3.19.194.157 | aws-us-east-2-ohio
  • 3.19.194.157 | aws-us-east-2-ohio

Prylab team's bootnodes

  • 18.223.219.100 | aws-us-east-2-ohio
  • 18.223.219.100 | aws-us-east-2-ohio
  • 18.223.219.100 | aws-us-east-2-ohio

Lighthouse team's bootnodes

  • 172.105.173.25 | linode-au-sydney
  • 139.162.196.49 | linode-uk-london

EF bootnodes

  • 3.17.30.69 | aws-us-east-2-ohio
  • 18.216.248.220 | aws-us-east-2-ohio
  • 54.178.44.198 | aws-ap-northeast-1-tokyo
  • 54.65.172.253 | aws-ap-northeast-1-tokyo

Nimbus team's bootnodes

  • 3.120.104.18 | aws-eu-central-1-frankfurt
  • 3.64.117.223 | aws-eu-central-1-frankfurt

Edit: I have opened a PR that adds this information to the eth2-networks repo.

FWIW, Prysm, for example, does not retain its peer list; it would discover a new set of peers again on a restart. See the comment here.

I don't have an account over there so I'll reply here:

On the point brought up in this discussion: in the event of a coordinated censoring event across regions (where all bootnodes are taken down), a straightforward solution would be to simply share a new list of ENRs for nodes to boot from.

I think the attack vector of interest here isn't that the bootnodes are offline/unavailable, but that they are controlled, giving the attacker the ability to partition the network. It certainly seems like the right thing for Prysm to do is to retain its peer list from the previous session and use it as a sort of "updated list of bootnodes", then discover a new set of peers from there (which, on restart, would again yield a new set of bootnodes).

IMO, the hard-coded bootnodes should only be used on first run to establish an initial connection to the network; from that point on, the bootnode list should be dynamic, based on prior runs. This makes it much harder to meaningfully capture the network.
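A minimal sketch of that idea (hypothetical file name and helper functions; a real client would persist ENRs from its discovery table):

```python
import json
from pathlib import Path

PEER_CACHE = Path("peer_cache.json")

# List shipped with the client; only consulted when no cache exists.
HARDCODED_BOOTNODES = [
    "enr:-Ku4Q...",  # placeholder, truncated
]

def load_bootnodes() -> list[str]:
    """Prefer peers remembered from the previous run; fall back to the
    hard-coded list only on first run (or if the cache is empty)."""
    if PEER_CACHE.exists():
        cached = json.loads(PEER_CACHE.read_text())
        if cached:
            return cached
    return HARDCODED_BOOTNODES

def save_peers(enrs: list[str]) -> None:
    """Call on shutdown (or periodically) with the currently known good
    peers, so the next start boots from a dynamic list instead of the
    fixed one an attacker could target."""
    PEER_CACHE.write_text(json.dumps(enrs, indent=2))
```

The key property is that the hard-coded list stops being a single point of capture after the first run.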


For a concrete example attack, imagine someone has two 0-days in their pocket (neither of which is particularly far-fetched for a state actor):

  1. They have the ability to take over/control bootnodes.
  2. They have the ability to crash Prysm clients on the network (causing them all to restart).

This attacker now has the ability to eclipse all Prysm nodes. If our client diversity numbers are high enough, this isn't too big a problem, but if they aren't, this could lead to a fork, should the attacker desire. Keep in mind, once you have successfully eclipsed a node, you need not reveal this immediately. You can sit on your eclipse and not leverage it until an opportunity presents itself (for example, you also gain the ability to eclipse Teku, then attack once you have 66% of the stake eclipsed).

2 Likes

Great initiative. The Lodestar team at ChainSafe can look into where we can be most useful and try to run both a consensus and an execution bootnode. Currently, though, a lot of our infrastructure is unfortunately hosted with larger cloud companies. For something like this, I'd explore what options we have for bare metal in smaller datacenters that are still reliable but not exposed to the same geographic and/or large-entity risks we currently have. Any leads, ideas, and connections are appreciated!

4 Likes

@philknows very happy to hear this. I think @gsalberto can probably assist here and might also refer further similar local providers (as it seems latitude.sh currently has only one European location, London). Posting his answer from above here:

Hey Guys,

@randomishwalk sent this thread to me over Twitter, and I am jumping in here as I can potentially help with distributed global infrastructure.

latitude.sh, the bare-metal company I operate, runs in 15 locations (9 of them outside the US): Global regions to deploy dedicated servers and custom projects - Latitude.sh

Happy to chat more

2 Likes

Some options re: bare metal – certainly not an exhaustive list (I excluded Equinix given it's a US company)

1 Like

We are launching Frankfurt this week as our second location in the Europe region!

2 Likes

Feel free to check us out and our locations: Global regions to deploy dedicated servers and custom projects - Latitude.sh

Lots of options in South America that can address your geographical exposure concern.

If you are looking for a particular region, let me know, and I can probably give you some guidance.

2 Likes

Does anyone know who the maintainers of the eth2-networks repo are? I want to get my PR merged that adds the IP and location information of the mainnet bootnodes: Adding IP and location information to mainnet bootnodes.

As of today, the IP and location information of all consensus mainnet bootnodes is available here, as my PR finally got merged.

1 Like

I am trying to run a fork of ETH using the same architecture. How do I make P2P connections? It seems I can't connect two beacon nodes together without one or both of them having issues.

How does the P2P connection work in this current architecture?

ethereum/devp2p: Ethereum peer-to-peer networking specifications (github.com)

I'd like to see dual-stack v4/v6 bootnodes on both CL and EL. IPv6 is useful to solo stakers in the global south (as well as, increasingly, in EMEA and AMERS), because CGNAT keeps them from opening peering ports: they rely on the "charity of strangers" to detect peers.

Current v6 testing, as well as user onboarding, is hindered by the lack of v6 bootnodes. Lighthouse and Lodestar support v6, as does Besu afaik. Having at least a few dual-stack bootnodes would allow EthStaker and other orgs to start seriously testing client v6 support and assist users.

Living document for v6 client support: https://eth-docker.net/Support/ipv6#which-clients
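To make the gap concrete, here is a small Python sketch that checks whether a given bootnode answers over both address families (hypothetical host and port values; a real check would also exercise discv5 over UDP):

```python
import socket

def reachable(host: str, port: int, family: socket.AddressFamily) -> bool:
    """Attempt a TCP connection over the given address family."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no A/AAAA record for this family
    for af, socktype, proto, _, addr in infos:
        try:
            with socket.socket(af, socktype, proto) as s:
                s.settimeout(3.0)
                s.connect(addr)
                return True
        except OSError:
            continue
    return False

# Hypothetical bootnode hostname and libp2p TCP port.
host, port = "bootnode.example.org", 9000
print("IPv4:", reachable(host, port, socket.AF_INET))
print("IPv6:", reachable(host, port, socket.AF_INET6))
```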

1 Like