Execution & Consensus Client Bootnodes

I don’t have an account over there so I’ll reply here:

On the point brought up in this discussion: in the event of a coordinated censoring event across regions (where all bootnodes are taken down), a straightforward solution would be to simply share a new list of ENRs for nodes to boot from.
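
For illustration, a minimal sketch of loading such an out-of-band list, assuming a newline-delimited file of ENR strings distributed through some trusted channel (the file format, path, and flag shown are hypothetical, not any client’s actual interface; a real client would also decode and verify each record):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadBootnodeENRs reads a newline-delimited list of ENR strings from path.
// Blank lines and '#' comments are skipped; entries are validated only by
// their "enr:" prefix here. A real client would decode and verify each record.
func loadBootnodeENRs(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var enrs []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if !strings.HasPrefix(line, "enr:") {
			return nil, fmt.Errorf("not an ENR: %q", line)
		}
		enrs = append(enrs, line)
	}
	return enrs, scanner.Err()
}

func main() {
	// Hypothetical usage: node --bootnodes-file=/etc/eth/bootnodes.txt
	enrs, err := loadBootnodeENRs("/etc/eth/bootnodes.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to load bootnodes:", err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d replacement bootnodes\n", len(enrs))
}
```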

I think the attack vector of interest here isn’t that the bootnodes are offline or unavailable, but that they are controlled, giving the attacker the ability to partition the network. It certainly seems right for Prysm to retain its peer list from the previous session, use it as a sort of “updated list of bootnodes,” and then discover a new set of peers from there (so each restart effectively yields a new set of bootnodes).

IMO, the hard-coded bootnodes should only be used on first run to establish an initial connection to the network; from that point on, the bootnode list should be dynamic, based on peers seen in prior runs. This makes it much harder to meaningfully capture the network.
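
As a minimal sketch of that persist-and-reload approach (treating ENRs as opaque strings; the file name, placeholder records, and function names are all hypothetical, not Prysm’s actual implementation):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// Hard-coded bootnodes shipped with the client binary (placeholder values).
var builtinBootnodes = []string{
	"enr:-AAAA...", // placeholder, not a real record
	"enr:-BBBB...",
}

const peerstorePath = "peerstore.json" // hypothetical on-disk location

// savePeers persists the ENRs of peers seen this session, to seed the next run.
func savePeers(enrs []string) error {
	data, err := json.Marshal(enrs)
	if err != nil {
		return err
	}
	return os.WriteFile(peerstorePath, data, 0o600)
}

// bootCandidates returns previously seen peers if any were persisted, falling
// back to the built-in bootnodes only on a first run (or if the peerstore is
// missing or corrupt). This way a compromised built-in list only matters once,
// not on every restart.
func bootCandidates() []string {
	data, err := os.ReadFile(peerstorePath)
	if errors.Is(err, fs.ErrNotExist) {
		return builtinBootnodes // first run: nothing persisted yet
	}
	var enrs []string
	if err != nil || json.Unmarshal(data, &enrs) != nil || len(enrs) == 0 {
		return builtinBootnodes // unreadable or empty store: fall back
	}
	return enrs
}

func main() {
	candidates := bootCandidates()
	fmt.Println("booting from", len(candidates), "candidates")
	_ = savePeers(candidates) // real client: persist the live peer set at shutdown
}
```

The key property is that the built-in list is consulted exactly once on a healthy node, so capturing it only lets an attacker influence brand-new nodes, not the installed base.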


For a concrete example attack, imagine someone has two 0-days in their pocket (neither of which is particularly far-fetched for a state actor):

  1. They have the ability to take over/control the bootnodes.
  2. They have the ability to crash Prysm clients on the network (causing them all to restart).

This attacker now has the ability to eclipse all Prysm nodes. If our client diversity numbers are high enough this isn’t too big of a problem, but if they aren’t, it could lead to a fork should the attacker desire one. Keep in mind, once you have successfully eclipsed a node, you need not reveal this immediately. You can sit on your eclipse and not leverage it until an opportunity presents itself (say you later gain the ability to eclipse Teku as well; then you attack once you have eclipsed 66% of stake, the two-thirds supermajority needed to finalize). A toy model of why the eclipse works is sketched below.
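
To make the eclipse mechanism concrete, here is a toy model (all node IDs and topology invented for illustration, not any real client’s discovery code): treat discovery as graph reachability, where attacker-controlled nodes only ever answer neighbor queries with other attacker-controlled nodes. A client whose only entry points are captured bootnodes then never escapes the attacker’s subgraph, while a single persisted honest peer breaks the eclipse.

```go
package main

import "fmt"

// A toy model of peer discovery: each node answers queries with its neighbors.
// Attacker-controlled nodes only ever return other attacker-controlled nodes,
// so the set reachable from captured bootnodes never escapes their subgraph.
type network map[string][]string

// discover walks the network breadth-first from the given entry points,
// mimicking iterative discovery from a bootnode list.
func discover(net network, entry []string) map[string]bool {
	seen := map[string]bool{}
	queue := append([]string(nil), entry...)
	for len(queue) > 0 {
		id := queue[0]
		queue = queue[1:]
		if seen[id] {
			continue
		}
		seen[id] = true
		queue = append(queue, net[id]...)
	}
	return seen
}

func main() {
	net := network{
		// Attacker subgraph: closed under neighbor queries.
		"evil-boot1": {"evil-a", "evil-b"},
		"evil-a":     {"evil-boot1", "evil-b"},
		"evil-b":     {"evil-a"},
		// Honest subgraph: unreachable from the attacker's nodes.
		"honest-1": {"honest-2"},
		"honest-2": {"honest-1"},
	}

	// Fresh start with captured bootnodes: only attacker peers are found.
	fmt.Println("from captured bootnodes:", discover(net, []string{"evil-boot1"}))

	// With even one persisted honest peer, the honest graph is reachable again.
	fmt.Println("with a persisted peer:  ", discover(net, []string{"evil-boot1", "honest-1"}))
}
```

This is exactly why retaining the previous session’s peer list matters: it takes only one honest persisted peer to rejoin the honest network after a forced restart.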
