Native DVT for Ethereum staking

Distributed validator technology (DVT) lets Ethereum stakers run a validator without fully relying on a single node. Instead, the key is secret-shared across a few nodes, and all signatures are threshold-signed. The validator is guaranteed to work correctly (and not get slashed or inactivity-leaked) as long as more than 2/3 of the nodes are honest.

DVT includes solutions like ssv.network and Obol, as well as what I call “DVT-lite”: either the Dirk + Vouch combination or Vero. These solutions do not do full-on consensus inside each validator, so they offer slightly worse guarantees, but they are quite a bit simpler. Many organizations today are exploring using DVT to stake their coins.

However, these solutions are quite complex. They have a complicated setup procedure, require networking channels between the nodes, etc. Additionally, they depend on the linearity property of BLS, which is exactly the property that makes it not quantum-secure.

In this post, I propose a surprisingly simple alternative: we enshrine DVT into the protocol.

The design

If a validator has >= n times the minimum balance, they are allowed to specify up to n keys and a threshold m, subject to m <= n <= 16. This creates n “virtual identities” that all follow the protocol fully independently, but are always assigned to roles (proposer, committee, p2p subnet) together.

That is, if there are 100000 validators in total and you have a size-n validator with multiple virtual identities, and there is a role with t participants (eg. t=1 for proposal, t=16 for FOCIL, t=n/64 for some p2p subsystem that shards nodes into 64 subnets), there is a t/100000 chance that all of your virtual identities will be assigned to that role.
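As a small worked check of the numbers above (illustrative only, not spec code): because the virtual identities are always assigned together, the probability that all of them land in a role is the same as for a single ordinary validator.

```python
# Illustrative: 100000 validators total, a role with t participant slots.
# Since virtual identities are assigned to roles together, the group's
# chance of being fully assigned equals a single validator's chance.

TOTAL_VALIDATORS = 100_000

def assignment_probability(t: int) -> float:
    """Chance that all virtual identities of one group get the role."""
    return t / TOTAL_VALIDATORS

assert assignment_probability(1) == 1 / 100_000    # proposer (t = 1)
assert assignment_probability(16) == 16 / 100_000  # FOCIL (t = 16)
```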

From the perspective of protocol accounting, these virtual identities are grouped into a single “group identity”. That single object is treated as taking some action (eg. making a block, signing) if and only if at least m of the n virtual identities signed off on the action. Based on this, rewards and penalties are assigned.

Hence, if you have an identity with eg. m = 5, n = 7, then if five signatures all attest to a block, you get 100% of the attester reward and your participation is counted, but if only four do, you get 0% of the reward and your participation is not counted. Similarly, to slash such a validator, you need to show proof that >= 5 of the nodes voted for A, and >= 5 of the nodes voted for B.

Note that this means that if m <= n/2, slashing is possible without any individual node's malfeasance (two disjoint sets of m nodes can each sign conflicting messages), so such a setting is strongly anti-recommended, and should only be considered in situations where some nodes are normally-offline backups.
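As a minimal sketch (Python; function names are illustrative, not spec code), the group-accounting and slashing rules above could look like:

```python
# Hedged sketch of "group identity" accounting for an m-of-n validator.
# Signers are identified by index 0..n-1; names are illustrative.

def action_counts(signatures: set[int], m: int) -> bool:
    """A group action counts iff at least m virtual identities signed."""
    return len(signatures) >= m

def is_slashable(votes_a: set[int], votes_b: set[int], m: int) -> bool:
    """Slashing requires proof that >= m identities signed each of two
    conflicting messages A and B."""
    return len(votes_a) >= m and len(votes_b) >= m

# With m = 5, n = 7: five attesting signatures earn the full reward,
# four earn nothing.
assert action_counts({0, 1, 2, 3, 4}, m=5)
assert not action_counts({0, 1, 2, 3}, m=5)

# With m <= n/2 (here n = 7, m = 3), two disjoint sets of m nodes can
# sign conflicting messages, so slashing needs no misbehaving node:
assert is_slashable({0, 1, 2}, {3, 4, 5}, m=3)
```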

Properties

This design is extremely simple from the perspective of a user. DVT staking becomes simply running n copies of a standard client node. The only implementation complexity is block production (or FOCIL production): realistically, a random node would need to be promoted to primary, with the other nodes signing off on its output.

This only adds one round of latency on block and FOCIL production, and no latency on attestation.

This design is easy to adapt to any signature scheme: it does not depend on any arithmetic properties of the signatures.

This design is intended to have two desirable effects:

  1. Help security-conscious stakers with medium to high amounts of ETH (both individual whales and institutions) stake in a more secure M-of-N setup, instead of relying on a single node (this also makes it trivial to get more gains in client diversity)
  2. Help such stakers stake on their own instead of parking their coins with staking providers, significantly increasing the measurable decentralization (eg. Herfindahl index, Nakamoto coefficient) of the Ethereum staking distribution.

It also simplifies participation in existing decentralized staking protocols, reducing the client load and devops experience to something equivalent to the most basic form of solo staking, allowing such protocols to become more decentralized and more diverse in their participation.


@vbuterin thank you for the post

I have a few questions:

  1. Coordination vs. Passive Broadcasting

For attestations, nodes can stay passive; if they see the same head, the threshold is met naturally. However, for block production, how do you prevent nodes from signing different payloads and failing to reach the m threshold? Would you favor a simple leader rotation or a local gossip sub-net?

  2. Async BFT & Multiple Proposers

We could allow a “race” where multiple virtual identities broadcast proposals simultaneously. The first to collect m signatures wins. This eliminates round-change latency, though it slightly increases p2p overhead.

  3. Key Rotation

I’d suggest adding a protocol-level key rotation. An m-of-n signed message could swap a compromised key without a full exit/restake, making this much more viable for institutions.
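A rotation like this could reuse the group's own threshold. A minimal sketch, assuming a hypothetical message structure (none of these field names exist in any spec):

```python
# Hypothetical protocol-level key rotation: an m-of-n approved message
# swaps one of the group's keys without a full exit/restake.

from dataclasses import dataclass

@dataclass
class KeyRotation:
    old_key_index: int   # which of the n keys is being replaced
    new_pubkey: bytes    # replacement public key
    approvals: set[int]  # indices of virtual identities that signed off

def rotation_valid(rot: KeyRotation, m: int) -> bool:
    """A rotation takes effect iff at least m identities approved,
    mirroring the threshold used for every other group action."""
    return len(rot.approvals) >= m

rot = KeyRotation(old_key_index=2,
                  new_pubkey=b"\x01" * 48,
                  approvals={0, 1, 3, 4, 5})
assert rotation_valid(rot, m=5)
```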

An interesting proposal, though it seems to go against the ongoing efforts of:

  • reducing consensus overhead (e.g. via validator consolidations) - enshrined virtual DVT identities would cause additional network overhead which is currently contained “in-cluster”. With 2048 ETH staked, n <= 16 and a validator at MAX_EB, this would still be better than 64 separate 32 ETH validators, but it feels like taking a big step back: instead of consolidating the overhead of 64 validators into one, we’d be “consolidating” into 5, 7 or even 16 - a significant increase over the ideal status quo.
  • reducing protocol complexity

With the additional network overhead in mind, maybe this would need to be coupled with something like this in-place - LMD GHOST with ~256 validators and a fast-following finality gadget - #6 by vbuterin (I’m a big fan of that general idea), or perhaps a further MAX_EB increase?


I’m doubtful this proposal will have some of the intended desirable effects though.

This is mostly my skepticism speaking but gains in client diversity have historically been extremely hard to achieve. Even with DVT options available today, we are frequently seeing operators run DVT clusters powered by only 2 different client pairs, protecting validators from downtime while completely failing to protect from the much more dangerous threat – consensus bugs.

I don’t believe requiring such stakers to run multiple machines with different client pairs will make this option very attractive compared to what is available today. Such entities are probably quite capable of running out-of-protocol DVT options, Vouch+Dirk or a couple of Vero instances.

I agree something needs to be done about Ethereum’s stake distribution and its centralizing trend. But I don’t think enshrining DVT will help much at all. I’m currently working on an idea that could help a little on this front but I’m 100% sure we will need more ideas to revert the existing trend.

This I can see happening, and it would be good for decentralized staking protocols, and by extension Ethereum.

Another point is: should the security considerations of a staking actor increase overhead on the CL? Isn’t it much better to offload that to an optimized layer?

It is true that SSV only has 2 clients at the moment (others have 1), but that’s an evolution/iteration thing. Also, it might be better to have the EF “split the bill” so we could have more clients.

I wasn’t referring to SSV having 2 clients.

What I meant is that there are node operators that run Vero/Vouch/DVT but then only connect their validator clients to Prysm+Geth and Lighthouse+Nethermind nodes (clients used by large parts of the network), resulting in them losing the (imo) most important benefit of multi-node setups – protecting from consensus bugs.

A few considerations from our end, mostly in line with what’s been flagged before.

My primary concern is how this contends with other ongoing research trying to lower the number of signatures on the chain to facilitate the move to ZK and 3SF, e.g. your proposals to get to 8192 signatures per slot, and efforts like EIP-7251 (MaxEB). I’d also flag that taking DVs in-protocol doesn’t mean we no longer need extra communication to make them viable.

I also think the problem to be solved here could be more clearly specified. If the goal is to come up with a solution for distributed validators in post quantum (lean) ethereum, I outline Obol’s research on the topic to date below.

I don’t think it is that simple. Take an attestation’s head vote component (or a sync message): if you have e.g. 7 honest operators taking part in a DV, three might not see the proposer’s block in time and attest to a missed slot, while 4 (or 3 or 2) might see the block and attest to it instead. This will result in no majority for the attestation (or at least an incorrect head vote, if the protocol can introspect the constituent parts of an attestation and sees quorum votes for the correct source and target) and lost rewards. For a sync message, it would be an outright penalty. These distributed validators have to coordinate to reach a supermajority, and we’d either be doing that on the main p2p network (a bad idea), or in a dedicated p2p channel between the nodes, like the status quo.

I find this property of the proposal particularly exclusionary for the long tail of stakers and worth commenting on. Restricting distributed validators to those of significant means does not seem like a reasonable choice, particularly given the lack of progress on designs like Orbit, which intended to lower the minimum participation (or at least to hold it at its current ETH-denominated level while lowering the number of signatures per slot). If DVs are to be taken in-protocol, their marginal extra costs should be subsidised by the protocol imho. The current minimum stake to participate in validating Ethereum is north of $100k, which is almost double the OECD annual salary and more than 5 times the global average salary. Technology like distributed validators and decentralised liquid staking protocols lowers the minimum participation to $10k and below through squad staking. This is worth keeping, in my eyes, at least without an alternative route to more modest barriers to participation.

This I think is the most important problem to solve as it pertains to post-quantum distributed validators. (Whereas I think the chain’s declining nakamoto co-efficient is the most important problem to solve for Ethereum’s validator set. DV enshrinement is not likely to be a big fix for that). All of the aforementioned DV(-adjacent) designs rely on BLS signature aggregation.

At Obol, we have been working on Distributed Validators for Lean Ethereum for over a year now, I’ll briefly describe some avenues that we don’t think will work, then will focus on our most promising avenue to date, which is more or less in line with the design in this post anyways in my opinion.

To conclude: I don’t think enshrinement pre-lean Ethereum has a strong enough need. Post-lean, I would support an approach that keeps DVs viable yet outside the core protocol complexity if we could, but we don’t yet have such a design, so including them in-protocol is our best option. I certainly think distributed validators are critical for Ethereum to survive its struggles with (mostly unavoidable) centralisation forces, and can’t be dropped from the staking model outright without all but assuring that the chain will be co-opted by a small number of parties in the not so distant future. Small groups are more credibly neutral than individual parties, and are more likely to make the best decision for the wider community beyond their group.


  1. Alexandre Adomnicăi, Towards Practical Multi-Party Hash Chains using Arithmetization-Oriented Primitives: With Applications to Threshold Hash-Based Signatures. IACR Communications in Cryptology, vol. 2, no. 4, Jan 08, 2026, doi: 10.62056/ahp2tx4e-. ↩︎


Yeah I agree with this, I think it’s a good idea independent of this one. It should not be hard to allow instant key changes, and keep the old key around for a while for slashability.

go against the ongoing efforts of: reducing consensus overhead (e.g. via validator consolidations)

This is fair, though it’s designed to not make the worst case worse. That is the reason behind the “you must have >= 32 * n” rule.

Such entities are probably quite capable of running out-of-protocol DVT options, Vouch+Dirk or a couple of Vero instances.

I think this is the crux of the matter. I’ve personally seen the inside of organizations and ETH whales (incl. myself) trying to figure out Vouch/Dirk/Vero, and it’s a big headache to wrap our heads around. If running DVT were as simple as “run 7 independent nodes, the only change is a one-line difference in the config file” (maybe even that line would not be required, as the client could auto-detect your key’s membership in a DVT set), then I’m pretty sure both others and I would have been DVT-staking for a while now.

So I feel confident that this style of in-protocol DVT would be pretty decisive in enabling people (esp. whales and smaller institutions) to stake on their own.

I don’t think it is that simple. Take an attestation’s head vote component (or a sync message): if you have e.g. 7 honest operators taking part in a DV, three might not see the proposer’s block in time and attest to a missed slot, while 4 (or 3 or 2) might see the block and attest to it instead.

Yeah, I agree this is a weakness. Though I’d guess (i) it will be rare and not a large penalty to revenue, because most attestations do come on time [and we can further penalize edge-case attestations by adding penalties in proportion to how differently people behave wrt an attestation], and (ii) we can treat each action separately from a rewards perspective: eg. if you break an attestation down into (a) “voted for X” vs “voted for nothing” and (b) “voted for parent Y”, then you can give such a voter the reward for voting for parent Y without the reward for voting for X or nothing.
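The per-component accounting in (ii) can be sketched as follows (toy Python, with made-up component names and reward weights):

```python
# Sketch: score each attestation component separately, so a quorum on one
# component earns that component's reward even if another component
# splits. Weights and component names are illustrative only.

from collections import Counter

def component_rewards(votes_by_component: dict[str, list[str]],
                      m: int, weights: dict[str, int]) -> int:
    """Sum the rewards of every component on which some value reached
    the threshold m among the operators' votes."""
    reward = 0
    for component, votes in votes_by_component.items():
        _value, count = Counter(votes).most_common(1)[0]
        if count >= m:
            reward += weights[component]
    return reward

# 7 operators, m = 5: the head vote splits 4/3 (no quorum, no head
# reward), but all 7 agree on parent Y, so the parent reward is earned.
votes = {
    "head":   ["X"] * 4 + ["missed"] * 3,
    "parent": ["Y"] * 7,
}
assert component_rewards(votes, m=5, weights={"head": 14, "parent": 26}) == 26
```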

Most recently, we have extended the LeanMultisig repo to support threshold XMSS signatures. leanMultisig/docs/threshold_xmss_design.md at feature/threshold_xmss · ObolNetwork/leanMultisig · GitHub

I appreciate the research, thank you!

I do agree that going through a single leader and having a single action has fewer edge cases; though the thing to trade that off against is devops complexity.

It’s possible that the right thing to do is to figure out a way to do it natively, but in a way that still avoids tracking partial participation onchain (eg. the leader sends their signature, and the other nodes see it on the p2p network and follow the leader). In that case, the tradeoff would become just latency (though maybe that’s not so bad).