Analysis on "Correlated Attestation Penalties"

That only works in a very specific scenario (e.g. if validators are taken offline abruptly for “legal” reasons). In “normal” conditions, a CEX that knows this is how correlation penalties will work will simply run a very fault-tolerant setup (using multiple locations and diversified software). That fault tolerance does not equal diversification of the node operator set, only distribution of infrastructure (soft and hard), which is, IMO, meaningfully different. Knoshua is on point here: once you create an objective out of a metric, you risk creating perverse incentives that have unintended effects.

It’s much easier for a single entity to have a fault-tolerant setup at scale than it is for 37 (or hundreds or thousands of) entities to do so, so what you’re doing is encouraging distribution of infrastructure but coalescence of operating entities.

We should also consider what this kind of incentive structure does from an infra perspective. For example: although it’s more expensive, it’s much easier to achieve fault-tolerant infrastructure in the cloud than on bare metal (especially on-premises setups), because orchestrating and managing node and validator setups is so much simpler there. Yet as a network we clearly want to encourage the use of local datacenters, both from a p2p perspective (better to have nodes distributed across countries than clumped together in big connectivity hotspots) and from a resilience perspective.

I’m not saying that they are by nature; I’m saying that if you take your model and look at the results, the decentralized solutions are penalized, pound for pound, more than the very centralized (entity/ops-wise) solutions are.

You’ll never reach this long-term because in the short-term you’ve incentivized everyone to run active/active setups in already densely saturated regions, or risk suffering correlation penalties. You propose a potential scenario where this isn’t the case, but based on the historic data that you’ve procured and modeled against, it’s clearly the opposite.

We can definitely drill down into an operator-by-operator view and see which geos these operators are running from, to analyze this further.

As explained above, doing this is much cheaper for a centralized entity than effectively doing the same across multiple parties in a decentralized solution.

They already have a “very” fault-tolerant setup. The fault tolerance probably scales linearly with the size of the penalty, but correlation penalties would increase that penalty exponentially. I always assume that every entity is already performing at its best, in proportion to the size of the penalty.
Then, the only way to avoid anti-correlation penalties is to do something “anti-correlated”, which is distribution. In the worst case, if this leads to using AWS and Google instead of only AWS, the goal is already achieved. Of course, the true outcome is hard to predict as of today.

Also, low-participation geo locations are better off, simply because if there is an outage and all validators in a small geo location go offline, it’s still not enough to hit the threshold. If there is an outage in the US, on the other hand, a majority of validators are hit, in line with the goal of anti-correlation penalties.

Anti-correlation penalties with a threshold that is only ever hit by large entities with highly correlated setups are better for minority clients, minority geo locations, minority hardware, minority ISPs, minority cloud providers, etc., and those affected (in the long term) aren’t the small guys. Of course there would be exceptional cases where, e.g., my small solo validator is offline for a few epochs and some random 10% pool misses some attestations at the same time, but this is not the usual case.

I don’t see where it’s the opposite. Solo stakers or Rocketpool NOs are better off. I wouldn’t differentiate between staking pool validators and CEX validators in this context (although we agree that the former is better for the ecosystem). Both CEX and staking pool NOs are sophisticated parties (e.g. for Lido; for Rocketpool it’s debatable, and they are closer to being solo stakers than professional operators).

Every operator with size and a highly correlated setup is free to improve its setup and make it more fault tolerant (which costs money). At a certain point it simply becomes cheaper to deal with penalties by reducing the internal correlations, allowing for a slightly less fault-tolerant setup.

This can’t possibly be true. A client with 30% of the network suffering a consensus split is indistinguishable to the rest of the network from a node operator with 30% of the validators being offline. They would suffer the same correlation penalties.

I meant in the long run. But yeah, the example I gave isn’t the best, I admit.
Basically, what I tried to say was: “minority client users are better off than majority client users, and anti-correlation penalties push majority users towards joining the minority”.

I generally agree with this analysis and the direction, but not necessarily the robustness of the results of the quantitative analysis.


The unidentified 15% is a gaping black hole that benefits from the correlation penalties. In the figure “Avg. Correlation Penalties - Validator Clusters” there seems to be a correlation (hehe) between small-size clusters and the unidentified. Intuitively, do you think it’s safe to assume that this 15% would be closer to “small size clusters” or “mid size clusters”? Why or why not?

Also, for the figure “Avg. Correlation Penalties - Validator Clusters”, I would be curious to know how different the mean and median values are. If there is a significant difference, that would be another potential red flag in the analysis.

I’m very sure that the unidentified 15% consists mainly of small-size entities. The underlying data set uses very reliable label data for large entities while tagging solo stakers highly(!) conservatively.
I parsed and processed the data myself, so I’m very sure that it misses a large number of solo stakers (check hildobby’s Dune dashboard for reference, and Rated for a less conservative number).

Apart from that, the graphs do show a quite clear improvement for solo stakers (and the unidentified, where I’m sure it’s solo stakers plus very small entities), in contrast to all the known, more centralized entities.

If we assume a normal distribution of misses/no-misses per entity, then over the long run the median and mean should converge.
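For what it’s worth, here is a tiny sanity check of that claim under the stated assumption; the entity count, duty count and miss rate below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
epochs, entities, duties, miss_rate = 10_000, 200, 100, 0.01  # hypothetical values

# Per-entity average misses over many epochs; binomial counts are close to
# normal here, matching the symmetric-distribution assumption above.
misses = rng.binomial(duties, miss_rate, size=(epochs, entities))
per_entity_avg = misses.mean(axis=0)

print(np.mean(per_entity_avg), np.median(per_entity_avg))  # should be very close
```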


Where does your dataset come from, and is it publicly reproducible?

It would need to be accessible in order to thoroughly compare it to hildobby’s dashboard methodology.

From my own node, and yes: just parse all attestations between epoch 263731 (Feb-16-2024) and 272860 (Mar-28-2024). Just note that it’s a large amount of data and not very handy to deal with.
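For anyone who wants to reproduce the raw data, a minimal sketch of what “parse all attestations” could look like against a standard Beacon API node; the local URL is an assumption, and missed slots simply return 404:

```python
import requests

BEACON = "http://localhost:5052"  # assumed local beacon node exposing the standard Beacon API
START_EPOCH, END_EPOCH = 263731, 272860
SLOTS_PER_EPOCH = 32

def attestations_in_slot(slot: int):
    # Standard Beacon API block endpoint; a missed slot returns 404.
    r = requests.get(f"{BEACON}/eth/v2/beacon/blocks/{slot}")
    if r.status_code == 404:
        return []
    r.raise_for_status()
    return r.json()["data"]["message"]["body"]["attestations"]

for slot in range(START_EPOCH * SLOTS_PER_EPOCH, (END_EPOCH + 1) * SLOTS_PER_EPOCH):
    for att in attestations_in_slot(slot):
        pass  # map aggregation_bits back to validator indices and store them
```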

What would need to be accessible? I was talking about comparing the share of unidentified.

I meant - where does your validator labeling dataset come from?

where does your (validator labeling) dataset come from and is it publicly reproducible?

Oh sorry, it’s from here. Basically, hildobby, with assistance from others, tagging doxxed addresses and reaching out to individual entities to ask for their deposit patterns.
https://github.com/duneanalytics/spellbook/blob/3bcef8deb8bd85d7f76341d583834207ff6616b8/models/staking/ethereum/staking_ethereum_entities.sql


Could you please add a graph of the penalty factor over the time range?

I’m interested to see if we get near the max penalty factor with this update algorithm. It feels to me like a max penalty factor of 4 is likely too low to properly disincentivize tail risks, and the max penalty should be allowed to go much higher


I don’t have it right now, but I remember that the cap was oftentimes hit; sometimes it would have been a >100 penalty factor. So I agree, there might be room to increase that cap.
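For readers following along, a minimal sketch of what that capped factor looks like, assuming the shape of Vitalik’s initial proposal (misses in the current slot compared to the average of the previous 32 slots, then clamped); the +1 and the example numbers are assumptions for illustration:

```python
MAX_PENALTY_FACTOR = 4  # the cap discussed above

def penalty_factor(misses_this_slot: int, misses_last_32_slots: list[int]) -> float:
    # Compare this slot's misses to the recent average; +1 avoids division by zero.
    avg_recent = sum(misses_last_32_slots) / len(misses_last_32_slots)
    return min(misses_this_slot / (avg_recent + 1), MAX_PENALTY_FACTOR)

quiet_baseline = [50] * 32                     # hypothetical "normal" misses per slot
print(penalty_factor(60, quiet_baseline))      # ~1.2: ordinary noise
print(penalty_factor(10_000, quiet_baseline))  # clamped to 4; uncapped it would be ~196
```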

I think this is a great point. I am also interested to see if we get near the max penalty factor with this update algorithm. I also agree a max penalty factor of 4 is probably too low. Would definitely like to see a graph of the penalty factor over the time range. Nice post.

I think things that make improvements on average can also be acutely detrimental. It’s true that picking the smallest possible network share client reduces your expected penalties, assuming all clients are equally likely to split.

However, at the actual time of the split, node operators who observe that they are “offline” (they trust their favorite beacon chain indexer, which is running a different client) are likely to switch clients. They are perhaps more likely to switch clients if the penalties are more severe (they would be more severe). This makes them, perhaps, more likely to switch to a client that gains a supermajority and restores finality. Perhaps before there is L0 consensus on which chain is canonical.

The odds that a network with, say, 3 major execution clients suffers a split in which only 1 of them is on the canonical chain are low, but not impossible. It’s not impossible that the only client on the canonical chain is a small market share client, and is not finalizing. It’s not even improbable that a client with a larger share will either finalize an incorrect chain during this time, or gain enough share to finalize an incorrect chain.

I’m happy to put the black swan events to the side, for now, though.


Let’s talk about the additional costs a sizable node operator would incur to improve the reliability of their setup…

If it were me, I’d start by renting bare-metal hardware from 3 geo-distributed datacenters. This would probably cost at most $6,000 per year. I’d set up a DVT cluster with a 2/3 signature threshold and diverse clients. While I may still experience the occasional fumble, I’ve likely eliminated misses in all but vanishingly rare cases.

With ETH at $3k and a staking APR of 3%, I would break even at between 2 and 3 validators.

The largest Lido operators, for instance, have over 7,000 validators, for which they earn 5% commission, i.e. the full rewards of 350 validators: about 336 ETH per year, or about a million dollars.

So their costs have gone up by 0.6% of their revenue, and now they’re robust and safe against correlation penalties.

Meanwhile, for small operators:

  1. Staking is strictly riskier than before, as they can be swept up in a correlation event with a neglectful large operator, or in a client bug.
  2. If they want a fault-tolerant setup, they have to earn at least 2 ETH per year more than their previous break-even, meaning about 2 extra validators’ worth of rewards, or 64 ETH, or almost $200k of additional stake (see the quick arithmetic below).
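A quick back-of-the-envelope check of the numbers used above; the ETH price, APR, infrastructure cost, validator count and commission are the assumptions taken from these posts:

```python
ETH_PRICE = 3_000          # USD, assumption from above
APR = 0.03                 # staking APR, assumption from above
INFRA_COST = 6_000         # USD/year for the 3-site DVT setup described above

reward_per_validator = 32 * APR * ETH_PRICE          # ~2,880 USD/year
print(INFRA_COST / reward_per_validator)             # ~2.1 validators to break even

# Large Lido-style operator: 7,000 validators at 5% commission.
operator_revenue = 7_000 * 32 * APR * 0.05           # ~336 ETH/year
print(operator_revenue, operator_revenue * ETH_PRICE)  # ~336 ETH, ~$1.0M
print(INFRA_COST / (operator_revenue * ETH_PRICE))   # ~0.006, i.e. ~0.6% of revenue

# Small operator: the same $6,000 is ~2 ETH/year, i.e. ~2 extra validators of rewards.
extra_stake_eth = (INFRA_COST / ETH_PRICE) / APR     # additional stake needed
print(extra_stake_eth, extra_stake_eth * ETH_PRICE)  # ~67 ETH, ~$200k
```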

For a solo staker with just one validator, a fault-tolerant setup is financially out of reach. For a large operator, it costs nothing. If you assume everyone will act in their financial best interests, you have large operators become iron-clad at the expense of small operators taking on additional risks or being pushed out.

The proposal and the discussion often refer to the increased costs that large operators will incur to make their setups robust, but it seems so marginal to me that it isn’t worth mentioning, except in the context of small operators getting priced out.

This assumes that we would finalize an invalid state through a supermajority client and then continue from there, which I wouldn’t agree with. Every entity has a clear incentive to switch from a supermajority client to another client, not only for the health of the entire network but also to prevent those catastrophic scenarios in which 2/3 of the total ETH staked is slashed and the minority chain continues to operate.

Regarding your second point: as a solo staker you’re way too small to be afraid of correlation penalties. Also remember that the effect on solo stakers would have been positive, as the penalties do not only scale up but also down. In the long run, solo stakers running <10 nodes on one machine might completely ignore anti-correlation penalties, as they’re way too small to cross the threshold required for the correlation penalties to kick in.

As soon as you have more validators on one not-very-fault-tolerant setup than the average over the most recent 32 slots (in accordance with Vitalik’s initial proposal), you should become worried about higher penalties.

I don’t agree here. Diversification is very expensive. Making a system that is already super robust even more robust is expensive too. Hardware, backups, stronger EPUs, DVTs, insurance, etc.: everything becomes more expensive, and in the long run every small % matters here.

This assumes that we would finalize an invalid state through a supermajority client and then continue from there

Assuming we don’t bail out the majority chain still leads to the outcome of chain-hoppers being slashed. My point is that it is best to never form the supermajority, and additional downtime penalties may increase the likelihood that it does form. Regardless of what happens after.

I understand that you think correlated penalties improve client diversity in a steady state and I agree. I think they also improve client choice volatility in the edge cases.

Regarding your second point: as a solo staker you’re way too small to be afraid of correlation penalties.

I disagree: as a solo validator, you are subject to the same proportionate penalties if you are offline in a slot that a large operator goes offline in.

It is not likely that you will be offline in the exact slot that thousands of other validators go offline (barring client bugs), but it is a risk that did not exist before. For example, a home staker who is traveling may have several days of downtime. If they are assigned an attestation duty in the same slot a large institution goes offline, they are “swept up” in that event and face correlation penalties.

This means they can’t be ignored by small stakers. They represent an additional risk that did not previously exist.

Diversification is very expensive

As demonstrated, it costs a few thousand dollars a year to go from a single instance setup to a 2/3 DVT cluster, which is a large leap forward in robustness. Compared to the rest of your business expenses (paying sysadmins, security experts, etc), I believe it’s something you can stomach as a large institution. As a solo staker, I agree that it is very expensive.

Correlated attestation penalties selectively penalize larger entities upon every failure, while smaller entities are only penalized sporadically, when they happen to fail at the same time (ideally, their failures are not correlated). If the total distributed penalties stay the same (by lowering non-correlated penalties), the smaller entities benefit. The solo staker who goes offline for a longer time while traveling benefits in this scenario, because they are only subject to the larger penalty sporadically, while the average penalty they face is lower. Correlated attestation penalties can have other issues, as outlined for example by @vshvsh, but this is not one of them.
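To make the redistribution argument concrete, here is a toy simulation (not the exact mechanism from the proposal): one fully correlated operator and a set of independent solo validators, where the cost of a miss scales with how many others miss in the same slot, and total penalties are normalized to equal a flat scheme. All sizes and rates are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

SLOTS = 100_000
BIG_VALIDATORS = 10_000   # one operator whose validators all fail together (assumption)
SOLO_VALIDATORS = 1_000   # independent solo validators (assumption)
MISS_RATE = 0.005         # same per-validator miss probability in both groups

big_misses = (rng.random(SLOTS) < MISS_RATE) * BIG_VALIDATORS   # all-or-nothing outages
solo_misses = rng.binomial(SOLO_VALIDATORS, MISS_RATE, SLOTS)   # independent outages
total_misses = big_misses + solo_misses

# Toy correlation factor: a miss costs more when many others miss in the same slot.
factor = total_misses / (total_misses.mean() + 1)

# Rescale so total penalties equal a flat scheme where every miss costs 1 unit.
scale = total_misses.sum() / (factor * total_misses).sum()

# The big operator pays somewhat more per miss than the flat cost of 1; solos pay far less.
print("big operator, avg cost per miss:", (factor * big_misses).sum() / big_misses.sum() * scale)
print("solo stakers, avg cost per miss:", (factor * solo_misses).sum() / solo_misses.sum() * scale)
```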


Sure, if there’s a commensurate decrease in the non-correlated penalty, then it becomes a zero-sum game favoring smaller operators. I don’t necessarily see that in the proposal, though.

Without that decrease, it isn’t a zero-sum game, which means that smaller operators may simply choose to exit.

It’s in the spirit of the proposal. See for example here:
