Increase the MAX_EFFECTIVE_BALANCE – a modest proposal

@0xTodd The amount of ETH entering and leaving the active validator set is rate limited for security reasons (the epoch-over-epoch security margin degrades as ETH enters/leaves), not the number of validators.

If such a proposal goes into effect, the amount of ETH that can enter and leave per unit time would remain essentially unchanged (unless the security margin itself were deliberately changed).
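For reference, a minimal sketch of how the rate limit works today and how it could instead be denominated in ETH under such a proposal (the constants are the current values as I recall them, and the balance-denominated variant is only an assumption about the design, not a spec):

```python
# Minimal sketch; constants from memory, balance-denominated variant is hypothetical.
MIN_PER_EPOCH_CHURN_LIMIT = 4      # validators per epoch
CHURN_LIMIT_QUOTIENT = 2**16
GWEI_PER_ETH = 10**9

def get_validator_churn_limit(active_validator_count: int) -> int:
    """Validators that may enter/exit per epoch under the current rules."""
    return max(MIN_PER_EPOCH_CHURN_LIMIT, active_validator_count // CHURN_LIMIT_QUOTIENT)

def get_balance_churn_limit_gwei(total_active_balance_gwei: int) -> int:
    """Hypothetical ETH-denominated limit: a 2048 ETH validator consumes as much
    churn as 64 x 32 ETH validators, so the stake entering/exiting per epoch
    (and hence the security margin) stays the same."""
    min_churn = MIN_PER_EPOCH_CHURN_LIMIT * 32 * GWEI_PER_ETH
    return max(min_churn, total_active_balance_gwei // CHURN_LIMIT_QUOTIENT)
```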

5 Likes

Hey @mikeneuder. This is a very thoughtful proposal.

You mentioned in your counterarguments that the current ejection balance is 16 ETH.

Would you propose to maintain an ejection balance of 16 ETH with a MaxEB > 32 ETH?

Is there room in this proposal for a validator to set a custom ejection balance with the intent to limit their losses on a single validator?

One could imagine a scenario where a solo staker loses access to their validator key and encounters a permanent hardware failure, while still maintaining access to their withdrawal address.

2 Likes

I agree with Yorick that the estimated benefits to larger staking protocols/solutions/operations are marginal, while the added complexity makes overall dynamics harder to reason about and introduces new types of operational burden (especially for staking protocols).

This proposed change would have wide-reaching effects and require changes at the root level of staking protocols (definitely from a technical perspective, as well as from an economic one). To name a few more in addition to those Yorick outlined:

  • re-thinking stake allocation mechanisms: how quickly do you make deposits to the beacon chain deposit contract? Do you still deposit every 32 ETH? Do you deposit only in 2048 ETH chunks (to save on gas), and if so, do you accept larger “floats” in deposit buffers (i.e. rewards dilution)?
  • re-factoring calculations where the number of validators was an effective & accurate proxy for the “stake share” that each participant/operator has in a staking protocol
  • bonding complexity (how do you appropriately size a bond for a validator whose balance may be anywhere from 32 to 2048 ETH?)
  • preventing excessive churn when needing to exit validators for withdrawals (e.g. exiting a 2048 ETH validator to service a withdrawal request for anything less than 2048 ETH causes unnecessary churn)
  • addressing the general churn that will happen in the validator set when/if something like this goes live (given the expressed purpose of “reducing the number of active validators”, it’s clearly desired that a lot or even most validators adopt it)
    • it’s very difficult to do this rotation unless you have a complex mechanism whereby you “merge” 64 existing 0x01 validators into one 0x02 validator. There are also effects here: e.g. if you “start” the validator with 32 ETH, you won’t get partial rewards until it holds the full 2048 ETH. For a solo operator this is fine, but for large staking protocols steady flows of partial rewards are a good way to make sure there’s a buffer for withdrawals, so you want validators to be constantly in the “partially withdrawable” state; if you convert tens of thousands of validators and they each start with a 32 ETH balance, that will take forever.

I do think something like this is borderline necessary for SSF to come sooner rather than later. Given the rapid growth of the validator set and p2p networking dynamics, it also makes sense. Unlocking enshrined PBS is awesome, too.

My concern here is that (due to this pressure to unlock important upgrades) this proposal gets “speed-run” before everything that needs to be thought through can be, and it’s also really difficult to “simulate” what effects this would have on the ecosystem and fully consider all substantial implications without actually going through and doing all of the work. The corollary to this is that on-chain staking protocols (I imagine) would like to ossify ASAP and become immutable (or at least make various components un-upgradeable). This proposal would substantially extend the timeline for being able to do something like that.

hey @CBobRobison. this is a really interesting point. i could totally see this as a valuable addition to the spec change. we could either allow them to set a custom lower bound or hard-cut it at 1/2 of the validator’s respective ceiling, e.g., a 512 ETH validator gets ejected at 256 ETH. i could also see the argument for ejecting at 16 ETH below their ceiling.
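just to make the two options concrete, a quick sketch (purely illustrative, nothing here is spec’d):

```python
# purely illustrative sketch of the two ejection-floor options mentioned above
EJECTION_BALANCE = 16  # ETH, today's floor for 32 ETH validators

def ejection_floor_half_ceiling(ceiling_eth: int) -> int:
    # option 1: eject at half of the validator's respective ceiling
    return ceiling_eth // 2                # e.g. a 512 ETH validator is ejected at 256 ETH

def ejection_floor_fixed_offset(ceiling_eth: int) -> int:
    # option 2: eject at a fixed 16 ETH below the ceiling
    return ceiling_eth - EJECTION_BALANCE  # e.g. a 512 ETH validator is ejected at 496 ETH
```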

thanks for bringing it up! it is something i’ll think more about.

1 Like

I think this proposal is totally needed for several reasons:

First, from an environmental perspective: running thousands of copies of exactly the same software on behalf of the same operator adds little to decentralization, but wastes a lot of energy and compute resources.

Second, it could actually help decentralization: a large ETH investor can run a single node, which is easy compared to running a large number of nodes. One of the reasons people do not run their own nodes is the need to split their stake across multiple nodes.

hey @isidorosp! thanks for your response :slight_smile:

these concerns are totally valid. i think especially with regards to the UX considerations, we are working through some improvements. for one, we plan on allowing top-ups past 32 ETH while setting the balance ceiling in the deposit. this would allow a 32 ETH validator to be converted into a 2048 ETH validator just through the top-up mechanism (still rate limited by the churn ratio). additionally, we are discussing allowing validators to set arbitrary ceilings, e.g., at any value v \in [32, 2048], the validator can specify that partial withdrawals kick in. hopefully this would allow validators to tune for their risk preferences.
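roughly, the partial-withdrawal check would key off the validator’s chosen ceiling instead of a global 32 ETH constant. a toy sketch (the per-validator ceiling field is hypothetical, this is not spec language):

```python
# toy sketch: "ceiling_gwei" is a hypothetical per-validator field set at deposit
# or conversion time, somewhere in [32 ETH, 2048 ETH]
GWEI_PER_ETH = 10**9
MIN_CEILING = 32 * GWEI_PER_ETH
MAX_CEILING = 2048 * GWEI_PER_ETH

def is_partially_withdrawable(balance_gwei: int, ceiling_gwei: int) -> bool:
    assert MIN_CEILING <= ceiling_gwei <= MAX_CEILING
    # skim anything above the validator's chosen ceiling, rather than above 32 ETH
    return balance_gwei > ceiling_gwei

def partial_withdrawal_amount(balance_gwei: int, ceiling_gwei: int) -> int:
    return max(0, balance_gwei - ceiling_gwei)
```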

My concern here is that (due to this pressure to unlock important upgrades) this proposal gets “speed-run” before everything that needs to be thought through can be, and it’s also really difficult to “simulate” what effects this would have on the ecosystem and fully consider all substantial implications without actually going through and doing all of the work.

sorry it feels like that! this certainly isn’t something we are trying to force through without doing the due diligence. this is just the start of the conversation, and as we evolve it into an EIP and continue the discourse, this research should have plenty of time to be executed. as far as ossification of on-chain staking protocols goes, i don’t think that is reasonably attainable given the continued development of the consensus layer. if we are actually working towards SSF, then a significant change needs to take place, whether that is capping the validator set, allowing consolidation, implementing a rotating set, etc.

2 Likes

I am all for this proposal…

forgive the ramble, but here goes… as I recall, PoS had originally contemplated a 2k ETH stake as the minimum, which was then adjusted down to 32 ETH, mostly as a result of price action and to ensure sufficient decentralization by increasing participation. Where we are today, we see an engineering challenge emerging: too many validators whose signatures need to be aggregated (as I understand it), and thereby a networking load that grows rapidly with validator count, hence this proposal.

From my lens, I view this as an elegant way to allow the validator balance to increase (and to economically incentivize keeping the ETH in the validator). On the software dev side (and I’m not a dev, not by a long shot), I am more concerned about how to determine uptime / slashing impacts, sync committee participation rates, etc. for 32+ ETH validators. Let’s for a moment assume all of those technical challenges are solved. What we are really talking about is a middle ground between where we started (2k ETH per validator) and where we launched (32 ETH per validator), while still keeping the minimum so decentralization is always maintained, and trying to come up with a simple way to not have so many validators (and as a result reduce the signature-aggregation network load). Note this has nothing to do with the number of nodes: if a large staking pool has thousands of validators (of 32 ETH each) all running on one node, that pool will likely enable withdrawals and cause more validators to be added to the validator set (thus amplifying the challenge).

From my understanding, this proposal would only require client software changes along with a fork, and the deposit contract in use today could continue to be used afterwards. It would in theory allow an end user to put 32 ETH at stake, or 320 ETH, or 1004 ETH, and be proportionally economically incentivized. Then, as that stake increases through uptime and validator work, it would compound efficiently, whereas today that compounding is mostly reserved for whales or LSTs, who withdraw and spin up another validator. Many folks look at this proposal and see it serving the whales, and while that’s not wrong, the real winners here are the validators that have 32 ETH and don’t want to withdraw. (my 2 gwei)

1 Like

600k validators are “redundant”. they are running on the same beacon node and controlled by the same operator; the 60k coinbase validators are logically just one actor in the PoS mechanism.

This really doesn’t reflect the focus or complexity of large-scale validator operators, who deliberately spend time making sure they use a diversity of clients, regions, and clouds precisely so that the above is not the case.

More broadly, I’m not sure why we’re focusing on improving conditions for those who have > $50K to spin up a validator when I think we should really be spending time on how to enable validators to run with a smaller balance and thus further encourage solo and small scale stakers. I’d rather we focus on lowering barriers to access than on improving the lot of those with large amounts of ETH.

3 Likes

This would kill solo staking outright, and arguably that’s the most important mechanism that keeps Ethereum decentralized.

To be clear, I don’t think there’s anything being forced through, and in general I think the proposal makes sense and is (from a networking perspective) a net positive. I’m just opining that it’s wide-sweeping and it’s going to “touch” a lot of things that are complex to reason about and basically impossible to model/simulate, so there’s a lot to make sure we think about properly before seriously entertaining it.

I thought about this as well! It’s interesting but there’s a lot of complexity to think through. Which credentials would be used to sign the message for the conversion? Depending on the answer (and the staking protocol) there are possible risk vectors introduced (for both options, whether it’s the validator keys or the withdrawal credentials).

An interesting observation: taking the approach of “convert 32 ETH MAX into 2048 MAX and then top up” effectively means that until these validators are each topped up to their MAX, you stop getting skimming rewards. For protocols (like Lido) that make use of a constant stream of partial rewards to fill an EL “buffer” on a daily basis that can be used for withdrawals, it effectively means a temporarily reduced “buffer replenishment rate”, which translates into potentially (depending on stake movements) stake churn for a temporary period of time, as well as increased rewards dilution. This obviously isn’t a blocker for such a proposal or mechanism, just a simple example of a consideration and its impact on protocol dynamics.
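To make the skimming point concrete, a toy illustration (assuming the skim threshold simply moves from 32 ETH to the validator’s new 2048 ETH max):

```python
# toy illustration of the "buffer replenishment" pause described above
GWEI = 10**9

def skimmed_amount(balance_gwei: int, max_effective_balance_gwei: int) -> int:
    # rewards are only swept once the balance exceeds the validator's max
    return max(0, balance_gwei - max_effective_balance_gwei)

print(skimmed_amount(33 * GWEI, 32 * GWEI) / GWEI)     # today: 1.0 ETH goes to the EL buffer
print(skimmed_amount(33 * GWEI, 2048 * GWEI) / GWEI)   # after converting to a 2048 ETH max: 0.0
# nothing reaches the buffer until the validator is topped up past its new max
```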

This is also really neat, but it would ultimately introduce quite a bit of complexity to staking protocols (from an accounting perspective). Would these ceilings be settable only once (at “conversion” or validator “creation”), or would they be mutable?

On a general note, given how much complexity (especially from an accounting perspective) this proposal would introduce, do we have an idea of what additional data may be included in the beacon state root to make it easier for EL-side mechanisms to reason about and calculate things? If protocols do not have an easy way of ascertaining each validator’s CEILING/MAX and current effective balance, I think there’s going to be a lot of difficulty (and a re-introduction of reliance on trusted oracles for things we wanted to minimize via 4788).
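For illustration, the kind of EL-side verification I have in mind would look roughly like the consensus-spec Merkle-branch check below; which fields end up provable, and at what generalized indices, is exactly the open question, so treat this purely as a sketch:

```python
# sketch: verifying a single validator field (e.g. its ceiling or effective balance)
# against a beacon root exposed to the EL (in the spirit of 4788); the depth and
# index depend on the state layout and are deliberately left unspecified here
from hashlib import sha256
from typing import Sequence

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def is_valid_merkle_branch(leaf: bytes, branch: Sequence[bytes],
                           depth: int, index: int, root: bytes) -> bool:
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = hash_pair(branch[i], value)
        else:
            value = hash_pair(value, branch[i])
    return value == root
```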

It probably depends on a case-by-case basis, but I think, e.g., that changes to MaxEB are more impactful in this context than SSF. Capping, rotation, etc. are probably things that could be compartmentalized into smaller upgradeable modules while core functionality is ossified, but something like effective balance is deeply integral to protocol operations, dynamics, and accounting.

1 Like

hi @bkd – thanks for the response!

We totally agree that big stakers are running a variety of client nodes, in different data centers, diff cloud providers, etc. But the point we are making here is that they are logically controlled by the same entity so from the PoS mechanism perspective, they are a single unit.

More broadly, I’m not sure why we’re focusing on improving conditions for those who have > $50K to spin up a validator when I think we should really be spending time on how to enable validators to run with a smaller balance and thus further encourage solo and small scale stakers.

The benefits of the proposal for large validators are secondary to the overall benefits! The key takeaway is that the validator set is too large and growing incredibly quickly. In fact, as things stand, we have no hope of lowering the 32 ETH minimum to become a validator because of this validator set growth. If we get consolidation, that is actually a viable path towards lowering the minimum, one that wouldn’t be possible otherwise.

1 Like

At the top of the post:

Critically, we do not propose

  1. increasing the 32 ETH minimum required to become a validator
3 Likes

One open question here is how proposer slashings should be handled.

If the minimum slashing penalty scales with the current effective balance, then it would deter consolidation, as penalties for consolidated validators would be higher than for minimum-stake (32 ETH) validators.

However, using a different mechanism would clash with attester slashings, which have to be proportional to stake for security reasons.

Therefore I propose we change the proposer slashing to a penalty:

  1. A double proposal leads to a penalty, not a slashing. The amount is fixed and independent of the effective balance.
  2. Repeated penalties for the same proposer are allowed.
  3. Repeated penalties for the same slot are not allowed. (A bitfield is kept for the penalty period to record which slots have seen penalties.)
  4. A double-proposal penalty does not lead to a validator exit.
  5. There is no additional penalty based on the number of other validators being penalized/slashed.

Reducing the double-proposal action from a slashing to a penalty is OK because no critical consensus properties depend on there being no double proposals.
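A minimal sketch of what this could look like (the penalty amount, bitfield length, and names are placeholders, not a spec):

```python
# placeholder values; a real spec would pin these down
PROPOSER_PENALTY_GWEI = 10**9              # fixed penalty, independent of effective balance
SLOTS_PER_PENALTY_PERIOD = 8192            # length of the slot bitfield

class DoubleProposalPenalties:
    def __init__(self) -> None:
        # one bit per slot in the penalty period: has this slot been penalized already?
        self.penalized_slots = [False] * SLOTS_PER_PENALTY_PERIOD

    def apply(self, balances: list, proposer_index: int, slot: int) -> None:
        slot_bit = slot % SLOTS_PER_PENALTY_PERIOD
        if self.penalized_slots[slot_bit]:
            return                          # rule 3: at most one penalty per slot
        self.penalized_slots[slot_bit] = True
        # rule 1: fixed penalty, no slashing; rule 4: the validator is not exited
        balances[proposer_index] = max(0, balances[proposer_index] - PROPOSER_PENALTY_GWEI)
        # rules 2 and 5: the same proposer can be penalized again in other slots,
        # and nothing scales with how many other validators were penalized/slashed
```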

5 Likes

How big would the penalty be?

How does the network react to double proposals? Does it reorg out both, or does it just coalesce on one or the other over the next handful of slots?

On the topic of ejection at 16 ETH effective balance, and the ACDC 112 discussion of having this track the max balance:

While this could be friendly towards unaware operators, it also introduces some additional complexity to the protocol.

With EIP-7002 (EL-initiated exits), do we need it? That case covers operators who have lost their validator signing key but still have control over their withdrawal address.

They can initiate an exit from the withdrawal address and get their funds.

If they don’t have control over their withdrawal address, they won’t get the funds anyway; and in that case, psychologically, it may feel better that it will take years for the ETH to land on that address, and that the amount will be relatively smaller.

EL-initiated partial withdrawals would be a strictly worse solution for all validators than the system we have today, because they cost gas.

This proposal is only really of use to large staking operations. Is there any buy-in from such operations that they would want to utilize this proposal? Are the issues being addressed here the issues that large staking operations consider to be pain points? If not, then it seems unlikely that they will configure validators with much larger balances than the minimum required, as they bring attendant risks (e.g. the cost of a validator being slashed) without significant upside (key management was mentioned, but managing a handful of keys or a few thousand keys is much the same process). And the danger of encouraging them by favoring configurations of larger validators is that it would create a centralization vector.

2 Likes

Thanks for your comment, Jim! I want to push back on a few things.

This proposal is only really of use to large staking operations.

I think the solo-staking benefits are also meaningful! Auto-compounding, staking more than 32 ETH but less than 64 ETH, and EL-partials of any size are all things that people have expressed a lot of interest in.

Are the issues being addressed here the issues that large staking operations consider to be pain points?

Yes! The biggest concern we have heard is around slashing risk. This is why we are considering changing how proposer slashings work so that they are not proportional to the size of the validator (attester slashings MUST be proportional because they impact the fork-choice rule based on their weight).

And the danger of encouraging them by favoring configurations of larger validators is that it would create a centralization vector.

For sure, this is something I am keen to avoid (incentivizing consolidation).

We have talked to a number of large staking providers; most are interested, want to see the full EIP and design, and would do their own risk analysis. I think it’s naive to expect them all to fully consolidate, but I also think it’s safe to say that the cap of 32 ETH is quite low and there would be no meaningful increase in risk if they bumped their average validator size to, say, 128 ETH or something similar.

Two other points:

  1. There is an economic incentive to turn on auto-compounding, because then the ETH above 32 is immediately earning rewards. Waiting for the withdrawal sweep to take place and then for the deployment of a new validator through the activation queue is quite slow, so turning on auto-compounding would be an immediate boost in capital efficiency for small and large validators alike (see the rough numbers sketched after this list).
  2. I also see this as a near-term fix to minimize the continued bloat of the validator set. Even if the absolute size of the validator set doesn’t shrink, at least we may see a reduction in the rate at which it’s growing. This would help provide more time as we assess longer-term designs for the validator set (e.g., rotating validator set or changing the issuance curve to make it less profitable).
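Rough numbers behind point 1, assuming a ~3% consensus-layer APR (an assumption, actual rates vary with total stake) and ignoring sweep and activation-queue delays entirely:

```python
# toy comparison: rewards compounding inside the validator vs. sitting idle on the
# withdrawal address (a solo staker can't redeploy them until they reach 32 ETH)
apr = 0.03                                   # assumed consensus-layer APR
years = 5
compounding = 32 * (1 + apr) ** years
idle_rewards = 32 * (1 + apr * years)
print(round(compounding - idle_rewards, 2))  # ~0.3 ETH extra over 5 years per 32 ETH stake
# the gap grows with balance and time horizon, so it matters more at scale
```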

Let me know what you think :slight_smile:

For a solo staker? At time of writing it would take over a year for a validator’s effective balance to increase from 32 ETH to 33 ETH based on consensus rewards. And in many jurisdictions there are requirements to pay tax on staking rewards, so the time would be increased proportionately. Although auto-compounding is nice in theory, I don’t think that the numbers show this as something that would have a meaningful impact on solo staking rewards.

If you have a single validator with 2048 ETH then, if there is an operational incident that causes the validator to be slashed, you’re done. If you have 64 validators with 32 ETH each then you will likely have a fair amount of time (up to 31 slots) in which to save some of the validators. Given that slashing risk is the biggest concern, I don’t see why large staking services would consolidate their risk in this fashion.
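To put rough numbers on that (assuming the initial slashing penalty stays proportional to effective balance with a quotient of 32, which I believe is the current value, and ignoring correlation penalties and missed rewards):

```python
MIN_SLASHING_PENALTY_QUOTIENT = 32   # assumed current value

def initial_slashing_penalty_eth(effective_balance_eth: float) -> float:
    return effective_balance_eth / MIN_SLASHING_PENALTY_QUOTIENT

# one consolidated 2048 ETH validator: the whole stake is exposed in a single event
print(initial_slashing_penalty_eth(2048))                          # 64.0 ETH, no window to react

# 64 x 32 ETH validators: same worst case, but an operator who halts signing after
# the first few slashings caps the initial damage
slashed_before_halting = 8
print(slashed_before_halting * initial_slashing_penalty_eth(32))   # 8.0 ETH
```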

As per above, the idea of “immediately” earning anything isn’t going to happen for smaller stakers with the way that Ethereum staking works, and for larger stakers their risk is lower by having multiple individual validators.

1 Like

At time of writing it would take over a year for a validator’s effective balance to increase from 32 ETH to 33 ETH based on consensus rewards.

We could easily consider making the increment 0.1 ETH, for example! We haven’t bundled that with this change, but it could make sense to give better granularity!
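For context, this is roughly how effective balance tracks the actual balance today (a simplified paraphrase of the spec’s hysteresis rule, with constants from memory; the 0.1 ETH idea would just shrink EFFECTIVE_BALANCE_INCREMENT):

```python
GWEI = 10**9
EFFECTIVE_BALANCE_INCREMENT = 1 * GWEI   # currently 1 ETH
MAX_EFFECTIVE_BALANCE = 32 * GWEI        # today's cap; the proposal raises this
HYSTERESIS_QUOTIENT = 4
HYSTERESIS_DOWNWARD_MULTIPLIER = 1
HYSTERESIS_UPWARD_MULTIPLIER = 5

def updated_effective_balance(balance: int, effective_balance: int) -> int:
    """Only move the effective balance once the actual balance has drifted far enough."""
    hysteresis_increment = EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT
    downward = hysteresis_increment * HYSTERESIS_DOWNWARD_MULTIPLIER   # 0.25 ETH
    upward = hysteresis_increment * HYSTERESIS_UPWARD_MULTIPLIER       # 1.25 ETH
    if balance + downward < effective_balance or effective_balance + upward < balance:
        return min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
    return effective_balance
```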

And in many jurisdictions there are requirements to pay tax on staking rewards, so the time would be increased proportionately.

This, among other reasons, is why we initially want to leave the sweep alone! If they want the continuous dust swept for tax reasons, they don’t have to opt in.

Given that slashing risk is the biggest concern, I don’t see why large staking services would consolidate their risk in this fashion.

We have talked to many who said they would! It is 100% a risk-adjustment thing, but I think it’s safe to say that not everyone would choose a ceiling of 32 ETH for their validators if they had the flexibility.

for larger stakers their risk is lower by having multiple individual validators.

It’s a risk-reward tradeoff, though. Consider a larger validator who ascends the effective balance schedule faster (because larger stakers earn ETH faster); they get more capital efficiency because they don’t have to go through the withdrawal sweep and activation queue. Some staking services may have different risk preferences and would take advantage of the higher capital efficiency.

It might make sense to think about doing some research specifically on the economics at play here for small and large validators. :thinking:

Right, but do you understand why the effective balance increment is currently 1 ETH, and why we have effective balance separate from balance?

Again, a full spec is required so that this can be looked at in the round rather than as individual points. The answer to the question “would you like larger validators, fewer keys to manage, and reduced server costs?” is likely to be “yes”, but that answer could well change if it resulted in a significantly higher cost were they party to a slashing event.

At a glance, that would likely be a very minimal difference for anyone at scale (>1000 validators).

2 Likes