My main concern with the proposal as currently written is that it seems to degrade the UX for home stakers. Based on my reading of the code in your current proposal, if you’re a home staker with a single validator and you opt into being a compounding validator, you won’t experience a withdrawal until you’ve generated MAX_EFFECTIVE_BALANCE − MIN_ACTIVATION_BALANCE ETH of rewards, which (based on your 11-year calculation) would take ~66 years.
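For concreteness, here is the back-of-the-envelope arithmetic behind the ~66-year figure (a quick Python sketch; it reads the 11-year figure as the time for a balance to double through compounding, which is my assumption, not something from the spec):

```python
from math import log2

YEARS_PER_DOUBLING = 11  # assumption: the thread's 11-year figure read as a doubling time

def years_to_reach(start_eth: float, target_eth: float) -> float:
    """Years of compounding needed to grow start_eth to target_eth."""
    return log2(target_eth / start_eth) * YEARS_PER_DOUBLING

# 32 -> 2048 ETH is 6 doublings
print(years_to_reach(32, 2048))  # 66.0
```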
Speaking for myself, I don’t think I’d want to opt into this without some way to trigger a partial withdrawal before reaching that point. You have to pay taxes on your staking income, after all.
Off the top of my head, I can think of 2 ways to mitigate this:
Enable MAX_EFFECTIVE_BALANCE to be a configurable multiple of 32 up to 2048, either by adding a byte to every validator record or by utilizing the WITHDRAWAL_PREFIX and reserving bytes 0x01…0x40 to indicate the multiple of 32.
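A minimal sketch of how the WITHDRAWAL_PREFIX variant could work (the function name and exact encoding are hypothetical, simply following the suggestion above that prefix bytes 0x01…0x40 encode the multiple of 32):

```python
MIN_ACTIVATION_BALANCE = 32  # ETH
MAX_CEILING_PREFIX = 0x40    # 0x40 * 32 ETH = 2048 ETH

def ceiling_from_prefix(prefix_byte: int) -> int:
    """Hypothetical decoding: withdrawal-credential prefix bytes
    0x01..0x40 encode the validator's balance ceiling as a multiple
    of 32 ETH."""
    if not 0x01 <= prefix_byte <= MAX_CEILING_PREFIX:
        raise ValueError("prefix does not encode a balance ceiling")
    return prefix_byte * MIN_ACTIVATION_BALANCE

print(ceiling_from_prefix(0x01))  # 32 ETH (today's ceiling)
print(ceiling_from_prefix(0x40))  # 2048 ETH
```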
Hey @mikeneuder thanks for the clarification! I have to admit that I only read your post, I didn’t click through to the spec change PR. The 0x02 credential is the first thing to pop up there
At first glance, a withdrawal credential change sounds like a great way to make this proposal opt-in while leaving the original functionality unchanged, but there are hidden costs.
Although this isn’t an objection, it’s worth noting that suggestions to add optional credential schemes are a philosophical departure from 0x01, which was necessary. While the conception of 0x00 makes sense historically, today it makes little sense to create 0x00 validators. Put another way, if Ethereum had been given to us by advanced aliens, we’d only have the 0x01 behavior. At least the Ethereum community, unlike Linux, has a view into the entire user space, so maybe one day 0x00 usage will disappear and can be removed safely. Until then, though, we’re stuck with it. Do we really want to further segment CL logic and incur that tech debt for an optional feature? Again, not an objection per se, but something to consider.
Regardless, I suspect that achieving this via EL-initiated partial withdrawals is better because users will want compounded returns anyway, even with occasional partial withdrawals.
Optimal workflow if MAX_EFFECTIVE_BALANCE is increased for all users after EL-initiated partial withdrawals are available:
combine separate validators (one-time process)
partially withdraw when bills are due
repeat step 2 as needed, compound otherwise
Optimal workflow if MAX_EFFECTIVE_BALANCE is increased for an optional 0x02 credential:
combine separate validators (one-time process)
exit entirely when bills are due
create an entirely new validator
repeat steps 2 and 3 as needed, compound otherwise
Even if the user and CL costs end up similar under both scenarios, the first UX seems better for users and the network. The 0x02 path may only be worthwhile if validator set contraction is truly necessary in the short term. Otherwise, we have a better design on the horizon.
Absolutely the UX is a critical component here. The initial spec proposal was intentionally tiny to show how simple the diff could be, but it is probably worth having a little more substance there to make the UX better. We initially wrote it so that any power of 2 could be set as the ceiling for a validator, so you could choose 128 to be where the sweep kicks in. This type of question I hope we can hash out after a bit more back and forth with solo stakers and pools for what they would make use of if we implement it
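To illustrate the power-of-two ceiling idea (a sketch only; the names are mine, not from the spec PR):

```python
MIN_ACTIVATION_BALANCE = 32   # ETH
MAX_EFFECTIVE_BALANCE = 2048  # ETH

def is_valid_ceiling(ceiling: int) -> bool:
    """Any power of two in [32, 2048] may be chosen as the ceiling."""
    in_range = MIN_ACTIVATION_BALANCE <= ceiling <= MAX_EFFECTIVE_BALANCE
    power_of_two = (ceiling & (ceiling - 1)) == 0
    return in_range and power_of_two

def sweep_amount(balance: int, ceiling: int) -> int:
    """ETH skimmed by the withdrawal sweep once the balance exceeds
    the validator's chosen ceiling."""
    return max(0, balance - ceiling)

print(is_valid_ceiling(128))   # True: the sweep kicks in at 128
print(sweep_amount(130, 128))  # 2
```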
thanks for the thorough response @Wander! I agree that the first workflow sounds way better! Again, we mainly made the initial proposal with the 0x02 credential to keep the default behavior unchanged, but if we can reach rough consensus that we can just turn on compounding across the board with EL-initiated partial withdrawals, then maybe that is the way to go! (it has the nice benefit of immediately emptying the withdrawal queue, because withdrawals now have to be initiated and there is no more sweep until people hit 2048 ETH)
Noting that reducing state size also facilitates (or unlocks, depending on who you ask) another item from the roadmap: Single Secret Leader Election. Any known construction would require a significant increase in state size (the current Whisk proposal requires a ~2x increase).
@mikeneuder my mistake, I was under the impression that raising the effective balance would alter the real-world reward dynamics, but in light of DrewF’s explanation I stand corrected. What impact, if any, would this have on RANDAO bias-ability? Does the current system implicitly assume that each randao_reveal is equally likely, and if so, how would the higher “gravity” of large effective balances play out?
Excellent proposal, especially raising the per-validator cap to 2048 ETH (even 3200 seems entirely reasonable to me). Currently on the Beacon Chain, adding new validators requires an excessively long wait. For instance, on June 18th there were over 90,000 validators in the queue, and they needed to wait 40-50 days, which is extremely disheartening for those new to joining ETH consensus.
In fact, from my personal interactions, I’ve come across many users who hold a significant amount of ETH. Considering the daily limit on validators the consensus layer can accommodate, if one individual holds 2000 ETH, under this new proposal they would occupy only 1 validator instead of 62-63. This could increase the efficiency of the activation queue by 10x or even 20x, enabling more people to join staking faster. It also reduces reliance on centralized exchanges for staking (simply because there is no waiting time on CEXs), which would make the ETH network more robust.
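The arithmetic behind the 62-63 figure checks out (a quick illustrative helper; `queue_slots` is my name, not anything from the spec):

```python
from math import ceil

def queue_slots(stake_eth: int, per_validator_max: int) -> int:
    """Number of validator slots a stake must occupy in the
    activation queue, given the per-validator balance cap."""
    return ceil(stake_eth / per_validator_max)

print(queue_slots(2000, 32))    # 63 slots today
print(queue_slots(2000, 2048))  # 1 slot under the proposal
```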
I agree with Yorick that the estimated benefits to larger staking protocols/solutions/operations are marginal, and the added complexity actually makes overall dynamics more difficult and definitely introduces new types of operational complexity (especially for staking protocols).
These proposed changes would have wide-reaching effects and require changes at the root level of staking protocols (definitely from a technical perspective, as well as from an economic one). To name a few more in addition to those Yorick outlined:
re-thinking stake allocation mechanisms: how fast do you make deposits to the beacon chain deposit contract? Do you still deposit every 32 ETH? Do you deposit only in 2048 ETH chunks (to save on gas)? If so, do you accept larger “floats” in deposit buffers (i.e. rewards dilution)?
re-factoring calculations where the number of validators was an effective & accurate proxy for the “stake share” each participant/operator has in a staking protocol
bonding complexity (how do you appropriately size a bond for a validator that may have from 32 to 2048 ETH as a balance)
preventing excessive churn when exiting validators to serve withdrawals (e.g. exiting a 2048 ETH validator to serve a withdrawal request of less than 2048 ETH causes unnecessary churn)
addressing the general churn that will happen in the validator set when/if something like this goes live (given the expressed purpose of “reducing the number of active validators”, it’s clear that it’s desired to be implemented by a lot or even most validators)
it’s very difficult to do this rotation unless you have a complex mechanism whereby you “merge” 64 existing 0x01 validators into one 0x02 validator. There are effects here: e.g. if you “start” the validator with 32 ETH, you won’t get partial rewards until there’s a full 2048 ETH in the validator. For a solo operator this is fine, but for large staking protocols steady flows of partial rewards are a good way to ensure there’s a buffer for withdrawals, so you want validators to be constantly in the “partially withdrawable” state. If you convert tens of thousands of validators and they each start with a 32 ETH balance, that will take forever.
I think something like this is borderline necessary for SSF to come sooner rather than later. Given the rapid growth of the validator set and p2p networking dynamics, it also makes sense. Unlocking enshrined PBS is awesome, too.
My concern here is that (due to this pressure to unlock important upgrades) this proposal is “speed-run” before everything that needs to be thought through can be, and it’s also really difficult to “simulate” what effects this would have on the ecosystem, to fully consider all substantial implications, without actually going through and doing all of the work. The corollary to this is that on-chain staking protocols (I imagine) would like to ossify ASAP and become immutable (or at least make various components un-upgradeable). This proposal would substantially extend the timeline for doing something like that.
hey @CBobRobison. this is a really interesting point. i could totally see this as a valuable addition to the spec change. we could either allow them to set a custom lower bound or hard cut it at 1/2 of the validator’s respective ceiling. e.g., a 512 validator gets ejected at 256. i could also see the argument for ejecting at 16 ETH below their ceiling.
thanks for bringing it up! it is something i’ll think more about.
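Both ejection options discussed above generalize today's rule, where a 32 ETH validator is ejected at 16 ETH (which happens to be both half the ceiling and 16 ETH below it). A hypothetical sketch of the two thresholds (all names are illustrative):

```python
def ejection_balance_half(ceiling: int) -> int:
    """Option 1: eject at half the validator's respective ceiling."""
    return ceiling // 2          # e.g. a 512 validator gets ejected at 256

def ejection_balance_fixed(ceiling: int, margin: int = 16) -> int:
    """Option 2: eject at a fixed 16 ETH below the ceiling."""
    return ceiling - margin      # e.g. a 512 validator gets ejected at 496

print(ejection_balance_half(512))   # 256
print(ejection_balance_fixed(512))  # 496
print(ejection_balance_half(32))    # 16 (matches today's rule)
```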
I think this proposal is totally needed for several reasons:
First, from an environmental perspective: running thousands of copies of exactly the same software under the same operator adds little to decentralization, but wastes a lot of energy and compute resources.
Second, it could actually help decentralization: a large ETH investor can run a single validator, which is easy, compared to running a large number of them. One reason people do not run their own nodes is the need to split their stake across multiple validators.
these concerns are totally valid. i think especially with regards to the UX considerations, we are working through some improvements. for one, we plan on allowing top-ups past 32 ETH while setting the balance ceiling in the deposit. this would allow a 32 ETH validator to be converted to a 2048 ETH validator just through the top-up mechanism (still rate limited by the churn). additionally, we are discussing allowing validators to set arbitrary ceilings, e.g., at any value v ∈ [32, 2048] the validator can specify where partial withdrawals kick in. hopefully this would allow validators to tune for their risk preferences.
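A sketch of the top-up path described above (all names are illustrative; the actual spec mechanics, in particular churn rate limiting, may differ):

```python
MIN_ACTIVATION_BALANCE = 32   # ETH
MAX_EFFECTIVE_BALANCE = 2048  # ETH

def apply_top_up(balance: int, amount: int, ceiling: int) -> int:
    """Hypothetical: a validator sets a ceiling v in [32, 2048] at
    deposit time and can be grown past 32 ETH via top-ups. Balance
    above the ceiling would be skimmed by the sweep, so a top-up is
    only effective up to the chosen ceiling."""
    assert MIN_ACTIVATION_BALANCE <= ceiling <= MAX_EFFECTIVE_BALANCE
    return min(balance + amount, ceiling)

print(apply_top_up(32, 96, 128))   # 128: a 32 ETH validator converted via top-ups
print(apply_top_up(32, 96, 2048))  # 128: same top-up, with room left to compound
```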
My concern here is that (due to this pressure to unlock important upgrades) this proposal is “speed-run” before everything that needs to be thought through can be, and it’s also really difficult to “simulate” what effects this would have on the ecosystem to fully consider all substantial implications without actually going through and doing all of the work.
sorry it feels like that! this certainly isn’t something we are trying to force through without doing the due diligence. this is just the start of the conversation, and as we evolve it into an EIP and continue the discourse, this research should have plenty of time to be carried out. as far as ossification of on-chain staking protocols goes, i don’t think that is reasonably attainable given the continued development of the consensus layer. if we are actually working towards SSF, then a significant change needs to take place, whether that is capping the validator set, allowing consolidation, implementing a rotating set, etc.
forgive the ramble, but here goes… as I recall, PoS had originally contemplated 2k ETH as the minimum stake, then adjusted down to 32 ETH, mostly as a result of price action and to ensure sufficient decentralization by increasing participation. Where we are today, an engineering challenge is emerging: too many validators needing signature aggregation (as I understand it), and thereby an exponential networking issue, hence this proposal.
From my lens, I view this as an elegant way to allow the validator balance to increase (and to economically incentivize keeping the ETH in the validator). On the software dev side (and I’m not a dev, not by a long shot), I am more concerned about how to determine 32+ ETH uptime / slashing impacts, sync committee participation rates, etc. Let’s for a moment assume all of those technical challenges are solved. What we are really talking about is a middle ground between where we started (2k ETH per validator) and where we launched (32 ETH per validator), still keeping the minimum to ensure decentralization is always maintained, while finding a simple way to not have so many validators (and as a result reduce the exponential signature network load). Note this has nothing to do with the number of nodes: if a large staking pool has thousands of validators (of 32 ETH each) all running on one node, that pool will likely enable withdrawals and cause more validators to be added to the validator set (thus amplifying the challenge).
from my understanding, this proposal would only require client software changes along with a fork, and the deposit contract in use today could continue to be used afterward. It would in theory allow an end user to stake 32 ETH, or 320 ETH, or 1004, and be proportionally economically incentivized. Then as that stake increases through uptime and validator work, it would efficiently compound, where today that compounding is mostly reserved for whales or LSTs who withdraw and spin up another validator. Many folks look at this proposal and see it serving the whales, and while that’s not wrong, the real winners here are the validators that have 32 ETH and don’t want to withdraw. (my 2 gwei)
600k validators are “redundant”: they are running on the same beacon node and controlled by the same operator; the 60k Coinbase validators are logically just one actor in the PoS mechanism.
This really doesn’t reflect the focus or the complexity of large-scale validator operators, who deliberately spend time making sure they use a diversity of clients, regions, and clouds to ensure the above is not the case.
More broadly, I’m not sure why we’re focusing on improving conditions for those who have > $50K to spin up a validator when I think we should really be spending time on how to enable validators to run with a smaller balance and thus further encourage solo and small scale stakers. I’d rather we focus on lowering barriers to access rather than improving the lot of those with large amounts of ETH.
To be clear, I don’t think there’s anything being forced through, and in general I think the proposal makes sense and is (from a networking perspective) a net positive. I’m just opining that it’s wide-sweeping and it’s going to “touch” a lot of things that are complex to reason about and basically impossible to model/simulate, so there’s a lot to make sure we think about properly before seriously entertaining.
I thought about this as well! It’s interesting but there’s a lot of complexity to think through. Which credentials would be used to sign the message for the conversion? Depending on the answer (and the staking protocol) there are possible risk vectors introduced (for both options, whether it’s the validator keys or the withdrawal credentials).
An interesting observation: taking the approach of “convert the 32 ETH MAX into a 2048 MAX and then top up” effectively means that until these validators are each topped up to their MAX, you will stop getting skimming rewards. For protocols (like Lido) that use a constant stream of partial rewards to fill an EL “buffer” on a daily basis for withdrawals, this means a (temporarily) reduced buffer replenishment rate, which translates into potential stake churn for a period of time (depending on stake movements), as well as increased rewards dilution. This obviously isn’t a blocker for such a proposal or mechanism, just a simple example of a consideration and its impact on protocol dynamics.
This is also really neat, but ultimately would introduce quite a bit of complexity to staking protocols (from accounting perspective). Would these ceilings be settable only once (at “conversion” or validator “creation”), or mutable?
On a general note, given how much complexity (especially from an accounting perspective) this proposal would introduce, do we have an idea what additional data may be included in the beacon state root to make it easier for EL-side mechanisms to reason about and calculate things? If protocols do not have an easy way of ascertaining each validator’s CEILING/MAX and current effective balance, I think there’s going to be a lot of difficulty (and a re-introduction of reliance on trusted oracles for things we wanted to minimize via 4788).
It probably depends on a case-by-case basis, but e.g. I think changes to MEB are more impactful in this context than SSF. Capping, rotation, etc. are probably things that could be compartmentalized into smaller upgradeable modules while core functionality is ossified, but something like effective balance is integral to protocol operations, dynamics, and accounting.
We totally agree that big stakers are running a variety of client nodes, in different data centers, with different cloud providers, etc. But the point we are making here is that they are logically controlled by the same entity, so from the PoS mechanism’s perspective, they are a single unit.
More broadly, I’m not sure why we’re focusing on improving conditions for those who have > $50K to spin up a validator when I think we should really be spending time on how to enable validators to run with a smaller balance and thus further encourage solo and small scale stakers.
The benefits of the proposal for large validators are secondary to the overall benefits! The key takeaway is that the validator set is too large and growing incredibly quickly. In fact, as is, we have no hope of lowering the minimum of 32 ETH to become a validator, because of this validator set growth. If we have a consolidation, that is actually a viable path to considering lowering the minimum that wouldn’t be possible otherwise.
One open question here is how proposer slashings should be handled.
If the minimum slashing penalty scales with the current effective balance, then it would deter consolidation, as penalties for consolidated validators would be higher than for minimum-stake (32 ETH) validators.
However, using a different mechanism would clash with attestation slashing, which has to be proportional to stake for security reasons.
Therefore I propose we change the proposer slashing to a penalty:
A double proposal leads to a penalty, not a slashing. The amount is fixed and independent of the effective balance.
Repeated penalties for the same proposer are allowed.
Repeated penalties for the same slot are not allowed (a bitfield is kept over the penalty period to record which slots have seen penalties).
A double proposal penalty does not lead to a validator exit.
There is no additional penalty according to the number of other validators being penalized/slashed.
Reducing the double-proposal offense from a slashing to a penalty is OK because no critical consensus properties depend on there being no double proposals.
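Putting the proposed rules together, a hypothetical sketch (the penalty amount, the class, and all names are illustrative, not part of the proposal text):

```python
DOUBLE_PROPOSAL_PENALTY = 1  # ETH; fixed and independent of effective balance (amount assumed)

class PenaltyTracker:
    """Tracks which slots in the penalty period have already been
    penalized, so the same slot is penalized at most once."""

    def __init__(self, penalty_period_slots: int):
        self.penalized_slots = [False] * penalty_period_slots

    def apply_double_proposal_penalty(self, balance: int, slot: int) -> int:
        """Apply the fixed penalty unless this slot was already penalized.
        Repeated penalties for the same proposer (in different slots) are
        allowed, and the validator is never exited."""
        idx = slot % len(self.penalized_slots)
        if self.penalized_slots[idx]:
            return balance  # same slot may only be penalized once
        self.penalized_slots[idx] = True
        return balance - DOUBLE_PROPOSAL_PENALTY

tracker = PenaltyTracker(penalty_period_slots=8192)
b = tracker.apply_double_proposal_penalty(2048, slot=100)  # 2047
b = tracker.apply_double_proposal_penalty(b, slot=100)     # still 2047: slot already penalized
```

Because the penalty is independent of effective balance, a consolidated 2048 ETH validator pays exactly what a 32 ETH validator would, removing the disincentive to consolidate.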