Executable beacon chain

Perhaps the wording regarding Phase 2 is confusing. The assumption is that the rollup-centric roadmap makes eth1 the only execution thread for a longer period than was previously planned with Phase 2. This proposal does not exclude the option of scaling execution on L1.

Right, but what if there were a market of eth1 nodes providing access to eth1 state transition and block production for tx fees? We may see a centralisation risk here, but eth1 tx fees could be enough for a relatively high number of independent parties to run their own nodes and provide such a service, mitigating the risk.

This is a good point! I agree with you and @matt that we can’t change the semantics of BLOCKHASH and should rather introduce a BEACONBLOCKROOT opcode for proof verification. Whether BLOCKHASH semantics can be preserved to address both randomness and block identity remains an open question.
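
For illustration, a minimal sketch (in eth2-spec-style Python) of the kind of Merkle proof verification such an opcode would enable; the function names here are hypothetical, and `beacon_block_root` stands in for whatever the proposed BEACONBLOCKROOT opcode would return:

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def is_valid_merkle_branch(leaf: bytes, branch: list, depth: int,
                           index: int, root: bytes) -> bool:
    # Same logic as is_valid_merkle_branch in the phase 0 spec.
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = hash_pair(branch[i], value)
        else:
            value = hash_pair(value, branch[i])
    return value == root

def verify_beacon_state_proof(leaf: bytes, branch: list,
                              generalized_index: int,
                              beacon_block_root: bytes) -> bool:
    # `beacon_block_root` is what the hypothetical BEACONBLOCKROOT
    # opcode would expose to the EVM for a given slot.
    depth = generalized_index.bit_length() - 1
    index = generalized_index - (1 << depth)
    return is_valid_merkle_branch(leaf, branch, depth, index, beacon_block_root)
```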

It definitely needs to prove its stability. And the executable beacon chain will most likely not happen in 2021. Technically, with an eth2 light client contract on eth1, some of the use cases that require bi-directional communication become possible.

Well I am a bit confused now …

Why does one need to run a full PoW node if the PoW consensus is no longer valid?

My understanding is that you would need to either add the EVM and the historic ETH1 state to ETH2 clients, or run an ETH1 node in parallel to the ETH2 node, but only the part of it that does EVM execution, not PoW consensus …

Good point! This restriction is addressed by the following note:

I totally agree with these points! Tightly coupling eth1 and eth2 via synchronous state accesses puts big restrictions on upgradability. Making such a change requires a clear path towards execution scalability, and we should definitely use the less restrictive asynchronous model, at least at the beginning.

This is probably a good path to follow. The RANDAO mix is embedded into the eth1 block header (into the extra data field or elsewhere) by the eth1-engine. Eth1 block execution takes 200ms on average, which limits the number of potential dice re-rolls by introducing a risk of losing the proposer reward, and hence the transaction fees, if the block is not propagated in time.

Come to think of it, I’m not sure this works - couldn’t a validator bias the randomness of the block hash after getting the RANDAO number by grinding any other part of the header that isn’t part of the block execution? (Or do something at the end of the block, after the 200ms?)

If so, then it seems like you have to choose between replacing the block hash with a random number, which preserves the use of the block hash for contracts that use it as a makeshift (moderately expensive-to-bias) random number generator, and preserving its use for already-deployed contracts that want to prove things about the block.

We can avoid this by including the eth1 block hash on-chain and validating that unused fields are, e.g., filled with zeros. Though coinbase can be manipulated in any case.
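
A minimal sketch of what that validity check could look like, assuming the beacon chain sees the full eth1 header; the field list is illustrative and not taken from any spec:

```python
# Hypothetical check: reject eth1 headers whose unused fields are not
# zero-filled. With the RANDAO mix carried in extra_data, fields like
# nonce and mix_hash would otherwise be free grinding surface.
GRINDABLE_FIELDS = ["nonce", "mix_hash"]  # illustrative selection

def validate_unused_eth1_fields(header) -> None:
    for field in GRINDABLE_FIELDS:
        value = getattr(header, field)
        assert value == b"\x00" * len(value), f"{field} must be zero-filled"
    # Note: coinbase cannot be constrained this way; the proposer can
    # always point it at another address it controls, so some grinding
    # surface remains, as noted above.
```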

He means running eth1 state and tx verification, but with the consensus driven by the beacon chain.

The software architecture of this might very well look like an eth2 client and portions of an eth1 client (often called an eth1-engine) running alongside each other on the same system, where the eth2-client drives consensus and the eth1-engine handles user-layer validity (state, txs, etc.).
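
A rough sketch of that division of concerns; the interface below is made up for illustration and is not an actual specification of the eth2-client/eth1-engine boundary:

```python
from typing import Protocol

class Eth1Engine(Protocol):
    """User-layer validity: state, txs, EVM execution."""
    def execute_payload(self, payload: bytes) -> bool:
        """Run the eth1 state transition over the payload; return validity."""
        ...
    def produce_payload(self, parent_hash: bytes, randao_mix: bytes) -> bytes:
        """Assemble an eth1 block body from the local tx pool."""
        ...

class Eth2Client:
    """Consensus: fork choice, attestations, beacon block processing."""
    def __init__(self, engine: Eth1Engine) -> None:
        self.engine = engine
    def on_beacon_block(self, execution_payload: bytes) -> bool:
        # The eth2-client enforces consensus rules itself and delegates
        # user-layer validity of the embedded eth1 payload to the engine.
        return self.engine.execute_payload(execution_payload)
```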

See this post for a high-level view of the division of concerns: Eth1+eth2 client relationship

The current proposal makes a lot of sense for developers and for the Ethereum architecture in general. An ETH1-shard would actually be a new entity type, and creating new types is usually not a good idea. We already have two different types: beacon and shard. Merging this new type into an existing one is the obvious move and eliminates the future pain of managing and supporting a special ETH1-shard.
Moreover, merging into the beacon chain will protect ETH1 state data. This will be important when PoW miners start to leave the ETH1 chain and the hash rate drops dramatically.

Is this only an issue when synchronously writing to beacon state? It seems like we can get away with synchronous reads.

Reading from the post beacon state of slot N in eth1 during slot N does seem reasonable and backwards compatible in many future designs.

Specifically, if eth1 went into a disconnected shard and read from slot N, the read would need to be staggered within the slot (e.g. 4 seconds into the slot).

The design that cannot handle slot N reads is one where the eth1 shard is executed at the start of slot N, at the exact same time as the beacon chain (see the sketch below).
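
To make the timing concrete, a toy sketch assuming a 12-second slot; the constants are illustrative, not taken from the proposal:

```python
SECONDS_PER_SLOT = 12      # illustrative
ETH1_EXECUTION_OFFSET = 4  # eth1 shard executes 4s into the slot

def can_read_slot_n_post_state(seconds_into_slot: int) -> bool:
    # A synchronous read of slot N's post beacon state is only safe once
    # the beacon block for slot N has been processed. Executing the eth1
    # shard at t=0, simultaneously with the beacon chain, cannot satisfy
    # this; a staggered start can.
    return seconds_into_slot >= ETH1_EXECUTION_OFFSET
```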

I think there are three things that need to happen in order for ETH2 to be viewed as reasonably secure, so that people can start moving money into it.

  1. Hardware protection of crypto keys. ETH2 is starting as a network where keys are stored in plain text on AWS, meaning that the entire network can be globally hacked in case of, say, a Linux exploit, an AWS exploit, a rogue AWS employee, etc.

  2. Way more testing and analysis needs to be done against adversarial insider attacks, including DoS attacks.

  3. There has to be a formal governance model that lets people transparently make proposals and vote on them. PoW networks can arguably function without a governance model, but PoS networks can’t.

People who hold money on ETH1 need to have it explained to them why the beacon chain is secure and how it is going to be governed before they move money into it.

Particularly worried about this…
Especially given the need for trustless staking pools, and poll results suggesting that many hobbyists plan to use resource-restricted devices (x.com)

Agree with this. You can definitely read beacon state, even the post-state of the previous slot. It’s the writing that we need to be more careful with.

Are deposits the only eth1 → beacon state writes that we are concerned with? If so, we could keep the existing phase 0 approach and just replace eth1 data voting with directly looking at the previous slot’s post-state root.
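
A sketch of that simplification in eth2-spec-style Python; `previous_slot_deposit_root` is a hypothetical field, and the helpers (`is_valid_merkle_branch`, `hash_tree_root`, `apply_deposit`, `DEPOSIT_CONTRACT_TREE_DEPTH`) are the phase 0 spec ones:

```python
def process_deposits(state, body) -> None:
    # Phase 0 derives state.eth1_data from a majority vote over a long
    # voting period. Here, the deposit root is simply the one recorded
    # in the beacon state at the end of slot N-1, read synchronously.
    expected_root = state.previous_slot_deposit_root  # hypothetical field
    for deposit in body.deposits:
        assert is_valid_merkle_branch(
            leaf=hash_tree_root(deposit.data),
            branch=deposit.proof,
            depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1,  # +1 for the length mix-in
            index=state.eth1_deposit_index,
            root=expected_root,
        )
        apply_deposit(state, deposit.data)
        state.eth1_deposit_index += 1
```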

It would be good to use the opportunity of synchronous execution to get rid of redundant data in a block. IMO, we have more freedom to move back and forth on the system level, as opposed to the user level, i.e. beacon chain operations vs opcode semantics. Changing deposit processing might affect block explorers and other infra, though.

We can get rid of most redundant data without getting rid of asynchrony. For example, instead of deposit branches being a full 1 kB, we can have a separate deposit tree for each block, so branches would on average be only ~200 bytes including the DepositData object.
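
A back-of-the-envelope estimate of that saving (sizes follow the phase 0 DepositData layout; the per-block deposit count is my assumption):

```python
HASH_SIZE = 32           # bytes per sibling hash in a Merkle branch
DEPOSIT_DATA_SIZE = 184  # pubkey 48 + withdrawal_credentials 32 + amount 8 + signature 96

def branch_size(tree_depth: int) -> int:
    return tree_depth * HASH_SIZE + DEPOSIT_DATA_SIZE

# Phase 0 global tree: depth 32 plus a length mix-in.
print(branch_size(33))  # 1240 bytes, i.e. roughly the "full 1 kB"

# Per-block tree with ~1-2 deposits: depth ~1.
print(branch_size(1))   # 216 bytes, matching the ~200 byte estimate
```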

Or we could also not bother doing this optimization work, to minimize the complexity of the transition, as plenty of people want the transition to happen quickly and safely… it would be interesting to see how much complexity is added even by this per-block deposit tree change.

Really cool proposal! This removes some of the dependency complexity between phase 1 and the merge.

And what about making the eth1 block computation optional? Light eth2 validators could keep validating as they do now, including empty eth1 block data, while heavy validators could be incentivized to also validate eth1. Idk about the implications of this idea, it’s just a thought I had.

Optional computation would make eth1 only partially secured by eth2 stake, which is undesirable.