Executable beacon chain

He means running eth1 state and tx verification, but with the consensus driven by the beacon chain.

The software architecture of this might very well look like an eth2 client and portions of an eth1 client (often called an eth1-engine) running side by side on the same system, where the eth2 client drives consensus and the eth1-engine handles user-layer validity (state, txs, etc.).
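As a rough sketch of that split (all class and method names here are hypothetical, not an actual client API), the eth2 side only orchestrates consensus duties while the eth1-engine owns block assembly and execution:

```python
from dataclasses import dataclass, field

@dataclass
class Eth1BlockData:
    """Illustrative stand-in for the eth1 payload carried in a beacon block."""
    parent_hash: str
    state_root: str
    transactions: list = field(default_factory=list)

class Eth1Engine:
    """Handles user-layer validity: state, transactions, receipts."""
    def assemble_block(self, parent_hash: str) -> Eth1BlockData:
        # Build an eth1 block on top of `parent_hash` from the local tx pool.
        return Eth1BlockData(parent_hash=parent_hash, state_root="0x" + "00" * 32)

    def execute_block(self, block: Eth1BlockData) -> bool:
        # Re-execute the transactions and check the resulting state root.
        return block.parent_hash is not None

class Eth2Client:
    """Drives consensus; delegates all execution to the attached engine."""
    def __init__(self, engine: Eth1Engine):
        self.engine = engine

    def on_proposal_duty(self, head_eth1_hash: str) -> Eth1BlockData:
        # Proposer duty: ask the engine for eth1 data to embed in the beacon block.
        return self.engine.assemble_block(head_eth1_hash)

    def on_block_received(self, eth1_data: Eth1BlockData) -> bool:
        # Validation: the engine decides whether the eth1 payload is valid.
        return self.engine.execute_block(eth1_data)
```

The point of the shape is that neither component needs the other's internals: the eth2 client never touches eth1 state, and the eth1-engine never sees consensus messages.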

See this post for a high-level view of the division of concerns: Eth1+eth2 client relationship


The current proposal makes a lot of sense for developers and for the Ethereum architecture in general. An ETH1 shard is effectively a new entity type, and creating new types is usually not a good idea. We already have two types: beacon and shard. Folding this new type into an existing one is the obvious move and eliminates the future pain of managing and supporting a special ETH1 shard.
Moreover, merging into the beacon chain will protect the ETH1 state data. That will matter at the moment PoW miners start to leave the ETH1 chain and its hash rate drops dramatically.

Is this only an issue when synchronously writing to beacon state? It seems like we can get away with synchronous reads.

Reading from the beacon post-state of slot N in eth1 during slot N does seem reasonable and backwards compatible in many future designs.

Specifically, if eth1 went into a disconnected shard and read from slot N, the read would need to be staggered within the slot (e.g. 4 seconds into the slot).

The one design that cannot handle slot-N reads is executing the eth1 shard at the start of slot N, at the exact same time as the beacon chain.
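The staggering rule above can be sketched as a simple availability check. The 12-second slot length and 4-second offset follow the example in the post; they are assumptions, not spec constants for this design:

```python
# Illustrative timing check: eth1 execution may read the beacon
# post-state of the *same* slot only after the beacon block for that
# slot has been processed (assumed here to be 4 seconds in).
SECONDS_PER_SLOT = 12
BEACON_STATE_READ_OFFSET = 4  # assumption: eth1 executes 4s into the slot

def can_read_beacon_post_state(read_slot: int, current_slot: int,
                               seconds_into_slot: int) -> bool:
    if read_slot < current_slot:
        # Post-states of previous slots are always available.
        return True
    if read_slot == current_slot:
        # Same-slot reads are only safe once execution is staggered.
        return seconds_into_slot >= BEACON_STATE_READ_OFFSET
    # Future slots cannot be read.
    return False
```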

I think there are three things that need to happen for ETH2 to be viewed as reasonably secure, so people can start moving money into it.

  1. Hardware protection of crypto keys. ETH2 is starting as a network where keys are stored in plain text on AWS, meaning the entire network could be globally compromised by, say, a Linux exploit, an AWS exploit, or a rogue AWS employee.

  2. Way more testing and analysis needs to be done for adversarial insider attacks, including DoS attacks.

  3. There has to be a formal governance model that lets people transparently make proposals and vote on them. PoW networks arguably can function without a governance model, but PoS networks can't.

People who hold money on ETH1 need to be told why the beacon chain is secure and how the beacon chain is going to be governed before they move money into it.

Particularly worried about this…
Especially given the need for trustless staking pools, and poll results that suggest many hobbyists plan to use resource-restricted devices https://twitter.com/terencechain/status/1332752009162760193

Agree with this. You can definitely read beacon state, even the post-state of the previous slot. It’s the writing that we need to be more careful with.

Are deposits the only eth1 -> beacon state writes we are concerned with? If so, we could just keep the existing phase0 approach, and simply replace eth1 data voting with directly looking at the previous slot's post-state root.
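A minimal sketch of that alternative, with illustrative (non-spec) data structures: the proposer at slot N looks up the deposit root committed in the post-state of slot N-1, with no voting period or follow distance involved:

```python
# Hedged sketch of replacing phase0 eth1-data voting with a direct
# read of the previous slot's post-state. Both mappings are
# hypothetical stand-ins for real client state.
def get_eth1_deposit_root(post_state_roots: dict, deposit_roots: dict,
                          slot: int):
    """Return the deposit tree root as committed in the post-state of
    `slot - 1`; with synchronous execution this is always available."""
    parent_post_state = post_state_roots[slot - 1]
    return deposit_roots[parent_post_state]
```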

It would be good to use the opportunity of synchronous execution to get rid of redundant data in a block. IMO, we have more freedom to move back and forth at the system level than at the user level, i.e. beacon chain operations vs. opcode semantics. Changing deposit processing might affect block explorers and other infra, though.

We can get rid of most redundant data without getting rid of asynchrony. For example, instead of deposit branches being a full 1 kB, we can have a separate deposit tree for each block, so branches would on average be only ~200 bytes including the DepositData object.
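A back-of-the-envelope check of those numbers (the per-block tree depth is an assumption about how few deposits a single block carries; the depth-32 global tree matches the phase0 deposit contract):

```python
# Branch sizes for a global deposit tree vs. a per-block deposit tree.
HASH_SIZE = 32
# DepositData: pubkey (48) + withdrawal_credentials (32) + amount (8)
# + signature (96) = 184 bytes
DEPOSIT_DATA_SIZE = 48 + 32 + 8 + 96

GLOBAL_TREE_DEPTH = 32  # phase0 deposit contract Merkle tree depth
global_branch_bytes = GLOBAL_TREE_DEPTH * HASH_SIZE  # 1024 bytes ≈ 1 kB

PER_BLOCK_TREE_DEPTH = 1  # assumption: ~2 deposits in the block's own tree
per_block_branch_bytes = PER_BLOCK_TREE_DEPTH * HASH_SIZE + DEPOSIT_DATA_SIZE
# 32 + 184 = 216 bytes, i.e. roughly the ~200 bytes quoted above
```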

Or we could not bother doing this optimization work at all, to minimize the complexity of the transition, as plenty of people want the transition to happen quickly and safely… it would be interesting to see how much complexity is added from even this per-block deposit tree change.


Really cool proposal! This removes some of the dependency complexity between
phase 1 and the merge.


And what about making that eth1 block computation optional? Light eth2 validators could keep validating as they do now, introducing empty eth1 block data, while heavy validators could be incentivized to also validate eth1. I don't know the implications of this idea; it's just a thought I had.


Optional computation would make eth1 only partially secured by eth2 stake which is undesirable.

I’m not sure anyone would actually want to propose an empty block and leave that revenue on the table when they could instead outsource the assembly of the block to someone else and keep the fee revenue. A validator’s ETH1 light client should be enough to verify that a block that someone else has assembled is valid before signing off on it. There should be no shortage of people volunteering to assemble a block and collect all that juicy Miner Extractable Value.

This ends up looking sort-of like a kind of Stateless Ethereum, except that instead of the state proofs being supplied by either the transactor or the validator, they’re supplied by this new kind of “block assembler”. (I’m not sure if this role already has some kind of existing name in the MEV Auction discussions.)

As @mkalinin says when discussing this as an “eth1 nodes market” it’s probably not disastrous if this function is less decentralized than validation, provided the hurdle is low enough that people who want to do it can, and the validators do actually bother to validate.
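A minimal sketch of that proposer flow, under the assumption that assemblers attach a fee bid and the validator runs its own (light) eth1 verification before signing; every name here is illustrative, not an existing API:

```python
# Hypothetical "block assembler" market: the proposer picks the
# highest-paying externally assembled block that its own eth1
# verification accepts, and only falls back to an empty block if
# nothing verifies.
def propose(assembled_blocks, verify_eth1_block, sign):
    for block in sorted(assembled_blocks, key=lambda b: b["fee"], reverse=True):
        if verify_eth1_block(block):
            # Validators "do actually bother to validate" before signing.
            return sign(block)
    return None  # no valid candidate: propose an empty eth1 payload
```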


I notice that EIP1559 already proposes to change the block hash format:

The datastructure that is passed into keccak256 to calculate the block hash is changing, and all applications that are validating that blocks are valid or are using the block hash to verify block contents will need to be adapted to support the new datastructure (one additional item). If you only take the block header bytes and hash them you should still correctly get a hash, but if you construct a block header from its constituent elements you will need to add in the new one at the end.

https://eips.ethereum.org/EIPS/eip-1559#block-hash-changing

I suppose it's just possible that there is some application for which this breakage doesn't matter but substituting a random number would, i.e. their contract wants to validate some piece of data earlier in the hash and expects the rest to be arbitrary bytes.

However, if it looks like EIP-1559 is going forward, this might be a good opportunity to tell people not to rely on reconstructing the block hash.


I think this description is a little incomplete since the winning tip may change from time to time due to reorgs.

Isn't it more correct to say:

“When a validator is due to propose a beacon block, it asks the eth1-engine to create eth1 data. The call specifies the branch to build on.”

?

The call has a parent_hash param to specify the branch.


We have tested this solution; it is one of the most promising!
120M gas limit -> ~970 TPS (tx/s)
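A rough sanity check on those numbers, assuming simple 21,000-gas transfers; the implied ~6-second block interval is an inference about the test setup, not something stated in the post:

```python
# Back-of-the-envelope throughput check (assumptions: every tx is a
# plain 21,000-gas transfer; reported throughput is ~970 TPS).
GAS_LIMIT = 120_000_000
GAS_PER_TRANSFER = 21_000
REPORTED_TPS = 970

txs_per_block = GAS_LIMIT // GAS_PER_TRANSFER   # 5714 transfers per block
implied_block_time = txs_per_block / REPORTED_TPS  # ≈ 5.9 seconds per block
```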

Networking (block synchronization) is done only on the eth2 side.
The tx pool is shared on the eth1 side.


We were not able to fill the txpool constantly, so we used chainhammer + chaindriller.


Chainhammer signs transactions on a node (insecure exposure of eth1) and uses contract calls.
Chaindriller signs transactions in memory with simple from <-> to transfers.

More logs and details are described in the medium post.

If anyone is interested, you can spin it up in Google Cloud (GKE) from this repository: