Open Research Questions For Phases 0 to 2

This definitely is the thing I’m concerned most about at the moment! Though I’m happy that there’s work being started on this in an eth1 context.

I feel like light clients are solved! The general pattern (know a block header at time T, use it to download a committee at time T, use that committee to verify signatures from time T + k, those signatures point to a block header at time T + k) has been known since 2015; even the concept of using committees instead of the whole validator set was known then (see "the light client can even probabilistically check the signatures, picking out a random 80 signers and requesting signatures for them specifically…"). The concrete protocol for doing this in eth2 is basically ready; the only thing still being worked on is a simplification made possible by the latest design of committing to compact committee roots.
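The pattern above can be sketched in a few lines. This is purely illustrative: the names (`Header`, `fetch_committee`, `fetch_update`, `verify_aggregate`) are hypothetical placeholders, not the actual eth2 API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Header:
    slot: int
    state_root: bytes  # commits to the validator/committee state

def advance(trusted: Header, k: int, fetch_committee, fetch_update, verify_aggregate) -> Header:
    """Advance a light client's trust anchor from time T to time T + k.

    1. Use the trusted header at T to authenticate the committee at T
       (the committee is provable against trusted.state_root).
    2. Ask a (possibly untrusted) server for signatures from T + k.
    3. Verify the aggregate signature against that committee; the
       signed message commits to the header at T + k, which becomes
       the new trust anchor.
    """
    committee = fetch_committee(trusted)
    new_header, aggregate_sig, signer_bits = fetch_update(trusted.slot + k)
    assert verify_aggregate(committee, signer_bits, new_header, aggregate_sig)
    return new_header
```

Repeating this step lets the client skip forward through time with only one committee download per hop, never processing full blocks.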

The thing that I think does need more eyes is the market between light clients and light client servers, and making sure that it can work efficiently and usably (including the first-time-joiner experience) while minimizing centralization risk.

Real-World Runtime and Other Costs

Benchmarking of the beacon chain is definitely being worked on, and I remember a result that clients can process a worst-case epoch transition within a single slot. I agree aggregation bandwidth is likely the biggest risk.

While the stateless client model certainly removes the need to do state reads/writes in order to process transactions, it does have substantially higher 1) bandwidth and 2) processing requirements (as many cryptographic hashes must be performed in order to validate Merkle proofs). The costs of this are also unknown.

Definitely not unknown! An implementation of binary tree multi-proofs has been made and benchmarked; the general conclusion is that it validates the heuristic that the length of a Merkle multi-proof of k nodes in an N-node tree is k + k*log(N/k) hashes.
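The heuristic is easy to sanity-check against an exact count. A minimal sketch (my own illustration, not the benchmarked implementation): for a complete binary tree, walk the leaf set up level by level and count every sibling hash that must be supplied because it cannot be recomputed from the leaves themselves.

```python
def multiproof_hashes(leaf_indices, depth):
    """Exact number of sibling hashes a Merkle multi-proof must supply
    for the given leaves in a complete binary tree of 2**depth leaves.
    (The heuristic k + k*log(N/k) additionally counts the k leaves.)"""
    level = set(leaf_indices)
    supplied = 0
    for _ in range(depth):
        parents = set()
        for node in level:
            # A sibling already in the proven set is recomputed, not supplied.
            if node ^ 1 not in level:
                supplied += 1
            parents.add(node >> 1)
        level = parents
    return supplied
```

For a single leaf in a 1024-leaf tree this gives the familiar 10 sibling hashes; for adjacent leaves the shared path makes the proof cheaper than two independent proofs, which is exactly what the k*log(N/k) term captures.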

Finding Slashable Attestations

Agree! Though I’m not too worried about this because even a very inefficient and flawed implementation would likely be sufficient to ensure that slashed validators get caught. If violators never get caught, all that happens is that we’re back to an honest majority model.
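The conditions such a detection service scans for are simple to state. A minimal sketch of the two Casper FFG slashing conditions (illustrative dataclass; real attestations also commit to block roots and carry signatures):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttestationData:
    source_epoch: int
    target_epoch: int

def is_slashable(a: AttestationData, b: AttestationData) -> bool:
    """True if a validator signing both a and b is slashable:
    - double vote: two distinct attestations with the same target epoch
    - surround vote: one attestation's source/target span strictly
      contains the other's
    """
    double = a != b and a.target_epoch == b.target_epoch
    a_surrounds_b = a.source_epoch < b.source_epoch and b.target_epoch < a.target_epoch
    b_surrounds_a = b.source_epoch < a.source_epoch and a.target_epoch < b.target_epoch
    return double or a_surrounds_b or b_surrounds_a
```

Even a naive service that compares each new attestation against a per-validator history using this predicate would catch violators eventually, which is all the backstop requires.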

It does not consider adaptive corruption of validators through bribing, potentially with Dark DAOs (which other protocols, such as Algorand, attempt to address, though unsuccessfully).

We have a mechanism to provide a backstop in the case of a corrupted committee, namely fraud proofs and data availability proofs. Here’s a draft PR for data availability proofs, which does need to be edited to take into account updates to the crosslink structure. Much of the discussion around crosslink data structure has been about preserving fraud proof friendliness.
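The reason random sampling gives a strong backstop is a simple back-of-the-envelope calculation (illustrative, not the mechanism in the draft PR): with rate-1/2 erasure coding, an attacker must withhold at least half of the extended data to make the block unrecoverable, so each independent random sample hits a missing chunk with probability at least 1/2.

```python
def miss_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    """Probability that all `samples` independent random chunk queries
    land on available chunks even though `withheld_fraction` of the
    extended data is missing, i.e. the chance an unavailable block
    goes undetected by this client."""
    return (1.0 - withheld_fraction) ** samples
```

So a client making ~30 samples pushes its undetected-unavailability probability below one in a billion, which is why light clients can get strong availability guarantees with tiny bandwidth.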

Though the implementation-level work on making these things work has definitely not started yet!

Privacy Considerations

I agree! I feel like we know the answers in theory (ZK ZK Rollup, ZEXE…) though in practice the implementers are just starting to get their tech off the ground, eg. see recent work on mixers and auxiliary infrastructure.

Formal Proofs and Justifications

Agree that we need more of this, and this is also something I’m disappointed we have not made more progress on!
