Sticking to 8192 signatures per slot post-SSF: how and why

I have a question related to approach 1 (4096 validators) and how it relates to your sharding and DAS proposal post (and danksharding more generally).

Given that 4096 fits in a single committee, would all nodes be forced to download all the data? Or would a Reed-Solomon encoding scheme be used, as in EigenDA, where each node downloads only a fraction of the RS-encoded chunks?
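To make the question concrete, here is a toy sketch of the EigenDA-style option (the field size, chunk counts, and systematic encoding are made up for illustration; real DAS also commits to the data with KZG so sampled chunks can be verified):

```python
import random

P = 65537  # toy prime field; real systems work over the BLS12-381 scalar field

def lagrange_eval(points, x):
    """Evaluate the unique degree-(k-1) polynomial through `points` at x."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def rs_extend(data, n):
    """Systematic Reed-Solomon: treat the k data symbols as evaluations
    at x = 0..k-1 and extend to n chunks. Any k chunks reconstruct."""
    pts = list(enumerate(data))
    return [lagrange_eval(pts, x) for x in range(n)]

data = [12, 345, 678, 910]                    # k = 4 data symbols
chunks = list(enumerate(rs_extend(data, 8)))  # n = 8 chunks, rate 1/2
# Each node downloads only a few chunks; any k = 4 gathered from across
# the network are enough to reconstruct the full data:
sample = random.sample(chunks, 4)
assert [lagrange_eval(sample, x) for x in range(4)] == data
```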

Does @vbuterin usually follow up in these threads? I've also got a bunch of other questions but don't want to be screaming into the void.

Elliptic curve addition is a pretty basic primitive, and those core primitives have been studied for years. So the optimizations are likely to come from the aggregation layer, for example via caching aggregates (an engineering problem).
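By caching aggregates I mean something like the following sketch, assuming signatures are grouped into fixed committees. Since BLS aggregation is just group addition, integers mod a prime stand in for G2 points here (an assumption purely for illustration):

```python
P = 2**255 - 19  # toy modulus standing in for the group; not a real group order

class CachedAggregator:
    def __init__(self, committees):
        # committees: list of lists of signatures (toy field elements)
        self.partials = [sum(c) % P for c in committees]

    def total(self):
        # Recombining cached partials costs O(#committees) additions,
        # not O(#signatures)
        return sum(self.partials) % P

    def update_committee(self, idx, new_sigs):
        # A change in one committee re-aggregates only that committee
        self.partials[idx] = sum(new_sigs) % P

agg = CachedAggregator([[1, 2, 3], [4, 5], [6]])
assert agg.total() == 21
agg.update_committee(1, [4, 5, 100])  # only this partial is redone
assert agg.total() == 121
```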

That said, there is interesting research on large-scale BLS aggregation; see the Web3 Foundation's "Accountable Light Client Systems for Secure and Efficient Bridges" (Research at W3F).

And also RecProofs from Lagrange Labs, as aggregation in SNARKs is very expensive: https://www.lagrange.dev/recproofs

So zkBridge teams and coprocessor teams that want to prove Casper in ZK (zkCasper) likely have some interesting optimizations to reduce the amount of work.
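The core trick in RecProofs, roughly, is to arrange the aggregation as a tree so that updating one signature only re-proves a logarithmic path rather than redoing everything. A toy analogy, with hashes standing in for recursive proofs (an assumption for illustration; RecProofs actually carries a proof at each node):

```python
import hashlib

def node(left, right):
    return hashlib.sha256(left + right).digest()

class AggTree:
    """Binary aggregation tree: each internal node depends only on its
    two children, so updating one leaf recomputes O(log n) nodes."""
    def __init__(self, leaves):            # len(leaves): a power of two
        self.n = len(leaves)
        self.nodes = [b""] * self.n + list(leaves)
        for i in range(self.n - 1, 0, -1):
            self.nodes[i] = node(self.nodes[2 * i], self.nodes[2 * i + 1])

    def root(self):
        return self.nodes[1]

    def update(self, leaf_idx, leaf):      # log2(n) recomputations
        i = self.n + leaf_idx
        self.nodes[i] = leaf
        while i > 1:
            i //= 2
            self.nodes[i] = node(self.nodes[2 * i], self.nodes[2 * i + 1])

tree = AggTree([bytes([b]) * 32 for b in range(8)])
tree.update(3, b"\x42" * 32)  # only 3 internal nodes redone, not 7
```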

Now this is something I'm quite interested in, and if people want benchmarks on various hardware, I can update the one I did for @asn for Devcon VI (Batch additions by mratsim · Pull Request #207 · mratsim/constantine · GitHub).

I'll be happy to build any cryptographic optimizations into Constantine for actual measurements.

On my Ryzen 7840U (low-power, 15 W, 8 cores, laptop CPU), individual EC add with various coordinate systems:
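For intuition on why the coordinate system matters: affine addition pays a field inversion for the slope, while Jacobian coordinates trade that inversion for a handful of extra multiplications. A minimal sketch on a toy curve y^2 = x^3 + 3 over F_101 (my stand-in, not Constantine's code; doubling and point-at-infinity cases are omitted):

```python
p = 101

def inv(x):            # field inversion: the expensive operation
    return pow(x, -1, p)

def add_affine(P, Q):  # assumes P != +-Q
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * inv(x2 - x1) % p        # one inversion per add
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def add_jacobian(P, Q):  # (X, Y, Z) with x = X/Z^2, y = Y/Z^3; P != +-Q
    X1, Y1, Z1 = P
    X2, Y2, Z2 = Q
    U1, U2 = X1 * Z2**2 % p, X2 * Z1**2 % p
    S1, S2 = Y1 * Z2**3 % p, Y2 * Z1**3 % p
    H, R = (U2 - U1) % p, (S2 - S1) % p       # no inversion at all
    X3 = (R**2 - H**3 - 2 * U1 * H**2) % p
    Y3 = (R * (U1 * H**2 - X3) - S1 * H**3) % p
    return (X3, Y3, Z1 * Z2 * H % p)

def to_affine(P):      # one inversion, paid once at the very end
    X, Y, Z = P
    zi = inv(Z)
    return (X * zi**2 % p, Y * zi**3 % p)

G, H = (1, 2), (68, 74)  # H = 2G on this toy curve, precomputed
assert add_affine(G, H) == to_affine(add_jacobian((*G, 1), (*H, 1)))
```

On a 381-bit field an inversion costs on the order of hundreds of multiplications, which is roughly why projective-style formulas win for individual adds.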

And with batch addition, serial and parallelized:
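My reading of the batch-addition technique benchmarked here: stay in affine coordinates, but share a single field inversion across the whole batch using Montgomery's trick. A sketch on the same toy curve as above (edge cases like doubling and infinity omitted):

```python
p = 101  # same toy field and curve (y^2 = x^3 + 3) as the previous sketch

def batch_inv(xs):
    """Montgomery's trick: invert n nonzero elements with one inversion."""
    prefix = [1] * (len(xs) + 1)
    for i, x in enumerate(xs):
        prefix[i + 1] = prefix[i] * x % p
    acc = pow(prefix[-1], -1, p)              # the only field inversion
    out = [0] * len(xs)
    for i in range(len(xs) - 1, -1, -1):
        out[i] = prefix[i] * acc % p
        acc = acc * xs[i] % p
    return out

def batch_add(points):
    """Sum distinct affine points, halving the list each round; each
    round shares a single inversion across all pairs."""
    while len(points) > 1:
        carry = [points.pop()] if len(points) % 2 else []
        half = len(points) // 2
        pairs = list(zip(points[:half], points[half:]))
        denoms = batch_inv([(x2 - x1) % p for (x1, _), (x2, _) in pairs])
        nxt = []
        for ((x1, y1), (x2, y2)), d in zip(pairs, denoms):
            lam = (y2 - y1) * d % p
            x3 = (lam * lam - x1 - x2) % p
            nxt.append((x3, (lam * (x1 - x3) - y1) % p))
        points = nxt + carry
    return points[0]

# G, 2G, 3G, 4G on the toy curve; their sum is 10G = (91, 66)
assert batch_add([(1, 2), (68, 74), (26, 45), (65, 98)]) == (91, 66)
```

Amortized, each point then costs a few multiplications plus a vanishing share of one inversion, and the parallel version presumably splits the batch across cores.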

I think at 1.3 ms single-threaded, the cryptography is plenty fast, and the delay will come from the aggregators' topology and networking; see Signature Merging for Large-Scale Consensus.
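A quick back-of-envelope of why topology and networking dominate (the fan-out and per-hop latency below are assumptions, not measurements; only the 1.3 ms figure comes from the benchmark above):

```python
import math

signers, fanout = 8192, 64
hops = math.ceil(math.log(signers, fanout))  # aggregation tree depth: 3
# ~100 ms assumed gossip latency per hop vs ~1.3 ms of batch addition
print(f"network ~{hops * 100} ms vs crypto ~{hops * 1.3:.1f} ms")
```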
