Yes, initially I did not realize this was only for validators, not just general signature aggregation, which changes the situation, as you say.

# Pragmatic signature aggregation with BLS

**burdges**#22

Just fyi, Dan Boneh's reference indicated using Wagner's generalized birthday problem algorithm, which looks like L[1/2]. It's slower than cracking RSA but not slow enough for these curve sizes.

**lovesh**#23

Rust implementation of BLS signatures from *Compact Multi-Signatures for Smaller Blockchains* by Dan Boneh, Manu Drijvers, and Gregory Neven. It supports single-signature and aggregate-signature verification, uses the Apache Milagro crypto library, and targets the BLS12-381 curve.

**YaDavid**#24

Won't the verification process need to verify every single signature prior to the aggregated signature verification? Otherwise, anyone could publish a faulty signature to fail the verification of the aggregated signature.

**JustinDrake**#25

> Won't the verification process need to verify every single signature prior to the aggregated signature verification?

The onchain signature verification process is a single signature verification, but you are right that the process of aggregating signatures offchain (which is an implementation detail for clients, not strictly part of the consensus protocol) may require verifying more than one signature.

There's an optimistic strategy that avoids verifying every single signature, as follows. Let's assume there's a leader (e.g. the next beacon proposer) that wants to aggregate up to 1,024 signatures. (Note that aggregation can be parallelised across several participants with more advanced strategies, e.g. using incremental aggregation.) That leader maintains semi-trusted network connections with peers that haven't forwarded invalid individual signatures in the past, so he expects some fraction (say, 99.9%) of signatures he receives to be valid. In other words, within the ≤1,024 signatures to be aggregated, he's expecting ~1 to be invalid.

The leader then aggregates all signature candidates and verifies the aggregate signature. If he got lucky and all signature candidates are valid (which should happen most of the time), then he aggregated the signatures with a single verification. If not, he starts a binary search to identify the 1+ invalid signature candidates. If there's exactly 1 invalid signature, that involves ≤ log₂(1,024) = 10 signature verifications. If there are exactly 2 invalid signatures, that involves ≤ 2·log₂(1,024) = 20 signature verifications, although in expectation it's a bit less.
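The bisection above can be sketched as follows. This is a toy model, not real BLS: "signatures" are just validity flags, and `verify_aggregate` is a hypothetical oracle that checks a whole batch at once. Note that this naive version verifies both halves at every split, so it uses about 2·log₂(n) checks to isolate a single fault rather than the log-sized bound above; a smarter implementation can subtract the aggregate of a half already verified as good.

```python
def find_invalid(indexed_sigs, verify_aggregate):
    """Binary-search a batch for invalid signatures.

    `indexed_sigs` is a list of (original_index, signature) pairs so
    that recursive halves remember their positions in the batch.
    """
    sigs = [s for _, s in indexed_sigs]
    if verify_aggregate(sigs):
        return []                       # whole batch valid: one verification
    if len(indexed_sigs) == 1:
        return [indexed_sigs[0][0]]     # isolated one invalid signature
    mid = len(indexed_sigs) // 2
    return (find_invalid(indexed_sigs[:mid], verify_aggregate)
            + find_invalid(indexed_sigs[mid:], verify_aggregate))

# Toy harness: an aggregate "verifies" iff every member flag is valid,
# and we count how many aggregate verifications were performed.
verifications = 0

def verify_aggregate(batch):
    global verifications
    verifications += 1
    return all(batch)

sigs = [True] * 1024
sigs[137] = False                       # plant one invalid signature
bad = find_invalid(list(enumerate(sigs)), verify_aggregate)
print(bad)                              # → [137]
print(verifications)                    # → 21 (1 root + 2 per split level)
```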

In the worst case every single signature the leader receives is invalid and he has to verify an unbounded number of signatures. But that type of DoS already exists in Ethereum today and (as I understand) is mitigated with semi-trusted peer connections.

**YaDavid**#26

I'm trying to understand why this assumption is justified. Is there slashing for publishing invalid individual signatures or any other disincentivization mechanism for such behaviour?

Since the verification of individual signatures is done off-chain, there should be some mechanism that allows a leader to "notify" the on-chain contract that a specific signature is invalid. The smart contract could then verify the claim (one signature verification) and slash the malicious actor.
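As a rough sketch of that suggested flow (entirely hypothetical: `bls_verify` stands in for a real pairing check, and no such contract exists in the spec), the contract-side logic could look like:

```python
# Hypothetical sketch: a leader submits an alleged invalid
# (pubkey, message, signature) triple; the contract re-verifies that
# single signature and slashes the signer only if the claim holds.

def process_invalidity_claim(claim, deposits, bls_verify, penalty=1):
    pubkey, message, signature = claim
    if bls_verify(pubkey, message, signature):
        return False             # signature was actually valid; claim rejected
    deposits[pubkey] -= penalty  # claim confirmed: slash the signer's deposit
    return True
```

Note that checking a claim costs the contract exactly one signature verification, so the on-chain cost is bounded regardless of batch size.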

**JustinDrake**#28

We don't have plans for slashing conditions at the protocol level. I guess it could be implemented at the application layer on an opt-in basis, possibly dual-purposing the authentication of the p2p networking layer.

The main disincentivisation mechanism I know of is a primitive reputation system. The default relaying policy would be to verify signature candidates before relaying. So if a node (identified for example by its IP or its address as a validator) misbehaves it gets placed in blacklists and ostracised, and nodes that behave over long periods grow their reputation.

The above reputation system with heuristics can help determine the probability that an individual signature is valid. For example, a signature candidate that is received from multiple semi-trusted peers in the gossip network may be safe enough to optimistically aggregate. A signature candidate received from a single peer with a fresh IP may be treated suspiciously and verified individually.
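One way to sketch that heuristic (my own framing for illustration; the thresholds, peer identifiers, and class are all made up):

```python
from collections import defaultdict

class RelayPolicy:
    """Toy reputation tracker: per-peer validity history decides whether a
    signature candidate is safe to aggregate without an individual check."""

    def __init__(self, min_reputation=0.999, min_peers=2):
        self.stats = defaultdict(lambda: [0, 0])  # peer -> [valid, total]
        self.min_reputation = min_reputation
        self.min_peers = min_peers

    def record(self, peer, was_valid):
        v, t = self.stats[peer]
        self.stats[peer] = [v + was_valid, t + 1]

    def reputation(self, peer):
        v, t = self.stats[peer]
        return v / t if t else 0.0      # a fresh peer has no reputation

    def optimistic_ok(self, relaying_peers):
        """Aggregate without an individual verification only if enough
        peers that have (almost) never relayed junk forwarded it."""
        trusted = [p for p in relaying_peers
                   if self.reputation(p) >= self.min_reputation]
        return len(trusted) >= self.min_peers
```

Under this sketch, a candidate seen from two long-lived clean peers is aggregated optimistically, while one seen only from a fresh peer falls back to individual verification.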

Another idea is to leverage validator honesty at the networking layer. The protocol already assumes that 1/2 (or even 2/3) of the validators are honest, and the relaying policy can be enshrined as part of that assumption. I think Dfinity will release their networking whitepaper soon. As I understand it, they will use bulletproofs to implement something like a ring signature to simultaneously make use of the honesty assumption and preserve validator privacy.