Pragmatic signature aggregation with BLS

Got a reference for that? I can’t find any info searching online.


Curious for some clarification: how does the legacy main chain (the current chain) relate to the proposed beacon chain?

Dan Boneh didn’t file BLS patents, and is not aware of patents on BLS.

Sorry, I don’t think it’s online yet. The info came from discussions with Eli Ben-Sasson. As I understand it, 80kB is the size StarkWare got for the equivalent of a single Zcash transaction. There are a couple of factors that may increase the size to the 100kB-200kB range for us:

  1. The size of a STARK scales roughly logarithmically with the length of the computation, and our aggregate signatures do more computation (see the rough arithmetic after this list).
  2. There’s a tradeoff between size and prover/verifier time, and it may be favourable to increase proof sizes a bit.
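
As a rough illustration of point 1 (the symbols a, b and T are mine, purely for illustration): if the proof size is approximately a + b \cdot \log_2 T for a computation of length T, then a trace that is, say, 8 times longer than a single Zcash transaction adds only about 3b to the proof size, which is why a much longer aggregate-signature computation does not by itself blow the 80kB figure up by a large factor.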

The beacon chain is a sidechain to the legacy chain in the sense that:

  1. Beacon chain blocks contain a reference to a legacy chain block.
  2. If the legacy chain reorgs then the beacon chain must reorg accordingly.

At some point in the future the legacy chain can become a sidechain to the beacon chain.

That’s great that he explicitly says that.

Rob and/or Alistair noticed this proposal uses only pk_i^{H(pk_i)} while the paper uses pk_i^{H(pk_i, \{ pk_1, ... pk_n \})}. In the paper, the security proof models H as a random oracle, so the version presented here sounds fine, assuming ROM, but…

Random oracles do not actually exist, so one normally implements them as if they were a PRF by giving them a nonce/key. This pk_1, ..., pk_n is that key.

Afaik, there should be no real performance penalty, or implementation issues, arising from adding the pk_1, ..., pk_n here. In principle, one could use another key instead, but most good alternatives come with implementation headaches, à la using the most recent block or whatever.
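
To make the difference concrete, the reviewed scheme computes the coefficients and the aggregates as (my transcription of the paper’s notation)

t_i \leftarrow H(pk_i, \{ pk_1, \ldots, pk_n \}), \qquad apk \leftarrow \prod_{i=1}^{n} pk_i^{t_i}, \qquad \sigma \leftarrow \prod_{i=1}^{n} \sigma_i^{t_i},

so the full key set \{ pk_1, \ldots, pk_n \} is exactly that key/nonce, whereas t_i \leftarrow H(pk_i) leaves H unkeyed.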

this proposal uses only pk_i^{H(pk_i)} while the paper uses pk_i^{H(pk_i, \{ pk_1, ... pk_n \})}. In the paper, the security proof models H as a random oracle, so the version presented here sounds fine

Well spotted, and thanks for bringing this up!

I also thought that we didn’t need the pk_1, ..., pk_n in H. (Partly because the blog post prior to peer review didn’t include them.) It turns out—somewhat expectedly—that cutting corners on the reviewed paper was a bad idea. :joy: In an email conversation Dan Boneh writes: “there is a sub-exponential time attack on the resulting system, described on page 12 of the Maxwell et al. paper”.

As I see it, adding the pk_1, ..., pk_n in H is a significant slowdown because the public key exponentiations have to be redone (i.e. they cannot be cached). The good news is that we do not need the 2018 BLS paper at all, i.e. it suffices to do plain BLS aggregation with proofs-of-possession at registration to address the rogue public key attack.

As I understand, a proof-of-possession is simply the BLS signature of the public key, but the hash function for the proof-of-possession must be different to the one used for signing messages. Using proofs-of-possession has the added advantage that the aggregation scheme is even simpler than the one presented in the original post.
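
To illustrate how simple the proof-of-possession variant could be, here is a minimal sketch. The primitives G2_GEN, g1_add, g1_mul, g2_add, g2_mul, pairing, hash_to_g1 and serialize are hypothetical placeholders (not any particular library’s API), and the placement of keys in G2 and signatures in G1 is just an assumption for the sketch:

```python
# Minimal sketch of BLS aggregation with proofs of possession (PoPs).
# Assumed primitives (not from any specific library):
#   G2_GEN                   -- generator of G2 (public keys live in G2 here)
#   g1_add, g1_mul           -- group addition / scalar multiplication in G1
#   g2_add, g2_mul           -- group addition / scalar multiplication in G2
#   pairing(p1, p2)          -- pairing e: G1 x G2 -> GT
#   hash_to_g1(domain, data) -- hash-to-curve into G1
#   serialize(pk)            -- canonical encoding of a public key

def keygen(sk):
    # Public key is the secret scalar times the G2 generator.
    return g2_mul(G2_GEN, sk)

def prove_possession(sk, pk):
    # PoP = BLS signature over the signer's own public key, using a hash
    # domain distinct from the one used for messages (as noted above).
    return g1_mul(hash_to_g1(b"POP", serialize(pk)), sk)

def verify_possession(pk, pop):
    # Checked once, at registration time.
    return pairing(pop, G2_GEN) == pairing(hash_to_g1(b"POP", serialize(pk)), pk)

def sign(sk, msg):
    return g1_mul(hash_to_g1(b"SIG", msg), sk)

def aggregate(sigs):
    # Plain aggregation: no per-key exponent t_i once PoPs are verified.
    agg = sigs[0]
    for s in sigs[1:]:
        agg = g1_add(agg, s)
    return agg

def verify_aggregate_same_message(agg_sig, pks, msg):
    # All validators signed the same message, so public keys are simply
    # added and a single pairing equation is checked.
    apk = pks[0]
    for pk in pks[1:]:
        apk = g2_add(apk, pk)
    return pairing(agg_sig, G2_GEN) == pairing(hash_to_g1(b"SIG", msg), apk)
```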

In fact, the blog post does use pk_1, \ldots, pk_n when it writes t_1, \ldots, t_n \leftarrow H(pk_1, \ldots, pk_n). It just folds the pk_i into the list and distinguishes by output stream location.

I’m surprised about there being an attack myself, but maybe I have not read the paper closely enough. Afaik, there is no way to avoid the public key exponentiation anyway, since the verifier must check that too.

Is there a reference for this proof-of-possession thing? It’s just adding the BLS signature on themselves? I’d need to think about the attack when using the same hash function.

the blog post does use pk_1, \ldots, pk_n when it writes t_1, \ldots, t_n \leftarrow H(pk_1, \ldots, pk_n)

Oh right! I misread that :slight_smile:

Afaik, there is no way to avoid the public key exponentiation anyway, since the verifier must check that too.

Yes but it would be a one-time cost, and the verification could have been done at registration by the blockchain at no cost to verifiers.

Is there a reference for this proof-of-possession thing? It’s just adding the BLS signature on themselves? I’d need to think about the attack when using the same hash function.

See this paper. On page 4 it states: “We show that the standardized POP mechanism described above, when applied to these schemes, does not lead to secure multisignatures. Both schemes fall to rogue-key attacks despite the use of the standardized POPs. We present a straightforward and natural fix for this problem: simply use separate hash functions for POPs and multisignatures.”

Thanks for the reference!

I think this depends on what you mean by blockchain: if you mean the whole network, then yes; but if accounts are single-use then, amortized, this gives the same cost. If you mean some smallish set, then no, because an adversary could corrupt that entire set and then submit false transactions.

I figured it out, probably. We already can aggregate the proofs of possession because they are on different messages, so this should all be fine in the end.

Aggregation over different messages is not as efficient because you need one pairing per message, but that’s better than two pairings per message.
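
To spell out the pairing counts: verifying an aggregate \sigma = \prod_i \sigma_i over distinct messages m_1, \ldots, m_n means checking

e(\sigma, g) = \prod_{i=1}^{n} e(H(m_i), pk_i),

i.e. n + 1 pairings, whereas verifying each \sigma_i separately via e(\sigma_i, g) = e(H(m_i), pk_i) costs 2n pairings.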

Actually, one exponentiation per key for Dan’s scheme should be dramatically faster though.

I’m also wondering if the exponents need to be big. In Dan’s paper, they must be big because of the reduction to co-CDH, but intuitively 128 bits sounds sufficient so one wonders.

I don’t understand the concern of your last two replies. Are you worried about the costs of registering a new BLS public key?

Accounts are not at all single use. Keep in mind that the deregistration period of a validator will be ~4 months, and that every validator is invited to make a signature every ~5 seconds for attestations. So validator accounts will likely be making millions of BLS signatures in their lifetime with the same public key.
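
(Rough arithmetic: 4 months ≈ 4 × 30 × 86,400 ≈ 10^7 seconds, and one signature per ~5 seconds gives on the order of 2 million signatures per public key.)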

I don’t understand this sentence. :slight_smile: Set of what? What false transactions?

Right, we could aggregate proofs of possession for ~2x reduced verification costs. But is the extra complexity worth it? Without aggregation each proof of possession takes ~5ms to verify so even assuming conservatively 100,000 registrations per month that’s ~4 minutes of verification time saved per month.
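
(Spelling out that estimate: 100,000 × 5ms = 500s ≈ 8.3 minutes of individual proof-of-possession verification per month, and a ~2x speedup from aggregation saves roughly half of that, i.e. ~4 minutes.)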

The one-time proof of possession might be slower to verify than an exponentiation, but then every single signature (of which there are millions per public key) would be faster to verify.

I had not quite understood if registration was even the right model.

I see. We’re not even talking about account keys, but validator’s signatures. In that case, there isn’t so much difference between the threat models anyways, so probably fine.

I’m asking about corrupting the entire validator set, or maybe just 2/3rds, entering rogue keys for large accounts that rarely move, and much later stealing the balances from the target accounts. It’s kind of like a long-term double-spend attack, but with a simpler payoff and an arbitrary delay that likely increases its viability.

You’re not talking about aggregating transaction signatures though, and doing so must deal with different messages anyways, while only validator signatures cover the same message.

I’m confused here. How is entering rogue keys for other accounts even possible if you have to make a proof of possession at time of registration?

Rogue keys are not possible under the assumption that registration happened correctly. I’m pointing out that assumption can be violated more easily than standard cryptographic assumptions. In particular, a correct registration assumption might hold for one proof-of-stake scheme but cause problems for another one that handles the economic threats differently.

Ah, I see. I think in general registration is an unavoidable part of all of the kinds of deposit-based PoS algorithms we are using, because a signature is not even valid in a beacon chain unless the validator that made the signature has already sent a deposit transaction and been inducted into that beacon chain. So it’s totally ok to assume that registration happened correctly in our case.

Yes, initially I did not realize this was only for validators, not for general signature aggregation, which changes the situation, as you say.

Just fyi, Dan Boneh’s reference indicated using Wagner’s generalized birthday problem algorithm, which looks like L[1/2]. It’s slower than cracking RSA but not slow enough for these curve sizes.

Rust implementation for BLS signatures from Compact Multi-Signatures for Smaller Blockchains by Dan Boneh, Manu Drijvers, and Gregory Neven. It supports single signature verification and aggregate signature verification. It uses the Apache Milagro crypto library and the BLS12-381 curve.


Won’t the verification process need to verify every single signature prior to the aggregated signature verification? Otherwise, anyone could publish a faulty signature to fail the verification of the aggregated signature.

Won’t the verification process need to verify every single signature prior to the aggregated signature verification?

The onchain signature verification process is a single signature verification, but you are right that the process of aggregating signatures offchain (which is an implementation detail for clients, not strictly part of the consensus protocol) may require verifying more than one signature.

There’s an optimistic strategy that avoids verifying every single signature, as follows. Let’s assume there’s a leader (e.g. the next beacon proposer) that wants to aggregate up to 1,024 signatures. (Note that aggregation can be parallelised across several participants with more advanced strategies, e.g. using incremental aggregation.) That leader maintains semi-trusted network connections with peers that haven’t forwarded invalid individual signatures in the past, so he expects some fraction (say, 99.9%) of signatures he receives to be valid. In other words, within the ≤1,024 signatures to be aggregated, he’s expecting ~1 to be invalid.

The leader then aggregates all signature candidates and verifies the aggregate signature. If he got lucky and all signature candidates are valid (which should happen most of the time), then he aggregated the signatures with a single verification. If not, he starts a binary search to identify the 1+ invalid signature candidates. If there’s exactly 1 invalid signature, that involves ≤log(1,024) = 10 signature verifications. If there are exactly 2 invalid signatures, that involves ≤ 2*log(1,024) = 20 signature verifications, although in expectation it’s a bit less.

In the worst case every single signature the leader receives is invalid and he has to verify an unbounded number of signatures. But that type of DoS already exists in Ethereum today and (as I understand) is mitigated with semi-trusted peer connections.
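
For concreteness, here is a minimal sketch of that bisection fallback. The function verify_aggregate(pubkeys, sigs, msg) is a hypothetical stand-in for a BLS library call that checks an aggregate of the given signatures against the given public keys on a single message; it is not a specific API.

```python
# Hedged sketch of optimistic aggregation with a bisection fallback.
# verify_aggregate(pubkeys, sigs, msg) is a hypothetical placeholder for a
# real BLS aggregate-verification call.

def collect_valid(pubkeys, sigs, msg, verify_aggregate):
    """Return the (pubkey, sig) pairs whose aggregate verifies, bisecting
    only into subsets whose aggregate fails."""
    if not sigs:
        return []
    if verify_aggregate(pubkeys, sigs, msg):
        # Optimistic fast path: one aggregate check covers the whole subset.
        return list(zip(pubkeys, sigs))
    if len(sigs) == 1:
        # A failing singleton is an invalid signature candidate; drop it.
        return []
    mid = len(sigs) // 2
    return (collect_valid(pubkeys[:mid], sigs[:mid], msg, verify_aggregate) +
            collect_valid(pubkeys[mid:], sigs[mid:], msg, verify_aggregate))
```

With all 1,024 candidates valid this is a single aggregate verification. With one invalid candidate this naive version does on the order of 2*log(1,024) ≈ 20 aggregate checks rather than the 10 quoted above, because it verifies both halves at every failing node instead of inferring one of them, but it handles any number of invalid candidates with the same code.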