Yes, initially I did not realize this was only for validators rather than for general signature aggregation, which changes the situation, as you say.
Just FYI, Dan Boneh’s reference indicated using Wagner’s generalized birthday problem algorithm, which has subexponential complexity around L[1/2]. It’s slower than cracking RSA but not slow enough for these curve sizes.
A Rust implementation of BLS signatures from Compact Multi-Signatures for Smaller Blockchains by Dan Boneh, Manu Drijvers, and Gregory Neven. It supports both single signature verification and aggregate signature verification, uses the Apache Milagro crypto library, and targets the BLS12-381 curve.
Won’t the verification process need to verify every single signature prior to the aggregated signature verification? Otherwise, anyone could publish a faulty signature to fail the verification of the aggregated signature.
The on-chain signature verification process is a single signature verification, but you are right that the process of aggregating signatures off-chain (which is an implementation detail for clients, not strictly part of the consensus protocol) may require verifying more than one signature.
There’s an optimistic strategy that avoids verifying every single signature, as follows. Let’s assume there’s a leader (e.g. the next beacon proposer) that wants to aggregate up to 1,024 signatures. (Note that aggregation can be parallelised across several participants with more advanced strategies, e.g. using incremental aggregation.) That leader maintains semi-trusted network connections with peers that haven’t forwarded invalid individual signatures in the past, so he expects some fraction (say, 99.9%) of signatures he receives to be valid. In other words, within the ≤1,024 signatures to be aggregated, he’s expecting ~1 to be invalid.
The leader then aggregates all signature candidates and verifies the aggregate signature. If he got lucky and all signature candidates are valid (which should happen most of the time), then he aggregated the signatures with a single verification. If not, he starts a binary search to identify the invalid signature candidates. If there’s exactly 1 invalid signature, that involves ≤ log2(1,024) = 10 aggregate verifications. If there are exactly 2 invalid signatures, it involves ≤ 2*log2(1,024) = 20 aggregate verifications, although in expectation it’s a bit less.
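The binary search can be sketched as follows; `verify` is a hypothetical stand-in for aggregate signature verification (here a batch “verifies” iff every candidate in it is valid):

```python
# Sketch of the optimistic strategy. `verify` is a stand-in for one
# aggregate signature verification over a batch of candidates; in this
# toy model a candidate is just True (valid) or False (invalid).

def verify(batch):
    return all(batch)

def find_invalid(batch, offset=0, known_bad=False):
    """Return the indices of invalid candidates via divide and conquer
    over aggregate verifications."""
    if not known_bad and verify(batch):
        return []
    if len(batch) == 1:
        return [offset]
    mid = len(batch) // 2
    if verify(batch[:mid]):
        # Left half is all valid, so every failure is in the right half;
        # no need to re-verify it at the top of the recursive call.
        return find_invalid(batch[mid:], offset + mid, known_bad=True)
    return (find_invalid(batch[:mid], offset, known_bad=True)
            + find_invalid(batch[mid:], offset + mid))

sigs = [True] * 1024
sigs[700] = False
print(find_invalid(sigs))  # -> [700]
```

The shortcut where a passing left half implies the failure is in the right half is what keeps the single-invalid case near log2(n) aggregate verifications.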
In the worst case every single signature the leader receives is invalid and he has to verify an unbounded number of signatures. But that type of DoS already exists in Ethereum today and (as I understand) is mitigated with semi-trusted peer connections.
I’m trying to understand why this assumption is justified. Is there slashing for publishing invalid individual signatures or any other disincentivization mechanism for such behaviour?
Since the verification of individual signatures is done off-chain, there should be some mechanism that allows a leader to “notify” the on-chain contract that a specific signature is invalid. The smart contract could then verify the claim (one signature verification) and slash the malicious actor.
How do you know who produced an invalid signature?
We don’t have plans for slashing conditions at the protocol level. I guess it could be implemented at the application layer on an opt-in basis, possibly dual-purposing the authentication of the p2p networking layer.
The main disincentivisation mechanism I know of is a primitive reputation system. The default relaying policy would be to verify signature candidates before relaying. So if a node (identified for example by its IP or its address as a validator) misbehaves it gets placed in blacklists and ostracised, and nodes that behave over long periods grow their reputation.
The above reputation system with heuristics can help determine the probability that an individual signature is valid. For example, a signature candidate that is received from multiple semi-trusted peers in the gossip network may be safe enough to optimistically aggregate. A signature candidate received from a single peer with a fresh IP may be treated suspiciously and verified individually.
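As a rough illustration, such a relaying policy might look like the sketch below. All names, scores, and thresholds here are invented for illustration; they are not part of any spec.

```python
# Hypothetical relaying policy: aggregate a signature candidate
# optimistically only when enough reputable peers gossiped it;
# otherwise verify it individually first. Thresholds are made up.

def should_verify_individually(relaying_peers, reputation,
                               min_score=0.9, min_trusted=2):
    """relaying_peers: ids of peers that forwarded this candidate.
    reputation: dict mapping peer id -> score in [0, 1] (toy scale)."""
    trusted = sum(1 for peer in relaying_peers
                  if reputation.get(peer, 0.0) >= min_score)
    # Seen from several reputable peers: safe to aggregate optimistically.
    # A single peer with a fresh IP: treat suspiciously, verify first.
    return trusted < min_trusted

reputation = {"peer_a": 0.95, "peer_b": 0.97, "fresh_ip": 0.0}
print(should_verify_individually(["peer_a", "peer_b"], reputation))  # False
print(should_verify_individually(["fresh_ip"], reputation))          # True
```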
Another idea is to leverage validator honesty at the networking layer. The protocol already assumes that 1/2 (or even 2/3) of the validators are honest, and the relaying policy can be enshrined as part of that. I think Dfinity will release their networking whitepaper soon. As I understand they will use bulletproofs to implement something like a ring signature to simultaneously make use of the honesty assumption and preserve validator privacy.
For reference I’ve compiled a list of implementations.
Check the ECDSA signing API exported by the library you are using. ECDSA signing can either require a cryptographically secure random integer or be deterministic, deriving the nonce from the private key and the message being signed. Further reading on ECDSA is available in the nim-milagro-crypto wiki. For Casper we need the deterministic ECDSA.
Single message signing: unfortunately Milagro does not include deterministic ECDSA, so you will have to implement it. However, you can use @lovesh’s implementation as a base; Milagro includes all the primitives needed to make it very short. Using the C or C++ implementation requires you to manage array argument sizes and lifetimes.
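To illustrate the core idea (a simplified sketch, not a substitute for a full RFC 6979 implementation; the group order below is secp256k1’s, used only for scale): the nonce k is derived from the private key and the message, so the same inputs always produce the same signature and no RNG is involved.

```python
import hashlib
import hmac

# secp256k1 group order, used here only as an example modulus.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def deterministic_k(priv_key: int, msg: bytes) -> int:
    """Toy deterministic nonce: HMAC-SHA256(priv_key, msg) reduced mod N.
    Real code should follow RFC 6979 exactly (bits2int, retry loop, etc.)."""
    key_bytes = priv_key.to_bytes(32, "big")
    digest = hmac.new(key_bytes, msg, hashlib.sha256).digest()
    k = int.from_bytes(digest, "big") % N
    return k or 1  # k must be nonzero

# Same (key, message) pair -> same nonce, so a faulty RNG can never
# leak the private key through a repeated or biased k.
assert deterministic_k(12345, b"msg") == deterministic_k(12345, b"msg")
assert deterministic_k(12345, b"msg") != deterministic_k(12345, b"other")
```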
The Nim aggregate signature implementation is still being debugged.
If you’re looking for resources to help you make sense of ECDSA or BLS multisignatures, generator points G1 & G2 (G0 and G1 in the BLS short spec), or extension fields FP/FQ, FP2, FP12, here are a couple of resources that I find useful:
 Readings from nim-milagro-crypto wiki:
Layman’s Guide to Elliptic Curve Digital Signatures: you will find examples and 2D graphs. Key point: elliptic curve math would normally require floating point, but by using modular arithmetic we can transform all fractions into (big) integers.
Another way ECDSA signature may leak private keys is when k is generated by a faulty random number generator. Such a failure in random number generation caused users of Android Bitcoin Wallet to lose their funds in August 2013. To ensure that k is unique for each message one may bypass random number generation completely and generate deterministic signatures by deriving k from both the message and the private key.
To be read in parallel, switch to the other when stuck, it helps a lot:
BLS is a particular curve because it is pairing-friendly. The key point is that what we call an elliptic curve with a modulus is also called an elliptic curve over FP (or FQ). FP is a finite field, i.e. it has the same operations/rules as normal arithmetic, except that everything is reduced modulo the prime modulus.
And we also need extension fields of higher degree (FP2 up to FP12), which are built using a construction analogous to complex numbers.
The complex i is called “u” in the Zcash spec.
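A toy example of what FP2 arithmetic looks like (the prime below is tiny and purely illustrative; BLS12-381 uses a 381-bit prime, and its FP2 is likewise built with u² = −1):

```python
# FP is just integers mod p; FP2 extends it with an element u satisfying
# u^2 = -1 (the "i" of complex numbers, written "u" in the Zcash spec).
# An FP2 element a0 + a1*u is represented as the pair (a0, a1).

p = 23  # toy prime for illustration only

def fp2_mul(a, b):
    """(a0 + a1*u) * (b0 + b1*u) with u^2 = -1, coefficients mod p."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

# (1 + 2u)(3 + 4u) = 3 + 10u + 8u^2 = -5 + 10u, and -5 mod 23 == 18:
print(fp2_mul((1, 2), (3, 4)))  # -> (18, 10)
```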
I’ve stumbled upon Threshold Cryptography and Distributed Key Generation by Orbs.com.
It goes over elliptic curve crypto and BLS, and then introduces a Threshold BLS Signature Scheme with key generation distributed via an Ethereum smart contract (researchers: David Yakira, Ido Grayevsky, Avi Asayag).
We are using the Joint-Feldman protocol for DKG and BLS with the curve implemented in Ethereum (alt_bn128 + optimal Ate pairing). We will be happy to share the code once it is ready.
Thanks for your work @lovesh, it is much appreciated.
Primarily, I have been working on the Rust code to make it more like a standard Rust crate (in my opinion). So far I have not been able to generate a test that makes it fail.
I am not a cryptographer, so I’m not sure whether or not this implementation is safe/sane. I would really love some feedback on that.
Just published a BLS aggregation library in Java (based on Milagro): https://github.com/ConsenSys/mikuli
One thing to note: if we assume the signature is a point in the group with smaller elements, then the uncompressed signature is 96 bytes. The signature can be compressed to 48 bytes by sending only the x coordinate and recalculating y on the receiving end; this, however, takes some time, as calculating a square root in a finite field is expensive (see benchmarks in the repo).
Milagro provides a point compression mechanism, but the compressed point is 49 bytes (I think the additional byte is not needed, as the x coordinate takes 381 bits, leaving 3 bits for bookkeeping). In the future we would like to replace Milagro with another pairing library.
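For the curious, decompression can be sketched as follows (constants are the standard BLS12-381 base-field prime and G1 generator; error handling and subgroup checks are omitted). Because p ≡ 3 (mod 4), the square root is a single modular exponentiation, which is the expensive step mentioned above:

```python
# Decompression sketch for BLS12-381 G1: given only the x coordinate,
# recover y from the curve equation y^2 = x^3 + 4. One extra bit (the
# parity of y) selects between the two roots y and p - y.

# BLS12-381 base-field prime; p % 4 == 3, so sqrt(a) = a^((p+1)/4) mod p.
p = 0x1A0111EA397FE69A4B1BA7B6434BACD764774B84F38512BF6730D2A0F6B0F6241EABFFFEB153FFFFB9FEFFFFFFFFAAAB

def decompress_y(x, y_is_odd):
    y2 = (pow(x, 3, p) + 4) % p
    y = pow(y2, (p + 1) // 4, p)   # modular square root: the costly part
    assert pow(y, 2, p) == y2, "x is not the abscissa of a curve point"
    if (y & 1) != y_is_odd:        # pick the root with the requested parity
        y = p - y
    return y

# Standard G1 generator coordinates:
gx = 0x17F1D3A73197D7942695638C4FA9AC0FC3688C4F9774B905A14E3A3F171BAC586C55E83FF97A1AEFFB3AF00ADB22C6BB
gy = 0x08B3F481E3AAA0F1A09E30ED741D8AE4FCF5E095D5D00AF600DB18CB2C04B3EDD03CC744A2888AE40CAA232946C5E7E1

assert decompress_y(gx, gy & 1) == gy
```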