Hi, we’ve recently been doing some research on distributed randomness and want to share a RandHound-influenced protocol that has the properties from the title; we would appreciate feedback.
(EDIT: the description below is fully rewritten based on some offline feedback)
(EDIT2: a more formal, LaTeXified version can be found here: https://www.overleaf.com/read/pcrtmwpxvnkb)
We want `n` participants to do the following:

1. Each participant `j` chooses a vector `r[j]` of `k = n*2/3` elements, where each element is a 256-bit random number, erasure codes them to have a vector of `n` shares such that any `k` shares can reconstruct the chosen `k` numbers, and encrypts each of the `n` shares with the public key of one of the participants to get a vector `es[j]`. They then publish `es[j]`. Here it's important that nobody can recover `r[j]` by just observing `es[j]`.
2. Participants reach a consensus on a set `S` of at least `2n/3` participants who published their `es`. Each participant `i` then publishes the decrypted row of shares `es[j][i]` for each `j` in `S`. Once `k` participants have published their rows, everybody can reconstruct the `r[j]` for each `j` in `S`.
3. I now want participants, for each `j` such that `r[j]` was successfully reconstructed, to be able to reproduce the `es[j]` and confirm that it matches the published `es[j]`. If it matches, then all the participants were able to reconstruct `r[j]`, no matter what shares they observed. If a participant failed to reconstruct the erasure code, or it didn't match the published `es[j]`, then all the participants either failed to reconstruct it or reconstructed something that doesn't match `es[j]`.
4. Let `S'` be the subset of `S` for which the `r` was reconstructed. The output of the randomness beacon is then some function of all the `r[j]` for `j` in `S'`.
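A minimal sketch of the erasure coding used in steps 1 and 2, assuming a Reed-Solomon-style code over a prime field (the field, evaluation points, and function names here are illustrative, not part of the proposal): the `k` numbers define a degree-`(k-1)` polynomial via the points `1..k`, the `n` shares are its values at points `k+1..k+n`, and any `k` shares recover `r` by Lagrange interpolation.

```python
import secrets

# Illustrative field: the secp256k1 base-field prime; any prime near 2^256 works.
P = 2**256 - 2**32 - 977

def lagrange_eval(points, x):
    """Evaluate at x the unique degree-(len(points)-1) polynomial through points."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def make_shares(r, n):
    """Erasure-code the k numbers r (placed at x = 1..k) into n shares."""
    k = len(r)
    base = list(zip(range(1, k + 1), r))
    return [lagrange_eval(base, k + 1 + i) for i in range(n)]

def reconstruct(share_points, k):
    """Recover r from any k (x, share) pairs, where share i sits at x = k+1+i."""
    return [lagrange_eval(share_points[:k], x) for x in range(1, k + 1)]

r = [secrets.randbelow(P) for _ in range(4)]          # k = 4 random field elements
shares = make_shares(r, 6)                            # n = 6 shares
pts = [(4 + 1 + i, shares[i]) for i in (0, 2, 3, 5)]  # any 4 of the 6 rows
assert reconstruct(pts, 4) == r
```

Because the shares are deterministic functions of `r`, anyone who reconstructed `r` can also re-derive all `n` shares, which is what the re-encryption check in step 3 relies on.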
Here there are some requirements on the public key encryption:

1. Encryption needs to be deterministic, so that the `es` reconstructed in step 3 matches the `es` published in step 1.
2. If some `es[j][i]` is gibberish (i.e. doesn't decrypt, or decrypts into something that is not equal to `es[j][i]` after re-encrypting), it should be possible to prove it.
It seems that ElGamal in which the step `y = random()` is replaced with `y = hash(input)` works for (1) above, and a Chaum-Pedersen proof works for (2) if a malicious actor still used some `y` that was not equal to `hash(input)`.
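A sketch of the deterministic ElGamal variant, under loudly stated assumptions: the Mersenne-prime group and `g = 3` are toy parameters for illustration only (a real deployment would use a proper elliptic-curve or MODP group), what exactly goes into `hash(input)` is my guess, and the Chaum-Pedersen proof for requirement (2) is omitted.

```python
import hashlib

# Toy parameters for illustration only; a real deployment would use a proper
# elliptic-curve or MODP group. 2^127 - 1 is prime (a Mersenne prime).
p = 2**127 - 1
g = 3

def det_encrypt(pub: int, m: int):
    """ElGamal where the ephemeral exponent y is hash(input) rather than random()."""
    # Assumption: "input" here is the plaintext plus the recipient's public key.
    data = m.to_bytes(16, "big") + pub.to_bytes(16, "big")
    y = int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)
    return pow(g, y, p), m * pow(pub, y, p) % p   # ciphertext (c1, c2)

def decrypt(priv: int, c1: int, c2: int) -> int:
    # c1^priv = g^(y*priv) = pub^y, so dividing it out of c2 recovers m.
    return c2 * pow(pow(c1, priv, p), -1, p) % p

priv = 123456789
pub = pow(g, priv, p)
c = det_encrypt(pub, 42)
assert decrypt(priv, *c) == 42     # decryption round-trips
assert det_encrypt(pub, 42) == c   # re-encrypting reproduces the ciphertext
```

The second assertion is exactly the step-3 check: re-encrypting a decrypted share must reproduce the published `es[j][i]`, which holds because `y` depends only on the input.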
Comparison to other schemes:
- This approach is naturally inferior to RANDAO+VDFs in that it has worse liveness and safety requirements, but VDFs have the known issue of requiring someone to build ASICs (or the fear that someone else will).
- It appears to be as good as threshold signatures (except that it requires `O(n^2)` network communication instead of `O(n)` for threshold signatures, IIRC), without requiring the expensive DKG step.
- Compared to RANDAO, it is unbiasable.
- Compared to RandShare, it has lower complexity; compared to RandHound, it is significantly simpler.
Feedback is appreciated.