Minimal VDF randomness beacon

The amount of time the randomness is known before it is used is called the “lookahead”. People running the commodity ASIC have one epoch of lookahead. An attacker with hardware that is A_{max} times faster than the commodity ASIC has A_{max} epochs of lookahead.
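To make that concrete, here is one back-of-envelope way to read those numbers (a sketch; the calibration and the A_max value are my assumptions, not necessarily the exact parameterisation of the design):

```python
# A back-of-envelope reading of the lookahead numbers above (a sketch; the exact
# parameterisation of the real design may differ).
EPOCH = 1.0                       # one epoch, in arbitrary time units
A_MAX = 10                        # assumed worst-case hardware speed advantage

# Calibrate the VDF so that even an A_MAX-times-faster attacker needs >= 1 epoch:
# the commodity ASIC then takes A_MAX epochs, and the output is consumed one
# epoch after the commodity ASIC finishes.
t_commodity = A_MAX * EPOCH       # commodity ASIC evaluation time
t_attacker = t_commodity / A_MAX  # attacker evaluation time (= 1 epoch)
t_use = t_commodity + EPOCH       # when the output is actually used

print("commodity lookahead:", t_use - t_commodity, "epochs")  # 1.0 epoch
print("attacker lookahead: ", t_use - t_attacker, "epochs")   # A_MAX epochs
```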

Adaptive attacks (e.g. DoS, bribing, …) on proposers and committees are largely orthogonal to the randomness beacon for two reasons:

  1. Hardening against adaptive attacks is done with “private elections” (and hence private lookahead). There are various schemes (e.g. see my proposed scheme using ring signatures) that work regardless of the randomness beacon.
  2. Adaptive attacks (especially networking DoS attacks) are possible even for randomness beacons with small public lookahead (e.g. Dfinity’s scheme).

PoW and Algorand’s cryptographic sortition are two private election schemes but neither is unbiasable.


Thanks for the great design and talk, Justin. I’m super curious about the design of the MPC for trustless construction of the RSA modulus. Is that described in more detail anywhere, at this point?

I think it’s worth pointing out, with Algorand, that while the last potential revealer can bias the result, there is no incentive for them to do so, and there will be many applications of random beacons where that is the case. It would only make sense for them to do so if the outcome they can predict from their last revelation would be unfavorable to them.

The Ligero team is developing and implementing the MPC. They will present it to various MPC experts on Feb 3 at “VDF day” (a research event organised by the Ethereum Foundation and Filecoin). They are working towards a formal description of the protocol with a proof of security.

while the last potential revealer can bias the result, there is no incentive for them to do so

The analysis for Algorand is similar to that of RANDAO. A last revealer has the incentive to bias the randomness (by not revealing) when that makes them the last revealer (with sufficiently high probability) for the next 2+ slots. An attacker can also somewhat manipulate the committee selection process to his advantage.
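To make the incentive concrete, here is a toy model (the names, the 20/80 stake split, and the seed derivation are all illustrative assumptions, not the actual Algorand or RANDAO mechanics): the last revealer evaluates both candidate outputs and, ignoring penalties, keeps whichever gives them more future slots.

```python
import hashlib

# Toy model of the withholding incentive (hypothetical validator set and seeds).
validators = [f"attacker{i}" for i in range(20)] + [f"honest{i}" for i in range(80)]

def proposers(seed: bytes, slots: int) -> list:
    """Pick one proposer per slot from hash(seed || slot)."""
    picks = []
    for slot in range(slots):
        h = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
        picks.append(validators[int.from_bytes(h, "big") % len(validators)])
    return picks

def attacker_slots(seed: bytes) -> int:
    """How many of the next 32 slots the attacker would propose under this seed."""
    return sum(p.startswith("attacker") for p in proposers(seed, 32))

# The last revealer can choose between two beacon outputs: include their reveal,
# or withhold it. Absent penalties, they pick whichever seed favours them more.
seed_if_revealed = hashlib.sha256(b"accumulated-entropy" + b"my-reveal").digest()
seed_if_withheld = hashlib.sha256(b"accumulated-entropy").digest()

print("future slots if revealed:", attacker_slots(seed_if_revealed))
print("future slots if withheld:", attacker_slots(seed_if_withheld))
```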

A last revealer has the incentive to bias the randomness (by not revealing) when that makes them the last revealer (with sufficiently high probability) for the next 2+ slots

You mean when he has the option to either be the last revealer of the last committee round, or the next leader? I guess it’s possible, if his stake is large enough, and depending on the reward structure.

This consideration in isolation would imply that the leader reward should be lower than the rewards for the last-round committee members!

The Ligero team is developing and implementing the MPC.

Can’t wait! Any hints on the MPC framework they’re using, or other aspects of their approach?

A large enough attacker can exploit the randomness to facilitate a takeover of the system with less total stake. Micro-incentivisation cannot fix this because there are significantly larger external incentives at play. (Think of attacks by a government, or someone who has a huge short position on Algorand.)

You’re right. I wonder whether they’ve included that in their analysis. It does seem like it would change the calculations in Appendix A of this paper.

I had a look at the videos from the VDF day mentioned above, but did not find any in-depth material on the RSA MPC designed by the Ligero team. Could anyone point me in the right direction? Some meaningful technical resource, please! cc @JustinDrake

Some of the key ideas are communicated in this talk by Abhi Shelat. Shoot me a message if you want a draft copy of the full RSA MPC protocol.

Hi, can I check that this is what Ethereum plans to use for its random beacon?

[Edit: delete]
[If so, since which RANDAO reveals are included in a block is censorable, to prevent a 1/3 attack the threshold number of reveals that must be included for the entropy to be considered valid has to exceed 1/3. Otherwise, attackers can include only the reveals they control when they are producing the block. Then, since attacker-controlled revealers can withhold their RANDAO reveals, for the protocol to succeed at least 1/2 of the honest validators need to be online.

But then these are the same liveness guarantees as in a 1/3 + epsilon threshold BLS signature scheme.

Of course, this is not quantum secure, but it has far less latency than the RANDAO reveal + VDF protocol. Further, the VDFs currently used are not quantum secure, although a hash-chain based alternative verified with GPUs could be. Latency can be very bad for certain protocols like challenge-response protocols with timeouts.

Hence, my question is why one should go to the trouble of using RANDAOs if this is true.]

Edit: I see that using 128 slots in an epoch with proposer rotation is meant to circumvent censorship. Nevertheless, this introduces a possible VDF-reveal lag of up to (A_max - 1) × 17 minutes. Even in the very ideal scenario where a VDF ASIC can reduce A_max to 1.25, that’s still about 4 minutes 15 seconds of reveal lag.
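As a quick sanity check on that arithmetic (a sketch; the 8-second slot time is inferred from the 17-minute figure, and the A_max values are hypothetical):

```python
# Worked version of the reveal-lag bound above (assumes 8 s slots, so that a
# 128-slot epoch is ~17 minutes; the A_max values are hypothetical).
epoch_min = 128 * 8 / 60                     # ~17.1 minutes
for a_max in (1.25, 2.0, 10.0):
    print(f"A_max = {a_max:>5}: reveal lag up to {(a_max - 1) * epoch_min:.1f} minutes")
```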

Threshold BLS can keep the reveal lag to a maximum of 1 s even for thousands of validators. We only require 1/2 of the honest validators to be online in the 1/3 + epsilon case, which is not far off from 1/4.
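For reference, the rough arithmetic behind the 1/2-of-honest-validators claim (the validator count is a made-up example):

```python
# Rough liveness arithmetic for a 1/3 + epsilon threshold BLS beacon
# (the validator count is a hypothetical example).
n = 3000                  # hypothetical validator-set size
threshold = n // 3 + 1    # 1/3 + epsilon partial signatures reconstruct the output
honest = 2 * n // 3       # worst case: the remaining ~1/3 are adversarial and abstain
print(f"{threshold / honest:.2f} of the honest validators must be online")  # ~0.50
```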

Yep, that’s right :) RANDAO+VDF trades off low latency for strong liveness and strong unpredictability (i.e. liveness and unpredictability even in the face of a majority attacker). Applications that need high security and liveness and are happy with high latency will favour RANDAO+VDFs. These include L1 validator sampling for Eth2 as well as L2 sampling (e.g. rollup aggregator sampling, tBTC, lotteries, PoolTogether). Applications that critically need low latency (e.g. real-time gambling and gaming) are better served by threshold-based mechanisms.

An interesting open question is whether we can get a low-latency (say, 1 second latency) alternative to RANDAO to produce biasable entropy with strong liveness and strong unpredictability.


Low latency is pretty important…
Considering eth is one of the largest networks on the planet…
It should be able to compute things faster than anyone else…
I’ve been thinking a lot about hashing and rehashing lately…
Couldn’t you take a hash and rehash it with parameters so difficult that no other computer could calculate the result before the final hash is produced?

Such an idea is usually called a hash chain. Unfortunately, you need to store all of the intermediate hash results in order to verify it, so the space complexity is O(n): it’s not scalable.
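For concreteness, here is a minimal sketch of that point using plain SHA-256 (illustrative parameters): anyone can spot-check individual links if you publish all the intermediates, but that is O(n) storage, and checking everything still costs the same n hashes as evaluation, so there is no fast verification either.

```python
import hashlib

def eval_chain(seed: bytes, n: int) -> list:
    """Sequentially hash n times, keeping every intermediate value."""
    out, h = [], seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
        out.append(h)
    return out

def verify_with_intermediates(seed: bytes, chain: list) -> bool:
    """Spot-checkable verification, but it needs all n intermediates (O(n) space),
    and fully checking them costs the same n hashes as evaluating did."""
    prev = seed
    for h in chain:
        if hashlib.sha256(prev).digest() != h:
            return False
        prev = h
    return True

chain = eval_chain(b"beacon-seed", 1_000)
assert verify_with_intermediates(b"beacon-seed", chain)
```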

Can’t you just do a simple hash-chain like:
Transaction id of RNG request (which is stored anyway) + Time it takes to ping the transaction to 256 decimals?
Store the average ping times for nodes…
To check for malicious activity?

I think you may be overlooking part of the process. I was actually talking about the intermediate results. You cannot just take the last hash result and re-hash it; otherwise people can pre-compute.

That is, you will need extra entropy. Then you will need to deal with the cases where people (miners, provers, etc., depending on how you design the system) manipulate and bias the results.

VDF is good, but it can still be improved. My friend is working on such a design but I am not sure whether I am permitted to disclose more details.

I am also sorry for my misleading words: I mentioned “all the hashing results… the space complexity goes O(n)… internal results…”, but those actually relate to his design’s context. Sorry for the confusion.

Are you suggesting
randomness_result = hash(tx_id || ping_time)?
How do you deal with collisions? And how do you verify the ping_time?

Say we have ping time down to like 100 decimals…
You have 5-10 people watch 3-4 addresses at the same time…
Then you ping them all at the same time
And then you take all the results and make a number…
Maybe someone could figure out how to manipulate the first few decimals to milliseconds…
But not nanoseconds…

People other than you cannot verify this result. The data are sampled by you, so others cannot trust that they were input honestly.

And if it’s not for providing a public randomness service (i.e., you don’t need public verifiability and can just trust yourself), then what’s the point of using a randomness beacon? You could just generate randomness locally and record it on chain.

But you’d need to manipulate the tx id as well…
I’d love to know how it can be improved…

What is the best/cheapest way to provide an authentic random number?

  • show your workings?