Minimal VDF randomness beacon

The following permutation polynomials have been considered:

  1. Sloth: Only provides a constant gap between evaluation time and verification time (see the sketch after this list). For a Sloth VDF ASIC to be practical to build, the multiplier needs to be of reasonable size (say, at most 4096 bits, yielding a gap of ~2048). This ~2048 gap only applies if the evaluator and verifier run on the same hardware, making CPU verifiers useless in practice (unless you use intermediate values, but then bandwidth becomes an issue). This means you now need VDF ASICs for verifiers as well. You could have a cryptoeconomic scheme (e.g. see here), but a cryptographic scheme is clearly preferable (less trust, less complexity).
  2. MiMC: Basically the same as Sloth but worse: smaller gap for equivalently sized multipliers, and more complex intermediate non-arithmetic operations.
  3. Guralnick and Muller: I was told by various experts (Dan Boneh, Benedikt Bünz) that these polynomials are too risky because no one has seriously tried to break them. A promising approach, though.
  4. Sloth++ with SNARKs: Based on the research on ASIC multipliers, my guess is that the basic step in Sloth++ (e.g. 256-bit modular squaring) can be done in ~1 nanosecond. So the SNARK prover stands no chance of keeping up unless it is itself implemented as an ASIC. It’s unclear whether building a SNARK ASIC that is fast enough is practical.
  5. Sloth++ with STARKs: This is currently our best candidate for a quantum-secure VDF. The situation is similar to the above, except that 1) STARKs are quantum-secure, 2) STARK provers are faster, 3) recursive STARKs are harder, and 4) STARKs are not as mature.
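To make the constant-gap point in item 1 concrete, here is a minimal Python sketch of the Sloth-style asymmetry. It is not full Sloth (the sign-flip trick that makes each step a permutation is omitted) and the prime, seed and iteration count are illustrative only; the only thing it demonstrates is that each forward step costs a full modular exponentiation (~log2 p squarings), while each verification step costs a single squaring, so the gap is roughly log2 p no matter how many iterations you run.

```python
# Minimal sketch of the Sloth evaluation/verification asymmetry.
# Assumes a prime p with p ≡ 3 (mod 4), so x^((p+1)/4) is a square root of x
# whenever x is a quadratic residue. Parameters are illustrative only.

p = 2**127 - 1           # Mersenne prime, p ≡ 3 (mod 4)
SQRT_EXP = (p + 1) // 4  # exponent used for modular square roots

def evaluate(seed: int, iterations: int) -> int:
    """Slow direction: each step is a modular exponentiation (~log2(p) squarings)."""
    x = seed % p
    for _ in range(iterations):
        x = pow(x, SQRT_EXP, p)
    return x

def verify(seed: int, output: int, iterations: int) -> bool:
    """Fast direction: each step is a single modular squaring, so the
    evaluation/verification gap is only ~log2(p), independent of iterations."""
    x = output
    for _ in range(iterations):
        x = pow(x, 2, p)
    # Squaring loses a sign at each step, so compare up to negation.
    return x in (seed % p, (p - seed % p) % p)

out = evaluate(0xdeadbeef, 10_000)
assert verify(0xdeadbeef, out, 10_000)
```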

Is there a permutation polynomial you think should be (re)considered?

The EVM has no bearing on VDFs (other than the randomness opcode yielding hashes of VDF outputs). The VDF logic is implemented at the consensus layer.


Wow, what a thorough summary! Thanks a lot! I do not have any specific attack or proposition yet, but I really like the Guralnick and Muller polynomials. I want to have a closer look at them in the next few weeks.

Can you illustrate how this is going to work even if Casper is added to the picture?

My understanding is that if there is a network split, two random numbers are generated. These two numbers will then cause different block proposers to be selected on the two sides of the split, correct? In that case I am not sure how Casper validators should behave. Casper validators assume that block proposals are made by a valid proposer, but if there are two different sets of valid block proposers, I am not sure Casper validators can help to resolve this.

Maybe it is going to work, but the problem is that the system may end up so complex that current ETH developers, the people who develop geth, parity, cpp-ethereum etc., will not be able to understand or debug it. If something goes wrong, they will have to analyze the potential interplay of validators, proposers, network splits etc.

I am keeping my money on the PoW network!!!:joy::joy:

I suggest we take this discussion offline—feel free to DM me—as it is not specific to VDFs, or even randomness beacons.

It is natural for general-purpose consensus protocols to favour liveness. (In that sense, Dfinity got it “wrong”.) @naterush puts it eloquently:

An available consensus protocol (that is aware of what is safe) can simulate a consistent one (just by only showing finalized blocks), but not the other way around.

Casper FFG is that layer of detectable safety that dApps can rely upon (if required). In other words, a consensus protocol that favours availability is generic in the sense that it pushes the safety vs liveness tradeoff to the application layer:

  • Liveness is available out of the box for dApps that require liveness.
  • Safety is available by filtering out non-finalised blocks for dApps that require safety (see the sketch below).
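As a rough sketch of what "filtering" means in practice (the class and method names below are made up for illustration, not any client's API): a node that favours availability can serve safety-hungry applications simply by exposing only the finalised prefix of its chain.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Block:
    number: int
    finalized: bool = False   # set once Casper FFG finalises the block

@dataclass
class AvailableChainView:
    """Hypothetical node interface favouring liveness: always serves a head."""
    blocks: List[Block] = field(default_factory=list)

    def head(self) -> Optional[Block]:
        # Liveness out of the box: the latest block, finalised or not.
        return self.blocks[-1] if self.blocks else None

    def finalized_head(self) -> Optional[Block]:
        # Safety on demand: restrict the view to the finalised prefix,
        # i.e. simulate a consistent chain on top of an available one.
        finalized = [b for b in self.blocks if b.finalized]
        return finalized[-1] if finalized else None

chain = AvailableChainView([Block(1, True), Block(2, True), Block(3, False)])
assert chain.head().number == 3            # liveness-favouring dApp
assert chain.finalized_head().number == 2  # safety-favouring dApp
```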

Casper validators should subjectively follow the fork choice rule.


You could fund the ASIC as a public good with liberal radicalism!

Why wouldn’t you use the VDF with, say, a blockhash from N blocks ago? There are probably issues around using a public piece of data as an input to the VDF.

I guess it may have to do with the fact that other proposers could jump into someone else’s slot and use the public piece of data to compute the VDF, potentially more quickly than the assigned proposer. I’m asking because Harbour MVP are looking to use this (with PoW for the time being). FMI see here.

We want the beacon’s safety to fall back to RANDAO if the VDF is (partially or fully) broken. Blockhashes are a much weaker fallback than RANDAO. Also, we want to launch the beacon chain in 2019 and the VDF ASICs will not be ready until 2020, so RANDAO is a bootstrapping mechanism.
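A hedged sketch of the fallback property (the XOR mixing rule and the iterated-hash stand-in for the VDF are illustrative only, not the beacon chain spec): the beacon output is the VDF applied to the epoch's RANDAO mix, so even if an attacker can evaluate the VDF arbitrarily fast, the output degrades to RANDAO-quality randomness rather than to something weaker like blockhashes.

```python
import hashlib

def randao_mix(reveals: list[bytes]) -> bytes:
    """Simplified RANDAO: XOR-accumulate the hashed reveals of an epoch."""
    mix = bytes(32)
    for reveal in reveals:
        mix = bytes(a ^ b for a, b in zip(mix, hashlib.sha256(reveal).digest()))
    return mix

def vdf(seed: bytes, delay: int) -> bytes:
    """Stand-in delay function (iterated hashing). A real VDF would also be
    efficiently verifiable; here we only care about the dataflow."""
    out = seed
    for _ in range(delay):
        out = hashlib.sha256(out).digest()
    return out

def beacon_output(reveals: list[bytes], delay: int) -> bytes:
    # VDF broken  -> output is at worst as manipulable as the RANDAO mix.
    # VDF honest  -> the delay hides the output past the reveal deadline,
    #                so the last revealer cannot grind the mix.
    return vdf(randao_mix(reveals), delay)
```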


If a bad guy can compute the VDF A_{max} times faster, will this give him plenty of time to prepare a DDoS attack against committee members in a future epoch?


The amount of time the randomness is known before it is used is called the “lookahead”. People running the commodity ASIC have one epoch of lookahead. An attacker with hardware that is A_{max} times faster than the commodity ASIC has A_{max} epochs of lookahead.
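A tiny worked example of that arithmetic (the epoch duration is an assumed placeholder, not a spec value):

```python
EPOCH_MINUTES = 17  # assumed epoch duration, for illustration only

def lookahead_minutes(speed_advantage: float) -> float:
    """Honest commodity-ASIC operators have 1 epoch of lookahead; an attacker
    with hardware A_max times faster has A_max epochs of lookahead."""
    return speed_advantage * EPOCH_MINUTES

for a_max in (1, 2, 10):
    print(f"A_max = {a_max}: ~{lookahead_minutes(a_max)} minutes of lookahead")
```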

Adaptive attacks (e.g. DoS, bribing, …) on proposers and committees are largely orthogonal to the randomness beacon for two reasons:

  1. Hardening against adaptive attacks is done with “private elections” (and hence private lookahead). There are various schemes (e.g. see my proposed scheme using ring signatures) that work regardless of the randomness beacon.
  2. Adaptive attacks (especially networking DoS attacks) are possible even for randomness beacons with small public lookahead (e.g. Dfinity’s scheme).

PoW and Algorand’s cryptographic sortition are two private election schemes but neither is unbiasable.
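For context, here is a rough sketch of how a private election via cryptographic sortition works, with a hash used as a stand-in for the VRF and a simplified threshold rule: each validator can check privately whether they are elected and only reveals the (verifiable) result when they act, so an adaptive attacker does not know whom to target in advance. Note that whoever can grind the seed can still bias who gets elected, which is why this alone is not unbiasable.

```python
import hashlib

def sortition_value(secret_key: bytes, seed: bytes, role: bytes) -> float:
    """Stand-in for a VRF output mapped to [0, 1). With a real VRF the
    validator can also publish a proof that this value is correct."""
    digest = hashlib.sha256(secret_key + seed + role).digest()
    return int.from_bytes(digest, "big") / 2**256

def is_elected(secret_key: bytes, seed: bytes, role: bytes,
               stake_fraction: float) -> bool:
    # Elected with probability roughly proportional to stake. Only the key
    # holder can evaluate this before revealing, hence a "private" election.
    return sortition_value(secret_key, seed, role) < stake_fraction

print(is_elected(b"my-secret-key", b"epoch-seed", b"proposer", stake_fraction=0.01))
```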


Thanks for the great design and talk, Justin. I’m super curious about the design of the MPC for trustless construction of the RSA modulus. Is that described in more detail anywhere, at this point?

I think it’s worth pointing out, with Algorand, that while the last potential revealer can bias the result, there is no incentive for them to do so, and there will be many applications of random beacons where that is the case. It would only make sense for them to do so if the outcome they can predict from their last revelation would be unfavorable to them.

The Ligero team is developing and implementing the MPC. They will present it to various MPC experts on Feb 3 at “VDF day” (a research event organised by the Ethereum Foundation and Filecoin). They are working towards a formal description of the protocol with a proof of security.

while the last potential revealer can bias the result, there is no incentive for them to do so

The analysis for Algorand is similar to that of RANDAO. A last revealer has the incentive to bias the randomness (by not revealing) when that makes them the last revealer (with sufficiently high probability) for the next 2+ slots. An attacker can also somewhat manipulate the committee selection process to his advantage.
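A rough Monte Carlo sketch of the grinding advantage behind this argument (the model and parameters are heavily simplified and purely illustrative): a tail revealer effectively gets to pick between two candidate seeds, one per reveal/withhold choice, and keeps whichever assigns them more of the next round's slots.

```python
import random

NUM_SLOTS = 32         # slots assigned per round (illustrative)
ATTACKER_SHARE = 0.2   # fraction of stake controlled by the attacker

def attacker_slots(seed: int) -> int:
    """Count how many of the next round's slots the attacker wins under
    a given seed, assigning each slot proportionally to stake."""
    rng = random.Random(seed)
    return sum(rng.random() < ATTACKER_SHARE for _ in range(NUM_SLOTS))

random.seed(0)
no_bias = one_bit_bias = 0
trials = 10_000
for _ in range(trials):
    seed_reveal = random.getrandbits(64)    # outcome if the attacker reveals
    seed_withhold = random.getrandbits(64)  # outcome if they withhold
    no_bias += attacker_slots(seed_reveal)
    # One bit of bias: the last revealer keeps whichever seed suits them.
    one_bit_bias += max(attacker_slots(seed_reveal), attacker_slots(seed_withhold))

print("fair share of slots :", ATTACKER_SHARE * NUM_SLOTS)
print("no bias             :", no_bias / trials)
print("one bit of bias     :", one_bit_bias / trials)
```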

A last revealer has the incentive to bias the randomness (by not revealing) when that makes them the last revealer (with sufficiently high probability) for the next 2+ slots

You mean when he has the option to either be the last revealer of the last committee round, or the next leader? I guess it’s possible, if his stake is large enough, and depending on the reward structure.

This consideration in isolation would imply that the leader reward should be lower than the rewards for the last-round committee members!

The Ligero team is developing and implementing the MPC.

Can’t wait! Any hints on the MPC framework they’re using, or other aspects of their approach?

A large enough attacker can exploit the randomness to facilitate a takeover of the system with less total stake. Micro-incentivisation cannot fix this because there are significantly larger external incentives at play. (Think of attacks by a government, or someone who has a huge short position on Algorand.)

You’re right. I wonder whether they’ve included that in their analysis. It does seem like it would change the calculations in Appendix A of this paper.

I had a look at the videos from the VDF day mentioned above, but did not find any in-depth dive into the RSA MPC designed by the Ligero team. Could anyone point me in the right direction? Some meaningful technical resource, please! cc @JustinDrake

Some of the key ideas are communicated in this talk by Abhi Shelat. Shoot me a message if you want a draft copy of the full RSA MPC protocol.

Hi, can I check that this is what Ethereum plans to use for its random beacon?

[Edit: delete]
[If so: since which RANDAO reveals are included in a block is censorable, to prevent a 1/3 attack the threshold number of reveals that need to be included for the entropy to be considered valid must exceed 1/3. Otherwise, attackers can include only the reveals they control when they are producing the block. Then, since attacking revealers can withhold their reveals, for the protocol to succeed at least 1/2 of the honest validators need to be online.

But then these are the same liveness guarantees as in a 1/3 + epsilon threshold BLS signature scheme.

Of course, this is not quantum secure, but it has far less latency than the RANDAO reveal + VDF protocol. Further, the VDFs currently used are not quantum secure, although a hash-chain based alternative verified with GPUs could be. Latency can be very bad for certain protocols like challenge-response protocols with timeouts.

Hence, my question is why one should go to the trouble of using RANDAOs if this is true.]

Edit: I see that using 128 slots in an epoch with proposer rotation is meant to circumvent censorship. Nevertheless, this introduces a possible VDF-reveal lag of up to (A_{max} - 1) * 17 minutes. Even in the very ideal scenario where a VDF ASIC reduces A_{max} to 1.25, that is still about 4 minutes 15 seconds of reveal lag.

Threshold BLS can cap the reveal lag at 1 second even for thousands of validators. We only require 1/2 of honest validators to be online in the 1/3 + epsilon case, which is not far off from 1/4.

Yep, that’s right :slight_smile: RANDAO+VDF trades off low latency for strong liveness and strong unpredictability (i.e. liveness and unpredictability even in the face of a majority attacker). Applications that need high security and liveness and are happy with high latency will favour RANDAO+VDFs. These include L1 validator sampling for Eth2 as well as L2 sampling (e.g. rollup aggregator sampling, tBTC, lotteries, PoolTogether). Applications that critically need low latency (e.g. real-time gambling and gaming) are better served by threshold-based mechanisms.

An interesting open question is whether we can get a low-latency (say, 1 second latency) alternative to RANDAO that produces biasable entropy with strong liveness and strong unpredictability.


Low latency is pretty important…
Considering ETH is one of the largest networks on the planet…
It should be able to compute things faster than anyone else…
I’ve been thinking a lot about hashing and rehashing lately…
Couldn’t you take a hash and rehash it with difficulty parameters such that no other computer could calculate the final hash before it is produced?

Such an idea is usually called a hash chain. Unfortunately, you will need to store all the intermediate hash results in order to verify, so the space complexity is O(n): it's not scalable.
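A minimal sketch of that limitation: with a plain hash chain the verifier either re-runs the whole chain (verification as slow as evaluation) or is handed the intermediate results (O(n) space/bandwidth), which is exactly what a VDF's succinct verification is meant to avoid.

```python
import hashlib

def hash_chain(seed: bytes, n: int) -> bytes:
    """Evaluation: n sequential hashes (inherently sequential, good for delay)."""
    out = seed
    for _ in range(n):
        out = hashlib.sha256(out).digest()
    return out

def verify(seed: bytes, output: bytes, n: int) -> bool:
    """Verification: no shortcut, so the whole chain is recomputed. Storing
    checkpoints instead trades this for O(n) space; neither is succinct."""
    return hash_chain(seed, n) == output

out = hash_chain(b"seed", 100_000)
assert verify(b"seed", out, 100_000)
```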