Verifiable delay functions and attacks

random-number-generator
signature-aggregation

#1

Verifiable delay functions have recently become popular as a strategy for generating entropy for validator and committee selection in PoS networks. A VDF is a function that takes some medium-large quantity of non-parallelizable work to compute, but can be verified very quickly.

The best-known “proto-VDF” is the “iterated modular square root” strategy, where f(x) = g(g(g(g(....g(x)....)))) with g(x) = xor(x^((p+1)/4), 1) mod p. Because each application of g depends on the result of the previous one, the computation cannot be parallelized. Furthermore, g has an inverse, h(x) = xor(x, 1)^2 mod p, and this inverse can be computed much more quickly (think ~100x more quickly) than g. Hence f can be computed ~100x more quickly in the backward direction than in the forward direction, so a solution that takes ~5 seconds to compute can be verified in ~0.05 seconds. More recently, however, there have been VDFs with much stronger properties, which can be verified almost instantly.
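To make this concrete, here is a minimal Python sketch of the iterated-square-root proto-VDF. The prime and round count are illustrative, not production parameters, and the residue/parity bookkeeping inside g and h is an assumed Sloth-style fix (glossed over in the prose above) that makes each step exactly invertible:

```python
p = 2**127 - 1  # a prime with p % 4 == 3, so a square root is one exponentiation

def g(z):
    # Slow forward step: one modular square root, then xor with 1.
    qr = pow(z, (p - 1) // 2, p) == 1  # Euler criterion
    y = z if qr else p - z             # exactly one of z, p-z is a residue
    r = pow(y, (p + 1) // 4, p)        # the expensive exponentiation
    if (r % 2 == 0) != qr:             # record which case we took
        r = p - r                      # ... in the parity of the root
    return r ^ 1

def h(w):
    # Fast inverse step: xor with 1, then one cheap squaring (~100x faster).
    r = w ^ 1
    y = pow(r, 2, p)
    return y if r % 2 == 0 else p - y

def vdf_eval(x, rounds):
    # f(x) = g(g(...g(x)...)): inherently sequential, no parallel speedup
    for _ in range(rounds):
        x = g(x)
    return x

def vdf_verify(x, y, rounds):
    # run the cheap inverse chain and check we land back on the input
    for _ in range(rounds):
        y = h(y)
    return y == x

assert vdf_verify(1234567, vdf_eval(1234567, 1000), 1000)
```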

VDFs have the following advantages:

  • Relative to RANDAO and similar schemes, they cannot be manipulated.
  • Relative to BLS threshold signatures and similar VRFs, they do not depend on any specific fraction of nodes to be online, and do not require a complicated setup procedure.

Now, enter reality. It seems very plausible that some actor will manage to create ASICs for any given VDF and be N times faster (eg. N = 20) than the top-of-the-line CPU/GPU implementations. The acceleration factor won’t be anywhere near as high as it is for Bitcoin proof of work, because a large portion of the speedup in that case comes from parallelization, but one can absolutely imagine ASICs with circuits specifically designed to loop back into themselves as fast as possible.

Suppose that this happens. There are two possibilities:

  1. The attacker can compute the VDF so quickly that they can predict its output before they have to commit to some value, giving them the ability to choose the result from a set of possible outputs.
  2. The attacker can allow the difficulty adjustment process to adjust to their presence, then suddenly go offline, greatly slowing down the system.

Suppose that we use a VDF as follows. A participant is expected to submit data that determines the source data for a VDF computation, together with other source data that was revealed at time T, and the submission must appear before time T+D (eg. if the participant waits longer than T+D, they lose their chance to get included in the canonical chain). The VDF output starts being used at time T+W. Suppose that the difficulty adjustment algorithm is designed so that it targets the output being computed at time T+N.

Let the attacker’s speedup (“advantage”) over the rest of the network be A. The attacker can perform the first attack if A > N/D, and the second attack if A > W/N. Hence, we can secure against the highest possible A if we set N/D = W/N, ie. N is the geometric mean of D and W (eg. if D = 6 seconds and W = 1 hour, then N ~= 147 sec), allowing the mechanism to resist an attacker advantage up to sqrt(W/D) ~= 24.5.
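As a quick sanity check on those numbers (D and W as in the example above):

```python
import math

D = 6       # submission window, seconds
W = 3600    # delay before the output is used, seconds

N = math.sqrt(D * W)   # geometric mean equalizes the two attack bounds
print(N)               # ~146.97 s
print(N / D, W / N)    # both thresholds are ~24.49x
```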

You may ask: why not have the VDF difficulty adjustment target N to be close to W, and then have a backstop where, if the VDF is not computed by time W, a committee can approve an easier VDF solution to be used? However, this is problematic: if you are the fastest VDF producer, you can compute both the fast and slow solutions, and then choose which one is used by going offline, manipulating the randomness for free. It’s an open problem whether we can achieve full safety against attacker advantages higher than sqrt(W/D) as above.


#2

What if we use successive portions of the VDF to seed things over time? This can be a solution to (1) if the randomness is needed to seed a series of discrete events (like who is to be the block producer for a series of blocks).

Specifically, in the case of the beacon chain, the VDF can be used throughout the epoch to seed things just in time. At its most granular, seed the block-1 producer/attesters of the epoch with VDF(seed, 1 * X / EPOCH_LENGTH), the block-2 producer/attesters with VDF(seed, 2 * X / EPOCH_LENGTH), and block-N of the epoch with VDF(seed, N * X / EPOCH_LENGTH), where X is the VDF difficulty targeted to take approximately an entire epoch to compute.
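A hypothetical sketch of that schedule; the sequential hash chain below is only a stand-in for a real VDF (it has no fast verification), and EPOCH_LENGTH and X are illustrative:

```python
import hashlib

EPOCH_LENGTH = 64
X = 1_000_000  # difficulty tuned so the full chain spans about one epoch

def vdf(seed: bytes, difficulty: int) -> bytes:
    # Stand-in for a real VDF: a sequential hash chain. The output at a
    # lower difficulty is an intermediate state of the output at a higher
    # one, which is what lets seeds be "squeezed out" over time.
    out = seed
    for _ in range(difficulty):
        out = hashlib.sha256(out).digest()
    return out

def slot_seed(seed: bytes, slot: int) -> bytes:
    # Seed for the slot-N proposer/attesters, available just in time
    return vdf(seed, slot * X // EPOCH_LENGTH)
```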

This extends the sponge metaphor given by @JustinDrake. Instead of squeezing out just one source of randomness for the epoch, validators must squeeze out randomness throughout the epoch. A validator with an N=20 speedup would be able to see a bit into the future, but would be limited in how far into the epoch they can see.

This might work for proposers/attesters but would not work for shard notary committees. Those committees have to be specified well ahead of time so they can divide up and do work on their specific shard.


#3

I’m not sure D is important. As long as the entropy submissions are VRFs of some sort, like hash(sk, height) with a zero-knowledge proof, the attacker can’t start grinding the VDF until he knows the submissions of all uncooperative validators, right?

We could make D 24 hours to give plenty of time for submissions, and suppose that an attacker near the end of an epoch is capable of DoSing the k validators before them, plus any after them. Then the attacker needs A > W / (k * BlockTime).

Edit: I guess your premise was that entropy contributions can be made at any time in an epoch, and most validators would submit near the epoch start? I’m assuming submissions are interspersed throughout an epoch, e.g. by having each validator include a submission in their block header.
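For concreteness, plugging illustrative numbers into the A > W / (k * BlockTime) bound (W as in the original post; the block time is an assumption):

```python
W = 3600          # seconds until the VDF output is used
BLOCK_TIME = 6    # illustrative block interval, seconds

for k in (1, 10, 100):           # validators the attacker can DoS
    print(k, W / (k * BLOCK_TIME))  # required advantage A
# k=1 -> 600.0, k=10 -> 60.0, k=100 -> 6.0
```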


#4

the attacker can’t start grinding the VDF until he knows the submissions of all uncooperative validators, right?

Yes, but I’m assuming here that the attacker is the last one to submit a message that has the ability to influence the VDF result. So the attacker has the information needed to compute the result with or without their submission, and if they can do that quickly enough they would be able to choose the one of the two that is more favorable to them.


#5

Is the difficulty adjustment algorithm for a VDF different from the one in PoW? I imagine it should not be, since in a VDF scheme everyone submits, rather than only the fastest one as in PoW.
If we use a PoW-style difficulty adjustment algorithm for the VDF (i.e., one that limits the fastest participant), participants would start failing to reveal in time as attackers push up the difficulty. I suppose this is observable, so we could act on it in time to prevent the difficulty from climbing?


#6

The difficulty adjustment would be fairly simple: if the VDF solution is submitted earlier than expected, increase the difficulty; if it’s submitted later than expected, decrease it. The adjustment could be fairly rapid.
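A minimal sketch of that rule; the step size and its symmetric form are assumptions, not a worked-out spec:

```python
ADJUSTMENT = 0.01  # hypothetical per-observation step size

def adjust_difficulty(difficulty: float, observed: float, target: float) -> float:
    # Solution submitted earlier than expected: raise difficulty.
    # Solution submitted later than expected: lower it.
    if observed < target:
        return difficulty * (1 + ADJUSTMENT)
    if observed > target:
        return difficulty * (1 - ADJUSTMENT)
    return difficulty
```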


#7

So attackers should not be able to manipulate the difficulty unless they control a majority of the participants, right? If so, then I think the second attack would be difficult to carry out under an honest-majority assumption?


#8

It’s not about the majority of the participants, it’s about who is the fastest participant. If an attacker is the fastest participant, then they can manipulate the difficulty somewhat.


#9

It may be possible to construct memory-hard VDFs to mitigate (to some degree) ASIC acceleration. A simple one off the top of my head: take a memory-hard but “weak” password-hash function h() whose preimage (assume it is unique for the time being) can be computed in a reasonable amount of time; the VDF is then y = h^{-n}(x), and verification is x = h^n(y). Computing the hash chain in the preimage direction takes much more time than re-hashing it forward.
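Structurally, the proposal looks like the sketch below, where h and h_inverse are placeholders for the hypothetical memory-hard, efficiently-invertible hash; the modular square/square-root pair from the first post has the same shape, though it is not memory-hard:

```python
def vdf_eval(x, n, h_inverse):
    # y = h^{-n}(x): n sequential preimage computations (the slow direction)
    y = x
    for _ in range(n):
        y = h_inverse(y)
    return y

def vdf_verify(x, y, n, h):
    # check x == h^n(y): n cheap forward hash applications
    z = y
    for _ in range(n):
        z = h(z)
    return z == x
```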


#10

A few comments:
1 - The second attack is not deniable. Unlike ASICs in mining today, people will see that someone has special hardware somewhere. So perhaps N should be closer to W.
2 - If various (non-cooperating) attackers develop an ASIC, I think the system stabilizes again. Does that make sense?
3 - If I understand the threat model correctly, there needs to be only one honest party with fast hardware to secure the system. In this case, aggressively overclocking a good CPU running well-written code could make it very hard for an adversary to reach a 25-fold improvement with an ASIC. Actually, even developing in-house hardware is an option, as Siacoin did.


#11

Why use only a single VDF? If we let multiple actors each choose their own function (from a set), then as long as there is at least one function the attacker cannot break by the reveal time, he cannot manipulate the randomness.
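One hypothetical way to combine the outputs, assuming each candidate function is a deterministic map from the shared seed: XOR the hashes of all outputs, so that predicting the result requires computing every function early:

```python
import hashlib
from typing import Callable, Sequence

def combined_randomness(seed: bytes,
                        vdfs: Sequence[Callable[[bytes], bytes]]) -> bytes:
    # XOR-combine all outputs: the result stays unpredictable as long as
    # at least one function's output is unknown before the reveal time
    acc = bytes(32)
    for f in vdfs:
        digest = hashlib.sha256(f(seed)).digest()
        acc = bytes(a ^ b for a, b in zip(acc, digest))
    return acc
```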


#12

Right, the fact that an attacker would “reveal his hand” by making a DoS attack is the basis for the DoS-hardened difficulty mechanism.

Right. There needs to be at least one honest party with fast enough hardware (specifically, no more than A times slower than what an attacker can do).

This is the current plan. We want to build a state-of-the-art commodity VDF ASIC—in collaboration with Filecoin and others—to get a reasonable maximum advantage A. The ASIC would be optimised for squaring modulo a fixed 2048-bit modulus (this is the RSA setup for the Wesolowski VDF).
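The core operation such an ASIC accelerates is just repeated squaring modulo the fixed modulus. A toy sketch of the evaluation (the modulus and T below are illustrative stand-ins for the 2048-bit RSA modulus and the calibrated delay parameter, and the succinct Wesolowski proof is omitted):

```python
N = 100160063   # toy semiprime (10007 * 10009); the real one is 2048-bit
T = 1_000_000   # number of sequential squarings, i.e. the delay parameter

def vdf_eval(x: int) -> int:
    y = x % N
    for _ in range(T):
        y = y * y % N   # the one operation worth building silicon for
    return y
```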

We are considering outsourcing the hardware design and manufacturing to Obelisk (see their launchpad service), the same company that did the Sia hardware.