"Collective Coin Flipping" CSPRNG

The main issue with using a beacon based on threshold cryptography is that such a beacon is susceptible to a 51% attack. In addition to that, BLS signatures are not quantum resistant due to their reliance on elliptic curve pairings.


I thought we have a 1/2 (or even 2/3) honesty assumption in Ethereum anyway? Do you know what could happen in the case of a 51% attack? I guess it still wouldn’t be easy/possible to do the things that are possible in 51% attacks on PoW chains?

I believe this will only become a valid concern 10-20 years from now?

For a random beacon, we want only an honest minority assumption. With threshold cryptography as a random beacon, an attacker that controls a majority can very easily influence the outcome of the beacon.
That is why, in the current design that @JustinDrake is considering, we are using VDFs. There is also an alternate design by @JustinDrake called the leaderless k-of-n beacon (Leaderless k-of-n random beacon) that looks very similar to the Dfinity random beacon, which uses threshold cryptography. The issue with this design is that it would need quantum-resistant cryptography in order to have the same properties as the Dfinity beacon.

The goal is to find a solution that works in the long term. We are currently seeing quantum computers come into existence (even though they can’t do much yet). So we still need to design a beacon that can last more than 20 years.


Thanks for the great responses on this thread.

I’m confused about the statement “an attacker can very easily influence the outcome of the beacon”. As I understand it (at least in Dfinity), the random number is a sequence, i.e. you sign the last random number and the hash of the signature is the new random number. If an attacker owned all the signature shares, isn’t it true that they could only predict future randomness rather than influence the outcome? I guess, if the group signature could be changed by aggregating different individual shares, then it could be influenced, but I’ve not looked into whether that is the case.
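
For concreteness, here is a rough sketch of the chained construction I’m describing (Python, with `threshold_sign` as a placeholder for a real t-of-n BLS threshold-signing procedure):

```python
# Toy sketch of the chained beacon: round r's output is the hash of a
# threshold signature over round r-1's output.
import hashlib

def next_output(prev_output: bytes, threshold_sign) -> bytes:
    # BLS group signatures are deterministic, so whichever qualifying subset of
    # shares is combined, `sig` is the same value.
    sig = threshold_sign(prev_output)
    return hashlib.sha256(sig).digest()  # hash of the signature is the new output

# Stand-in for the group's signing step (a real implementation would combine
# t-of-n BLS signature shares):
fake_threshold_sign = lambda msg: hashlib.sha256(b"group-key" + msg).digest()

out = hashlib.sha256(b"genesis").digest()
for _ in range(3):
    out = next_output(out, fake_threshold_sign)
print(out.hex())
```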

Have I understood this correctly?


I think you’re right.

In my head, the word influence has several meanings. I guess the meaning that I wrote in that post is not completely accurate.

However, if the attacker did own a significant portion of the signature shares (>50%), I think they would still have real power over the beacon: they can compute the next random number (and every one after it) on their own, and they can decide whether or not to let the chain advance.

BLS-based threshold signatures provide a very efficient construction for a common coin. It is secure and fully asynchronous.

Note that you cannot just say that a particular cryptosystem is secure without reference to a particular security definition. BLS-based threshold signatures are efficient and fully asynchronous, but as explained above they are susceptible to 51% attacks. That susceptibility is why @vbuterin and @JustinDrake are not using them for the randomness beacon in Ethereum.


Can you explain how the algorithms above are not susceptible to 51% attacks? I thought that if more than half of your network are bad guys, you are always screwed and math does not help…


Do you mean the RANDAO+VDF scheme that is being considered? It turns out that only one party needs to properly compute the VDF on the value generated by RANDAO, so an honest minority/single honest party assumption is used. A malicious actor can then only attempt to manipulate the beacon by refusing to reveal the output. But I think there are issues before the VDF phase of the beacon, namely in the RANDAO phase: a lot of the issues that RANDAO has are still present, and I think an adversary can attempt to mess up the RANDAO phase before the VDF phase.
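
To make the two phases concrete, here is a toy sketch (the “VDF” below is just a sequential hash chain standing in for a real VDF; a real VDF would also come with a succinct proof so that everyone else can verify the output quickly):

```python
# Toy sketch of the RANDAO phase followed by the VDF phase.
import hashlib

def randao_output(reveals: list[bytes]) -> bytes:
    # RANDAO phase: XOR together (hashes of) all revealed preimages.
    acc = bytes(32)
    for r in reveals:
        acc = bytes(a ^ b for a, b in zip(acc, hashlib.sha256(r).digest()))
    return acc

def toy_vdf(seed: bytes, iterations: int) -> bytes:
    # VDF phase: inherently sequential work on the RANDAO seed. Only one honest
    # party needs to run this for the final output to be produced.
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

reveals = [b"validator-1 reveal", b"validator-2 reveal", b"validator-3 reveal"]
print(toy_vdf(randao_output(reveals), 10_000).hex())
```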

So, I think the Ethereum Foundation thinks that RANDAO+VDF has much better tradeoffs than beacons based on threshold cryptography.

In my humble opinion, the use of VDFs, even though there are bounds on how much you can speed the computation up with ASICs, feels like a step backwards. There has been so much innovation in getting PoS to where it is now, just for us to add a component that could potentially backfire. I think VDFs are this really cool and interesting cryptographic primitive but I wouldn’t want VDFs to be a part of the protocol.

Are you sure RANDAO is safe with an honest minority assumption? If we have, e.g., a 1M-node network with a malicious majority, they can refuse to reveal the majority of the “XORing inputs”, i.e. choose to reveal them one by one, trying to get the value that suits them best (they’ll have 500k+ tries)?
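
To illustrate the concern (a simplified model with made-up helper names): an attacker controlling the last k unrevealed inputs can grind over which of them to publish, giving up to 2^k candidate outputs to choose from:

```python
# Simplified model of the withholding/grinding attack sketched above.
import hashlib
from itertools import product

def mix(acc: bytes, reveal: bytes) -> bytes:
    # XOR a (hashed) reveal into the running RANDAO accumulator.
    return bytes(a ^ b for a, b in zip(acc, hashlib.sha256(reveal).digest()))

def attacker_choices(honest_acc: bytes, attacker_reveals: list[bytes]) -> set[bytes]:
    # Enumerate every publish/withhold decision the attacker could make.
    outputs = set()
    for mask in product([False, True], repeat=len(attacker_reveals)):
        acc = honest_acc
        for publish, r in zip(mask, attacker_reveals):
            if publish:
                acc = mix(acc, r)
        outputs.add(acc)
    return outputs

honest_acc = hashlib.sha256(b"honest contributions so far").digest()
print(len(attacker_choices(honest_acc, [b"a1", b"a2", b"a3"])))  # up to 2^3 = 8 candidates
```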

Exactly.

Wrote about that above, glad you agree with me. :slight_smile:

I would still really like to know what a 51% attack on a Dfinity/threshold-cryptography-style random value generation process would look like, and what the consequences would be. I looked online and couldn’t find anything. Anyone?

The general idea in the “Collective Coin Flipping” paper is also known as “low-influence functions”. Another paper on the topic by @iddo and others is this one. A downside of low-influence functions (e.g. see this post by @clesaege) is worsened collusion resistance.

Dfinity’s threshold scheme (which I understand Polkadot also intends to use) is quite cool. The main downside is liveness: if a significant portion of the network goes offline then the randomness beacon—and the whole blockchain!—stalls. (The more minor issues are quantum insecurity, a complex secret sharing setup, and adaptive attacks on infrequently-shuffled committees.) One of the design goals of Ethereum is to survive WW3. Even if 90% of nodes go offline we want the network to continue running.

From a consensus perspective, I think @vbuterin is relatively confident that the Ethereum 2.0 protocol can be made robust enough to run on RANDAO despite it being a “weak” source of entropy. I think the best-case scenario of running on pure RANDAO is that the relevant design parameters—honesty assumptions, committee sizes, committee thresholds, etc.—are worsened to take into account the biasability of RANDAO.

A few remarks on the VDF approach:

  1. It is the only known approach that is simultaneously un-biasable and un-stoppable (under a minority liveness assumption).
  2. As pointed out by @dlubarov, the VDF strictly improves the RANDAO beacon. Even if the VDF is completely broken (i.e. outputs can be computed instantly) we fall back to the underlying RANDAO. As such, the potential for backfiring is limited.
  3. Part of the value of a VDF-based randomness beacon is to expose strong randomness to dApps via an opcode. Replicating this critical piece of infrastructure at L2 without L1 support (such as VDF rewards and forced RANDAO participation) is hard/impossible.

I may not have been clear. The VDF phase works with an honest minority assumption.

According to the RANDAO analysis post by @vbuterin, if you select a large enough committee size for RANDAO, you mitigate a lot of these issues.

I think the main thing to look at here is the circumstances in which the Dfinity team designed their beacon. If you do some reading in their first white paper, you will notice that the protocol can halt in the case of a 51% attack (or any attack, if I recall correctly). Also, they rely on an honest majority assumption for the security of the network against 51% attacks (like most blockchains do).

In the previous replies, I might have come off as saying that the only issue with the Dfinity beacon is its susceptibility to 51% attacks. But, as far as I know, all of the most commonly used blockchains are susceptible to 51% attacks as well (and, for Bitcoin, 33% attacks). Unless Ethereum implements the 99% fault tolerant consensus algorithm, in the short term it won’t be free from 51% attacks.

Essentially, what this comes down to is: what tradeoffs do the Ethereum protocol designers want for the beacon and the network? BLS signatures might be expensive to do on-chain, etc., but they are more proven in the wild than other beacons out there. Then again, maybe the protocol designers want to be able to have the beacon run on-chain, etc.

All in all, these questions are very important to answer so that we can come up with a beacon that works for Ethereum.

Thanks, I skimmed through this paper some time ago but didn’t notice the “coin flipping” part. Now that I’ve found that section, I can see that they discarded the functions proposed by Russell and Zuckerman (I can’t quite understand their reasons at first glance) and tried to offer a few alternative choices. To analyze those alternatives, they “focus on the prominent case of analyzing the probability that the last stakeholder in the chain can choose herself again as one of the first possible stakeholders of the next round (denote this probability by µ)”, and then calculate that µ. I don’t think this is enough. I will have to read and think about it.

@clesaege gives an example of a 20-out-of-100 corrupt coalition and argues that “the low influence functions can be attacked easily by a minority of participants if they can coordinate”. My impression was that one of the main advantages of these functions is that they (if designed properly) can NOT be attacked this way (or at least not easily). :slight_smile: The 2nd paper I mentioned at the end of my original post (“Lower Bounds for Leader Election and Collective Coin-Flipping in the Perfect Information Model”) defines bounds for protocols resilient against corrupt coalitions of linear size. Everything happens in the full (perfect)-information model (all communication is by broadcast and corrupt players are computationally unbounded). The 1st paper I posted (“Random Selection with an Adversarial Majority”) describes a protocol resistant even to dishonest-majority attacks (under certain conditions, of course). I’ll have to look more into this, but I highly doubt this would be such a popular idea/model in academia if it couldn’t resist a 20-out-of-100 collusion attack (especially bearing in mind that it assumes the full-information model, under which such attacks are more than expected).

Can you maybe share where you got this info from? In every Polkadot presentation I’ve seen (even the most recent ones), they mention coin flipping as the method they’ll probably use?

I’m a big proponent of favoring consistency over availability (one of the reasons I also like Tendermint), so I’m biased here. :slight_smile: I recently listened to a podcast where @djrtwo explains that if a partition lasts for more than 12 days, Ethereum will split forever into two (or more) separate chains (because we favor availability/liveness). I don’t like this design, but again, that’s just my opinion/preference. :slight_smile:

This is 100% correct. However, I’m afraid that VDFs could hurt decentralization and cause some sort of “social backfire”. The former because validators will have to buy “exotic hardware” (we could have all sorts of manipulations here, especially if we have one or a few Bitmain-type manufacturers), and the latter is kind of hard for me to explain… :slight_smile: A number of people I’ve spoken to about specialized VDF hardware have a bad “gut feeling” about it. Of course, we should not base any of our decisions on anyone’s gut feeling :slight_smile: but I think it could create some sort of dark, evil ASIC-karma around our wonderful, positive Ethereum community. :slight_smile:

Also, as I’ve said before, the whole Ethereum team (including you) is neck-deep in work on Eth 2.0, and I’m not sure whether we can/should involve ourselves in hardware manufacturing as well right now?

Did you mean to post this in a different post?

I wonder why you’re asking that, I just quoted and addressed a few things that @JustinDrake mentioned in his comment (which is right above your previous comment)? Or maybe I’m going mad and seeing comments which aren’t there? :see_no_evil::joy:

It was a private discussion with Fredrik Harrysson (Parity CTO) early July in Berlin. It sounds like plans have changed since then.

This is incorrect. Validators do not have to be VDF evaluators. Also, VDF evaluators do not need to be validators. That is, the two roles are completely decoupled.

VDF ASICs are radically different to mining ASICs in pretty much all respects. I hope education will allow the community to be well informed regardless of bad feelings about mining ASICs. A few points worth mentioning:

  1. Sequential vs parallel: VDFs are inherently sequential, i.e. massive parallelism does not help at all (the exact opposite to PoW mining). @djrtwo wrote a post about this. To give you an idea, the optimal parallel time circuit for modular multiplication (the basis for the RSA-based VDF we are considering) likely only uses a few mm^2 of die area (e.g. 3 mm^2).
  2. Power consumption: Current estimates suggest that a VDF evaluation would consume about 10 Watts. Assuming we have 10,000 VDF evaluations at any point in time (I am expecting only ~1,000, but let’s be conservative) that would amount to 0.1 MW. Compare this to today’s Ethereum mining, which is about 2.3 GW (23,000 times more power intensive). (See the back-of-the-envelope check after this list.)
  3. Security margins: From the point of view of the VDF-based randomness beacon, the protocol can bake in a conservative security margin in terms of the speed advantage an attacker can have without getting any influence over the randomness. This is the A_{max} parameter defined here, and it will be carefully chosen.
  4. In-protocol rewards: For the VDF-based randomness beacon to function smoothly, I estimate in-protocol rewards to be ~$5K per day. Over a decade that corresponds to $18.5m. This is too low for a rational actor to build a somewhat faster proprietary ASIC to grab the in-protocol rewards. This is especially true if a ~$20m state-of-the-art commodity VDF ASIC is built in the first place. (In terms of the upfront R&D costs, the Ethereum Foundation is looking to pool funds with Filecoin and others.)
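
A quick back-of-the-envelope check of the power and reward figures above (inputs taken directly from the list; the reward total is rounded):

```python
# Power: 10 W per evaluation, 10,000 concurrent evaluations.
vdf_power_mw = 10 * 10_000 / 1e6
print(vdf_power_mw)                   # 0.1 MW

# Comparison with today's Ethereum mining (~2.3 GW).
print(2.3e9 / (vdf_power_mw * 1e6))   # ~23,000x more power intensive

# Rewards: ~$5K per day over a decade.
print(5_000 * 365 * 10 / 1e6)         # ~$18m
```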

There is no rush to deliver the hardware. The Ethereum 2.0 roadmap (notably the first few phases) is unchanged with regards to the readiness of VDF hardware. Phase 0 (the beacon chain) is the short-term priority, and that will go ahead without any VDF. The VDF upgrade can come in phase 1 (sharding data layer), phase 2 (sharding state layer), or even later phases.

In regards to hardware manufacturing, this will be outsourced. We are in discussions, for example, with Obelisk Launchpad, which can deliver a turn-key solution.


I would disagree. If the network goes offline, the randomness beacon temporarily goes down and then works again. Threshold signature communication is asynchronous: you wait until you get enough signature shares and then combine them into the signature. If there is a split, you wait it out and then operate again. You do not want to do anything during a major split - it is bad and can screw things up completely. The entire purpose of asynchronous protocols is to gracefully survive network splits - from this perspective, the BLS-based common coin is the best common coin ever created!
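
For reference, a toy sketch of the “wait for enough shares, then combine” flow described above (`combine_shares` is a placeholder for real BLS threshold-share combination, e.g. Lagrange interpolation in the exponent):

```python
# Shares may arrive in any order, after arbitrary delays; no timeouts are needed.
def collect_and_combine(incoming_shares, threshold: int, combine_shares):
    collected = []
    for share in incoming_shares:
        collected.append(share)
        if len(collected) >= threshold:      # enough shares: reconstruct the group signature
            return combine_shares(collected)
    return None                              # not enough shares yet; keep waiting
```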


Dfinity has a 2/3 honesty assumption (same as Ethereum 2.0 for notarisation) and targets a failure probability \rho=2^{-40}. (See section 4.2 “Threat Model” of their consensus whitepaper. When \beta=3 and \log_2\rho=-40 they get a minimum committee size of 423.)

To reach their failure probability for liveness (which they call “availability”) Dfinity assumes that 100% of their honest nodes always eventually come back online. In effect, Dfinity has a hidden “100% eventual liveness” assumption.

In the context of WW3 (for which we want Ethereum 2.0 to survive) let’s assume that just 15% of the honest nodes are permanently offline (e.g. an atomic bomb wipes out Palo Alto). Now only 56.66% of the network is both honest and live and the failure probability goes above 2^{-9}. Depending on how fast committees are refreshed, I expect it would take just a few days to get a bad committee that can stall the network permanently.
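
A rough numerical sketch of this (assuming “failure” means that fewer than half of the 423-member committee is both honest and live; Dfinity’s exact threshold may differ, but the orders of magnitude line up):

```python
# Binomial tail: probability that a randomly sampled committee of size n has
# fewer honest-and-live members than the assumed threshold of n/2 + 1.
from math import comb, log2

def failure_probability(n: int, p_honest_live: float) -> float:
    threshold = n // 2 + 1  # need at least this many honest, live members
    return sum(comb(n, k) * p_honest_live ** k * (1 - p_honest_live) ** (n - k)
               for k in range(threshold))

print(log2(failure_probability(423, 2 / 3)))   # roughly -40 (the targeted rho)
print(log2(failure_probability(423, 0.5666)))  # roughly -8, i.e. above 2^-9
```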

(Side note: I’m a fan of Dfinity and believe their blockchain will work well in the optimistic “no long-lasting catastrophes” setting.)

@JustinDrake thanks for taking time to address my concerns, I know you’re super-busy, I really appreciate it! :slight_smile:

Can you elaborate on this please, how would that work? My understanding was that every validator would have to own a VDF ASIC, just like miners own rigs today? :see_no_evil:

Yep, this is really positive, leaves us with a lot of time to think everything through. :slight_smile:

You’re probably referring to this analysis, and the Proof-of-Activity-like block notarization? IMO, this stacking of processes one on top of another might not be the best choice, because it adds complexity and expands the attack surface. If I got this all right, in order to create a random value, we will have the RANDAO process + Proof-of-Activity notarization on top of it + VDF calculation on top of it? Hmm…

I’ve read the paper; there is no mention of a 51% attack (I’m still struggling to find/understand what this would actually look like). The paper does mention that the beacon will pause if the network splits into two halves of more or less the same size (probably caused by a partition or a software bug), and automatically resume once the network reconnects (“Consistency vs availability” section, page 2). I personally really like this feature/approach (favoring consistency over availability), but it’s just me. :slight_smile:

One thing must be clarified. This is true, but it is not an inherent flaw of distributed randomness beacon (DRB) schemes that use threshold signatures. It happens because Dfinity adopts this particular random sampling strategy, and the reason Dfinity chose such a scheme is the scalability and permissioned setting of this kind of DRB. Given the sharded nature of Ethereum 2.0, I don’t think Ethereum would need an additional random sampling subroutine if a threshold-DRB scheme were embedded into it.

(Side note: VDF is a cool thing and indeed it can strengthen the randomness beacon significantly.)