The general idea in the “Collective Coin Flipping” paper is also known as “low-influence functions”. Another paper on the topic by @iddo and others is this one. A downside of low-influence functions (e.g. see this post by @clesaege) is worsened collusion resistance.
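To make “low influence” concrete, here is a small illustration of my own (not from the paper): compare one participant’s influence over the collective bit under XOR, where the last revealer fully controls the output (the RANDAO problem), and under simple majority, where a single player only matters when the others are exactly tied, so their influence shrinks roughly like 1/sqrt(n).

```python
# Illustrative only: influence of one participant on the collective bit.
from math import comb

def influence_xor(n: int) -> float:
    # Flipping any single input always flips the XOR of n bits.
    return 1.0

def influence_majority(n: int) -> float:
    # With n odd and the other n-1 fair bits i.i.d., one player decides the
    # majority only when the rest split exactly (n-1)/2 vs (n-1)/2.
    assert n % 2 == 1
    return comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)

for n in (11, 101, 1001):
    print(n, influence_xor(n), round(influence_majority(n), 4))
# majority influence: ~0.246, ~0.08, ~0.025 -- shrinking as ~sqrt(2/(pi*n))
```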
Dfinity’s threshold scheme (which I understand Polkadot also intends to use) is quite cool. The main downside is liveness: if a significant portion of the network goes offline then the randomness beacon—and the whole blockchain!—stalls. (The more minor issues are quantum insecurity, a complex secret sharing setup, and adaptive attacks on infrequently-shuffled committees.) One of the design goals of Ethereum is to survive WW3. Even if 90% of nodes go offline we want the network to continue running.
From a consensus perspective, I think @vbuterin is relatively confident that the Ethereum 2.0 protocol can be made robust enough to run on RANDAO despite it being a “weak” source of entropy. I think the best-case scenario of running on pure RANDAO is that the relevant design parameters—honesty assumptions, committee sizes, committee thresholds, etc.—are worsened to take into account the biasability of RANDAO.
A few remarks on the VDF approach:
It is the only known approach that is simultaneously un-biasable and un-stoppable (under a minority liveness assumption).
As pointed out by @dlubarov, the VDF strictly improves on the RANDAO beacon. Even if the VDF is completely broken (i.e. outputs can be computed instantly), we fall back to the underlying RANDAO; a small sketch at the end of this list illustrates this. As such, the potential for backfiring is limited.
Part of the value of a VDF-based randomness beacon is to expose strong randomness to dApps via an opcode. Replicating this critical piece of infrastructure at L2 without L1 support (such as VDF rewards and forced RANDAO participation) is hard/impossible.
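For concreteness, here is a minimal sketch of the composition (my own illustration with made-up names, not the Eth 2.0 spec): the beacon output is the VDF applied to the RANDAO mix, so even an attacker who can evaluate the VDF instantly learns nothing more than a deterministic function of the RANDAO output.

```python
# Minimal sketch of "VDF on top of RANDAO"; names and structure are illustrative.
import hashlib
from typing import Callable

def randao_mix(reveals: list[bytes]) -> bytes:
    """Biasable RANDAO accumulator: fold the validators' revealed preimages together."""
    mix = bytes(32)
    for reveal in reveals:
        mix = hashlib.sha256(mix + reveal).digest()
    return mix

def beacon_output(reveals: list[bytes], vdf: Callable[[bytes], bytes]) -> bytes:
    """Final randomness = VDF(RANDAO mix). If `vdf` turns out to be instantly
    computable, the output is still exactly as unpredictable as plain RANDAO."""
    return vdf(randao_mix(reveals))
```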
I may not have been clear. The VDF phase works with an honest minority assumption.
According to the RANDAO analysis post by @vbuterin, if you select a large enough committee size for RANDAO, you mitigate a lot of these issues.
I think the main thing to look at here is under what circumstances the Dfinity team designed their beacon. If you read their first white paper, you will notice that the protocol can halt in the case of a 51% attack (or any attack, if I recall correctly). They also rely on the honest-majority assumption for the security of the network against 51% attacks (like most blockchains do).
In the previous replies, I might have come off as saying that the only issue with the Dfinity beacon is its susceptibility to 51% attacks. But, as far as I know, all of the most commonly used blockchains are susceptible to 51% attacks as well (and for Bitcoin, 33% attacks). Unless Ethereum implements the 99% consensus algorithm, it won’t be free from 51% attacks in the short term.
Essentially, what this comes down to is what tradeoffs the Ethereum protocol designers want for the beacon and the network. BLS signatures might be expensive to do on-chain, etc., but they are more proven in the wild than other beacons out there. Then again, maybe the protocol designers want to be able to have the beacon run on-chain.
All in all, these questions are very important to answer so that we can come up with a beacon that works for Ethereum.
Thanks, I skimmed through this paper some time ago but didn’t notice the “coin flipping” part. Now that I’ve found that section, I can see that they discarded the functions proposed by Russell and Zuckerman (I can’t quite understand their reasons at first glance) and tried to offer a few alternative choices. To analyze those alternatives, they “focus on the prominent case of analyzing the probability that the last stakeholder in the chain can choose herself again as one of the first possible stakeholders of the next round (denote this probability by µ)”, and then calculate that µ. I don’t think this is enough. I will have to read and think about it.
@clesaege gives an example of a 20-out-of-100 corrupt coalition and argues that “the low influence functions can be attacked easily by a minority of participants if they can coordinate”. My impression was that one of the main advantages of these functions is that they (if designed properly) can NOT be attacked this way (or at least not easily). The 2nd paper I mentioned at the end of my original post (“Lower Bounds for Leader Election and Collective Coin-Flipping in the Perfect Information Model”) defines bounds for protocols resilient against corrupt coalitions of linear size. Everything happens in the full-(perfect-)information model (all communication is by broadcast and corrupt players are computationally unbounded). The 1st paper I posted (“Random Selection with an Adversarial Majority”) describes a protocol resistant even against dishonest-majority attacks (under certain conditions, of course). I’ll have to look more into this, but I highly doubt that this would be such a popular idea/model in academia if it couldn’t resist a 20-out-of-100 collusion attack (especially bearing in mind that it assumes the full-information model, under which such attacks are more than expected).
Can you maybe share where you got this info from? In every Polkadot presentation I’ve seen (even the most recent ones), they mention coin flipping as the method they’ll probably use.
I’m a big proponent of favoring consistency over availability (one of the reasons I also like Tendermint), so I’m biased here. I recently listened to a podcast where @djrtwo explains that if a partition lasts for more than 12 days, Ethereum will split forever into two (or more) separate chains (because we favor availability/liveness). I don’t like this design, but again, that’s just my opinion/preference.
This is 100% correct. However, I’m afraid that VDFs could hurt decentralization and cause some sort of “social backfire”. The former because validators will have to buy “exotic hardware” (we could have all sorts of manipulations here, especially if we end up with one or a few Bitmain-type manufacturers), and the latter is kind of hard for me to explain… A number of people I’ve spoken to about specialized VDF hardware have a bad “gut feeling” about it. Of course, we should not base any of our decisions on anyone’s gut feeling, but I think it could create some sort of dark, evil ASIC-karma around our wonderful, positive Ethereum community.
Also, as I’ve said before, the whole Ethereum team (including you) is neck-deep in work on Eth 2.0, and I’m not sure whether we can or should involve ourselves in hardware manufacturing as well right now.
I wonder why you’re asking that; I just quoted and addressed a few things that @JustinDrake mentioned in his comment (which is right above your previous comment). Or maybe I’m going mad and seeing comments that aren’t there?
It was a private discussion with Fredrik Harrysson (Parity CTO) early July in Berlin. It sounds like plans have changed since then.
This is incorrect. Validators do not have to be VDF evaluators. Also, VDF evaluators do not need to be validators. That is, the two roles are completely decoupled.
VDF ASICs are radically different to mining ASICs in pretty much all respects. I hope education will allow the community to be well informed regardless of bad feelings about mining ASICs. A few points worth mentioning:
Sequential vs parallel: VDFs are inherently sequential, i.e. massive parallelism does not help at all (the exact opposite of PoW mining). @djrtwo wrote a post about this. To give you an idea, the optimal parallel-time circuit for modular multiplication (the basis for the RSA-based VDF we are considering) likely only uses a few mm^2 of die area (e.g. 3 mm^2). A toy sketch of the sequential squaring loop is at the end of this post.
Power consumption: Current estimates suggest that a VDF evaluation would consume about 10 Watts. Assuming we have 10,000 VDF evaluations at any point in time (I am expecting only ~1,000, but let’s be conservative) that would amount to 0.1 MW. Compare this to today’s Ethereum mining which is about 2.3 GW (23,000 times more power intensive).
Security margins: From the point of view of the VDF-based randomness beacon, the protocol can bake in a conservative security margin in terms of the speed advantage an attacker can have without getting any influence over the randomness. This is the A_{max} parameter defined here, and it will be carefully chosen.
In-protocol rewards: For the VDF-based randomness beacon to function smoothly, I estimate in-protocol rewards to be ~$5K per day. Over a decade that corresponds to $18.5m. This is too low for a rational actor to build a somewhat faster proprietary ASIC to grab the in-protocol rewards. This is especially true if a ~$20m state-of-the-art commodity VDF ASIC is built in the first place. (In terms of the upfront R&D costs, the Ethereum Foundation is looking to pool funds with Filecoin and others.)
There is no rush to deliver the hardware. The Ethereum 2.0 roadmap (notably the first few phases) is unchanged with regards to the readiness of VDF hardware. Phase 0 (the beacon chain) is the short-term priority, and that will go ahead without any VDF. The VDF upgrade can come in phase 1 (sharding data layer), phase 2 (sharding state layer), or even later phases.
As for hardware manufacturing, this will be outsourced. We are in discussions, for example, with Obelisk Launchpad, which can deliver a turn-key solution.
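To make the sequentiality point concrete, here is a toy sketch (my own, with purely illustrative parameters) of the core operation of an RSA-based VDF: T sequential modular squarings. Each squaring depends on the previous result, so extra chips don’t help; only a faster single squaring circuit does. The power back-of-envelope from above is included as a sanity check.

```python
# Toy illustration only: real deployments use a ~2048-bit modulus of unknown
# factorisation, T in the billions, and produce a succinct proof alongside y.

def vdf_evaluate(x: int, T: int, N: int) -> int:
    """Compute x^(2^T) mod N by T sequential squarings (inherently serial)."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N   # each step needs the previous result; no parallel shortcut known without the factorisation of N
    return y

print(vdf_evaluate(5, 1000, 3233))   # 3233 = 61 * 53, for illustration only

# Power back-of-envelope from the post above:
print(10 * 10_000 / 1e6, "MW")       # 10 W per evaluator x 10,000 evaluators = 0.1 MW
print(2.3e9 / (10 * 10_000))         # ~23,000x less than ~2.3 GW of Ethereum PoW mining
```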
I would disagree. If the network goes offline, the randomness beacon temporarily goes down and then works again. Threshold-signature communication is asynchronous: you wait until you get enough signature shares and then combine them into the signature. If there is a split, you wait it out and then operate again. You do not want to do anything during a major split - it is bad and can screw things up completely. The entire purpose of asynchronous protocols is to gracefully survive network splits - from this perspective, the BLS-based common coin is the best common coin ever created!
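To illustrate the “collect enough shares, then combine” step, here is a simplified sketch. It reconstructs a Shamir-shared value over a prime field; the actual BLS threshold scheme applies the same Lagrange coefficients to signature shares in a pairing group, but the t-of-n logic is the same: shares never expire, so nodes can simply wait out a partition until any t of them have arrived.

```python
# Simplified stand-in for BLS share combination: Lagrange interpolation at zero.
P = 2**255 - 19  # an illustrative prime modulus

def lagrange_coeff_at_zero(i: int, indices: list[int]) -> int:
    num, den = 1, 1
    for j in indices:
        if j != i:
            num = (num * -j) % P
            den = (den * (i - j)) % P
    return num * pow(den, -1, P) % P

def combine(shares: dict[int, int], t: int) -> int:
    """Given at least t shares {index: value}, recover the group output.
    Until t shares have arrived, we simply keep waiting (asynchrony)."""
    assert len(shares) >= t, "still waiting for more shares"
    indices = list(shares)[:t]
    return sum(shares[i] * lagrange_coeff_at_zero(i, indices) for i in indices) % P

# Demo: shares of f(x) = 7 + 3x at x = 1, 2, 3; any 2 of them recover f(0) = 7.
print(combine({1: 10, 2: 13, 3: 16}, 2))
```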
Dfinity has a 2/3 honesty assumption (same as Ethereum 2.0 for notarisation) and targets a failure probability \rho=2^{-40}. (See section 4.2 “Threat Model” of their consensus whitepaper. When \beta=3 and \log_2\rho=-40 they get a minimum committee size of 423.)
To reach their failure probability for liveness (which they call “availability”) Dfinity assumes that 100% of their honest nodes always eventually come back online. In effect, Dfinity has a hidden “100% eventual liveness” assumption.
In the context of WW3 (for which we want Ethereum 2.0 to survive) let’s assume that just 15% of the honest nodes are permanently offline (e.g. an atomic bomb wipes out Palo Alto). Now only 56.66% of the network is both honest and live and the failure probability goes above 2^{-9}. Depending on how fast committees are refreshed, I expect it would take just a few days to get a bad committee that can stall the network permanently.
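For anyone who wants to check these numbers, here is a back-of-the-envelope sketch. It models each committee seat as an i.i.d. draw (a simplification of Dfinity’s hypergeometric analysis) and assumes a signing threshold just above n/2, which is the regime the figures above correspond to.

```python
# Rough sanity check of the committee-stall probabilities quoted above.
from math import comb

def stall_probability(committee_size: int, threshold: int, p_good: float) -> float:
    """P[fewer than `threshold` members are honest *and* online], i.e. the chance
    a single committee cannot produce the threshold signature."""
    return sum(comb(committee_size, k) * p_good**k * (1 - p_good)**(committee_size - k)
               for k in range(threshold))

n = 423            # Dfinity's minimum committee size for beta = 3, rho = 2^-40
t = n // 2 + 1     # assumed signing threshold (> n/2), i.e. 212

print(stall_probability(n, t, 2/3))          # tiny, on the order of the 2^-40 design target (all honest nodes online)
print(stall_probability(n, t, 2/3 * 0.85))   # ~0.003, i.e. above 2^-9 (15% of honest nodes permanently offline)
```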
(Side note: I’m a fan of Dfinity and believe their blockchain will work well in the optimistic “no long-lasting catastrophes” setting.)
@JustinDrake thanks for taking time to address my concerns, I know you’re super-busy, I really appreciate it!
Can you elaborate on this, please? How would that work? My understanding was that every validator will have to own a VDF ASIC, just like miners own rigs today.
Yep, this is really positive, leaves us with a lot of time to think everything through.
You’re probably referring to this analysis, and the Proof-of-Activity-like block notarization? IMO, this stacking of processes one on top of another might not be the best choice, because it adds complexity and expands the attack surface. If I got this all right, in order to create a random value, we will have the RANDAO process + Proof-of-Activity notarization on top of it + VDF calculation on top of it? Hmm…
I’ve read the paper; there is no mention of a 51% attack (I’m still struggling to find/understand what this would actually look like). The paper does mention that the beacon will pause if the network splits into two halves of more or less the same size (probably caused by a partition or a software bug), and automatically resume once the network reconnects (“Consistency vs availability” section, page 2). I personally really like this feature/approach (favoring consistency over availability), but that’s just me.
One thing should be clarified. This is true, but it is not an inherent flaw of distributed randomness beacon (DRB) schemes that use threshold signatures. It happens because Dfinity adopts a particular random-sampling strategy, and the reason Dfinity chose such a scheme is the scalability and the permissioned setting of this kind of DRB. Given the sharding nature of Ethereum 2.0, I don’t think Ethereum would need an additional random-sampling subroutine if a threshold-DRB scheme were embedded into it.
(Side note: VDFs are a cool thing, and they can indeed strengthen the randomness beacon significantly.)