Thanks. I skimmed through this paper some time ago but didn’t notice the “coin flipping” part. Now that I’ve found that section, I can see that they discarded the functions proposed by Russell and Zuckerman (I can’t quite understand their reasons at first glance) and offered a few alternative choices. To analyze those alternatives, they “focus on the prominent case of analyzing the probability that the last stakeholder in the chain can choose herself again as one of the first possible stakeholders of the next round (denote this probability by µ)”, and then they calculate that µ. I don’t think this is enough. I’ll have to read and think about it.
@clesaege gives an example of a 20-out-of-100 corrupt coalition and argues that “the low influence functions can be attacked easily by a minority of participants if they can coordinate”. My impression was that one of the main advantages of these functions is that they (if designed properly) can NOT be attacked this way (or at least not easily). The 2nd paper I mentioned at the end of my original post (“Lower Bounds for Leader Election and Collective Coin-Flipping in the Perfect Information Model”) establishes bounds for protocols resilient against corrupt coalitions of linear size. Everything happens in the full (perfect)-information model: all communication is by broadcast and corrupt players are computationally unbounded. The 1st paper I posted (“Random Selection with an Adversarial Majority”) describes a protocol resistant even against dishonest-majority attacks (under certain conditions, of course). I’ll have to look more into this, but I highly doubt this idea/model would be so popular in academia if it couldn’t resist a 20-out-of-100 collusion attack (especially keeping in mind that it assumes the full-information model, under which such attacks are more than expected).
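For some intuition on why the choice of function matters here, below is a toy Monte Carlo sketch (my own illustration, not taken from either paper): it compares plain majority against recursive majority-of-3 (a Ben-Or–Linial-style low-influence construction) over 81 one-bit players, with a coalition that simply fixes its bits to 1. The coalition placement and strategy are simplistic assumptions; a real full-information adversary moves adaptively after seeing honest bits, so treat the numbers as a trend, not a bound.

```python
import random

N = 81  # 3^4 players, one broadcast bit each

def majority(bits):
    # Plain majority vote over all bits.
    return 1 if 2 * sum(bits) > len(bits) else 0

def recursive_maj3(bits):
    # Recursive majority-of-3: repeatedly fold groups of 3 until one bit remains.
    while len(bits) > 1:
        bits = [majority(bits[i:i + 3]) for i in range(0, len(bits), 3)]
    return bits[0]

def bias(f, coalition_size, trials=20000, seed=0):
    # Coalition occupies the first `coalition_size` positions and fixes 1s
    # (a naive, non-adaptive strategy -- an assumption for illustration);
    # honest players flip fair coins. Returns empirical Pr[output = 1].
    rng = random.Random(seed)
    ones = 0
    for _ in range(trials):
        bits = [1] * coalition_size + \
               [rng.randint(0, 1) for _ in range(N - coalition_size)]
        ones += f(bits)
    return ones / trials

k = 16  # ~20% corrupt, roughly the 20-out-of-100 scenario
print("plain majority     :", bias(majority, k))
print("recursive maj-of-3 :", bias(recursive_maj3, k))
```

Even with this weak fixed-bits adversary, plain majority ends up almost fully biased toward 1, while recursive majority-of-3 stays noticeably closer to fair; and 20% of 81 players is already near the coalition size these constructions are designed to tolerate, so this is roughly the hard regime, not an easy win.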
Can you maybe share where you got this info from? In every Polkadot presentation I’ve seen (even the most recent ones), they mention coin flipping as the method they’ll probably use.
I’m a big proponent of favoring consistency over availability (one of the reasons I also like Tendermint), so I’m biased here. I recently listened to a podcast where @djrtwo explains that if a partition lasts for more than 12 days, Ethereum will split forever into two (or more) separate chains (because we favor availability/liveness). I don’t like this design, but again, that’s just my opinion/preference.
This is 100% correct. However, I’m afraid that VDFs could hurt decentralization and cause some sort of “social backfire”. The former because validators will have to buy “exotic hardware” (we could see all sorts of manipulation here, especially if we end up with one or a few Bitmain-type manufacturers), and the latter is kind of hard for me to explain… A number of people I’ve spoken to about specialized VDF hardware have a bad “gut feeling” about it. Of course, we should not base any of our decisions on anyone’s gut feeling, but I think it could create some sort of dark, evil ASIC karma around our wonderful, positive Ethereum community.
Also, as I’ve said before, the whole Ethereum team (including you) is neck-deep in work on Eth 2.0, and I’m not sure we can (or should) involve ourselves in hardware manufacturing on top of that right now.