In the family of VDFs we are considering there are at least 4 setups which could work in practice. Two are fully trustless (class groups, nothing-up-my-sleeve RSA moduli), one is quasi-trustless (RSA MPC), and one is trusted-in-theory-but-ok-in-practice (RSA-2048). IMO the best-case scenario is for the RSA MPC to be feasible.
(As a side note, the modulus is always public. It’s the factorisation of the modulus that should be secret.)
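To make the side note concrete, here is a toy Python sketch (tiny parameters, purely illustrative) of why the factorisation is the trapdoor: anyone can evaluate the repeated-squaring VDF with the public modulus, but whoever knows p and q can shortcut the sequential work via Euler's theorem.

```python
# Toy illustration (tiny primes, not a real setup): the modulus N is public,
# but whoever knows the factorisation can shortcut the sequential squarings.

p, q = 1009, 1013            # toy primes; a real modulus is ~2048 bits
N = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # secret: requires knowing the factorisation

x, T = 5, 100_000            # challenge and delay parameter

def eval_sequential(x: int, T: int, N: int) -> int:
    """Honest evaluation: T sequential modular squarings."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def eval_with_trapdoor(x: int, T: int, N: int, phi: int) -> int:
    """Trapdoor evaluation: reduce the exponent 2^T modulo phi(N) first."""
    return pow(x, pow(2, T, phi), N)

assert eval_sequential(x, T, N) == eval_with_trapdoor(x, T, N, phi)
```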
Note that the ceremony for an RSA modulus uses different crypto to the Powers of Tau ceremony.
As indicated by @denett, this does not work because the sampling process weakens your honesty assumption. (Dfinity’s sampling weakens the global 2/3 honesty to 1/2 local honesty.) Sampling is required because the Distributed Key Generation (DKG) scales quadratically with the number of participants, and in practice you can’t get much more than 1,000 participants.
Another thing to consider is that there are two ways in which the Dfinity beacon can fail. Citing the whitepaper: “We treat the two failures (predicting and aborting) equally”. By improving liveness you make the randomness beacon easier to predict.
Finally, the RANDAO + VDF approach allows for arbitrarily low liveness assumptions. (For example, we could have a 1% liveness assumption by making the RANDAO epoch longer.)
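As a back-of-the-envelope illustration (the independence assumption below is mine, not from any spec): if each slot's contributor is live and honest with probability p, a single honest contribution per epoch is enough, and the chance of a fully dishonest/offline epoch shrinks exponentially with the epoch length. That is why stretching the RANDAO epoch lets the liveness assumption drop as low as 1%.

```python
# Back-of-the-envelope sketch (independence assumption is mine): probability
# that at least one contributor in a RANDAO epoch is live and honest, as a
# function of the per-slot probability p and the epoch length in slots.

def prob_honest_contribution(p: float, slots_per_epoch: int) -> float:
    return 1 - (1 - p) ** slots_per_epoch

# Even with only a 1% liveness assumption, a long enough epoch makes a fully
# biased/empty epoch vanishingly unlikely.
for slots in (64, 256, 1024, 4096):
    print(f"{slots:5d} slots -> {prob_honest_contribution(0.01, slots):.4f}")
```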
In practice the actual randomness (as returned by the randomness opcode) will be a 32-byte hash of the VDF output. As such, the VDF outputs and the evaluation proofs are just “witnesses” and do not need to be part of the beacon state. As for the VDF output hashes, it may make sense to store the last n (e.g. n = 1024) in the state and push the rest to a double-batched Merkle accumulator.
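A minimal sketch of that bookkeeping, assuming a hypothetical `RandomnessState` container and an arbitrary batch size (the stand-in accumulator below does not reproduce the actual double-batched Merkle accumulator construction):

```python
import hashlib
from collections import deque

def H(*chunks: bytes) -> bytes:
    return hashlib.sha256(b"".join(chunks)).digest()

class RandomnessState:
    """Keep the last n VDF output hashes in state; archive older ones in batches."""

    def __init__(self, n: int = 1024, batch_size: int = 256):
        self.window = deque(maxlen=n)   # last n output hashes, kept in the beacon state
        self.pending = []               # hashes that have rolled out of the window
        self.archived_roots = []        # stand-in for the double-batched accumulator
        self.batch_size = batch_size

    def add_vdf_output(self, vdf_output: bytes) -> bytes:
        r = H(vdf_output)               # the 32-byte hash the randomness opcode exposes
        if len(self.window) == self.window.maxlen:
            self.pending.append(self.window[0])   # oldest hash falls out of the window
        self.window.append(r)
        if len(self.pending) >= self.batch_size:
            self.archived_roots.append(H(*self.pending))
            self.pending.clear()
        return r
```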
The VDF output r_i should get included by epoch i + A_{max} + 1. From the point of view of the application layer, the randomness opcode will return 0x0 until r_i is included onchain. So any delay should be handled by the application.
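Something like the following captures the application-facing contract (the function and parameter names are hypothetical, purely to illustrate the “return 0x0 until included” behaviour):

```python
# Hypothetical sketch of the opcode semantics described above: return the
# 32-byte randomness for an epoch, or 0x0 while r_i is still pending.

ZERO32 = b"\x00" * 32

def randomness_opcode(included_outputs: dict, epoch: int) -> bytes:
    return included_outputs.get(epoch, ZERO32)

# A consuming application checks for ZERO32 and retries later instead of
# assuming the randomness is always available immediately.
```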
From the point of view of RANDAO using the r_i, we can set N to be conservative, e.g. N = A_{max} + 3. In case of a catastrophic failure we need the spec to specify some sort of behaviour. My gut feeling is that gracefully falling back to RANDAO is perfectly OK.
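A rough sketch of that fallback behaviour, with illustrative constants (A_{max} = 5 is arbitrary here, and the mixing function is just a hash, not the spec’d one):

```python
import hashlib

A_MAX = 5            # assumed maximum VDF delay A_{max}, in epochs (illustrative)
N = A_MAX + 3        # conservative lag before the VDF output is needed

def seed_for_epoch(randao_mix: bytes, vdf_outputs: dict, epoch: int) -> bytes:
    r_i = vdf_outputs.get(epoch - N)
    if r_i is None:
        # Catastrophic VDF failure: gracefully fall back to plain RANDAO.
        return randao_mix
    # Normal case: fold the VDF output into the RANDAO mix.
    return hashlib.sha256(randao_mix + r_i).digest()
```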
I had a quick look and it seems to be basically RANDAO.
Right, forkfulness at the RANDAO level is a grinding opportunity for RANDAO + VDF. Applications that need to protect themselves from this grinding opportunity (e.g. billion-dollar lotteries) need to wait until Casper finality of the randomness to get similar guarantees to Dfinity/Tendermint.
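For completeness, the application-side pattern amounts to something like this (hypothetical helper, not part of any spec): only use randomness once the block that introduced it is covered by Casper finality, so RANDAO-level forkfulness can no longer be used to grind it.

```python
from typing import Optional

def finalized_randomness(r: bytes, randomness_slot: int, finalized_slot: int) -> Optional[bytes]:
    """Return r only once its block can no longer be reverted by a fork."""
    return r if randomness_slot <= finalized_slot else None
```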
Strong liveness allows dApps (if they so choose) to operate in the context of weak finality when strong finality is not available. In other words, strong liveness pushes the “safety vs liveness” tradeoff to the application layer instead of stalling all dApps unnecessarily when finality cannot be reached.