Limiting Last-Revealer Attacks in Beacon Chain Randomness

> removing the omniscience (all-knowing) attack

This could work, but we also have other options to prevent that attack. We could have each user commit before any secrets are shared with subcommittees, or we could use verifiable random functions, in which case we don’t need commitments. Another deterministic approach would be having each user prove hash(pk, block_index) in zero knowledge, and taking that hash output as their value.
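As a rough sketch of that deterministic hash-based variant (names and the XOR combiner are my own illustration; a real scheme would use the validator's actual pubkey and a zero-knowledge proof of correct evaluation, which I omit here):

```python
import hashlib

def contribution(pk: bytes, block_index: int) -> bytes:
    """Deterministic per-validator entropy contribution: hash(pk, block_index).

    In the actual scheme the validator would also prove in zero knowledge
    that this value was computed correctly; here we just compute the hash.
    """
    return hashlib.sha256(pk + block_index.to_bytes(8, "big")).digest()

def combine(contributions: list) -> bytes:
    """XOR all contributions into one beacon output (simplified combiner)."""
    acc = bytes(32)
    for c in contributions:
        acc = bytes(a ^ b for a, b in zip(acc, c))
    return acc

pks = [bytes([i]) * 48 for i in range(4)]  # dummy 48-byte pubkeys
seed = combine([contribution(pk, block_index=7) for pk in pks])
```

Since the output is fully determined by (pk, block_index), there is no secret to withhold, which is what removes the last-revealer lever.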

Here’s a generalization of my previous argument to k-subcommittees. Consider an attacker with stake r, and let m_i denote the number of members they control in subcommittee i. Let P(a_i) be the probability that they can perform an extended last-revealer attack, given that they have partial control of all subcommittees following i (i.e. \forall j > i, \; 0 < m_j < k).

I’ll again make the simplifying assumption that the number of subcommittees is infinite, which implies that P(a_i) is identical for all i. Hence the “partial control” condition, 0 < m_i < k, has no bearing on P(a_i), since P(a_i | 0 < m_i < k) = P(a_{i-1}) = P(a_i).

\begin{align}
P(a_i) =& \sum_{j=0}^k P(m_i=j) P(a_i | m_i=j) \\
={} & P(m_i=0) P(a_i | m_i=0) \\
&+ P(0 < m_i < k) P(a_i | 0 < m_i < k) \\
&+ P(m_i=k) P(a_i | m_i = k) \\
={} & (1 - r)^k \cdot 0 + (1 - (1-r)^k - r^k) \cdot P(a_i) + r^k \cdot 1 \\
((1-r)^k + r^k) P(a_i) ={} & r^k \\
P(a_i) ={} & \frac{r^k}{(1-r)^k + r^k}
\end{align}
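As a sanity check on the closed form (my own simulation, not from the derivation; the "infinite subcommittees" assumption is approximated by just drawing i.i.d. subcommittees until the attack resolves):

```python
import random

def p_attack_closed(r: float, k: int) -> float:
    """Closed-form P(a_i) = r^k / ((1-r)^k + r^k)."""
    return r**k / ((1 - r)**k + r**k)

def p_attack_sim(r: float, k: int, trials: int = 50_000) -> float:
    """Scan i.i.d. k-member subcommittees, each member attacker-controlled
    with probability r. A fully controlled subcommittee (m = k) means the
    attack succeeds; a fully honest one (m = 0) means it fails; partial
    control defers the outcome to the next subcommittee."""
    wins = 0
    for _ in range(trials):
        while True:
            m = sum(random.random() < r for _ in range(k))
            if m == k:
                wins += 1
                break
            if m == 0:
                break
    return wins / trials

random.seed(0)
est = p_attack_sim(0.4, 3)
exact = p_attack_closed(0.4, 3)  # 0.064 / (0.216 + 0.064) ≈ 0.2286
```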

P(a_i) decreases with k as long as r < 0.5, so it seems that larger subcommittees are better.
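Plugging in numbers (my own quick check of the monotonicity claim), for r = 0.4 the attack probability falls off quickly with k:

```python
def p_attack(r: float, k: int) -> float:
    """P(a_i) = r^k / ((1-r)^k + r^k) from the derivation above."""
    return r**k / ((1 - r)**k + r**k)

for k in (1, 2, 4, 8, 16):
    print(k, round(p_attack(0.4, k), 6))

# Dividing through by (1-r)^k gives P(a_i) = 1 / (1 + ((1-r)/r)^k):
# for r < 0.5 the ratio (1-r)/r exceeds 1, so P(a_i) -> 0 as k grows,
# while for r > 0.5 it would instead approach 1.
```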

> use their current bias to give themselves even more bias next time

I think it’s good that you’re considering the extent of the bias, rather than just the probability of nonzero bias. I did a bit of analysis of a deterministic ordered reveal scheme – see here, section 4.1. There I compared the expected block rewards of manipulative versus honest validators.

Depending on the parameters, the results were pretty close. E.g. with an epoch size of 10,000 blocks, a user (or coalition) with 40% stake could increase their expected rewards by ~0.41% by manipulating entropy. Looking at it another way, if the normal validator interest rate was 5%, they could increase their interest rate to ~5.02% with manipulation.
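The two framings are consistent (simple arithmetic on the numbers above):

```python
base_rate = 0.05        # normal validator interest rate (5%)
relative_gain = 0.0041  # ~0.41% increase in expected rewards from manipulation

manipulated_rate = base_rate * (1 + relative_gain)
# 0.05 * 1.0041 = 0.050205, i.e. ~5.02%
```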
