Simplified SSLE

This document describes a maximally simple version of single secret leader election (SSLE) that still offers good-enough diffusion of the likely next location of the proposer. It relies on size-2 blind-and-swap as its core primitive. A size-2 blind-and-swap proves that two output commitments (OL1, OR1), (OL2, OR2) are re-encryptions of two given input commitments (IL1, IR1), (IL2, IR2), without revealing which output is a re-encryption of which input. The shuffle protocol uses a large number of these blind-and-swaps to shuffle a large number of commitments; eventually one of these commitments is picked to be the proposer. The proposer needs to reveal themselves to “claim” this opportunity, but until the moment the block is published, no one knows who the proposer is: simply looking at the shuffle network would leave hundreds of possible nodes that you would need to take down in order to have even a 20% chance of taking down the next proposer.
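
For intuition, here is a minimal runnable sketch of the commitment and blinding mechanics, using a toy multiplicative group in place of an elliptic curve (the group and all helper names are illustrative, not part of the proposal). A commitment to a secret x is a pair (L, R) with R = L^x, re-encryption raises both elements to a shared random power, and blind-and-swap outputs re-encryptions of two commitments in a secret order:

import random

# Toy stand-in for an elliptic-curve group so the sketch runs as-is:
# elements live in Z_q*, "scalar multiplication" is modular exponentiation.
q = 998244353            # small prime; NOT a cryptographically sized group
n = q - 1                # exponents are taken mod q - 1
g = 3                    # primitive root mod q

def random_scalar():
    return random.randrange(1, n)

def commit(x):
    # Fresh commitment to secret x: (L, R) = (g^r, g^(r*x)).
    # The owner can recognize it later by checking L^x == R.
    L = pow(g, random_scalar(), q)
    return (L, pow(L, x, q))

def blind(c, f):
    # Re-encryption: raising both elements to the same power f preserves
    # the relation R = L^x while unlinking the pair from its old form.
    L, R = c
    return (pow(L, f, q), pow(R, f, q))

def blind_and_swap(c1, c2, swap):
    # Output re-encryptions of c1 and c2 in either order. The real protocol
    # attaches a proof that the outputs re-encrypt the inputs without
    # revealing the order (the 1-of-2 ring signature discussed below).
    o1, o2 = blind(c1, random_scalar()), blind(c2, random_scalar())
    return (o2, o1) if swap else (o1, o2)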

Parameters

Parameter         Recommended value   Notes
WIDTH             2048
SWAP_TREE_DEPTH   5                   2**depth - 1 = 31 swaps per slot

Construction

We maintain in the state an array of blinded commitments: blinded_commitments: Vector[BlindedCommitment, WIDTH]. We also add to the Validator struct extra fields that allow a “fresh” (ie. not yet mixed/blinded) BlindedCommitment for a given validator to be generated. We initialize the array with fresh commitments from a randomly picked set of validators.
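
A sketch of the state additions in consensus-spec container style; everything except blinded_commitments is illustrative, since the post leaves the exact fresh-commitment fields open:

class BlindedCommitment(Container):
    left: BLSPubkey    # curve point
    right: BLSPubkey   # curve point; for the owner's secret x, right = x * left

class BeaconState(Container):
    # ... existing fields ...
    blinded_commitments: Vector[BlindedCommitment, WIDTH]

class Validator(Container):
    # ... existing fields ...
    # Hypothetical extra field enabling a fresh commitment to be generated:
    fresh_commitment_seed: Bytes32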

During each slot, we use public randomness from the randao_reveal to pick three indices:

  • proposer_blinded_index (satisfies 0 <= proposer_blinded_index < WIDTH)
  • fresh_validator_index (is an active validator, eg. member 0 of committee 0)
  • shuffle_offset (an odd number 2k + 1 where 0 <= k < WIDTH / 2)

We allow the validator who owns the blinded commitment at blinded_commitments[proposer_blinded_index] to reveal the commitment to propose the block at that slot. The proposer’s blinded commitment is replaced by a fresh blinded commitment from validators[fresh_validator_index].
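
A sketch of this per-slot step, in the style of the spec fragments below; get_randao, bytes_to_uint64 and get_beacon_committee are used in the style of the consensus-spec helpers, while generate_fresh_commitment and the exact byte offsets are assumptions:

def process_ssle_slot(state: BeaconState) -> None:
    # Public randomness from the previous slot's randao_reveal.
    randao = get_randao(state, state.slot - 1)
    proposer_blinded_index = bytes_to_uint64(randao[:8]) % WIDTH
    # "An active validator, eg. member 0 of committee 0":
    fresh_validator_index = get_beacon_committee(state, state.slot, 0)[0]
    # The just-revealed proposer's slot in the buffer is recycled with a
    # fresh, not-yet-blinded commitment from fresh_validator_index.
    fresh = generate_fresh_commitment(state.validators[fresh_validator_index])
    state.blinded_commitments[proposer_blinded_index] = fresh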

Finally, we perform the shuffling tree mechanism. The goal of the shuffle tree is to do a series of N-1 swaps that spread the possible location of the freshly selected validator evenly across N possible locations, where N = 2**SWAP_TREE_DEPTH, while also further spreading out the possible locations of the other proposers that end up in the shuffle tree. The shuffle tree for a given slot is seeded by that slot's proposer_blinded_index and shuffle_offset, and looks as follows:

At depth 0, we do a swap between x and x+k. At depth 1, we do a swap between x and x+2k and another swap between x+k and x+3k. At depth 2, we do four swaps: one between x and x+4k, the next between x+k and x+5k, and so on. If, at the beginning, some validator V was known to be at position x, then after running the full shuffle tree up to depth 2, V could be anywhere in (x, x+k, ..., x+7k), with a 1/8 probability of being at each spot.

Because the WIDTH of the buffer of blinded commitments is limited, positions wrap around mod WIDTH. For example, the swaps involved in the shuffle tree for (x = 8, k = 3) would look as follows:
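
Computed with the same rule as get_swap_positions_at_depth below, assuming WIDTH = 16 and a depth-3 tree purely for illustration:

WIDTH = 16  # illustrative width for this example only

def swaps_at_depth(pivot, offset, depth):
    # Layer `depth`: swap pivot + offset*i with pivot + offset*(i + 2**depth).
    return [((pivot + offset * i) % WIDTH,
             (pivot + offset * (i + 2**depth)) % WIDTH)
            for i in range(2**depth)]

for depth in range(3):
    print(depth, swaps_at_depth(8, 3, depth))

# Output:
# 0 [(8, 11)]
# 1 [(8, 14), (11, 1)]                     <- 17 wraps around to 1
# 2 [(8, 4), (11, 7), (14, 10), (1, 13)]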

To ensure robustness, we don’t do any single shuffle tree entirely within one slot. Instead, we stagger the shuffle trees: we do the depth-0 layer of the slot-k shuffle tree in slot k+1, the depth-1 layer of the slot-k shuffle tree in slot k+2, etc. This ensures that if any single validator is malicious and reveals their swaps to an attacker, the minimum anonymity set of a proposer only decreases from 2**SWAP_TREE_DEPTH to 2**(SWAP_TREE_DEPTH - 1).

One can track, over a few rounds of the whole process, the probability distribution of which validators could be in which positions. For example, “50% 101, 25% 102, 25% ?” means “there’s a 50% chance this commitment is validator 101, a 25% chance it’s validator 102, and a 25% chance it’s someone else”.

Code specification of get_swap_positions

def get_swap_positions_at_depth(pivot: uint64, offset: uint64, depth: uint64) -> List[Pair[uint64, uint64]]:
    # Layer `depth` of the tree seeded at `pivot`: swap the commitment at
    # pivot + offset*i with the one at pivot + offset*(i + 2**depth),
    # for each 0 <= i < 2**depth, wrapping around mod WIDTH.
    output = []
    for i in range(2**depth):
        L = (pivot + offset * i) % WIDTH
        R = (pivot + offset * (i + 2**depth)) % WIDTH
        output.append((L, R))
    return output

def get_swap_positions(state: BeaconState) -> List[Pair[uint64, uint64]]:
    output = []
    for depth in range(SWAP_TREE_DEPTH):
        # Staggering: the depth-d layer executed in this slot belongs to the
        # shuffle tree that was seeded d+1 slots ago.
        randao_at_depth = get_randao(state, state.slot - depth - 1)
        proposer_blinded_index = bytes_to_uint64(randao_at_depth[:8]) % WIDTH
        shuffle_offset = bytes_to_uint64(randao_at_depth[8:16]) % (WIDTH // 2) * 2 + 1
        output.extend(get_swap_positions_at_depth(proposer_blinded_index, shuffle_offset, depth))
    return output

Code specification of verify_and_execute_swaps

def verify_and_execute_swaps(state: BeaconState, swaps: List[Swap]) -> None:
    swap_positions = get_swap_positions(state)
    assert len(swaps) == len(swap_positions)
    for i in range(len(swaps)):
        swap, L, R = swaps[i], swap_positions[i][0], swap_positions[i][1]
        prev_L = state.blinded_commitments[L]
        prev_R = state.blinded_commitments[R]
        # Each proof is checked against the *current* buffer contents, so the
        # swaps must be executed strictly in order.
        assert verify_blind_and_swap_proof(prev_L, prev_R, swap.next_L, swap.next_R, swap.proof)
        state.blinded_commitments[L] = swap.next_L
        state.blinded_commitments[R] = swap.next_R

Note that get_swap_positions may create swaps that overlap with each other. For this reason, it’s important to implement verify_and_execute_swaps as above, executing the swaps in order from first to last.
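
For the same reason, whoever generates the swaps must apply each swap to a local copy of the buffer before producing the next proof, mirroring the verifier’s in-order execution. A sketch, where make_blind_and_swap is a hypothetical prover-side helper producing a Swap with next_L, next_R and proof fields:

def generate_swaps(state: BeaconState) -> List[Swap]:
    swaps = []
    for (L, R) in get_swap_positions(state):
        prev_L = state.blinded_commitments[L]
        prev_R = state.blinded_commitments[R]
        swap = make_blind_and_swap(prev_L, prev_R)  # hypothetical helper
        # Apply immediately, so proofs for later (possibly overlapping)
        # positions are generated against the updated buffer.
        state.blinded_commitments[L] = swap.next_L
        state.blinded_commitments[R] = swap.next_R
        swaps.append(swap)
    return swaps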

Simulation results

There is a script here that can run many rounds of the shuffle tree mechanism and output the 20%-diffusion of the proposer location. This refers to the number of validators that an attacker would need to take offline (eg. with a targeted DoS attack) to have a 20% chance of taking down the proposer.

Here are some results. The values are given in pairs, where the left value is the average 20%-diffusion and the right value is the probability that the 20%-diffusion equals 1 (meaning that you only need to take down one validator to have more than a 20% chance of taking down the proposer).

Width \ Swaps    7               15               31               63
32               (2.66, 0.20)    (4.43, 0.12)     (5.07, 0.036)    -
64               (3.02, 0.118)   (7.08, 0.102)    (9.53, 0.040)    (9.79, 0.028)
128              (3.13, 0.044)   (10.3, 0.056)    (18.8, 0.024)    (20.0, 0.020)
256              (3.13, 0.012)   (13.72, 0.046)   (36.0, 0.018)    (42.8, 0.022)
512              (3.02, 0.014)   (17.26, 0.01)    (65.31, 0.008)   (89.7, 0.004)
1024             (3.03, 0.009)   (19.23, 0.005)   (114, 0.004)     (177, 0.006)
2048             (2.98, 0.006)   (20.80, 0.005)   (194, 0.004)     (347, 0.004)

The cost of increasing the WIDTH is relatively low; the main downside of an extremely large buffer is that it increases the chance of a missed proposal, because by the time a validator is selected they may have already left the validator set. Assuming a withdrawal time of 8192 slots (~1 day), this implies a maximum “reasonably safe” buffer width of around 1024-2048 (2048 would ensure that if a validator exits immediately after being added to the buffer, they would only get a wasted post-exit proposal opportunity less than 2% of the time). 31 swaps seems to be the minimum required to get a large amount of diffusion, and 63 swaps gives near-perfect diffusion: the 20%-diffusion set is close to 20% of the WIDTH, which is what you would get if you just shuffled the entire buffer in each slot.

Each Swap in the BeaconBlock takes up 7 elliptic curve points (2x BlindedCommitment + 3 curve points in the proof), so 336 bytes at 48 bytes per point. Hence, 31 swaps would take up 10416 bytes. Switching to a 32-byte curve would reduce this to 224 * 31 = 6944 bytes. Verifying a blind-and-swap takes 4 elliptic curve multiplications and 3 size-4 linear combinations. A size-4 linear combination is ~2x more expensive than a multiplication, so this is equivalent to ~10 elliptic curve multiplications per swap. Hence, the total verification complexity is ~310 elliptic curve multiplications (~31 milliseconds).
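
The same arithmetic as a quick check; the 48-byte point size (as on BLS12-381) is the only assumption:

POINT_BYTES = 48     # e.g. a BLS12-381 G1 point
POINTS_PER_SWAP = 7  # 2 BlindedCommitments (4 points) + 3 proof points
SWAPS = 31

print(POINTS_PER_SWAP * POINT_BYTES)          # 336 bytes per swap
print(POINTS_PER_SWAP * POINT_BYTES * SWAPS)  # 10416 bytes per block
print(POINTS_PER_SWAP * 32 * SWAPS)           # 6944 bytes with 32-byte points
print((4 + 3 * 2) * SWAPS)                    # ~310 equivalent EC multiplications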

These facts together drive the choice of WIDTH = 2048 and SWAP_TREE_DEPTH = 5 (31 swaps per block).


Very interesting idea. I am still reading and digesting it, but I feel it should have wider applications to other consensus protocols. I have come up with some questions about the idea, and may have more later:

  • Any reference document for the 1-of-2 Schnorr signature scheme used in size-2 blind-and-swap? It looks like it differs from the ones I found on the Internet.
  • How is the factor in size-2 blind-and-swap determined? Or could it be a random secret from [1..p-1], where p is the curve order?
  • When swapping two commitments c0, c1 to c2, c3 with a random factor, how would the validator holding the secret of one of the commitments know whether c2 or c3 is their new commitment? My understanding is that the factor is only known to the swapping validator.
  • For proposer_blinded_index, fresh_validator_index, and shuffle_offset, could we generate them using just a pseudo-random-number generator like H(slot_index || salt)?

> Any reference document for the 1-of-2 Schnorr signature scheme used in size-2 blind-and-swap? It looks like it differs from the ones I found on the Internet.

The signature here has to be a 1-of-2 ring signature, so it does not reveal which of the two keys signed.

> How is the factor in size-2 blind-and-swap determined? Or could it be a random secret from [1..p-1], where p is the curve order?

It’s just a random secret.

> When swapping two commitments c0, c1 to c2, c3 with a random factor, how would the validator holding the secret of one of the commitments know whether c2 or c3 is their new commitment? My understanding is that the factor is only known to the swapping validator.

The validator who knows their secret x would have to walk through every commitment in the buffer and check if left * x = right. Whichever commitment that check passes for is their commitment.
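
A sketch of that scan, written against the toy group from the first snippet (on a real curve the check would be x * left == right); left and right are the two components of a BlindedCommitment:

def find_my_commitment(blinded_commitments, x):
    # Test the ownership relation for each buffer entry; in the toy group,
    # "left * x" is pow(left, x, q).
    for i, (left, right) in enumerate(blinded_commitments):
        if pow(left, x, q) == right:
            return i
    return None  # not currently in the buffer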

> Could we generate them using just a pseudo-random-number generator like H(slot_index || salt)?

I think using the RANDAO is better because it makes proposal responsibilities more reliably unpredictable. We do want to minimize how far in advance you know that you will be the next proposer.

Thanks for the explanation. I have revised the code a bit to explain this part; please see “add secret check for blinded-commitment by qizhou · Pull Request #126 · ethereum/research” on GitHub.

I understand RANDAO is better, but I’m curious how safe the PRNG approach would be, since a PRNG is much easier to implement.

Hey! Really sweet proposal!

I like the simplicity and I think the numbers are pretty good as well (and could potentially get even better). I also like the overhead balance of the width=2048 and swaps=31 combo.

Some thoughts on the scheme:

Anonymity set analysis

While the numbers on the Simulation Results table look reasonable, we should also take into account the potential for malicious and offline validators, as well as the fact that the total number of validators corresponds to a smaller number of P2P nodes.

I hence modded your simulation script to support offline validators. Below is a table (like yours above) for just the width=2048/swaps=31 configuration with various percentages of offline shufflers, also estimating the number of nodes that corresponds to each anonymity set (using a 20%-80% Pareto distribution).

Offline          Validators   Nodes
0%               193          130
5%               180          122
10%              169          116
15%              156          109
20%              149          104
25%              139          98
30%              138          98
35%              133          95
40%              128          91

From the above we see that even with significant offline percentages, the scheme still provides a meaningful anonymity set.

Strengthening the scheme using a shuffle list

If the anonymity set provided by this scheme is not satisfactory to us, we can strengthen it by separating the list that gets shuffled from the list that proposers are chosen from. That brings us slightly closer to the Whisk design, without Whisk’s cryptography.

For example, we would still use blinded_commitments for shuffling, but after 2048 slots have passed we would copy blinded_commitments into another list shuffled_commitments and we would pop proposers out of shuffled_commitments (either FIFO or randomized).
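
A sketch of that variant; the names shuffled_commitments and SHUFFLE_PERIOD, the refill helper, and the FIFO choice are all illustrative:

SHUFFLE_PERIOD = 2048  # slots a batch spends being shuffled before use

def process_shuffle_period_boundary(state: BeaconState) -> None:
    # Every SHUFFLE_PERIOD slots, freeze the well-mixed batch for proposer
    # selection and start shuffling a fresh batch.
    if state.slot % SHUFFLE_PERIOD == 0:
        state.shuffled_commitments = state.blinded_commitments.copy()
        state.blinded_commitments = new_fresh_batch(state)  # hypothetical refill

def pop_next_proposer_commitment(state: BeaconState) -> BlindedCommitment:
    # FIFO variant: proposers are always drawn from the fully shuffled list.
    return state.shuffled_commitments.pop(0)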

This approach allows the maximum number of shuffles to go through before any index gets chosen as the proposer, and based on some sloppy analysis it quickly converges to optimal results for any reasonable width/swaps combo.

The drawback is that we will need consensus code that moves the commitments between the two lists, and we will also need double the BeaconState space.

Adversarial analysis

The next step in the security analysis here would be to think of what kind of attacks an adversary could pull off in this setup.

For example, if an adversary controls 10% of the validators, not only does she control 10% of the shuffles, but she can also void another 10% of the shuffles, since any 2-shuffle that includes the attacker’s commitment is basically transparent to the attacker (she knows where her token went, and hence also where the other token went).

I’m also curious about backtracking attacks: when Alice proposes, the adversary immediately learns some paths of her shuffle tree, since there is only a limited number of 2-shuffle paths that her token could have taken to end up where it did. An adversary could potentially use that knowledge to learn bits of information about other future proposers.

There are also various types of RANDAO attacks that could be mounted on get_swap_positions().


Cheers!

Looking at the swap proof using Schnorr ring signatures, I think there may be a problem in the current implementation: namely, if an attacker knows the discrete logarithms of A1, A2, B1 and B2 with respect to a generator G, then they can compute the ring signature for any new commitments – including completely new commitments – so they could “swap in and out” their own validators, for example.

I think this cannot happen as long as at least one of the elements is not known with respect to the same basis G, so we can prevent it by adding another “random” generator, with unknown discrete logarithm with respect to G, to the ring signature basis.

Just mentioning for completeness that this proposal has been further analyzed by Dmitry Khovratovich in the following ethresearch post: Analysis of Swap-or-Not SSLE proposal

In verify_and_execute_swaps, should the two final assignments write swap.next_L and swap.next_R into the buffer (rather than writing back prev_L and prev_R)?