Sorry I was not clear. I mean that you start with 2000 secret numbers. Shuffle them randomly and use 1000 of them spread over 10 blocks using 100 secret numbers each. After 10 blocks you start over with 2000 new secret numbers.
Cryptoeconomic "ring signatures"
Even so, if the commitments to 10 secret numbers are given in one deposit, and one of those secret numbers is then tied to an account, you know that the 9 other numbers are also probably tied to that account.
Won’t two such proofs for the commitment (H1 Merkle branches) share the same H1 root, hence a newly proposed block will be associated with previously proposed blocks by the same validator?
hence a newly proposed block will be associated with previously proposed blocks by the same validator?
Yes. There’s no lookbehind privacy here; everything is revealed the moment the block is broadcasted.
You can have H2 without H1 by using signatures instead?
If the validator set of epoch i is {pk_1, pk_2, …, pk_t} then the jth validator can send H2(sign_{sk_j}(epoch i)), and later create the block by sending the preimage (i.e., the signature sign_{sk_j}(epoch i)).
You need to send H2 from another account, but that’s true in the scheme with H1 too, so the usefulness of this idea isn’t so clear because of linkability analysis?
With signatures, the same validator can then send H2(sign_{sk_j}(epoch i+1)) without extra interaction, and so on (assuming a randomized signature scheme; otherwise you can add a nonce).
This works even if the validators are selected from the global stake and didn’t commit to anything in advance. Unfortunately, the overhead is large, and with lookbehind privacy and linkable ring signatures (the scheme of Justin Drake) the overhead is even larger.
The private scheme with honest forks has somewhat better anonymity because only the validator knows that he’s participating in the epoch (but it doesn’t have lookbehind privacy, so its anonymity is incomparable to that of the ring signature scheme). Support for light clients is possible but it’d need interleaving of more than one epoch.
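To make the H2 commit-and-reveal idea concrete, here is a minimal sketch. Assumptions: H2 is modeled as SHA-256, and an HMAC over the epoch label stands in for the signature sign_{sk_j}(epoch i), since a real deployment would use an actual signature scheme.

```python
import hashlib
import hmac

H2 = lambda data: hashlib.sha256(data).hexdigest()

def make_commitment(sk: bytes, epoch: int):
    # "Signature" stand-in: HMAC over the epoch label with the secret key.
    sig = hmac.new(sk, f"epoch {epoch}".encode(), hashlib.sha256).digest()
    return H2(sig), sig  # publish H2(sig) now, keep sig as the preimage

def verify_reveal(commitment: str, sig: bytes) -> bool:
    # Anyone can check the revealed preimage against the posted commitment.
    return H2(sig) == commitment

sk = b"validator-secret-key"
commit_i, sig_i = make_commitment(sk, 1)
assert verify_reveal(commit_i, sig_i)
# Next epoch needs no extra interaction: the same sk yields a fresh preimage.
commit_next, _ = make_commitment(sk, 2)
assert commit_next != commit_i
```

The same key produces an unlinkable-looking commitment per epoch, which is what allows the scheme to continue "without extra interaction".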
Every validator sends just 1 secret number.
So we have 2000 validators with all the same deposit amount, every validator sends in 1 secret number before epoch 10N. 100 numbers are used in epoch 10N, 100 numbers are used in epoch 10N+1, … ,100 numbers are used in epoch 10N+9. A secret number is only used once, so we have used a total of 1000 numbers in 10 epochs. The other 1000 will not be used. Before epoch 10N+10 we have received 2000 new secret numbers and we start over.
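The schedule above can be sketched directly. This is a toy model of the shuffling step only (commitments and on-chain mechanics omitted); the seed parameter is an assumption standing in for whatever shared randomness beacon drives the shuffle.

```python
import random

NUM_VALIDATORS = 2000
EPOCHS_PER_ROUND = 10
SLOTS_PER_EPOCH = 100

def assign_slots(secret_numbers, seed=None):
    """Shuffle all submitted numbers and use 100 per epoch for 10 epochs;
    the remaining 1000 are never used."""
    rng = random.Random(seed)
    shuffled = secret_numbers[:]
    rng.shuffle(shuffled)
    schedule = []
    for epoch in range(EPOCHS_PER_ROUND):
        start = epoch * SLOTS_PER_EPOCH
        schedule.append(shuffled[start:start + SLOTS_PER_EPOCH])
    return schedule  # 10 lists of 100 numbers; shuffled[1000:] stays unused

numbers = list(range(NUM_VALIDATORS))  # one secret number per validator
schedule = assign_slots(numbers, seed=42)
assert len(schedule) == 10 and all(len(e) == 100 for e in schedule)
used = {n for epoch in schedule for n in epoch}
assert len(used) == 1000  # each number is used at most once
```

Since only 1000 of 2000 numbers appear in the schedule, an observer of any single epoch cannot tell which deposits correspond to the unused half.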
So only 5% of the validators will get a block in an epoch. This is not enough to reach finality if we are using blocks as votes, so we will need to use the current Casper scheme or something like you proposed in this post:
“When a block is created, a random set of N validators is selected that must validate that block for it to be possible to build another block on top of that block. At least M of those N must sign off.”
For simplicity I used equal deposits, but we can allow validators with bigger deposits to send multiple secret numbers. These numbers must of course be sent in different transactions for them to be unlinkable.
Below is a construction which incorporates ideas from the linkable ring signature scheme and the above cryptoeconomic hash scheme. It has the benefit of simultaneously providing lookbehind privacy (in addition to lookahead privacy) and not relying on linkable ring signatures.
Construction
When validators join they commit (e.g. with a Merkle tree) to a mapping i \mapsto V_i where the V_i are secret. During epoch N-1 anyone can submit a collateralised pair (E, V_N) where E is an ephemeral key and V_N is meant to be the appropriate committed secret. The pairs are shuffled to form a random ordering of the ephemeral keys, one ephemeral key per period. We distinguish two scenarios:
1) The number of pairs is no greater than the number of validators. In this case proposing a block only requires signing with the ephemeral key for the corresponding period. Here the collateral is immediately returned.
2) The number of pairs is greater than the number of validators. In this case proposing a block additionally requires a proof that the purported V_N matches a validator commitment. Here the collateral is only retrievable if such a proof can be provided.
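The validator commitment that scenario 2) relies on can be sketched with a plain Merkle tree: the validator commits to the leaves i \mapsto V_i at join time, and a branch later proves that a submitted V_N belongs to the commitment. This is a minimal SHA-256 sketch, assuming a power-of-two leaf count is padded by duplicating the last node.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right_node in proof:
        node = h(sibling + node) if is_right_node else h(node + sibling)
    return node == root

secrets = [f"V_{i}".encode() for i in range(8)]  # the committed V_i values
root = merkle_root(secrets)
proof = merkle_proof(secrets, 3)
assert verify(root, secrets[3], proof)  # scenario 2: proof of commitment required
```

In scenario 1) this proof is never broadcast, which is exactly where the lookbehind privacy comes from.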
Discussion
The above scheme provides lookbehind privacy in scenario 1) but not in scenario 2). To disincentivize scenario 2) it suffices to make the collateral large enough that submitting a “fake” pair (i.e. one where V_N cannot be matched to a validator commitment) has highly negative expected returns.
If fake pairs go through in scenario 1) the only harmed parties are the validators who have effectively forgone their proposer rights to nonvalidators crazy enough to risk submitting fake pairs.
This cryptoeconomic mixing was discussed in 2016, see for example here. You don’t need the new mapping; it’s enough to submit E with collateral, and if the number of submissions is greater than the number of validators then each validator will need to endorse the E that he submitted when he creates a block, thereby not losing his collateral (maybe it’s better not to shuffle in this case). Other than the overhead: if the collateral is large then it’d be demanding on honest validators (time value of money), and if the collateral is small then one malicious submission can deanonymize everyone.
This scheme is basically layering coinjoin into the block proposal algorithm; interesting…
I do wonder how the metagame of nonvalidators guessing how many validators and how many other nonvalidators will be participating at any given point would play out… it seems like it would probabilistically hit an equilibrium where the nonvalidators with the best knowledge would in expectation earn slightly more than zero, and most of the time the number of pairs would be less than the number of validators. Though perhaps cartels might find a way to exploit the system…
Thanks for pointing this out. And BTW, it’s great to have you on ethresear.ch; I’ve been following some of your work on STARKs.
Good point!
My suggestion is to have a large collateral. I’d argue the time value of money is already compensated by having the right to propose a block. See below for an additional way to compensate honest validators via redistribution of collateral.
My gut feel is that we can force negative expectation for all nonvalidators regardless of local knowledge. Below are two improvements to the mechanism design:
1) In scenario 2), redistribute collateral that has been untouched for a long enough period (say, 1 month) to the honest validators in the corresponding epoch. This way honest validators have an additional incentive to “fish” nonvalidators, including via “bluffing” (skipping a few periods to make it look like there are inactive validators, and then when a nonvalidator tries to exploit that, hitting him hard by also joining in).
2) To make the above even more effective, allow post facto (after the epoch has started) validator commitment proofs to count towards the total pair count. This also has the benefit of removing the position of power the last miner of the previous epoch has to add nonvalidator pairs up to the threshold.
Both the risk of bluffing and the risk of post facto whistleblowing hang like a sword of Damocles over nonvalidators.
Thanks:)
The additional collateral for mixing makes the rewards less attractive even for large stakeholders. One could claim that they should settle for less lucrative rewards (and more blockchain bloat for all full nodes) because they’d get better security, but that’s a questionable claim in general, and here also because of linkability analysis between the accounts that submit ephemeral keys and the validators’ accounts (also, light client support might be more secure without mixing). Besides the time value of money, the larger collateral raises the bar so that smaller stakeholders cannot participate, which has negative implications for decentralization.
If I understand correctly what you guys are saying, it’s about a nonvalidator who gambles that not all the validators will submit ephemeral keys for the mix, so if his gamble pans out then he creates a block and earns a reward, otherwise he loses his collateral. In the variant without lookbehind privacy this gamble is useless because you have to prove that you’re a legit validator when you create the block. It isn’t really clear to me why lookbehind privacy is desirable in this context. The arguments in favor of privacy for block creators are 1) less potential for a collusion attack, and 2) less potential for DoS on a validator when he tries to submit the block that he created during his timeslot. These arguments are debatable (since there are advantages with nonprivate block creators), but either way it seems that lookbehind privacy is irrelevant.
A coin tossing scheme (“common coin”) is one where parties in round X agree on a random number. I suspect that running a common coin at each round will satisfy lookahead privacy as defined here. The parties can use a deterministic threshold signature to sign the current block number; the hash of the signature will determine the block proposer. The block proposer will include the signature when submitting the block.
The mathematical question really is whether one can design a better common coin algorithm assuming presence of a blockchain. I strongly suspect that what we are discussing here can be reformulated as common coin with some additional assumptions.
One possibility would be to use the common coin algorithm of Micali, which is used in Algorand.
With a blockchain you could make it simple in the following way: the Micali algorithm uses regular signatures. Instead of signatures, each validator could hash a random number R in a chain N times, where N is the number of blocks in an epoch, and publish the resulting hash as a commitment.
Then at each block, each validator would reveal the next preimage in the hash chain. This preimage would be used as a signature in the Micali algorithm to derive a random bit. So the blockchain would essentially be used to implement one-time signatures and plug them into the Micali algorithm.
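The hash-chain mechanics described above can be sketched as follows. Assumptions: SHA-256 as the hash, a toy N of 5, and the per-block random bit derived by hashing the reveal with the block index (the real Micali derivation differs; this only illustrates the chain-of-preimages structure).

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

N = 5  # blocks per epoch

def build_chain(seed: bytes, n: int):
    """chain[k] = H^k(seed); publish chain[n] as the epoch commitment."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

seed = b"validator random number R"
chain = build_chain(seed, N)
commitment = chain[N]

# At block k (k = 1..N) the validator reveals chain[N - k], one level per block.
prev = commitment
bits = []
for k in range(1, N + 1):
    reveal = chain[N - k]
    assert h(reveal) == prev  # anyone can verify against the previous reveal
    bits.append(h(reveal + k.to_bytes(4, "big"))[0] & 1)  # one coin bit per block
    prev = reveal
assert len(bits) == N
```

Each reveal is a one-time token: once chain[N - k] is public, nothing deeper than chain[N - k - 1] is derivable from it, so future reveals stay unpredictable.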
It isn’t really clear to me why lookbehind privacy is desirable
Lookbehind privacy is desirable to limit adaptive attacks. Below are various examples, though imagination is the limit when it comes to adaptive attacks:
- Miners in the main shard responsible for including collation headers in blocks can decide (e.g. through bribery or collusion) not to include collation headers based on the identity of the proposer. Without lookbehind privacy this attack is facilitated because the proposer’s identity would be leaked at the time of collation proposal. This is at best a discouragement attack, and could be used for censorship or consensus attacks.
- Validators in the child shard can decide (e.g. through bribery or collusion) not to build upon specific collation header chains based on the identity of the corresponding proposers, i.e. to go against the default fork choice rule. This could be used to increase the probability of a collation header chain reorg, e.g. for the purpose of discouragement, censorship, or consensus attacks.
A block proposer solution that has both lookahead and lookbehind privacy can be achieved via something like a “block coin”. It basically works as follows:
- The block coin is an Ethereum coin that will be fairly distributed among the validators based on their deposit amounts.
- The validators can use coinjoin/mixers so nobody knows who owns which block coin.
- A block coin can be used to put one ephemeral key into the waiting set.
- For every block, a random ephemeral key is drawn from the waiting set that can be used for signing the block.
There are some details to be filled in like:
- How many block coins do we want in circulation? Too few and the mixers won’t work. Too many and validators can hoard them and use them all at once to temporarily have more influence.
- Coins tend to get lost, and we do not want to run out of block coins. So maybe let the block coins expire after a certain time and be redistributed among the validators.
As an added bonus we can have the shard collation proposers be drawn from the same waiting set.
Looks like this is related to Mental Poker protocols: essentially you use crypto to securely shuffle cards, so that until a party reveals her card the other parties do not know which party has which card.
Lookbehind privacy is desirable to limit adaptive attacks. Below are various examples, though imagination is the limit when it comes to adaptive attacks:
These scenarios seem just as plausible without lookbehind privacy: the attackers can always censor the block that was created if they don’t like its contents; the only distinction with lookbehind privacy is that the attackers cannot censor the block according to the identity of the proposer who created it. If Alice and Bob are proposers who would create the same block, then the contents of the block are likely to be much more relevant for censorship attacks than the identities Alice/Bob. Also, with lookbehind privacy in place, maybe the attackers have nothing against the validator’s identity but don’t like the account that submitted the ephemeral key E, so they’d censor the block that E created (as you say, imagination is the limit). If you think that cartel censorship is a significant concern then there are more important design choices, namely not relying on a mostly static set of validators who’d create blocks in the next epochs.
So here’s how this can be done using Mental Poker (see this article for details):

Each validator has a commutative encryption key.

Each validator encrypts the sequence of numbers 1, 2, 3, 4, 5, 6, 7, 8, … using commutative encryption. As a result, each number gets encrypted by the keys of all validators, and the resulting values are stored in a smart contract.

The encryptions are shuffled so each validator V is dealt an encrypted value E. For each encrypted value, all other validators decrypt the value. The result y is then passed to V, who decrypts it using his private key and gets the plaintext number x.

At this point everyone has been dealt a plaintext number, but no one knows the numbers assigned to the other parties.

When a block needs to be proposed by a particular validator, the validator submits a proof that y is an encryption of x.
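The commutative encryption step can be sketched with SRA-style exponentiation (a Pohlig-Hellman/RSA variant commonly used in Mental Poker write-ups): encrypting is raising to a secret exponent mod a shared prime, so two validators' encryptions commute and can be stripped in either order. The prime and two-validator setup below are illustrative assumptions.

```python
import math
import random

P = 2**61 - 1  # a Mersenne prime; exponentiation mod P commutes across keys

def keygen(rng):
    # Pick an exponent coprime to P-1 and compute its inverse for decryption.
    while True:
        e = rng.randrange(3, P - 1)
        if math.gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)  # (encrypt exp, decrypt exp)

rng = random.Random(7)
key_a = keygen(rng)  # validator A
key_b = keygen(rng)  # validator B

enc = lambda m, k: pow(m, k[0], P)
dec = lambda c, k: pow(c, k[1], P)

cards = list(range(1, 9))  # the sequence 1..8 from the protocol
# Both validators encrypt every card; the order of encryption does not matter.
double = [enc(enc(m, key_a), key_b) for m in cards]
# Removal also works in either order (here B strips first, then A).
recovered = [dec(dec(c, key_b), key_a) for c in double]
assert recovered == cards
```

In the full protocol the doubly encrypted values are shuffled before dealing, and the final single-key decryption is done privately by the validator who was dealt the value.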
This is, in fact, similar to what Justin proposed, but it uses commutative encryption to make things more secure.
So here’s how this can be done using Mental Poker
It’s useless here because the mix doesn’t need to output private randomness, only public randomness. So it’s enough to use collective coin flipping (or secure coin flipping with honest majority) with or without cryptoeconomics.
I thought private randomness was one of the requirements, so that until the block is proposed no one knows the proposer…
If a coin is flipped, there is a period of time between the flip and the block proposal where everyone knows who’s going to propose; arguably the proposer can be DoSed then.
Here is another paper that discusses shuffling in the presence of Byzantine parties.
I thought private randomness was one of the requirements, so that until the block is proposed no one knows the proposer…
Yeah, as mentioned above, if you buy the arguments that such security properties are important then you can have this supposedly more secure variant, but then you inherently have honest forks (similarly to Peercoin etc.) and you don’t need a shuffle (a.k.a. permutation) at all.
Edit: probably we were too vague. When we say that a protocol outputs “private randomness”, this commonly means that random outputs of the protocol are delivered to specific participants and the rest don’t learn these outputs. For Justin’s protocol the random bits that decide the shuffle are public, but only the proposer learns where in the epoch he was elected. In the variant with honest forks, the random bits are also public, but only the proposer learns whether he was elected in the epoch (this can be done by combining the public random bits with a decommitment of the proposer and using a hash inequality). So mental poker is overkill because private randomness is unneeded in all cases.
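The hash-inequality election mentioned in the edit can be sketched in a few lines. Assumptions: SHA-256 as the hash, and a fixed threshold constant; a real scheme would tune the threshold by stake so the expected number of winners per epoch is correct.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

THRESHOLD = 2**255  # illustrative; tune by stake for the target winner rate

def is_elected(public_random_bits: bytes, secret: bytes) -> bool:
    """Only the holder of `secret` can evaluate this before decommitting,
    so only the proposer learns whether he was elected."""
    return int.from_bytes(h(public_random_bits + secret), "big") < THRESHOLD

# The validator committed H(secret) earlier; on winning he reveals `secret`
# and anyone can recheck both the commitment and the inequality.
secret = b"validator decommitment"
commitment = h(secret)
elected = is_elected(b"epoch public randomness", secret)
assert isinstance(elected, bool)
```

Note that the protocol's random bits stay public; privacy comes entirely from the secret decommitment, which is why no private-randomness machinery (mental poker included) is needed.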