Bribery-Resistant Voting Schemes for Smart Contracts


#1

This is joint work/post with Ian Miers, Oded Naor, and Ari Juels.

Recently, a number of proposals and applications requiring voting have emerged on Ethereum: contracts that use voting-based DAOs for governance or other administrative tasks, boardroom-style votes, and proposals for funding open-source work. Voting appears to be a natural fit for Ethereum and its applications. Unfortunately, the space of voting algorithms opens up a new class of vote-buying and bribery coordination mechanisms that could prove devastating in permissionless networks, either as credible threats or as actually deployed attacks. Some schemes, even ones that assume robust identity, may be even more vulnerable than naive schemes.

Beyond the standard integrity and privacy properties of a voting scheme (that votes are counted correctly and how each voter voted remains secret), for many applications we want a scheme that is bribery resistant. In on-chain voting schemes today, this is not the case: users can prove how they voted and thus be paid for their vote. Moreover, the payment can be handled by a smart contract or "Dark DAO", making it trustless. A natural question is whether technical protections against such attacks are possible in-protocol.

Here we propose a voting scheme that is immune to such bribery attacks. Doing so requires a major assumption: that some party, group of parties, or piece of secure hardware is trusted to prevent bribery. If these parties are dishonest, the scheme still has integrity and privacy, but loses bribery resistance. The key question on which we are hoping for feedback from projects involving voting is: are these assumptions acceptable for blockchain-based voting?

Interestingly, it appears that as long as trusted execution environments such as SGX exist, we must use them to make a scheme bribery resistant. With SGX available, users can be bribed to put or generate their keys in an SGX enclave that only allows them to vote in certain ways (see the "Dark DAO" article).

To prevent this, we have to ensure the keys are not in the exclusive possession of such an enclave. The only way we see to do so is by using an enclave ourselves: attest that the keys were honestly generated by code that does not enforce such restrictions.

The general approach to bribery resistance (what the literature, perhaps confusingly, calls coercion resistance) is to allow voters to submit as many fake votes as they want, but only allow one vote to be genuine. To anyone but the Election Authority (EA), fake and real votes are indistinguishable. The EA, which may be a single party, a group of parties, or some trusted hardware, can test whether a vote is genuine. It can then prove to everyone that 1) all genuine votes were included, 2) all fake votes were excluded, and 3) the tally was computed correctly.

Ignoring cryptographic details, our proposed scheme functions as follows:

  1. A voter registers by putting cryptographic material, along with an attestation of its correctness, on the blockchain.
  2. Users cast votes for a given candidate by submitting a ciphertext, the candidate name, and a zero-knowledge proof to the blockchain. They do so via an anonymous channel like Tor.
  3. For each ballot, the EA tests whether it is fake or real. If real, the EA outputs an additively homomorphic ciphertext encrypting 1; if fake, an encryption of 0. For each ballot, it also outputs a proof that the ciphertext and test were computed correctly. These proofs can be checked by anyone.
  4. Anyone can compute the sum of those ciphertexts. For each candidate, the EA then publishes the decryption of the sum, which is the number of genuine votes for that candidate.
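As a rough illustration of steps 3 and 4, here is a minimal Python sketch using textbook Paillier encryption with toy parameters. This is not the actual proposal (and it omits the zero-knowledge proofs and the fake/real test itself); it only shows the mechanic that the EA's Enc(1)/Enc(0) outputs can be summed by anyone, while only the EA can decrypt the total:

```python
import math
import random


class Paillier:
    """Textbook Paillier: additively homomorphic public-key encryption.
    Toy parameters only; a real deployment needs ~2048-bit primes."""

    def __init__(self, p, q):
        self.n = p * q
        self.n2 = self.n * self.n
        self.g = self.n + 1
        self.lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
        x = pow(self.g, self.lam, self.n2)
        self.mu = pow((x - 1) // self.n, -1, self.n)  # Python 3.8+ modular inverse

    def enc(self, m):
        # Enc(m) = g^m * r^n mod n^2 for random r coprime to n.
        r = random.randrange(1, self.n)
        while math.gcd(r, self.n) != 1:
            r = random.randrange(1, self.n)
        return (pow(self.g, m, self.n2) * pow(r, self.n, self.n2)) % self.n2

    def add(self, c1, c2):
        # Multiplying ciphertexts adds the underlying plaintexts.
        return (c1 * c2) % self.n2

    def dec(self, c):
        x = pow(c, self.lam, self.n2)
        return ((x - 1) // self.n * self.mu) % self.n


# EA holds the decryption key; everyone can do the homomorphic sum.
ea = Paillier(101, 103)
per_ballot = [ea.enc(1), ea.enc(0), ea.enc(1)]  # real, fake, real
total = per_ballot[0]
for c in per_ballot[1:]:
    total = ea.add(total, c)
tally = ea.dec(total)  # number of genuine votes for this candidate
```

Because the randomness `r` makes Enc(1) and Enc(0) look alike, observers learn nothing per-ballot; they only verify the EA's proofs and the final decrypted sum.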

The challenge with this scheme is the need for an Election Authority, which holds decryption keys for the ballots. These keys must remain secret; otherwise a briber could use them to test ballots for fakes. As such, the EA cannot be run by a smart contract. This means either selecting a set of parties to operate as the EA or using trusted hardware. In either case, there is no risk that the EA can manipulate the election or violate vote privacy. It can, however, refuse to reveal the election results if it doesn't like the outcome.

For smart contracts, there are two key requirements here that may be hard to stomach:

  1. The existence of an online, likely stake-bonded, committee during the vote. This is necessary because someone needs to hold secret data that creates an asymmetry between honest players and the attacker, stopping the attacker from gaining full insight into protocol operation.
  2. The use of SGX by users/voters for coercion resistance. Users need to use SGX only for registration, not for voting itself. As stated above, even if SGX is 100% broken, the protocol still provides privacy and integrity, just not coercion resistance (or bribery resistance, which is what we are after). Voters could likely also use a shared/hosted SGX instance from a cloud service like Azure, with an additional protocol to ensure security.
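To make the registration requirement concrete, here is a rough Python sketch of the check the chain (or a verifier) would perform. Remote attestation is heavily simplified: a real deployment verifies an SGX quote against Intel's attestation service key, which we stand in for with an HMAC under a shared key; the measurement value and names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical measurement (MRENCLAVE) of the honest key-generation code,
# i.e. code that does NOT restrict how the resulting key can vote.
EXPECTED_MRENCLAVE = hashlib.sha256(b"honest-keygen-enclave-v1").hexdigest()

# Stand-in for the attestation service's key. A real SGX quote is verified
# against Intel's public key, not a shared secret as here.
ATTESTATION_KEY = b"attestation-service-key"


def enclave_register(voter_pubkey):
    """Enclave side: bind a freshly generated voting pubkey to the code
    measurement and produce a quote over both."""
    payload = f"{EXPECTED_MRENCLAVE}|{voter_pubkey}".encode()
    quote = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return payload, quote


def verify_registration(payload, quote):
    """Verifier side (e.g. the registration contract): accept a pubkey only
    if it was generated by the expected, unrestricted enclave code."""
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(quote, expected):
        return False  # quote does not verify
    mrenclave, _, _pubkey = payload.decode().partition("|")
    return mrenclave == EXPECTED_MRENCLAVE  # wrong code => reject key
```

The point is that the attestation proves the key was generated by code with no voting restrictions, so a briber-controlled "shackled" enclave cannot register a key.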

So, researchers on voting projects in the space: do the above requirements seem palatable? If not, let's discuss what relaxed level of security we should shoot for without them.

I’m mainly making this post to determine whether it’s worth fully analyzing or implementing such a scheme. Basically:

  • How much are projects worried about any kind of bribery attacks?
  • How much are projects worried about MPC attacks?
  • How much are projects worried about trusted hardware attacks?
  • Is requiring SGX during registration a dealbreaker? What about if this can be outsourced to a cloud instance (or e.g. bonded network of cloud instances) securely?
  • Is requiring a live staked committee a dealbreaker?
  • Are there any projects that want to commit to using such a scheme or working with us to develop it? Please contact me (phil thecirclesignthing linux themostpopularthreeletterTLD) if you have any interest in this.

(Obviously, we are tabling short-term practical concerns here, like getting licensed by Intel for either the attack or this defense; these are addressed somewhat in the blog post. It's worth noting that none of the private vote-buying attacks strictly require SGX; a minimal MPC can likely be made practical as well.)


#2

The multi-party computation note at the bottom is interesting. Is there a way for a subset of participants in the wider vote to generate some accumulator/encryption scheme that works similarly to the one you propose with SGX, such that the assumptions for bribery resistance extend so that an attacker needs to bribe all members of the MPC to gain access to the information required to decrypt the votes? I'm not sure what implications this has for actually calculating the tally, but something along these lines would interest me as a way to eliminate reliance on SGX, which creates a single point of failure in my mind.

Hopefully that makes sense, still absorbing your post.


#4

Makes perfect sense, and we've done quite a bit of thinking along those lines. Unfortunately I believe the current best answer is "pretty much no, with a lot of footnotes" (if you assume the briber has access to SGX): for any scheme where the user isn't using SGX, the briber can pay the user to run the scheme in an enclave that only votes a certain way. MPC alone is not enough, because the entire MPC could be running in a shackled enclave, and there is no way for the MPC itself to attest that it is not. If you assume a world where no attestation-enabled trusted hardware is available to the briber, then probably yes.


#5

Oh, also worth adding: you have a lot more options in a setting where you assume the user is not bribed at registration. For example, it may be possible to prevent bribery after the fact if you have something like a Senate voting scheme, where you assume Senators are honest at registration (or maybe there is even some way to validate this). This is mostly what the classical literature leverages (trusted registration). Otherwise stated, in some, potentially many, specific settings, receipt freeness without full bribery/coercion resistance may be OK. Whether this is acceptable probably depends on the application; I'd be interested to hear community thoughts.


#6

If we are concerned about hardware-aided bribery attacks, it seems MPC, or any cryptographic protocol, does not help without eroding election integrity.

Assume that any protocol begins with some keygen functionality where the user, and only the user, gets a key that allows them to vote. If that keygen is run in SGX, then the enclave can simply restrict all future uses of the key: it can answer all challenges about the key, but never issue a real vote unless authorized by the briber.

Now, we could relax this to allow other parties to get the key and then somehow let the user retrieve it from them. But then those parties can vote as if they were the user. I don't think this is an option anyone would take. Elections must have integrity, they ought to have ballot privacy, but bribery resistance seems like a distant third priority.

The more promising approach might be to assume honesty at setup and only protect against ex post facto bribery attempts. But to me, this seems unrealistic. If voting ability is tied to funds and those funds change hands, then setup/keygen must be repeatable. While it might be realistic to assume no one is being bribed at the initial setup, the ability to redo the setup makes bribery easy.


#7

Trusted Vault Attack?

Let's say that we have a voting scheme as described above, where every user's keys are locked inside an SGX enclave. An attacker hires a vault-making company to make a vault that users can be bribed to lock their hardware inside. The vault manufacturer is highly respected, and its vaults can be programmed to open if the attacker does not pay the promised bribe, so everyone trusts the vault maker's trusted "hardware".

So the way the vault works is:

  1. It only lets the user vote the way the attacker allows.
  2. It will release the SGX device if the attacker fails to pay the bribe.
  3. It will destroy the SGX device if the user tries to break the hardware out.

So basically you can always limit the functionality of an SGX enclave by locking it inside a bigger piece of trusted hardware.

Have you folks considered this?


#8

Yes, we’ve considered similar scenarios at length. The philosophical tl;dr is that if n-layer trusted hardware exists, your voting protocol in the ~permissionless model needs to operate on n-layer trusted hardware, since it can always be easily defeated by (n+1)-layer trusted hardware.

Economically the question is which “n” is feasible to build for general purpose computations (e.g. part of the reason the SGX attack sucks in my opinion is that most people will likely have easy access to SGX, and SGX would not restrict their funds; the vault in your story would be a far less compelling device, though who knows what people would put up with… cough).

So for the foreseeable future I think we can assume n=1 for all practical purposes; eventually we will need economic reasoning about what n is reasonable to secure how much value.