If I understand the problem correctly: you are proposing a scheme where users, identified cryptographically, can vote on something, but are disincentivized from selling their votes to each other (“collusion resistance”).
If I understand the solution correctly: users are indexed, in exchange for a deposit, in a centralized registry operated by a trusted third party, and may update their identity there by providing proof of it. Both updates and votes are sent to the operator signed and encrypted, so no one can enumerate the full history of a user’s actions, even though the user can prove any individual update or vote by producing the signed message that encrypts to one already posted on-chain. A would-be vote buyer therefore can never fully trust a seller, who may be concealing that what they are selling is no longer current in the registry.
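To make the selective-revelation property concrete, here is a toy sketch. I model the encrypted on-chain record as a hash commitment (the real proposal would presumably use public-key encryption to the operator, but a commitment exhibits the same “prove one message, hide the rest” behavior); all names and message formats are my own invention.

```python
import hashlib
import os

def record(message: bytes, nonce: bytes) -> bytes:
    # A hiding commitment standing in for the encrypted on-chain
    # record: without the nonce, the record reveals nothing.
    return hashlib.sha256(nonce + message).digest()

# A user performs several actions; only the records go on-chain.
actions = [b"register: key A", b"update: key B", b"vote: candidate X"]
kept = []  # the user privately keeps (record, nonce, message)
for msg in actions:
    nonce = os.urandom(16)
    kept.append((record(msg, nonce), nonce, msg))

onchain = [r for r, _, _ in kept]  # all anyone else ever sees

# The user can prove any *single* action by opening one record...
r, nonce, msg = kept[2]
assert record(msg, nonce) == r and r in onchain

# ...but the chain alone never reveals that kept[1] replaced the key
# used for the vote a buyer thinks they purchased.
```

The point of the assertion is that opening one record proves exactly one action; the remaining records stay opaque, which is what leaves the buyer uncertain.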
I may be missing some design criterion here, but why not just require a user to cast their vote, secretly, at the time they make the deposit? If the trusted operator can be omniscient enough to enforce setup rule 1, they can be equally certain that the vote genuinely comes from the person casting it, and the secret ballot prevents vote-selling just as it does in real life.
Actually, rule 1 (who gets to participate) is the central problem that consensus mechanisms exist to solve, and I think it deserves more weight as a big problem here: without it, nothing stops a sockpuppet army or a plutocracy from taking over.
Rule 2, in my experience, has no teeth: unless you want people to turn up in person to vote, they are anonymized enough that the most secure way to identify them is the very key they are submitting. You may as well just say that a user is their key. I suppose you could argue that a user could be globally identified by one master key and submit a secondary key, signed by the first, for this particular contest, but that just kicks the problem up a level to verifying the master key (the analogue of showing up in person), while also introducing another layer of trusted centralization.
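To illustrate why the delegation chain only moves the problem up a level, here is a toy sketch using textbook RSA with tiny, insecure parameters (purely illustrative; a real scheme would use something like Ed25519, and every name here is my own assumption):

```python
# Toy textbook-RSA signatures -- insecure parameters, for illustration.
def keygen(p: int, q: int, e: int = 17) -> dict:
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse, Python 3.8+
    return {"n": n, "e": e, "d": d}

def sign(key: dict, msg: int) -> int:
    return pow(msg % key["n"], key["d"], key["n"])

def verify(key: dict, msg: int, sig: int) -> bool:
    return pow(sig, key["e"], key["n"]) == msg % key["n"]

master = keygen(1009, 1013)   # the user's global identity key
contest = keygen(991, 1019)   # a secondary key for this one contest

# The master key endorses the secondary key's public parameters...
fingerprint = contest["n"]    # stand-in for hashing the public key
delegation = sign(master, fingerprint)
assert verify(master, fingerprint, delegation)

# ...but checking this signature only shifts the question upward:
# the verifier must already trust that `master` belongs to exactly
# one real person, which is the thing we had no way to check.
```

The delegation verifies mechanically; what it cannot establish is the binding between the master key and a unique human, which is precisely the in-person step.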
Okay, but let’s say the voters really do need to be able to change their minds, that the system for selecting them is satisfactory, and that we even manage to do all this in a decentralized way. Then I think the argument for collusion resistance resembles the common internet security pattern: “after creating an account, you must immediately change your password over an encrypted connection.” At that point, an attacker has no idea “who” you really are, even if they know your account-creation data.
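The “change your password immediately” defence can be sketched as a toy registry model (my own simplification; the user names, key labels, and message formats are invented):

```python
# The key a vote buyer saw at account creation:
registry = {"user42": "key_A"}

def update_key(user: str, old: str, new: str) -> None:
    # Sent signed and encrypted to the operator, so an outside
    # observer cannot tell whether this update ever happened.
    if registry.get(user) == old:
        registry[user] = new

def cast_vote(user: str, key: str, choice: str):
    # The operator counts only votes signed by the *current* key.
    return choice if registry.get(user) == key else None

# The user rotates keys right after registering, as with a password:
update_key("user42", "key_A", "key_B")

# A buyer who purchased key_A casts a vote that is silently dropped,
# and cannot detect from public data that this happened:
assert cast_vote("user42", "key_A", "candidate Y") is None
# The real user still votes with the new key:
assert cast_vote("user42", "key_B", "candidate X") == "candidate X"
```

The buyer’s vote fails not with an error but silently, which is what makes the purchased key worthless without the buyer knowing it.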
However, the two share the same vulnerability to shoulder-surfing (as it were). In fact, a conspiracy dedicated to buying the election could offer payment in advance, in exchange for installing a monitor on each seller’s computer to see what their actual messages were. This might even satisfy the sellers as to their personal security, if the monitor cannot access their authentication data or private keys (say, by literally being a camera pointed over their shoulder at the voting app).
So although I agree that this scheme severely demoralizes attempts to fix the election by offering bribes, it seems to have a real-world vulnerability to a prepared adversary, because it depends on the secrecy of the message history.