I’ve enjoyed reading the conversation here over the past few weeks and I agree about the threat that built-in extractable value poses. I’m surprised no one is talking about Chainlink’s proposal to abstract sequencing into an oracle layer. It seems like a pretty novel approach to me, but I am justsomelurker, and now I will return to my shadows.
Thank you for emerging from the shadows to contribute a solution @justsomelurker.
The Mempool-based FFS could be a good route. The oracle nodes should be incentivized to spread out geographically as much as possible so that no PoP has any particular advantage. I don’t understand how you force miners to respect the consensus, though.
One problem with fair ordering as you’ve described it which I think is missed in the academic literature is this:
Imagine a juicy Uniswap txn A enters the mempool that everybody wants to frontrun (sadly, it is also a year’s salary for the victim).
We have to assume oracle nodes are as self-interested as miners. Oracle nodes each add a transaction in front of A to frontrun it, and send their view of the market.
Let’s keep it simple and have 4 oracle nodes that all want a piece of the action, so they each insert their own transactions B, C, D and E in front of A in turn.
Now they don’t agree on what txn 1 should be, but they do all agree that A came in later and should be txn 2. Except that it didn’t and it shouldn’t. It’s irrelevant to A’s bad outcome who is in front of him, only that someone is.
If you try to fix it by weighting the consensus so that the more agreement you get about the position of a transaction the earlier it goes, you get crazy effects like a transaction that everyone agrees is last printing first in the block.
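To make the failure mode concrete, here is a toy simulation of both pathologies. The tx names, the slot-vote aggregation, and the "agreement weighting" rule are all my own illustrative assumptions, not anything from the paper:

```python
from collections import Counter

# Victim tx A arrived first; B..E are frontruns inserted by oracles 1..4;
# F is a tx everyone agrees arrived last.
txs = ["A", "B", "C", "D", "E", "F"]
# Each oracle reports its own frontrun first, then A, then the rest.
reports = [
    ["B", "A", "C", "D", "E", "F"],
    ["C", "A", "B", "D", "E", "F"],
    ["D", "A", "B", "C", "E", "F"],
    ["E", "A", "B", "C", "D", "F"],
]

# Slot-by-slot vote: which tx do most oracles place at each position?
slot_votes = [Counter(r[i] for r in reports) for i in range(len(txs))]
winner, votes = slot_votes[1].most_common(1)[0]
# The oracles split 4 ways on slot 0, but unanimously place A at slot 1,
# so A is confidently ordered second even though it arrived first. It
# doesn't matter to A which frontrun wins slot 0, only that one does.
assert (winner, votes) == ("A", 4)

# "Agreement weighting": the stronger the consensus on a tx's position,
# the earlier it prints. Peak agreement per tx:
agreement = {t: max(c[t] for c in slot_votes) for t in txs}
order = sorted(txs, key=lambda t: -agreement[t])
# F, which everyone agrees arrived last, now prints in the top two:
assert order.index("F") <= 1
```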
Don’t get me wrong. What you are proposing is lightyears better than the literal worst case scenario of total miner/validator/sequencer dominance leveraged by mev auctions that we have now and I would like to see it replace that in the absence of other options.
But content consensus must not be any more optional than structural consensus and averaging out different views of transaction order to achieve transaction fairness is more problematic than people realize. You’ll get my take on Monday.
Thank you for the reply. For the record, I am not associated with the authors in any way; I just like lurking here, read the paper, and found it relevant to the discussion.
Let’s say there is a juicy tx from Uniswap. If an attacker wants to extract value by reordering txs, the attacker would need a statistically significant number of nodes to report identical ordering; otherwise it will not pass the aggregation validations, and the nodes will be recognized as outliers, subsequently booted, and their staked funds lost. With the number of oracles in a given oracle network, nefarious behavior for self-interest is nearly impossible, plus there are large rewards for the identification of bad actors.
Even if a majority of oracles were somehow able to identify one another and coordinate re-ordering for value extraction, the paper proposes that the total value required to bribe a majority of nodes (with an amount greater than the value staked) is significantly greater than the value available in the reordering.
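To make the economic argument concrete, here is a back-of-the-envelope check. All numbers are made up for illustration; the paper's actual parameters will differ:

```python
# Hypothetical oracle network: 31 nodes, each staking 50 ETH.
nodes = 31
stake_per_node = 50.0
majority = nodes // 2 + 1              # 16 nodes needed to force an ordering

# A rational node only defects for a bribe exceeding what it would lose,
# so corrupting a majority costs at least:
min_bribe = majority * stake_per_node  # 16 * 50 = 800 ETH

# A reordering is only worth attempting if its value exceeds that bribe:
reorder_value = 120.0                  # value extractable from the juicy tx
assert reorder_value < min_bribe       # here the attack is uneconomical
```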
Sorry if I’m missing something, and thanks for the reply. Cheers.
At SKALE we are using 2/3-N-threshold encryption to provably remove MEV.
I feel like this is an underrated comment. @kladkogex I’d be curious for more details on how this works.
As I see it, the problems with MEV are mostly related to mempool transaction privacy. MEV searchers require opportunity protection from other searchers and users require exploitation protection from searchers. If transactions can be encrypted until they’re finalized, MEV would be limited to keeper-related transactions. This leaves searcher-operators who would be able to extract more value than a typical operator, but I’m not sure how you’d solve that or if you’d need to.
I agree. Alex improves the MEV situation greatly, but the tx rate may go up due to statistical frontrunning battles.
Here are some very very early thoughts on an encrypted mempool version (Dark Alex) if you are interested.
Thank you Tristan
We are working hard to make it easy to use. Basically you will mark one of your Solidity arguments as encrypted, and then it will be sent encrypted by the client, included into the block proposal in encrypted form, and only decrypted after the proposal is committed as a block.
The implementation on SKALE production network should be ready by this summer.
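For anyone curious what that lifecycle looks like mechanically, here is a toy sketch using Shamir secret sharing over a prime field. To be clear, this is my own illustration, not SKALE's actual implementation, and a real scheme would use distributed key generation rather than having one party split the key:

```python
import random

P = 2**127 - 1  # a Mersenne prime; the toy field we share secrets over

def split(secret, n, t):
    """Shamir-split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Lifecycle sketch: the client encrypts the marked Solidity argument under
# `key`, shares of `key` go to the 16 validators, and once the block
# containing the encrypted proposal is committed, any 11 shares (2/3 of 16,
# rounded up) recover the key and the argument is decrypted.
key = random.randrange(P)
shares = split(key, n=16, t=11)
assert reconstruct(random.sample(shares, 11)) == key
```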
Started reading up on Alex and Dark Alex, and I have some thoughts I want to share:
- I don’t think shuffling is enough. MEV searchers need mempool privacy for keeper opportunities. They will get it one way or another, most likely through deals with large pools, which has a centralizing effect.
- If you have mempool privacy until the transaction order is established, I don’t think you even need shuffling. The one exception to this would be if we desired to stop proposers from placing their encrypted transactions at the beginning of a block, but I don’t think that’s a bad thing. I think it might be a good thing if we formalized a good keeper design as one that sent the incentives to the coinbase… but that’s a slightly different topic.
- Dark Alex suggests that encrypted transactions would hide gas prices. What if we only encrypt the sensitive parts of a transaction, and allow the encrypted transaction to be valid enough to waste gas even if it’s never decrypted?
With these points in mind, I think Dark Alex could be simplified. I’m just brainstorming here, but what if it worked more like this:
- When creating a transaction, the sensitive parts are encrypted. The encryption key is chunked and split among a selection of validators using their pubkeys, such that it satisfies threshold encryption honesty assumptions. All of this, the chunks and which validators they belong to, is included in one transaction and sent out.
- Blocks are expanded to include both decrypted transactions and a new block draft with all encrypted transactions. It’s the block proposer’s responsibility to take the draft block from the last block that was added to the chain and decrypt it by requesting key chunks from all the validators who were selected in the transaction. Additionally, the proposer picks new encrypted transactions and creates a new draft block for the next block proposer to decrypt. From there, the block is formalized and the transaction order is attested to by comparing to the draft.
In short, the “shuffler” is replaced by encryption, the “picker” becomes the last block producer, the “printer” becomes the current block producer, and the transaction submitter chooses the “vaults” (maybe just default to the last N block proposers who didn’t miss, although I don’t think there’s an issue with a transaction submitter selecting them by custom means).
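A minimal sketch of that two-phase pipeline. The names and the stand-in string-reversal "encryption" are mine, just to show the data flow between draft blocks:

```python
from dataclasses import dataclass

@dataclass
class Block:
    decrypted: list   # txs decrypted from the previous proposer's draft
    draft: list       # freshly picked, still-encrypted txs

def propose(chain, mempool, decrypt):
    """One proposer step: decrypt the last draft, pick the next one."""
    prev_draft = chain[-1].draft if chain else []
    # The order was fixed when the draft was committed, so decrypting
    # now cannot be used to reorder or frontrun.
    decrypted = [decrypt(tx) for tx in prev_draft]
    draft, mempool[:] = mempool[:4], mempool[4:]   # pick encrypted txs
    chain.append(Block(decrypted, draft))

# Stand-in "encryption": string reversal (a real system would use the
# threshold scheme described above).
enc = dec = lambda s: s[::-1]
mempool = [enc(t) for t in ["t1", "t2", "t3"]]
chain = []
propose(chain, mempool, dec)   # block 1: picks the draft
propose(chain, mempool, dec)   # block 2: decrypts block 1's draft
assert chain[1].decrypted == ["t1", "t2", "t3"]
```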
We have an implementation of Threshold Encryption that is Solidity compatible.
It is in beta now, we are looking for people willing to contribute to the project (can issue SKL grants)
This is very cool @kladkogex
Thanks for reading @Tristannn. I appreciate your time and input.
Do you mean in the original Alex? There is no one for searchers to do a deal with because validators have no control over content. Any block which does not respect the consensus content (including randomization) will fail attestation.
I think I have confused the issue by linking to the original Alex from the Dark Alex doc. I only meant to do that to define terms. Dark Alex doesn’t have shufflers or pickers.
As you suggest, the printer just means the block producer. I say printer because I’m trying to be agnostic to mainnet/eth2/rollups, but in eth2 it just means the validator.
The vaults are just the roles that receive the key split. You have defined them as other validators. In reality on eth2, all of these roles will be assigned to validators to perform as part of their normal duties.
Our proposals are actually very similar.
I’m not sure the user can make assumptions about which chunk/validator they will be included in. I can see that getting messy.
I think printers need to be in charge of this. It is no longer a worry, because the txs are encrypted at this point, so all the printers have to go on when chunking txs is timestamp and gas price.
Doing this adds a +1 block delay.
By encrypting/decrypting chunks at close to network latency and having multiple chunks per block you
- preserve some time order (more if you do away with GPAs; see below)
- provide visibility of tx ordering before a block prints
- do it all in one block
well, that’s the idea anyway.
My preference is to have the content layer as decoupled from the structural layer as possible, oblivious to it in fact.
So the content layer just churns out chunk after chunk of zero-MEV txs from the mempool.
The structural layer scrabbles to catch up, writing contiguous content chunks to the blocks.
If a printer fails to write valid content chunks or leaves a gap their block fails attestation and the next printer does it right and gets the gas reward.
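A sketch of that attestation rule, assuming chunks carry sequential ids (my own toy model, not a spec):

```python
def attest(block_chunk_ids, last_written_chunk_id):
    """A block passes attestation only if it writes the next content
    chunks contiguously: no gaps, no reordering, no skipping."""
    expected = last_written_chunk_id + 1
    for chunk_id in block_chunk_ids:
        if chunk_id != expected:
            return False
        expected += 1
    return True

# The content layer emits chunks 0, 1, 2, ... independently of block
# production; printers scrabble to write them in order.
assert attest([3, 4, 5], last_written_chunk_id=2)       # contiguous: valid
assert not attest([3, 5], last_written_chunk_id=2)      # gap: fails
assert not attest([5, 4, 3], last_written_chunk_id=2)   # reordered: fails
```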
I’m pretty happy with it. The biggest issue I can see is that to avoid DDOS users will need to secure a bond on a smart contract so they can be penalized for spamming/invalid txs/misquoted gas prices.
One interesting advantage of that is that once you have mitigated DDOS with user bonds, you can do away with gas price auctions (less distortion of tx time order) and the gas price drops off a cliff - that’s a big change though, and needs thought!
Hello! I have been researching this issue for a couple of days. Following Tristannn’s proposal, in which the last block producer is the “picker”, I would like to hear your thoughts on using the picker’s block nonce as a seed to sort the transactions under proof-of-work, and any potential drawbacks, since:
- The picker won’t know the seed until they find the nonce for the block.
- Once they know the nonce, and therefore the sorting seed, they won’t be able to make small adjustments (like modifying gas price) to alter the tx hash and thus be positioned higher in the block, since the nonce would no longer be valid.
Then the next block producer would order the previous block’s picks using that block’s nonce as the seed, validate them under the new order, and pick the transactions for the following block.
Although, this solution would also add +1 block delay, as pmcgoohan pointed out.
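A sketch of the nonce-as-seed ordering, with sha256 as a stand-in for whatever hash the chain would actually use:

```python
import hashlib

def slot_key(block_nonce: int, tx_hash: bytes) -> bytes:
    """Sort key for one tx: H(block_nonce || tx_hash)."""
    return hashlib.sha256(block_nonce.to_bytes(8, "big") + tx_hash).digest()

def order_txs(picked_tx_hashes, block_nonce):
    """Deterministic order, knowable only once the PoW nonce is found."""
    return sorted(picked_tx_hashes, key=lambda h: slot_key(block_nonce, h))

# The picker commits to a tx set, then mines. The resulting nonce fixes
# the order; tweaking a tx afterwards (e.g. its gas price, to move it up)
# changes the block contents and invalidates the nonce.
txs = [bytes([i]) * 32 for i in range(5)]
perm = order_txs(txs, block_nonce=0xDEADBEEF)
assert sorted(perm) == sorted(txs)                      # same set, permuted
assert perm == order_txs(txs, block_nonce=0xDEADBEEF)   # deterministic
```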
Hi @alcercu. Thank you for joining the fray.
It’s a nice idea actually, especially as a cut the knot kind of solution.
I like the fact that you are forcing the miner to use a seed that you can prove they have seen. They cannot propose their block without admitting they have seen the previous block.
The drawbacks are:
- that you have a 1 block lag which is disruptive to existing user layers (as you pointed out)
- it mitigates transaction ordering attacks, but not transaction censorship attacks, as the miners still pick txs and can still add their own (sadly that’s a big deal, as it means they can perform statistical frontrunning attacks while preventing anyone else from doing so/protecting themselves against it)
- you will be randomizing an entire block rather than chunks within the block
On a related note, Alex may suffer from worse statistical frontrunning attacks than I realized, because more txs than I thought will fail, making it cheaper for an attacker.
Alex is way fairer than what we have now and mitigates a lot of MEV, but the problem is tx bloat.
Essentially with random ordering an attacker can give themselves a better chance of a good outcome by adding n txs.
However if another attacker does the same thing, they end up with no better chance and higher tx fees.
If a third (or more) attacker does the same thing, they all end up losing big.
If the would be victim also splits their tx into multiple txs they can protect themselves again.
So Alex fixes inequality, but at the cost of increasing the tx rate (by approx: extra tx count = arb value / failed tx cost)
I don’t think the community is ready for a solution which leads to this level of tx bloat, and I’m not sure I’d want to be responsible for it.
That’s what got me thinking about encrypted mempool/fair ordering variants of Alex.
What finally turned me off random ordering (for L1; it could still work on L2) was being shown this issue #21350 where Geth randomly ordered txs with the same gas price.
Apparently it led to tx bloat from backrunning attacks, so is quite a good real world proxy for the kind of issues random ordering systems may have.
I’m still digesting my thoughts, but I just wanted to clear up one point of confusion here. My suggestion was that the transaction creator could pick the nodes who get pieces of the decryption key instead of a scheduler picking vaults. The difference is nuanced, but it allows for partially encrypted transactions where a transaction that is never fully decrypted could still be considered valid enough to waste gas, which I believe would remove the need for a user bond.
That’s a very interesting idea. It would be ideal to avoid a user bond.
So perhaps one node having the solidity code with blank parameters and another filling in the parameters, or what were you thinking?
One issue is that you are giving nodes the power to submit bad txs even if the user does supply all keys.
So you might mitigate DDOS but at the risk of users being wrongly punished.
I’m not a huge fan of Timelock Encryption itself, but this part really caught my attention:
“The moonwalk order’s ZKPs allow to prove to O that solving the reasonably constrained time-lock puzzle unlocks a valid trade Xi, without revealing the order details or the identity of its originator”
If cheap to do, that may mean being able to mitigate DDOS in encrypted mempool solutions by validating encrypted txs without requiring a user bond.
Does anyone have any knowledge of whether this would be possible using ZKPs?
There are issues with what I’ve suggested previously, particularly with the fact that publicly targeting any specific validator as a vault would expose them to a risk of being DDOSed. I’ve come up with a variation that I think might work:
- Build a transaction, encrypt the data and to fields using a unique key. Maybe rename the data field to “encData” and the to field to “encTo” just to make it clear that this is an encrypted transaction.
- Add two new fields: “vaultFee” and “encVaults”.
- VaultFee is unencrypted and exposes an incentive to both the vaults and the proposer who is sharing the decrypted transaction.
- EncVaults is an array of entries, one per chosen vault: each is a validator address plus a piece of the unique key, encrypted with that validator’s pubkey.
- Assume that the draft block thing I suggested earlier is in effect
- When a draft block is posted, validators check each transaction to see if they have been chosen to be a vault. If they have been, they broadcast their decrypted EncVault entry.
- The block proposer decrypting the block can replace the encrypted transaction with the decrypted transaction for a portion of the vault fee, split with the validator vaults.
- If the block proposer posts the transaction without decrypting it, the vault fee is forfeit while the gas fees are spent, and the transaction, while useless, is considered valid.
- vaults are incentivised to participate
- proposers are incentivised to post decrypted transactions
- transactions that cannot be decrypted still waste gas
- no one knows who the vaults are until they reveal themselves
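Here is a toy sketch of that transaction shape, with the field names from above. The "encryption" is a placeholder pairing (obviously not real crypto), just to show that only the named vault can recognise its own entry:

```python
from dataclasses import dataclass

# Toy stand-ins: "encrypting to a pubkey" is just tagging the blob, and
# only the holder of the matching key can read it back.
def enc(key, blob): return (key, blob)
def dec(key, box):  return box[1] if box[0] == key else None

@dataclass
class EncTx:
    enc_data: tuple    # calldata encrypted under a one-off key
    enc_to: tuple      # destination, likewise encrypted
    vault_fee: int     # plaintext incentive for vaults and proposer
    enc_vaults: list   # per-vault key pieces, each readable only by it

def build_enc_tx(data, to, key_pieces, vault_pubkeys, fee):
    key = b"".join(key_pieces)   # unique per-tx key, split among vaults
    return EncTx(
        enc_data=enc(key, data),
        enc_to=enc(key, to),
        vault_fee=fee,
        enc_vaults=[enc(pk, piece)
                    for pk, piece in zip(vault_pubkeys, key_pieces)],
    )

def reveal(my_key, tx):
    """A vault recognises itself in a draft block and broadcasts its piece;
    everyone else (and every outside observer) learns nothing."""
    return [dec(my_key, box) for box in tx.enc_vaults if dec(my_key, box)]

tx = build_enc_tx(b"swap(...)", b"0xPool", [b"k1", b"k2", b"k3"],
                  ["valA", "valB", "valC"], fee=5)
assert reveal("valB", tx) == [b"k2"]   # only valB sees its piece
assert reveal("valZ", tx) == []        # non-vaults learn nothing
```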
I’m increasingly of the opinion that this problem is far easier to solve at the Dex layer than it is at the protocol/blockchain layer. A very anti front-runner measure would be for the Dex to add a poison pill that cancels multiple swaps from occurring in the same block. (With the rare “fast market” exception for ICO or highly volatile price discovery periods.)
If Uniswap simply adopted this measure, then Mev-Flashbots would collapse overnight. Even if you could guarantee bundling the target transaction, the poison pill would cancel the target and the frontrunner would have no profit opportunity. The Mev frontrunner could still try to mine the frontrun transaction and censor the target transaction, in the hope that the target is mined next block. But that would involve taking meaningful inventory risk, and it’d be relatively easy to counter-manipulate frontrunners by inserting-then-canceling spoofed swaps in the mempool.
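A toy constant-product pool with that poison pill, assuming a simple one-swap-per-block rule (my own sketch, not Uniswap code):

```python
class PoisonPillAMM:
    """Toy constant-product pool that reverts any second swap within the
    same block, so a sandwich (frontrun + victim) cannot both land."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.last_swap_block = -1

    def swap_x_for_y(self, dx, block_number):
        if block_number == self.last_swap_block:
            raise RuntimeError("poison pill: one swap per block")
        self.last_swap_block = block_number
        dy = self.y - (self.x * self.y) / (self.x + dx)  # keep x*y constant
        self.x += dx
        self.y -= dy
        return dy

pool = PoisonPillAMM(1_000.0, 1_000.0)
pool.swap_x_for_y(10.0, block_number=100)        # frontrun lands...
try:
    pool.swap_x_for_y(50.0, block_number=100)    # ...so the victim reverts
    raise AssertionError("second swap should have reverted")
except RuntimeError:
    pass
pool.swap_x_for_y(50.0, block_number=101)        # fine in the next block
```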
This isn’t a perfect solution. But the fact that such a simple solution would counteract the majority of frontrunning suggests that it’s a lot easier to fix problems at the Dex layer than the blockchain layer. I see a lot of brilliant people on these forums inventing fiendishly sophisticated systems to work around these problems. But it all papers over the fact that Uniswap is fundamentally broken.
MEV is more than just a DEX thing. It includes: mempool exploitation, assumed behavior like arbitrage, and specifically designed keeper transactions like Maker liquidations.
Any solution other than mempool encryption drives MEV searchers to make deals with large pools in order to keep their transactions private, which means larger pools get to extract more value and the network is incentivized to centralize.
Flashbots solves this by allowing lots of smaller pools to operate as a single large pool for MEV, but there is no real room for individuals to participate in Flashbots due to the fact that they can exploit it the same way the public mempool can be exploited.
Agree and disagree. You are right that Mev makes up more than just front-running. It would be nice to have an elegant mempool encryption scheme that fixes everything at once.
But practically speaking, front-running makes up the vast bulk of Mev revenue. If you take that away as an income source, it’s likely that many miner pools would no longer find the revenue justifies the cost of the Flashbots operation. Second, front-running is far worse from a user experience perspective, whereas back-running and liquidation races are generally “victimless”: they’re just competing to capture some pre-existing market inefficiency, while front-running actually makes the target poorer.
Finally, I’m not sure if mempool encryption would actually fix Mev auctions dynamics for either back running or liquidation races anyway. In both cases, some market activity triggers a profit opportunity, one that will continue to exist after the target transaction commits to the blockchain. With mempool encryption arbitrageurs won’t see the opportunity in the mempool, but they will see the opportunity after the block prints. And hence, they’ll still compete to be in the first position of the next block.
Yes! That’s brilliant. As long as an encrypted transaction wastes gas you don’t need all the extra fuss of an explicit user bond.
You then need to be sure to protect users from their txs being left undecrypted by a censoring attacker.
One thought I had about that is that in Alex the same vaults could be assigned to a whole chunk (portion of a block) and that they would reveal the entire set of keys for that chunk in one message.
This stops the printer (block producer) from selectively censoring txs within a chunk. In Alex, the printer has no power to skip the chunk unless granted by consensus. So if the consensus can see that the vaults have revealed, they won’t allow the printer to skip, and the printer will get less or no gas reward. The next printer will decrypt the chunk and carry on, claiming the gas reward instead.
I’d like to hear more from you about how the enc tx could be made to waste gas.