MEV Auctions Will Kill Ethereum

Thanks Karl, that is very helpful to point out. I have conflated the two projects in my discussions and I will distinguish between them from now on where relevant.

I am extremely pleased to hear that we are attempting to reduce MEV in the sequencer and to only use auctions for the remainder. Are we doing this with the block producers too? Our aim must be to make MEV so low that it’s not worth bidding for in an auction.

Here are a few ideas to chew over:

Our problems with MEV are because Ethereum is not fully decentralized.

Block structure is fully decentralized. Blocks are proposed and validated by consensus across tens of thousands of nodes.

Block content is created by a centralized authority (miner/validator).

In short, block content is not trustless.

There is a historical reason for this. The Ethereum devs had their hands full in the run-up to genesis. Creating the world’s first and best blockchain smart contract network was a massive deal and rightly took all of their stretched resources to complete.

As a result the consensus mechanism had to be largely borrowed from Bitcoin. In Bitcoin the transaction order within a block is irrelevant. Transaction censorship isn’t really a problem either, just an inconvenience. As the MEV analysis shows, this is very much not the case with smart contracts. It was understandable at the time but it’s 6 years later now and we’ve seen the harm it causes.

Addressing the hidden centralization in block content creation is where I feel our energies should be directed. I would love to see all these sharp minds getting stuck into this problem.


Words of wisdom!!

Blocks are proposed and validated by consensus across tens of thousands of nodes.

Oops… why do you think there are thousands of nodes? For PoW, mining pools control block proposals (there are roughly 10-20 of them, I think).


The node count is >10000. I get that the proposer count is far less.

But my point is whatever deficiencies the structural layer of the consensus may or may not have, it’s a lot stronger than the content layer which is… non-existent.

This is a technical problem. We need developers to fix this problem, not the market. There’s a market for stolen credit card details. Perhaps it’s wonderfully efficient and a great example of the free interplay of supply and demand. But it never should have had the opportunity to exist because, like MEV, it is the product of an exploit that never should have happened.

Just saw this (it isn’t me btw)


It’s a good idea, but you can do it more simply and efficiently than that with a simultaneous constant product calculation. I’ll have more to say on application layer MEV fixes like this soon (which will include a model for this). For now I am concentrating on a content layer fix.

I have an admission to make to Flashbots @thegostep. I now understand that as a short term fix, MEV-Geth reduces gas prices and transaction bloat etc, and I agree that right now on mainnet it is net positive and a force for good. My apologies for lumping it in with eth2/rollups MEV auctions.

My fears lie in organized MEV auctions/extraction continuing into eth2 and rollups.

What I feel we must avoid is fostering the same culture of entitlement with validators/sequencers that we currently have with miners.

On Monday I will post my ideas for fixing MEV in the content layer. It is sadly too late to apply them to mainnet because it is so against miners’ interests to adopt that it would likely create a fork and destabilize the network.

But we get a clean slate with validators/sequencers…

We need to start putting out the idea that as a validator/sequencer you will not be entitled to (or even be able to) exploit users for MEV the way that miners currently can, and that this is for the long term good of the network. Because it is!

Traditional finance is never going to move over to Ethereum in a serious way while it is as exploitable as it is, and if they do, it will be for the wrong reasons: because they want to exploit it themselves!

Anyway more from me on Monday, in what will be a far more upbeat thread.

I am not suggesting copying every notion of traditional finance. I am talking about the specific notion of slippage, and slippage is not designed so that people can exploit other people. The reason we don’t have this in DeFi is not that we don’t want to exploit our users; it is that we want to provide a better UX for greedy users! I talked about just one instance in my previous comment. Another is that in Uniswap v3.0, if all those greedy users concentrate all their liquidity on a very short interval, a whale can just buy all the reserve of one token in a trading pair with zero slippage whatsoever!

I’ve enjoyed reading the conversation here over the past few weeks and I agree with the threat that built-in extractable value poses. I’m surprised no one is talking about Chainlink’s proposal to abstract sequencing into an oracle layer. It seems like a pretty novel approach to me, but I am justsomelurker, and now I will return to my shadows :slight_smile:

pg 48 a5511b75-559d-441c-8142-2b5226a9e332.pdf


Thank you for emerging from the shadows to contribute a solution @justsomelurker. :wink:

The Mempool-based FFS could be a good route. The oracle nodes should be incentivized to spread out geographically as much as possible so that no PoP has any particular advantage. I don’t understand how you force miners to respect the consensus, though.

One problem with fair ordering as you’ve described it which I think is missed in the academic literature is this:

Imagine a juicy Uniswap txn A enters the mempool that everybody wants to frontrun (sadly also a year’s salary for the victim).

We have to assume oracle nodes are as self-interested as miners. Oracle nodes each add a transaction in front of A to frontrun it, and send their view of the market.

Let’s keep it simple and have 4 oracle nodes that all want a piece of the action, so they each insert their own transactions B, C, D and E in front of A in turn:

B, A
C, A
D, A
E, A

Now they don’t agree what txn 1 should be, but they do all agree that A came in later and should be txn 2. Except that it didn’t and it shouldn’t. It’s irrelevant to A’s bad outcome who is in front of him, only that someone is.

If you try to fix it by weighting the consensus so that the more agreement you get about the position of a transaction the earlier it goes, you get crazy effects like a transaction that everyone agrees is last printing first in the block.
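To make that "crazy effect" concrete, here is a toy rank-aggregation model (purely illustrative, not any specific fair-ordering protocol): every node reports A last in its own view, yet under average-rank aggregation A prints first, while the frontrunning txs tie behind it.

```python
# Toy model of "fair ordering" by rank aggregation (hypothetical scheme).
# Four self-interested oracle nodes each prepend their own frontrunning
# tx to the victim tx A before reporting their view.

from collections import defaultdict

views = [
    ["B", "A"],  # node 1's reported order
    ["C", "A"],  # node 2
    ["D", "A"],  # node 3
    ["E", "A"],  # node 4
]

ALL_TXS = {"A", "B", "C", "D", "E"}

# Aggregate by average reported position; a tx a node didn't see is
# treated as arriving after everything that node did see.
positions = defaultdict(list)
for view in views:
    for i, tx in enumerate(view):
        positions[tx].append(i)
    for tx in sorted(ALL_TXS - set(view)):
        positions[tx].append(len(view))  # "after my whole view"

avg = {tx: sum(p) / len(p) for tx, p in positions.items()}
final_order = sorted(avg, key=avg.get)
print(final_order)  # → ['A', 'B', 'C', 'D', 'E']
# A, which every single node reported last, prints first, because it is
# the only tx with unanimous (hence low-variance) position reports.
```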

Don’t get me wrong. What you are proposing is lightyears better than the literal worst case scenario of total miner/validator/sequencer dominance leveraged by mev auctions that we have now and I would like to see it replace that in the absence of other options.

But content consensus must not be any more optional than structural consensus and averaging out different views of transaction order to achieve transaction fairness is more problematic than people realize. You’ll get my take on Monday.


Thank you for the reply. For the record I am not associated with the authors in any way; I just like lurking here, read the paper, and found it relevant to the discussion.

Let’s say there is a juicy tx from Uniswap. If an attacker wants to extract value by reordering txs, the attacker would need a statistically significant number of nodes to report identical ordering; otherwise it will not pass the aggregation validations, and the nodes will be recognized as outliers, subsequently booted, and their staked funds lost. With the number of oracles in a given oracle network, nefarious behavior for self-interest is nearly impossible, plus there are large rewards for the identification of bad actors.

Even if a majority of oracles were somehow able to identify one another and coordinate re-ordering for value extraction, the paper proposes that the total value required to bribe a majority of nodes (with an amount greater than the value staked) is significantly greater than the value available in the reordering.

Sorry if I’m missing something, thanks for the reply. Cheers.

At SKALE we are using 2/3-N-threshold encryption to provably remove MEV.

I feel like this is an underrated comment. @kladkogex I’d be curious for more details on how this works.

As I see it, the problems with MEV are mostly related to mempool transaction privacy. MEV searchers require opportunity protection from other searchers and users require exploitation protection from searchers. If transactions can be encrypted until they’re finalized, MEV would be limited to keeper-related transactions. This leaves searcher-operators who would be able to extract more value than a typical operator, but I’m not sure how you’d solve that or if you’d need to.


I agree. Alex improves the MEV situation greatly but the tx rate may go up due to stat frontrunning battles.

Here are some very very early thoughts on an encrypted mempool version (Dark Alex) if you are interested.

Also @samueldashadrach and @Nickoshi you’ve had some good ideas on encrypted txs if you want to take a look. And anyone else…

Dark Alex - An Encrypted Content Layer Protocol (Under Construction)

Thank you Tristan :slight_smile:

We are working hard to make it easy to use. Basically you will mark one of your Solidity arguments as encrypted, and then it will be sent encrypted by the client, included into the block proposal in encrypted form, and only decrypted after the proposal is committed as a block.

The implementation on SKALE production network should be ready by this summer.

Started reading up on Alex and Dark Alex, and I have some thoughts I want to share:

  1. I don’t think shuffling is enough. MEV searchers need mempool privacy for keeper opportunities. They will get it one way or another, most likely through deals with large pools, which has a centralizing effect.
  2. If you have mempool privacy until the transaction order is established, I don’t think you even need shuffling. The one exception to this would be if we desired to stop proposers from placing their encrypted transactions at the beginning of a block, but I don’t think that’s a bad thing. I think it might be a good thing if we formalized a good keeper design as one that sent the incentives to the coinbase… but that’s a slightly different topic.
  3. Dark Alex suggests that encrypted transactions would hide gas prices. What if we only encrypt the sensitive parts of a transaction, and allow the encrypted transaction to be valid enough to waste gas even if it’s never decrypted?

With these points in mind, I think Dark Alex could be simplified. I’m just brainstorming here, but what if it worked more like this:

  • When creating a transaction, the sensitive parts are encrypted. The encryption key is chunked and split between some selection of validators using their pubkeys such that it satisfies threshold-encryption honesty assumptions. All of this (the chunks and which validator each belongs to) is included in one transaction and sent out.
  • Blocks are expanded to include both decrypted transactions and a new block draft with all encrypted transactions. It’s the block proposer’s responsibility to take the draft block from the last block that was added to the chain and decrypt it by requesting key chunks from all the validators who were selected in the transaction. Additionally, the proposer picks new encrypted transactions and creates a new draft block for the next block proposer to decrypt. From there, the block is formalized and the transaction order is attested to by comparing to the draft.

In short, the “shuffler” is replaced by encryption, the “picker” becomes the last block producer, the “printer” becomes the current block producer, and the transaction submitter chooses the “vaults” (maybe just default to the last N block proposers who didn’t miss, although I don’t think there’s an issue with a transaction submitter selecting them by custom means).
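As a sketch of the key-chunking step above, here is a toy t-of-n Shamir split of a transaction key over a prime field (parameters are illustrative; a production system would use a proper threshold-encryption library with authenticated, per-validator encrypted shares).

```python
# Minimal t-of-n Shamir secret sharing over a prime field, as a sketch
# of chunking a tx encryption key across validators (toy parameters).

import random

PRIME = 2**127 - 1  # Mersenne prime, large enough for a 16-byte key

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares such that any t reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)           # the tx encryption key
shares = split(key, n=7, t=5)           # e.g. 7 validators, any 5 suffice
assert reconstruct(shares[:5]) == key   # a quorum recovers the key
assert reconstruct(shares[-5:]) == key  # any quorum works
```

The next block proposer would gather any t of the n shares to decrypt the draft block, so up to n-t validators can be offline or withholding without blocking decryption.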


Yes!

We have an implementation of Threshold Encryption that is Solidity compatible.

It is in beta now, we are looking for people willing to contribute to the project (can issue SKL grants)


This is very cool @kladkogex :slightly_smiling_face:

Thanks for reading @Tristannn. I appreciate your time and input.

Do you mean in the original Alex? There is no-one for searchers to do a deal with, because validators have no control over content. Any block which does not respect the consensus content (including randomization) will fail attestation.

I think I have confused the issue by linking to the original Alex from the Dark Alex doc. I only meant to do that to define terms. Dark Alex doesn’t have shufflers or pickers.

As you suggest, the printer just means the block producer. I say printer because I’m trying to be agnostic to mainnet/eth2/rollups, but in eth2 it just means the validator.

The vaults are just the roles that receive the key split. You have defined them as other validators. In reality on eth2 all of these roles will be assigned to validators to perform as part of their normal duties.

Our proposals are actually very similar.

I’m not sure the user can make assumptions about which chunk/validator they will be included in. I can see that getting messy.

I think printers need to be in charge of this. It is no longer a worry because the txs are encrypted at this point, so all the printers have to go on when chunking txs is timestamp and gas price.

Doing this adds a +1 block delay.

By encrypting/decrypting chunks at close to network latency and having multiple chunks per block you

  1. preserve some time order (more if you do away with GPAs; see below)
  2. provide visibility of tx ordering before a block prints
  3. do it all in one block

well, that’s the idea anyway.

My preference is to have the content layer as decoupled from the structural layer as possible, oblivious to it in fact.

So the content layer just churns out chunk after chunk of zero-MEV txs from the mempool.

The structural layer scrabbles to catch up, writing contiguous content chunks to the blocks.

If a printer fails to write valid content chunks or leaves a gap their block fails attestation and the next printer does it right and gets the gas reward.

I’m pretty happy with it. The biggest issue I can see is that to avoid DDOS users will need to secure a bond on a smart contract so they can be penalized for spamming/invalid txs/misquoted gas prices.

One interesting advantage of that is that once you have mitigated DDOS with user bonds, you can do away with gas price auctions (less distortion of tx time order) and the gas price drops off a cliff :slightly_smiling_face: - that’s a big change though, needs thought!

Hello! I have been researching this issue for a couple of days. Following Tristannn’s proposal, in which the last block producer is the “picker”, I would like to hear your thoughts on using the picker’s block nonce as a seed to sort the transactions on Proof-of-work, and any potential drawbacks, since:

  1. The picker won’t know the seed until they find the nonce for the block.
  2. Once they know the nonce, and therefore the sorting seed, they won’t be able to make small adjustments (like modifying gas price) to alter the tx hash and thus be positioned higher in the block, since the nonce would no longer be valid.

Then the next block would order the transactions using the previous block’s nonce as seed, validate the previous block’s picks using the new order, and pick the transactions for the following block.

This solution would also add a +1 block delay though, as pmcgoohan pointed out.
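The idea can be sketched as follows (a toy model with illustrative names, not client code): each tx’s position is derived from a hash of the found nonce and the tx hash. Since the nonce is only known once the work is done, the picker can’t grind tx contents for a better position without invalidating the nonce.

```python
# Toy sketch of nonce-seeded tx ordering under PoW. The sort key binds
# each tx's position to the block nonce, which the picker only learns
# after mining; changing a tx (e.g. its gas price) changes its hash and
# hence its position unpredictably.

import hashlib

def sort_key(nonce: int, tx_hash: bytes) -> bytes:
    return hashlib.sha256(nonce.to_bytes(8, "big") + tx_hash).digest()

def order_txs(nonce: int, tx_hashes: list[bytes]) -> list[bytes]:
    return sorted(tx_hashes, key=lambda h: sort_key(nonce, h))

# Five dummy tx hashes
txs = [hashlib.sha256(bytes([i])).digest() for i in range(5)]

ordered = order_txs(1, txs)
print(len(ordered))  # 5: same txs, order fully determined by the nonce
```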


Hi @alcercu. Thank you for joining the fray.

It’s a nice idea actually, especially as a cut the knot kind of solution.

I like the fact that you are forcing the miner to use a seed that you can prove they have seen. They cannot propose their block without admitting they have seen the previous block.

The drawbacks are:

  • that you have a 1 block lag which is disruptive to existing user layers (as you pointed out)
  • it mitigates transaction ordering attacks, but not transaction censorship attacks, as the miners still pick txs and can still add their own (sadly that’s a big deal, as it means they can perform statistical frontrunning attacks while preventing anyone else from doing so or protecting themselves against it)
  • you will be randomizing an entire block rather than chunks within the block

On a related note, Alex may suffer from worse statistical frontrunning attacks than I realized because more txs than I thought will fail making it cheaper for an attacker.

Alex is way fairer than what we have now and mitigates a lot of MEV, but the problem is tx bloat.

Essentially with random ordering an attacker can give themselves a better chance of a good outcome by adding n txs.

However if another attacker does the same thing, they end up with no better chance and higher tx fees.

If a third (or more) attacker does the same thing, they all end up losing big.

If the would be victim also splits their tx into multiple txs they can protect themselves again.

So Alex fixes inequality, but at the cost of increasing the tx rate (by approx: extra tx count = arb value / failed tx cost)
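A quick Monte Carlo sketch of that dynamic (a toy model with illustrative numbers): under uniform random ordering, an attacker submitting n copies of a frontrun tx lands ahead of the victim with probability n/(n+1), so each extra copy buys a diminishing edge at the cost of another failed tx.

```python
# Monte Carlo sketch of statistical frontrunning under random ordering.
# Only the first attacker copy to land ahead of the victim succeeds;
# the remaining copies fail and burn gas.

import random

def p_frontrun(n_attacker_txs: int, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        slots = ["victim"] + ["attacker"] * n_attacker_txs
        random.shuffle(slots)
        if slots[0] == "attacker":  # some attacker tx precedes the victim
            wins += 1
    return wins / trials

for n in (1, 4, 9):
    print(n, round(p_frontrun(n), 2))  # approx 0.5, 0.8, 0.9

# Rough break-even per the text: adding a copy pays while
# arb_value * delta_p > failed_tx_cost, i.e. extra txs scale like
# arb_value / failed_tx_cost, which is the tx bloat.
```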

I don’t think the community is ready for a solution which leads to this level of tx bloat, and I’m not sure I’d want to be responsible for it.

That’s what got me thinking about encrypted mempool/fair ordering variants of Alex.

What finally turned me off random ordering (for L1; it could still work on L2) was being shown this issue #21350 where Geth randomly ordered txs with the same gas price. Apparently it led to tx bloat from backrunning attacks, so it is quite a good real-world proxy for the kind of issues random ordering systems may have.


I’m still digesting my thoughts, but I just wanted to clear up one point of confusion here. My suggestion was that the transaction creator could pick the nodes who get pieces of the decryption key instead of a scheduler picking vaults. The difference is nuanced, but it allows for partially encrypted transactions where a transaction that is never fully decrypted could still be considered valid enough to waste gas, which I believe would remove the need for a user bond.
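To sketch what such a partially encrypted tx might look like (all field names here are purely illustrative): the fee-relevant fields stay in plaintext so gas can be charged even if the calldata is never decrypted.

```python
# Sketch of a "partially encrypted" tx: gas fields remain plaintext so
# the tx can be charged even if never decrypted; only the sensitive
# calldata is ciphertext. Hypothetical structure, not any client format.

from dataclasses import dataclass, field

@dataclass
class PartiallyEncryptedTx:
    sender: str                 # plaintext: needed to debit gas
    gas_price: int              # plaintext: needed for fee/inclusion logic
    gas_limit: int              # plaintext: bounds what a spammer can burn
    to: str                     # plaintext (could also be encrypted)
    encrypted_calldata: bytes   # ciphertext of the sensitive arguments
    key_shares: dict = field(default_factory=dict)  # validator -> key chunk

def chargeable(tx: PartiallyEncryptedTx) -> bool:
    # Even if decryption never happens, these fields suffice to charge
    # the sender for the gas the undecrypted tx wasted, replacing the
    # user bond as the anti-spam mechanism.
    return bool(tx.sender) and tx.gas_price > 0 and tx.gas_limit > 0
```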

That’s a very interesting idea. It would be ideal to avoid a user bond.

So perhaps one node having the solidity code with blank parameters and another filling in the parameters, or what were you thinking?

One issue is that you are giving nodes the power to submit bad txs even if the user does supply all keys.

So you might mitigate DDOS but at the risk of users being wrongly punished.

Or did you have something like this in mind @Tristannn?

I’m not a huge fan of Timelock Encryption itself, but this part really caught my attention:

“The moonwalk order’s ZKPs allow to prove to O that solving the reasonably constrained time-lock puzzle unlocks a valid trade Xi, without revealing the order details or the identity of its originator”

If cheap to do, that may mean being able to mitigate DDOS in encrypted mempool solutions by validating encrypted txs without requiring a user bond.

Does anyone have any knowledge of whether this would be possible using ZKPs?