MEV Auctions Will Kill Ethereum

First of all, I totally agree that Uniswap are at fault. There are all kinds of things they could be doing differently. The composability that DeFi sees as such a great feature is a MEV/exploit nightmare, as are many other aspects. I’ll post about Uniswap’s woes another time.

But there is a more fundamental problem than this…

MEV as we know it today is just the first and currently most obvious effect of severe transactional data corruption in Ethereum (extreme deviation from send time ordering). MEV seems like a DEX problem because DEXs are what Ethereum is mostly used for right now.

Imagine a retail market where you send an order for groceries. Your local food stall sees the transaction and can bike it over to you in 5 minutes at the best price. But Walmart have paid the MEV Auction winner to censor all competing transactions from the block. You pay more, the local store closes down.

That’s one example. How about when healthcare starts using Ethereum? How about the military?

We have a severe transactional data corruption issue in Ethereum. Coders/computer scientists know that data corruption leads to unpredictable and severe negative effects in software.

MEV Auctions worsen this already intolerable data integrity issue because they push transaction order corruption to its extreme.

If we don’t fix this data integrity issue then Ethereum has failed (whether it is adopted or not) because it is currently worse at transaction processing than a regulated centralized competitor.

In a way, don’t oracles solve this problem by requiring a consensus model for off-chain data submitted by multiple parties?

If the mempool is encrypted, and multiple parties are involved in selecting the portion of the mempool to operate on, followed by multiple parties executing the block creation, and the block only being included if a quorum of parties agree on correctness, then any attempt to arbitrage the transactions has to occur on multiple randomly selected participants at the same time. The parties ordering the mempool transactions don’t know what they are, and the parties executing on the order selected by quorum can’t manipulate them without their block failing attestation.
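
This quorum idea can be made concrete with a small sketch. Everything here is a hypothetical illustration, assuming each party independently proposes an ordering over still-encrypted tx ids and a block only stands if enough of those proposals match:

```python
# Minimal sketch: quorum agreement on an ordering of encrypted txs.
# QUORUM and the proposal shape are assumptions for illustration.
from collections import Counter

QUORUM = 5  # hypothetical number of matching proposals required

def quorum_block(proposals: list[tuple]) -> tuple | None:
    """Each proposal is a tuple of (still-encrypted) tx ids in proposed
    order. A block is only accepted if at least QUORUM parties
    independently produced the identical ordering."""
    if not proposals:
        return None
    ordering, votes = Counter(proposals).most_common(1)[0]
    return ordering if votes >= QUORUM else None
```

Because the orderers only ever see ciphertexts, a single party that reorders txs to extract value simply diverges from the quorum and is outvoted, which is the property described above.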

The submission for selection and inclusion could be published after the fact as an audit trail, and participants who are frequently losing quorum could be removed from the pool. There would be a single decision maker in terms of analyzing the proposed blocks for quorum, but the work is deterministic and auditable, so it seems like it should be harder to attack.

In a proof of stake model, it doesn’t seem like there should be a requirement to have only a single party proposing a block from a set, because you aren’t throttled by solving a proof-of-work puzzle. All of the parties involved could share in the gas fee for the transaction, which might increase to cover the extra work done for security.

I suspect there are some flaws in this analysis, but carrying forward the model of how Oracles already solve this problem for off-chain data might be applicable?

Do you mean like the Chainlink fair ordering proposal @jlivesay?

@justsomelurker mentioned this earlier…

@pmcgoohan Yes, this looks similar, thank you. It looks like I missed that comment on my read-through.

Did this proposal have flaws, or just not get traction?

It seems like Solana’s proof-of-history model is their attempt to provide deterministic ordering as well.

From this conversation, if you combined the approaches of threshold encryption to keep the mempool private until sequencing was established, and then used something like the FSS approach to ensure the blocks weren’t tampered with by validators/miners after receiving the sequence, do you get fair transaction execution at the cost of more work? Seems like transaction fees would need to nominally increase to cover paying the extra parties, but with the benefit of security. Perhaps it’s an optional path a user can select, since many transactions aren’t concerned with this.

You read my mind. I am collaborating with experts in the Aequitas method to develop a variant of the Alex protocol with implementable fair ordering.

If possible it would be great to combine it with encryption. Not knowing the contents of the transactions you are ordering is the best protection possible against collusion. You may as well order encrypted txs by timestamp, because it is the least work and not doing so gives you no advantage. Fair ordering is still required to minimize tx order corruption, avoid unexpected negative outcomes, and combine different views of the mempool effectively.

I would really like to move the conversation over to this, because it has had the least thought so far. Any EIP 1559/GPA experts please chip in. I’m thinking out loud here:

  • The content layer will require something like the base fee in EIP 1559, but without the tip
  • The base fee will rise and fall depending on how full the chunks get (i.e. demand)
  • When the maximum chunk size is hit, the gas price will continue to rise based on how many contiguous full chunks there are, so there is no upper bound on the gas price (a minimal sketch of this update rule follows the list)
  • There is no longer a tip because with fair ordering no-one can be bribed to prioritize txs (GPAs and MEV are the two primary sources of tx order corruption in Ethereum)
  • This means lower gas fees
  • It also mitigates the tx bloat caused by GPAs (the only issue MEVA really addresses)
  • Validators will get paid less than miners, but this is fine because their hardware overheads are way lower (proof: we already have 133675 validators securing eth2 and none of them are even getting gas fees yet)
  • If Validators want to make more they can sign up as content layer providers too (or this will be standard in the software)
  • If the community is married to the 1559 burn, then we could figure out a way of burning part of the base fee?
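
For concreteness, here is a minimal sketch of the update rule from the list above, loosely modelled on the EIP 1559 basefee adjustment. The chunk sizes, the 1/8 step, and the per-full-chunk compounding are all assumptions for illustration:

```python
# Hypothetical tipless fee update: rise/fall with chunk fullness, and
# keep compounding while chunks stay contiguously full (no upper bound).

TARGET_CHUNK_GAS = 15_000_000   # assumed target gas per chunk
MAX_CHUNK_GAS = 30_000_000      # assumed hard cap per chunk
ADJUST_DENOM = 8                # EIP 1559 uses a max step of 1/8

def next_base_fee(base_fee: int, gas_used: int, full_streak: int) -> int:
    # Standard 1559-style move towards/away from the target...
    delta = base_fee * (gas_used - TARGET_CHUNK_GAS) // (TARGET_CHUNK_GAS * ADJUST_DENOM)
    fee = max(1, base_fee + delta)
    # ...plus one extra compounding step per contiguous full chunk,
    # so sustained demand keeps pushing the price up without bound.
    if gas_used >= MAX_CHUNK_GAS:
        for _ in range(full_streak):
            fee += fee // ADJUST_DENOM
    return fee
```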

Note that this is not the only reason the tip exists in 1559. When the basefee is too low and demand for block space exceeds the supply limit, transactors can compete against each other with the tip to be included in priority, while those who keep the tip at its minimal level must wait until all higher-tip users have been included (provided the basefee hasn’t risen above the max fee they declared).

There exist tipless mechanisms (see Roughgarden’s Section 8.5), but they aren’t resistant to off-chain payments between transactors and block producers (basically people reproduce the tip, except off-chain, so might as well include it from the get-go).

Thank you for contributing. Wonderful, it’s nice to know it even has a name. From a quick peek it seems to still involve the beloved burn?

With a content layer you can’t bribe the block producer or their block will diverge from the consensus and fail attestation.

Of course you can try to bribe the participants in the content layer instead so this must be robust against collusion.

Alex (random order version) is pretty good on that front. The challenge is to keep this property when moving to fair ordering.

Thinking out loud, I wonder if the solution is to build a service for reliable, high-precision, consensus based timestamp signatures of when transactions arrive in the mempool. Then fair ordering could be enforced by requiring valid blocks to sequence transactions in mempool timestamp order. (Alternatively this could be enforced at the Dapp layer, by having the smart contract puke if it sees out of order timestamps in a single block.)

This avoids the Sybil attack vector of random ordering, where a frontrunner can spam O(N) transactions to near certainty of winning the auction. A target can never be frontrun, unless the attacker can corrupt the timestamp signature service, and get a “backdated” transaction. This is basically how centralized exchanges enforce time priority. Atomic clocks at the order entry gateway attach a timestamp, and the matching engine respects the original timestamps, even if it receives them out-of-order. This allows a distributed system, with multiple clients and gateways to achieve FIFO consistency guarantees.

Practically speaking, I’d imagine such a system would be based on K-of-N consensus of validators running synchronized atomic clocks. A client would broadcast her transaction to the mempool, which includes a small reward for the earliest K validators that sign the transaction. The canonical timestamp would be the Kth earliest one. (In practice, I’d expect clients to bias towards geographically colocated validators to minimize latency.)
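
A minimal sketch of that canonical-timestamp rule, with the signing machinery stubbed out as (validator_id, timestamp) pairs and K chosen arbitrarily:

```python
# Hypothetical K-of-N canonical timestamp: the K-th earliest attestation
# wins, so no single fast (or corrupt) validator can move it alone.

K = 5  # assumed quorum size

def canonical_timestamp(attestations: list[tuple[str, int]]) -> int:
    """attestations: (validator_id, microsecond_timestamp) pairs."""
    if len(attestations) < K:
        raise ValueError("not enough attestations")
    times = sorted(t for _, t in attestations)
    return times[K - 1]
```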

The challenge would be punishing fraudulent timestamp validators. A naive approach would be to let bounty hunters collect fraud bonds if they could bait a validator into backdating a transaction. However, frontrunners might run externally honest validators who only corrupt their own internally generated transactions. There are a lot of details that would need to be worked out here…

I’ll keep this short to not waste any time, since I do not consider myself an expert, and I do consider the time of the experts here to be highly valuable.

An Idiot’s Dumb Idea to Hinder MEV Efficiency

The way blocks are produced today does not enforce any kind of ordering behavior. I think that during block production, miners could order transactions by sorting them from least to greatest, according to their transaction hashes cast as unsigned integers.

Sorting by Tx Hash as a Uint256

Under this model, a newly proposed block would only be valid given that the transactions within it are sorted least to greatest by their transaction hash (in addition to its other constraints). In this way, MEV would gain uncertainty. As far as I can tell, this would be a relatively low overhead way to improve transaction fairness. Verifying that a block is properly ordered would only require carrying a single hash value between transactions: compare each transaction’s hash to the next one in the proposed block, and if at any point a hash is greater than the one that follows it, reject the block. (Then again, this is the idiot’s dumb idea… if it’s immediately obviously impossible because of something that I do not understand, then you can just ignore the rest.)
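
A sketch of that single-pass validity check, under the assumption that tx hashes are compared as big-endian uint256s:

```python
# Reject any block whose txs are not sorted ascending by hash-as-uint256.

def hash_as_uint(tx_hash: bytes) -> int:
    return int.from_bytes(tx_hash, "big")

def block_order_valid(tx_hashes: list[bytes]) -> bool:
    prev = -1  # carry a single previous hash value, as described
    for h in tx_hashes:
        cur = hash_as_uint(h)
        if cur < prev:
            return False  # a previous hash was greater: reject the block
        prev = cur
    return True
```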

Frontrunning Doesn’t Go Away

Frontrunning under this new model would require a malicious actor to vary the maximum gas they are willing to pay (to produce a new transaction hash), using it as a kind of nonce to find a lower-valued transaction hash than that of the target transaction they want to frontrun. Also, in order to exit the position after their target transaction, they would need to mine for a second value that is greater than their target’s transaction hash. It doesn’t seem like that big of a deal to do, maybe just a minor inconvenience (pun intended).
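
To make the grinding step concrete, here is a sketch. A real tx hash covers the whole signed payload; sha256 over a stand-in payload plus the gas field is used here purely for illustration:

```python
# Vary the gas limit as a nonce until the tx hash sorts below the target's.
import hashlib

def grind_lower_hash(payload: bytes, target_hash: bytes, max_tries: int = 100_000):
    target = int.from_bytes(target_hash, "big")
    for gas_limit in range(21_000, 21_000 + max_tries):
        h = hashlib.sha256(payload + gas_limit.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return gas_limit, h  # this tx would sort ahead of the target
    return None  # no suitable hash found within the try budget
```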

Currently, the default ordering that geth uses is by descending gas price. Because of that fact, frontrunning has been a function only of submitting a higher gas price than the target transaction. Under this model, we introduce a second parameter, which is actually enforced at the block / consensus level. Gas price bidding wars don’t go away, but it should make frontrunning less efficient, because of the presence of other transactions within the block. A malicious actor might be able to find a hash that is lower than their target transaction, but then again another malicious actor might have a closer hash in between the target’s hash and the first malicious actor’s hash. This uncertainty should make frontrunners less willing to play these games against the users of Ethereum.

Another benefit seems to be that as frontrunners would need to alter their gas value, it could potentially require them to have more ETH on hand, because although any extra gas submitted to the transaction is refunded, by increasing the gas limit to find a better hash they temporarily use more of their own funds, increasing the up-front costs of this attack vector. This increase in cost could be negligible, though.

Lastly, miners should still accept transactions according to the highest gas prices, because the only constraint the block cares about is that these transactions are sorted by transaction hash. They would start by grabbing the highest gas priced transactions from the mempool, and then submit these in order, according to their transaction hashes. Although this doesn’t eliminate MEV, it might slow it down considerably. It could buy us time to eliminate it altogether, which would be the best thing for Ethereum imo.
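
A minimal sketch of that build rule (Tx is a stand-in record type, not a real client structure):

```python
# Select by gas price, order by hash: revenue and the consensus
# constraint are decoupled.
from dataclasses import dataclass

@dataclass
class Tx:
    tx_hash: bytes
    gas_price: int

def build_block(mempool: list[Tx], capacity: int) -> list[Tx]:
    # 1. Grab the highest-paying txs, as miners do today.
    chosen = sorted(mempool, key=lambda t: t.gas_price, reverse=True)[:capacity]
    # 2. Emit them sorted ascending by hash-as-uint256 to satisfy the rule.
    return sorted(chosen, key=lambda t: int.from_bytes(t.tx_hash, "big"))
```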

I have little clue what kind of far reaching consequences this idea could have. It might make MEV auctions practically impossible (imagine finding the correct hashes for every frontrun transaction in a block, in addition to finding the best “arbitrage” opportunities). Yet, even if this is completely wrong, I still hope that it inspires some insight, and if anything I could see this post being beneficial in that it conveys that there are other people out there contemplating this problem. Thanks for your time!

One minor point: it’s also possible to have deterministic, unpredictable orderings in-protocol once we feel comfortable with a source of unbiasable randomness (RANDAO today, VDFs tomorrow). MEV within a single chain could be mitigated this way instead of extracted via MEVA.
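
For example, a sketch of such an ordering, with the RANDAO/VDF output standing in as `seed`:

```python
# Deterministic but unpredictable: everyone can recompute the order once
# the beacon output is revealed, but nobody can predict it beforehand.
import hashlib

def beacon_order(tx_hashes: list[bytes], seed: bytes) -> list[bytes]:
    return sorted(tx_hashes, key=lambda h: hashlib.sha256(seed + h).digest())
```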

Sadly, with MEVA (MEV Auctions, e.g. MEV-Geth), hash manipulation to place your tx exactly where you want it is trivial, and without MEVA it may worsen GPA battles as people grind hashes to do the same thing.

@lsankar4033 and @jmb-42
Thanks for joining the discussion guys. Unfortunately (although equitable) random ordering incentivizes tx bloat and worsens data corruption (divergence from send time ordering).

This has prompted me to look at encrypted/fair ordered versions of Alex.

Unfortunately (although equitable) random ordering incentivizes tx bloat and worsens data corruption (divergence from send time ordering).

Does it have to be random though? What if every transaction had to include the number of microseconds since the start of epoch x, and the blocks had to include transactions sorted by that time? We can find a good algo for determining x.

How do you ensure the “number of microseconds since epoch” included isn’t just made up to fit the needs of the frontrunner?

How do you ensure the “number of microseconds since epoch” included isn’t just made up to fit the needs of the frontrunner?

Because it is part of the signed transaction, it cannot be changed by the miner or validator.

But someone wanting to do a sandwich attack can just use arbitrary values for their txs to get the desired result…

Yes, you are right. Thanks for explaining!