Targeting Zero MEV - A Content Layer Solution

Re: my suggestion that non-intervention will become intolerable, here is a relevant piece I just had published on CoinDesk.

When you have data corruption in your system, you are bound to get wild and unpredictable negative effects. MEV and gas price auctions (GPAs) cause severe corruption of transactional data. Here are some possible outcomes.

Do you have a source on HFT in traditional markets being valued at only $1 billion? That seems far too low considering the sheer number of HFT firms around the world and the billions of dollars they invest in frivolous activities like straightening undersea fiber-optic cables (A Transatlantic Cable to Shave 5 Milliseconds off Stock Trades).


@pmcgoohan have you looked at mining_dao? https://twitter.com/IvanBogatyy/status/1394339110341517319?s=20

It's a pretty interesting solution (not yet decentralized) where the user produces the full block and pays the miner for PoW only. It's not a solution for eliminating MEV, but imo a step forward from the status quo.

Hi CodeForcer,

This number is from the Financial Times (paywall)

“In 2017, aggregate revenues for HFT companies from trading US stocks was set to fall below $1bn for the first time since at least the financial crisis, down from $7.2bn in 2009, according to estimates from Tabb Group, a consultancy.”

Looking at it again, it seems to be US stocks only, so the amount for all financial instruments will be higher.

However, it is not hard to see why MEV is a much bigger problem for Ethereum than HFT is for trad-fi.

Even when Flash Trading was ubiquitous in 2009, it only gave a 5ms advantage on order visibility, and NASDAQ and BATS have since banned even this. Transaction reordering has never been possible in traditional financial markets for orders sent directly to the exchanges. Brokers like Robinhood might front-run you (look how that has ended up for them). I want better than that for Ethereum.

The maximum latency advantage you can get from laying your $1 billion cable is probably around 300ms. As I write this, there are 167,540 pending transactions in the mempool. As a miner/MEVA winner, I get to pick any combination of those transactions to build a block that is entirely to my advantage, as well as adding in any number of my own. Imagine if Nasdaq allowed the highest bidder to pick and reorder what is probably many hours' worth of transactions. It is unthinkable, and yet that is the situation with Ethereum today.
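To put the scale in perspective, here is a rough back-of-the-envelope sketch (the 167,540 pending txs figure is from above; the ~200 txs-per-block figure is an assumption purely for illustration):

```python
import math

# Back-of-the-envelope sketch: the pending-tx count is from the post above;
# block_txs ~200 is an assumed, rough txs-per-block figure for illustration.
pending = 167_540
block_txs = 200

# Ways to choose which txs to include, times the ways to order them.
orderings = math.comb(pending, block_txs) * math.factorial(block_txs)
print(f"~10^{int(math.log10(orderings))} possible blocks from one mempool snapshot")
# A 5ms (or even 300ms) head start is a fixed edge; the block producer's
# choice space over the pending set is effectively unbounded.
```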

Crucially, HFT revenue has declined by almost an order of magnitude over the last decade, whereas MEV is rising exponentially.

Did you read further? What do you make of my ideas for a content layer bound to block attestation (ignoring the random ordering part, which is problematic)?

1- In all the solutions proposed here, your aim is to take control of transaction ordering away from the miners, right?

- Doesn't this imply that users can no longer pay for a particular position in the block either? i.e. users have to understand that bidding higher transaction fees now, or higher miner tips after EIP-1559, only increases the probability of inclusion in the current block but has nothing to do with the relative order inside it?
- Did I miss something, or am I getting this right? And do you think users will be OK with that?

2- Given that the same randomization problem exists in your protocol as in the simple hashing idea of @stri8ed:

@marioevz
Can you explain what makes your protocol better than the simplicity of just using the order of the hashes? In fact, I think the probability of controlling the order of a resulting hash is much lower?

I would prefer to take it further and have no auction at all (whether GPA or MEVA). In this situation, you keep the EIP-1559 base fee to reflect overall demand and mitigate DDoS, but eradicate the tip (thanks @barnabe).

It is way better for users because:

  • no need to set/guess tx fees (which users dislike and which EIP-1559 is trying to address)
  • visible guarantees of order execution (tx order is quickly visible in the content layer before entering the block)
  • exploitative MEV is greatly reduced (simplest content layer = limited MEV auctions possible) or eradicated (encrypted/fair-ordered content layer)
  • low gas costs

The low gas costs observation is potentially huge and is only just occurring to me. I am actively researching it and would love to stimulate a debate around it.

Essentially, any auction (whether GPA or MEVA) creates MEV by allowing users to bid on transaction order. In doing so we are not only auctioning off tx execution, we are also auctioning off tx priority (which is far more valuable as the MEV crisis has shown).

It is this extra value that makes it worth attackers bidding up gas costs to extract MEV. Users that are not trying to extract MEV then have to raise their bids to compete with the very attackers that are exploiting them.

Put simply, not only do auctions corrupt transaction order, they also raise gas costs (I suspect by a lot; I aim to quantify this).
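To make that concrete, here is a toy numeric sketch (all figures are assumptions for illustration, not measurements) of how an ordering auction lets the extractor set the fee level that ordinary users must beat:

```python
# Toy illustration (assumed numbers, not measurements): in an ordering auction,
# an attacker's rational fee bid is funded by the MEV they expect to extract.
expected_mev = 0.50                      # ETH the attacker expects from sandwiching a victim
attacker_max_bid = 0.9 * expected_mev    # attacker stays profitable bidding up to ~90% of it
plain_inclusion_cost = 0.001             # rough cost of simple inclusion with no priority auction

print(f"attacker can rationally bid up to {attacker_max_bid:.3f} ETH in fees")
print(f"ordinary users must compete with that bid, vs ~{plain_inclusion_cost} ETH without an auction")
```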

Yeah what I am proposing is a systemic change. High gas prices and MEV are systemic problems.

Thoughts?

Seconding this. Very little of traditional HFT revenue is purely extractive in the same way that the front-running sandwich bots in DeFi are. The bulk of HFT activity is market making, cross-venue arbitrage, or statistical arbitrage based on signals in the microstructure. In all those cases, the HFT entity is increasing liquidity and/or improving market efficiency through price discovery.

Sandwich front-runners contribute neither. There's no permanent price impact, because the price ends up where it would have without the attack. The closest analog to something traditional HFT actually does is the back-runners that arb the price between different liquidity pools. Yes, HFT might involve an element of order-flow prediction in the statistical sense, but it's nothing like the way sandwich front-runners act directly on orders they can see before execution. In a traditional exchange, order visibility and execution are atomic at the exchange gateway level, so there's no way to know an order will arrive before it's already filled. (The Michael Lewis book covered a very small corner case, where very large traders were sweeping liquidity at multiple venues with multiple orders, which were only predictable in the non-deterministic statistical sense.)


Great detail @Mister-Meeseeks.

Sadly not even that. It isn’t healthy price discovery if you can create or exacerbate a price imbalance by reordering/censoring txs.

Miners and MEVA winners literally create arbitrage and backrunning opportunities that would not otherwise exist and then risklessly exploit them.

And the final insult: average users who just want to get their txs executed have to compete on tx costs with the very people who have pushed the gas price up in order to rip them off (see my post above).


This is really, really interesting. If we consider transaction ordering to be a separate layer of consensus, the answer to “can we eliminate MEV” seems to be yes. As a trivial example, if we used a lightweight PoW sidechain that restricts each block to only one transaction as the content consensus layer for our main chain, transaction ordering would be decentralized. I feel this framework opens up some new and interesting design space for developing a practical MEV-free blockchain.
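As a minimal sketch of that content/structure split (assumed interfaces, not the Alex spec): the content layer commits to ordered chunks of txs, and the block producer is only allowed to concatenate them:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of the content/structure split (assumed interfaces, not the
# Alex spec): the content layer fixes tx order in small chunks; the block
# producer may only concatenate committed chunks.

@dataclass
class Chunk:
    txs: List[str]            # tx hashes, already ordered by the content layer

@dataclass
class Block:
    txs: List[str] = field(default_factory=list)

def build_block(committed_chunks: List[Chunk]) -> Block:
    """Structure layer: no reordering, no insertion, just concatenation."""
    block = Block()
    for chunk in committed_chunks:
        block.txs.extend(chunk.txs)   # order inside each chunk is preserved
    return block

# The trivial example above: a content chain with one tx per chunk.
chunks = [Chunk(txs=[h]) for h in ("0xaaa", "0xbbb", "0xccc")]
print(build_block(chunks).txs)        # ['0xaaa', '0xbbb', '0xccc']
```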


Hi @zefram_l. Yes it seems that way to me. I wanted to distinguish between the general concept of a distributed content layer and the Alex protocol as my first attempt at designing one.

Thank you for contributing ideas for another possible implementation. Re: a full secondary blockchain, you’ll have to be sure to keep network usage to acceptable levels and not to introduce another layer of MEV. I’d like to hear your ideas as you progress them.

I’m currently looking at a very stripped down version of Alex that mitigates some MEV (I need to work out how much) and is much simpler to implement as a first version. I’ll then look at how it can be built on to provide encryption and fair ordering.

Great post! I like the distinction between content and structure.

Do you think that we could use the randomness in the beacon chain (RANDAO today, VDFs tomorrow) to create shuffle seeds in a way that gives us something like the shuffler approach?


Hi @Isankar4033. Thank you for your input.

I’m not an expert on RANDAO, but I imagine you may have synchronization/visibility issues. If the content layer is chunking the mempool every 1 to 3 seconds, any external RNG process must be timed to reveal a new seed just after the pickers have committed. If the seed is known beforehand, the pickers can game the order.

I will be considering VDFs not for RNG so much as for tx encryption. I think we can incentivize the content layer participants to print chunks quickly and regularly (under threat of being skipped), at which point you may be able to use quite short-term VDFs to reveal encrypted txs once the order has been committed to, possibly several times within a block (so no UX delays). Threshold encryption (TE) is also an option.

The problem with random ordering is tx bloat due to stat-arb battles. Once you are ordering encrypted txs, even bad actors have little incentive to do anything other than order them fairly.
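A minimal sketch of that commit-then-reveal timing (assumed flow, not a spec): pickers commit to a chunk before the seed is revealed, and the deterministic shuffle is applied only afterwards, so inclusion choices cannot be tuned to a known seed:

```python
import hashlib
import random
from typing import List

def commit_chunk(txs: List[str]) -> str:
    """Hash-commit to the chunk contents before the seed is revealed."""
    return hashlib.sha256("".join(txs).encode()).hexdigest()

def shuffle_after_reveal(txs: List[str], seed: bytes, commitment: str) -> List[str]:
    # The chunk must match what was committed before the seed became known.
    assert commit_chunk(txs) == commitment, "chunk changed after commitment"
    rng = random.Random(seed)          # deterministic: anyone can verify the order
    shuffled = list(txs)
    rng.shuffle(shuffled)
    return shuffled

txs = ["0xaaa", "0xbbb", "0xccc", "0xddd"]
c = commit_chunk(txs)
seed = b"revealed-by-RANDAO-or-VDF-after-commit"   # placeholder beacon output
print(shuffle_after_reveal(txs, seed, c))
```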

On a general note: I’m on the ChainLinkGod podcast next week discussing MEV and with any luck will be speaking at EthGlobal next Friday about these issues/solutions.

RANDAO is a temporary measure for randomness in the beacon chain until things like VDFs are ready, but yes, it's slightly biasable by the last participant in the RANDAO. Ultimately, I think we can assume that we'll have good, unbiasable randomness eventually, because this is the thing used for committee assignments and, if biasable, it would completely screw up security assumptions around sharding.

It feels like this could be a drop-in replacement for the shuffler layer in your proposal and potentially something with a faster route to production on mainnet eth.


I certainly don’t want to throw hurdles up against it being adopted on mainnet. Things to consider though…

Doesn’t RANDAO produce a new seed every block rather than intra-block, or is that not the case? If it's every block, it would mean block-length content chunks and an n+1 block delay, I think (although it takes most txs far longer than this to go through).

When random ordering we would need to carefully assess potential tx bloat issues…

Ah yes, it's every block, so that source of randomness could only allow block-length content chunks, I think. Is this undesirable?


Not necessarily, but the faster you chunk the mempool, the better you preserve time order. Then there is the impact on UX of spanning multiple blocks…

…but actually I have new data on this. Once you have discounted miner-inserted txs, the average time taken for a tx from arrival in the mempool to inclusion in a block is approximately 2 minutes 30 seconds.

Pretty high, right? About 12 blocks. So a 1-block delay (supposedly too great for dapp devs to use submarine sends) doesn't look so bad.

And the stdev of inclusion time… 20 minutes!

This is the cause of MEV. Extreme tx order corruption.
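For reference, a minimal sketch of how such inclusion-delay stats can be computed (the field names and sample values are assumptions, not the released dataset):

```python
import statistics
from datetime import datetime

# Hypothetical sketch: inclusion delay = block timestamp minus the tx's
# first-seen-in-mempool timestamp, with miner-inserted txs excluded because
# they never waited in the public mempool.
observations = [
    {"first_seen": datetime(2021, 5, 1, 12, 0, 0), "included": datetime(2021, 5, 1, 12, 1, 40), "miner_inserted": False},
    {"first_seen": datetime(2021, 5, 1, 12, 0, 5), "included": datetime(2021, 5, 1, 12, 4, 10), "miner_inserted": False},
    {"first_seen": datetime(2021, 5, 1, 12, 0, 9), "included": datetime(2021, 5, 1, 12, 0, 9),  "miner_inserted": True},
]

delays = [
    (o["included"] - o["first_seen"]).total_seconds()
    for o in observations
    if not o["miner_inserted"]
]

print(f"mean inclusion delay: {statistics.mean(delays):.0f}s")
print(f"stdev of inclusion delay: {statistics.pstdev(delays):.0f}s")
```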

In this talk for EthGlobal I discuss more recent ideas for the Plain, Dark and Fair variants of the Alex Content Layer protocol, as well as the root causes of MEV, with some real-world examples given here.


I've open-sourced the code and data I used in this talk for analyzing tx arrival and inclusion times and for classifying Flashbots bundles as sandwiches, frontruns and backruns.

See the README for methodology and limitations.

Code & Methodology

Sample Data
https://drive.google.com/file/d/1WPknOb-Y3jIGaNc-2wA3VuWkUjXxuS8O
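
As a rough illustration of the kind of heuristic involved (an assumption-laden sketch, not the released classifier), a bundle can be labelled by where its own txs sit relative to a non-bundle tx on the same pool:

```python
from typing import Dict, List

# Rough heuristic sketch (assumptions, not the released classifier): a sandwich
# has bundle txs both before and after a non-bundle tx on the same pool;
# a frontrun has them only before; a backrun only after.

def classify_bundle(bundle: List[Dict]) -> str:
    """Each tx dict: {'from_bundle': bool, 'pool': str, 'position': int}."""
    for tx in bundle:
        if tx["from_bundle"]:
            continue                       # only classify around non-bundle (victim) txs
        pool = tx["pool"]
        before = any(t["from_bundle"] and t["pool"] == pool and t["position"] < tx["position"] for t in bundle)
        after = any(t["from_bundle"] and t["pool"] == pool and t["position"] > tx["position"] for t in bundle)
        if before and after:
            return "sandwich"
        if before:
            return "frontrun"
        if after:
            return "backrun"
    return "other"

example = [
    {"from_bundle": True,  "pool": "WETH/USDC", "position": 0},
    {"from_bundle": False, "pool": "WETH/USDC", "position": 1},  # victim swap
    {"from_bundle": True,  "pool": "WETH/USDC", "position": 2},
]
print(classify_bundle(example))   # sandwich
```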

Thanks for all the updates you've kept adding here! Your recent JPG with transposed pixels is a very nice visualization of the effect of corrupted ordering on a data source!

After reading through some more of your material via your links, I was wondering if you've also described your Alex Content Layer protocols from the point of view of the walkthrough or lifecycle of a given transaction that would be submitted, processed, and finalized using Alex as the content layer protocol? There have been some helpful versions of this type of description for a transaction going through the current GPA mempool mechanism. For example:

and

Though I'm not 100% sure, I think such a walkthrough/lifecycle description would help others, like me, who have a reasonably competent mental model of the current transaction mempool protocol behavior among full nodes (mining and non-mining) on mainnet. With that description, we could probably better map that mental model to the changes with Alex in how each transaction enters the Alex protocol and then progresses through the various steps until eventual inclusion (or rejection).


Thank you for the kind words @rjdrost. This is a great idea. It would help me work through the logic of it and find any issues as well.

The walk-throughs you have linked to are for eth1, right?

Has someone done a similar tx walk-through for eth2, which is what I'm targeting with Alex?