Proposer/block builder separation-friendly fee market designs

1- With EIP-1559, will it be the highest-fee or the highest-tip bundle? I mean, if the base fee is burned, naturally the miner/proposer will care about the max tip.
2- Also, with the 2-level solution I worry about one more possible level of malicious collusion. I think it is similar to “solvers” in the Gnosis protocol, if I remember the name right… now we have to protect against a malicious builder trying to deceive the miner, a malicious miner trying to front-run a victim user, and a malicious builder-proposer pair having more power to victimize users or hurt the system.
3- Allow me to express a fear/worry after reading a lot of papers about MEV, front-running, sandwich attacks, etc. With all these MEV suggestions, I feel like you are trying to reconcile with miners after the fee-burning policy by giving them a piece of the MEV from users, which “may” (I’m no expert) hurt the overall Ethereum market in the long run.
Sorry if my questions were conceptual & trivial, since I haven’t done real Ethereum development.
»»» I must add that if you are going to use some of the Gnosis Protocol ideas, you have to take enough precautions against all their previous exploits. Although I don’t find an analogue of “disregarded utility” in your solution, I just added the term in case someone else notices something I missed.


To me idea 2 sounds much more favorable because it doesn’t require any consensus changes.

One way to fix the DoS issue is to use a threshold encryption committee:

  1. The committee provides an encryption key for each slot.
  2. Block builders encrypt their bundles with this key and send them (with plaintext headers) to the proposer.
  3. The proposer publishes a commitment to one of the bundles (selected based on the fee in the header).
  4. Upon seeing the commitment, the threshold committee publishes the decryption key.
  5. The proposer decrypts the bundle and creates the block.

This doesn’t have the same DoS problem as headers are at all times attached to their (encrypted) bodies, so there are no unavailable proposals. Invalid ones or ones that are unlikely to be accepted can be filtered early at the network level.
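The five steps above can be sketched end-to-end. Below is a toy Python simulation in which a single trusted key holder stands in for the threshold committee and a hash-based XOR keystream stands in for real threshold encryption; all names and values are illustrative, not part of any proposed spec:

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    # Expand `key` into an n-byte stream via counter-mode hashing.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad-style XOR; decryption is the same operation.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt

# 1. The committee provides an encryption key for the slot
#    (a single trusted key holder stands in for the threshold committee).
slot_key = os.urandom(32)

# 2. Builders encrypt their bundles and attach plaintext headers (fee bids).
bundles = {
    "builder_a": {"header_fee": 3, "body": encrypt(slot_key, b"bundle A txs")},
    "builder_b": {"header_fee": 5, "body": encrypt(slot_key, b"bundle B txs")},
}

# 3. The proposer commits to the highest-fee bundle without seeing its contents.
winner = max(bundles, key=lambda b: bundles[b]["header_fee"])
commitment = hashlib.sha256(bundles[winner]["body"]).hexdigest()

# 4. Upon seeing the commitment, the committee releases the decryption key.
released_key = slot_key

# 5. The proposer decrypts the winning bundle and creates the block.
block_body = decrypt(released_key, bundles[winner]["body"])
print(winner, block_body)
```

Note that the proposer's selection in step 3 is based only on the plaintext header, which is what keeps header selection blackbox.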

It does rely on an honest-majority committee, but since it’s not at the consensus layer and is fully opt-in, I don’t think many proposers or builders would mind. Also, different proposers could use different committees if they don’t trust the same ones, as long as block builders trust them too.


Even if the block builder ends up paying 0.5 of the proposer fee, an attack may still be profitable. Suppose we have proposer1 and proposer2, but proposer2 also runs a malicious builder1 and a colluding builder2. Builder1 sends proposer1 blocks, paying 0.5 fee and never publishes the body. Proposer1 is unable to complete any block and always gets just 0.5 (paid by builder1). Proposer2 always gets 1 fee, and its colluding builder2 earns block builder profit. The colluders (proposer2+builder1+builder2) always earn 0.5+profit (0.55 in the example above), while proposer1 always gets 0.5. It makes collusion more profitable than the default behavior, which might lead to centralization.

Would it work better if we make the 0.5 fee case asymmetric? If the body is not published, the proposer gets 0.5 fee but the builder pays 1 fee, half of which is burned. This would shift the scales and hopefully make collusion unprofitable. Proposer1 gets 0.5 but the colluders lose 1, so they end up with 0+profit (0.05 in the example above). As long as profit < 0.5 fee, collusion seems unprofitable.

In reality this attack seems unlikely because the colluders make 0.55 (and the attacked proposer 0.5) while an honest pair (proposer3, builder3) makes 1.05. However, the possibility of this attack means that proposers will prefer to work with trusted builders (be part of an honest pair) to avoid getting into the 0.5 situation, and that violates Untrusted builder friendliness.
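To make the comparison concrete, the payoffs discussed above can be computed directly, using fee = 1 and builder profit = 0.05 from the running example (a minimal sketch, not a full model of the mechanism):

```python
FEE = 1.0      # block fee from the example
PROFIT = 0.05  # builder's profit margin from the example

def colluder_payoff(builder_penalty: float) -> float:
    # Colluders earn their own slot's fee plus builder profit,
    # minus whatever the withholding builder pays for the sabotaged slot.
    return FEE + PROFIT - builder_penalty

symmetric = colluder_payoff(builder_penalty=0.5 * FEE)   # builder pays 0.5, proposer1 receives it
asymmetric = colluder_payoff(builder_penalty=1.0 * FEE)  # builder pays 1: 0.5 to proposer1, 0.5 burned
honest_pair = FEE + PROFIT                               # an honest (proposer3, builder3) pair
print(symmetric, asymmetric, honest_pair)
```

With the symmetric rule the colluders keep 0.55; the asymmetric rule drops them to 0.05, below the honest pair's 1.05, matching the "profit < 0.5 fee" condition.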

Another concern around the symmetric 0.5-fee is that it makes stalling the chain cheaper than advancing it. A malicious block builder can always bid the highest fee, knowing that it will never publish the body, so it will end up paying only 0.5 of its bid while triggering the default “zero-proposal” for the slot.

Honest block builders attempting to advance the chain must outbid the malicious one, and they end up paying the full fee.

Hence, advancing the chain costs twice as much as stalling it.

Making the 0.5 fee asymmetric as I suggested above (builder pays 1 fee, proposer gets 0.5, and 0.5 fee gets burned) seems to even the costs.

It doesn’t have the same drawback because a proposer is not limited to a single block header.

However, it gives the proposer more power to choose between blocks after they’re already known. I don’t see a DoS/griefing opportunity but a proposer colluding with a group of builders could always select the block that makes most sense based on off-chain events. E.g. an oracle is going to publish a piece of real world information in the next block, not known when the current block is built, but known by the time it is proposed. Two builders send conflicting blocks before the information is known, one assuming that the oracle will return 0 and the other assumes 1, and then their colluding proposer chooses the “winning” one 2 seconds later.

A proposer could have done the same by itself before this proposal - proposing only when the oracle information is known, but now that we separate builders from proposers, we don’t want collusion to be profitable. Collusion between builders and proposers becomes the winning strategy due to new information becoming available during the period between building and proposing.

It seems like a corner case that won’t happen too often, but it still makes collusion the more profitable strategy, leading to potential centralization.

It doesn’t mitigate the collusion above, because the fee is paid from the builder to the proposer, which are actually the same entity. That is, unless the fee is burned like in EIP 1559. If the fee is burned then it should mitigate this collusion, just as long as the profit from the collusion around the oracle result is lower than the burned fee.

N-slots exclusion penalty could work better since the combined entity actually takes a loss. Obviously, if the gains from choosing the winning block and not publishing the losing one (based on oracle information) are sufficiently high, no mitigation would work. We could add some sort of slashing for not publishing the body, but that may be too harsh because unreliable connectivity could also lead to that. N-slots exclusion seems to strike the balance.

If we go with the N-slots exclusion, the block builder deposit needs a withdrawal delay, i.e. the deposit must remain locked for at least N slots after submitting a block. Otherwise it wouldn’t be Sybil resistant and would just lead to a high churn rate of block builders.

This condition seems necessary if we add such a fee. Otherwise a fee reduces the incentive to run a builder (separately from the proposer, if the fee is not burned; or at all, if the fee is burned). If the submission fee becomes too high it could increase centralization by reducing the number of builders or encouraging them to collude with a proposer.

The combined approach, with fee based delay and not propagating lower-fee bodies after a high fee body was propagated, seems to solve most problems.

On a more general note, both issues I highlighted are centered around the profitability of collusion between builders and proposers.

Would it make sense to add a sixth desired property, “Collusion doesn’t increase profitability”? The 5th rule (Consensus-layer simplicity and safety) implicitly includes it, since the consensus layer already has that property, but there’s a subtle difference because we’re adding another component and want to prevent collusion with it as well, so it might make sense to make it explicit.


I believe the modern take (and the one relevant for Ethereum post-merge) is “Maximal Extractable Value” :slight_smile:

Realistically both ideas require consensus changes. For example in idea 2 the slashing condition is best done in consensus for capital efficiency, to bypass gas complications, and for general simplicity.

As I see it idea 1 is clearly preferable to idea 2:

  1. bandwidth—Idea 2 requires the proposer to receive many bodies from builders. This is almost a non-starter for weak proposers and presents a DoS vector.
  2. latency—Idea 1 has three half-rounds of latency (builders publish headers, proposer publishes header, builder publishes body) whereas idea 2 has four half-rounds of latency (builders publish headers, proposer publishes commitment, builder publishes body, proposer publishes header and body).
  3. simplicity—Idea 1 avoids unnecessary complications such as the slashing condition in idea 2.
  4. builder-friendliness—Idea 2 allows the proposer to profitably steal from the builder when MEV is greater than the slashing penalty. Note that MEV has a significant spiky component (e.g. flash crash liquidations, contract hacks, token launch front-running).
  5. proposer MPC-friendliness—As noted idea 1 has a trivial blackbox (i.e. without seeing bodies) MPC-friendly header selection algorithm whereas idea 2 opens the door for more sophisticated (and less MPC-friendly) non-blackbox selection algorithms that analyse the content of bodies.
  6. proposer power minimisation—As a general rule of thumb we want to minimise the discretionary power of proposers. Idea 1 is preferable in this regard because header selection is blackbox, without seeing bodies.

Isn’t this just saying that the colluding attacker makes a 0.55 profit instead of a 1.05 profit from being honest (so they sacrifice 0.5 from the attack) and they make the honest proposer lose 0.5 in the process? So this is a griefing attack; it’s not actually in the colluding attacker’s (direct) interest to do this, and so one should expect that it should not happen often. Or am I misunderstanding something?

Would it make sense to add a sixth desired property, “Collusion doesn’t increase profitability”?

I was covering that under weak proposer friendliness: the mechanism should not favor proposers that are engaging in spooky advanced strategies that require ongoing effort to figure out, which collusion definitely is.


If profit was consistently 0.05 then yes, but once MEV profits surpass 0.5 fee this behavior becomes profitable, e.g. when there’s a highly profitable frontrunning opportunity. MEV profits will fluctuate, and often stay below this threshold. The problem is, once players start engaging in this behavior (at a time when it makes sense because MEV > 0.5) all the proposers will want to defend themselves after getting 0.5 a couple of times. The simplest defense will be to work with a trusted pool of block builders. At that point even if MEV profits drop back to 0.05 and the collusion attack stops, the proposers already centralized the builders into trusted pools.

In other words, as soon as MEV profits cross the 0.5 threshold once, the network switches to a more centralized state and there’s no trigger to ever switch it back.

Am I missing something that would stop this from happening?

Right. No additional property is needed.


In the long run, if we consider that the colluding players have the option of being honest and earning 1.05, or being malicious and earning 0.55 while making an honest proposer lose 0.5, I think it could be a profitable strategy nonetheless, because the honest proposers will have two options: either quit (in case the effort exceeds the profit) or stay (in case the whole operation is somewhat profitable, i.e., even without earning the full 1.05). Both options will lead to centralization in some sense. The first option will leave the floor to malicious players (and I think this is a risk that might need a bit of thinking, if I am not mistaken). For the second option, as @yoavw mentioned, the honest proposers will choose to work only with builders that they trust, which is a profitable and safe strategy for them, and I don’t personally see an incentive for them to leave their trusted circle and work with unknown builders.

So, as I see it (and of course I might be highly mistaken), there will be groups of proposers that work only with the builders in their whitelist, which makes the whole process not fully decentralized.

Off the top of my head, I think that if we could see it (or adapt it) as a non-zero-sum game (where the malicious proposers do not really affect the honest ones) by proposing some punishment mechanism (that I don’t yet have any idea how to integrate), it would fix many issues.


Good idea. Make it so that proposers are never affected by the attack, but increase the cost of collusion. Maybe something like this:

Proposer always gets 1 fee, regardless of whether the slot is successful or bad. Builder pays fee*X (X>=1). Upon a successful slot, builder is refunded (X-1)*fee. On a bad slot, (X-1)*fee is burned without refund.

X>=1 is a function of the recent bad-slot ratio that decays back to 1 as bad slots leave the window, such as 1+K*moving_bad_slots_ratio.

When no one attacks the network, X is close to 1. When someone starts implementing the collusion strategy, X keeps increasing until the attack becomes unprofitable and subsides. This way the attack becomes increasingly more expensive but proposers remain unaffected, so they don’t centralize. The only victim would be an honest builder who had the misfortune of losing connectivity mid-proposal during an ongoing collusion attack. That seems rare enough for the network to live with.
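A minimal sketch of this escalating-multiplier scheme; the sensitivity constant K and the moving-window length are hypothetical parameters chosen only for illustration:

```python
from collections import deque

K = 10.0      # assumed sensitivity constant (illustrative)
WINDOW = 100  # assumed moving-window length in slots (illustrative)

class SlotFeeEscalator:
    """Proposer always receives `fee`; the builder escrows fee * X up front.
    On a good slot the surplus (X-1)*fee is refunded; on a bad slot it is
    burned. X tracks the recent bad-slot ratio."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)  # 1 = bad slot, 0 = good slot

    def multiplier(self) -> float:
        if not self.recent:
            return 1.0
        return 1.0 + K * (sum(self.recent) / len(self.recent))

    def settle(self, fee: float, body_published: bool):
        x = self.multiplier()
        proposer_gets = fee  # unaffected either way
        if body_published:
            builder_cost, burned = fee, 0.0            # surplus refunded
        else:
            builder_cost, burned = x * fee, (x - 1.0) * fee
        self.recent.append(0 if body_published else 1)
        return proposer_gets, builder_cost, burned

esc = SlotFeeEscalator()
costs = [esc.settle(fee=1.0, body_published=False)[1] for _ in range(5)]
print(costs)  # the attacker's per-slot cost escalates once bad slots enter the window
```

The key property is that `proposer_gets` never depends on the attack, so proposers have no reason to retreat into trusted builder pools.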

The downside of not letting the attack affect proposers is that it opens up a vector for malicious proposers to slow down the network by always publishing slots with the zero proposal, claiming that the body was not published. As long as it’s a minority it doesn’t matter, but we need to think whether they could have an incentive to do so collectively. Hopefully it won’t be an issue since the proposer still has more to gain by publishing a successful block.

Why would it be profitable? Is it because the MEV from the first block carries over into the next block, which is controlled by a friendly proposer? If so then ok that makes sense.

I agree with this part. I think that decoupling the profits of both malicious and honest players would solve the issue. Another point: even the profit itself should be dynamic and not consistent (e.g., 0.05 as @vbuterin suggested). It can be calculated depending on the “moving_bad_slots_ratio” (or another metric) as well.

Another idea, though it would be hard (it would put an overhead on the network): we can borrow the idea of staking here, so before any operation proposers should stake an amount Y where Y > profit. This stake would be locked for the duration of the operation (e.g., until publishing the body of the block). Then everyone gets paid and Y + profit gets returned to the proposer. I don’t see any incentive for proposers to act maliciously, but I am not sure about the feasibility part (so it would still need some analysis in this regard).

Another point, even the profit itself should be dynamic and not consistent

One quick clarification: the 0.05 is not a hardcoded number, it was simply an example. In reality, the profit rate for block builders would be set by the market; I expect it to be low in a competitive environment.


If MEV carries over to the next block and it’s controlled by an honest proposer then the protocol achieved its goal despite the delay. But if the colluders that performed the 0.5 fee attack are also a large pool of potential proposers (think coinbase-sized staking farms), they have a relatively high chance of controlling the next slot. Their strategy in this case would be to stall slots by spending 0.5 until one of their proposers is selected.

This attack wouldn’t have been possible with the pre-separation protocol because even a large staking farm can’t stall the chain effectively. With this change a stalling attack might become a viable strategy during high MEV circumstances.

It seems that any design that enables stalling attacks would violate Weak proposer friendliness by favoring large pools. Does it make sense or am I totally off the mark here?

The only way I see to mitigate this attack in the context of idea 1 is to increase stalling cost exponentially with each bad slot:

  1. builder offers fee but sends conditional_fee = fee*2^num_of_consecutive_bad_slots
  2. proposer receives fee in any case
  3. burn conditional_fee - fee if slot is bad
  4. refund conditional_fee - fee to builder if not burned
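The four steps above can be sketched as a toy settlement function; the fee and slot counts are illustrative:

```python
def conditional_fee(fee: float, consecutive_bad_slots: int) -> float:
    # Step 1: the builder escrows fee * 2^n, where n counts consecutive bad slots.
    return fee * 2 ** consecutive_bad_slots

def settle(fee: float, consecutive_bad_slots: int, slot_is_bad: bool):
    escrow = conditional_fee(fee, consecutive_bad_slots)
    proposer_gets = fee                            # step 2: paid in any case
    burned = escrow - fee if slot_is_bad else 0.0  # step 3: burn the surplus on a bad slot
    refund = 0.0 if slot_is_bad else escrow - fee  # step 4: refund it otherwise
    return proposer_gets, burned, refund

# Burn incurred by an attacker who stalls 5 consecutive slots at base fee 1:
stall_burn = sum(settle(1.0, n, True)[1] for n in range(5))
print(stall_burn)
```

Because the escrow doubles with every consecutive bad slot, the burned amount grows geometrically while an honest builder (good slot) always gets the full surplus back.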

I think the fee for a stalling attack might end up being quadratic naturally. The reason is that as more blocks come up, the attacker would need to keep outbidding legitimate block builders, and legitimate block builders would be making higher and higher bids as the number of unclaimed transactions piles up. So the per-block cost to the builder would be increasing in time (linear in time if demand is constant and either (i) there was no block size cap or (ii) elasticity = 1), and so the total cost would be something close to quadratic.
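A quick sanity check of this quadratic intuition, under the stated assumptions (legitimate bids grow linearly as transactions pile up; the attacker pays 0.5 of each winning bid under the symmetric rule; parameters are illustrative):

```python
def stall_cost(slots: int, base_fee: float = 1.0, growth: float = 1.0) -> float:
    # Legitimate bids grow linearly as unclaimed transactions pile up;
    # the attacker must outbid each one and, under the symmetric rule,
    # pays 0.5 of each winning bid when the body is withheld.
    bids = [base_fee + growth * k for k in range(slots)]
    return 0.5 * sum(bids)

print([stall_cost(n) for n in (1, 2, 4, 8)])
# for long stalls, doubling the stall length roughly quadruples the total cost
```

The sum of a linearly growing per-slot cost is quadratic in the stall length, which is the "naturally quadratic" fee described above.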

The protocol doesn’t control the profit and can’t even calculate it. This is MEV profit and may only become apparent in hindsight when the block builder’s MEV strategy is analyzed. For well known strategies the profit will be low due to a race to the bottom, with block builders competing by offering a high fee for their block to be included. Basically a MEV auction. For new strategies or ones that can’t be replicated easily (e.g. requiring large holdings of an illiquid governance token), profit can be very high for a short time.


Yes. I was just thinking about that too. Fees will keep increasing linearly but the attacker has to pay 0.5 * the sum of fees for the stalled slots. It’ll only make sense in rare opportunities of knowledge-asymmetry such as when implementing a new MEV strategy that others haven’t identified yet, to prevent frontrunning its first shot.

But any reason not to make it exponentially expensive based on the length of the stall? Under normal conditions a stall of more than 1-2 slots seems unlikely, so the exponential cost will only kick in during an attack.

I would say the main reasons to consider staying away from that are:

  1. Just plain old protocol simplicity (increasing complexity introduces greater risk from unknown-unknowns)
  2. Relying too much on lose-lose games (where there are penalties that do not correspond to rewards) is risky because it creates an incentive to circumvent the protocol (e.g. imagine a few rounds of stalling happened, and there’s a risk a block will not get included due to network latency; proposer+builders would benefit from moving over to some layer-2 super-protocol)

I think (correct me if I am wrong) that the incentive to move to a layer-2 super-protocol would be a strategy in any case, so it doesn’t need a special event (e.g., a few rounds of stalling) to happen.

If we separated the rewards from penalties, it would be a very rare case to have few rounds of stalling because its cost would be very high (quadratic or exponential as you explained).
Thus, attacking the protocol is a losing strategy (unless the malicious player does not care about incentives and only cares about taking down the protocol).

I think that attacking the protocol and making builders quit because of unprofitable auctions have the same overall effect. However, the latter has a higher probability of happening. So, we need a tradeoff between the two.


Agreed. We should go with the simplest protocol that satisfies the five properties. I hope the market/fee based mitigation can achieve it.

I don’t know if it would come to that, since each proposer would probably run its own local builder to handle cases where it gets selected and no one else submits a block. Whether it creates a sub-optimal block or just proposes an empty (but valid) block doesn’t matter. Either way it breaks the stall chain and resets the conditional_fee. Long stall-chains will be too rare to justify developing a layer-2-super-protocol when it’s easy to just break the chain with a local builder.

But you’re right - keep it as simple as possible as long as it satisfies the requirements.

BTW another mental shortcut for thinking about Idea 1, with the attester-enforced body acceptance delay:

Think of it as a blockchain where there are “full-slot” and “half-slot” blocks; a block with slot n+0.5 can follow a block of slot n. Full-slot blocks are “regular” block-proposer-proposed blocks, and contain consensus data and the hash of a body. Half-slot blocks contain the body, and can only legally contain the body whose hash was in the preceding full-slot block. The significance of calling the body a separate block is that this makes it clear that the time limit for accepting the body should be half a slot later.
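The full-slot/half-slot model can be sketched as a pair of record types with a single validity rule (a toy sketch; the field names are illustrative, not from any spec):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class FullSlotBlock:
    slot: int              # "regular" proposer-proposed block at slot n
    body_hash: bytes       # commitment to the body that may follow
    consensus_data: bytes

@dataclass
class HalfSlotBlock:
    slot: float            # n + 0.5
    body: bytes

def half_slot_is_valid(full: FullSlotBlock, half: HalfSlotBlock) -> bool:
    # A half-slot block may only carry the body whose hash was committed
    # in the preceding full-slot block, and must land half a slot later.
    return (half.slot == full.slot + 0.5
            and hashlib.sha256(half.body).digest() == full.body_hash)

body = b"block body"
full = FullSlotBlock(slot=7, body_hash=hashlib.sha256(body).digest(), consensus_data=b"")
print(half_slot_is_valid(full, HalfSlotBlock(slot=7.5, body=body)))
```

The `full.slot + 0.5` check is what encodes the half-slot acceptance deadline from the mental model above.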


Could you apply a delay to block building instead of the block attestation?

  1. Block builders make bundles, evaluate a VDF with a hash of the bundle as the input. A bundle header contains a commitment to the bundle body, the payment to the proposer, a signature from the builder and the VDF proof. The builder publishes both the bundle and the header.
  2. The proposer chooses the bundle+header that offers the highest payment and is valid (including verifying the VDF proof). They sign and publish the bundle+header.

If the VDF takes more than half the slot time to evaluate, then the proposer won’t have time to re-arrange the block and create a new valid header. This also means that the proposer won’t see any blocks until they’re halfway through the slot.
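A rough sketch of this flow, using an iterated hash chain as a stand-in for a real VDF. Unlike a real VDF there is no succinct proof here, so the proposer's check simply re-evaluates the chain; the iteration count and all names are illustrative:

```python
import hashlib

def delay_eval(seed: bytes, iterations: int) -> bytes:
    # Sequential hash chain: each step depends on the previous one, so
    # evaluation cannot be parallelised and enforces a minimum wall-clock delay.
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

ITERATIONS = 10_000  # illustrative; tuned in practice to exceed half a slot

# Builder: make a bundle, commit to it, and evaluate the delay function on the commitment.
bundle_body = b"bundle txs"
commitment = hashlib.sha256(bundle_body).digest()
vdf_output = delay_eval(commitment, ITERATIONS)
header = {"commitment": commitment, "payment": 5, "vdf_output": vdf_output}

# Proposer: verify by re-evaluating the chain from the claimed commitment.
# (A real VDF comes with a succinct proof that makes this check cheap.)
valid = delay_eval(header["commitment"], ITERATIONS) == header["vdf_output"]
print(valid)
```

Because the delay input is the bundle hash, any re-arrangement of the block invalidates the delay output and forces a fresh half-slot evaluation, which is the property the proposal relies on.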