I wrote a notebook introducing simple simulations of EIP 1559, the anticipated fee market proposal by @vbuterin, @econoar and now many others. The notebook is here.

EIP 1559 has been discussed quite a lot on other forums (see the links section below). I don't want to add to the noise by starting yet another thread, so I will focus this post on the approach of the notebook, the next steps, and some literature I am looking at. The notebook also prompted a discussion on Twitter that seems to fit better here.

The notebook

The simulations assume some demand comes in between two blocks; each transaction is either included in the next block or not, and sometimes stays in the mempool, sometimes not. I progressively add complexity to the simulation but stop short of having “smart users” who act based on current market conditions or “smart producers” who strategically try to improve their payoffs.
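As a rough sketch of what such a simulation loop can look like (all names, distributions, and parameters here are my own placeholders, not the notebook's actual code):

```python
import random

def simulate(steps=10, gas_limit=10_000_000, seed=0):
    rng = random.Random(seed)
    mempool = []
    included_per_block = []
    for t in range(steps):
        # Demand arriving between two blocks: each transaction carries a
        # premium (its bid) and a gas amount (placeholder distributions).
        arrivals = [{"premium": rng.uniform(1, 10), "gas": 21_000}
                    for _ in range(rng.randint(0, 400))]
        mempool.extend(arrivals)
        # Naive producer: fill the block with the highest-paying transactions.
        mempool.sort(key=lambda tx: tx["premium"], reverse=True)
        block, used = [], 0
        while mempool and used + mempool[0]["gas"] <= gas_limit:
            tx = mempool.pop(0)
            block.append(tx)
            used += tx["gas"]
        # Leftover transactions sometimes stay in the mempool, sometimes not.
        mempool = [tx for tx in mempool if rng.random() < 0.8]
        included_per_block.append(len(block))
    return included_per_block

print(simulate())
```

The “smart users” and “smart producers” extensions would replace the fixed arrival distribution and the naive fill rule with strategies conditioned on market state.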

Next steps

Real demand is not that simple. Users look at current market conditions before deciding how to set their fees, so we need to model this behaviour (Vitalik wrote an interesting post on this a few years ago). If clients set the parameters for users, this is also something we want to model. Looking at historical prices may help identify the processes at play. We also want to be able to express time preferences.

Block producers may (will?) behave more strategically than simply including everything. More importantly, we need to make sure the mechanism is incentive-compatible (producers behave honestly, in a timely fashion). Formal analysis is probably more helpful here than simulations.

Integrating the simulation with Hive (suggested by @AFDudley). More generally, there is work to do on matching the simulations with real conditions, down to the client implementation. There was also chatter about an EIP 1559 testnet over at the EthR&D Discord.

Mechanism tuning. EIP 1559 is “simple” (though, I would argue, not that simple either) in that it looks like a fairly standard control mechanism: number go up, resistance go up; number go down, resistance go down. Variants exist, e.g., the escalator algorithm (suggested by @danfinlay).

I still have many unknowns and blind spots. The notebook was my attempt to build some intuition for myself and others, but many of the hard questions remain ahead. I'll be looking into the modelling a bit more deeply over the next few weeks too.

This analysis is very interesting. I'm still making my way through it, but I'd like to comment on one observation that occurred to me:

The current analysis uses the fee_cap parameter to represent both the highest price a person is willing to pay and their sense of urgency. I might suggest a missing parameter: time_preference.

To illustrate why this is a valid distinction to consider, we need examples of exceptions to these two parameters being correlated. Here are two, on different ends of the spectrum:

A user with a high-priced auction reveal coming up is willing to pay an increasingly high price as blocks approach the reveal time.

When blocks are not full and transactions are cheap, there may be small arbitrage opportunities available. These would be situations where the max_price is very low, but the time_preference for those transactions is urgent.

Even without having good estimates of the frequency, we can still ask questions like:

Does a user with fee_cap=X and time_preference=Y achieve a satisfactory outcome given the block conditions and gas payment algorithm?

That's one of the types of questions I tried to consider in the escalator algorithm EIP. I'm not sure how much it would benefit from numeric quantification, since we would be guessing the frequency of many scenarios, but I would like to highlight these considerations.
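To make the question concrete, here is a hypothetical sketch: I assume an escalator-style bid and a per-block clearing price a bid must meet to be included (the function names, the linear ramp, and the inclusion rule are my own assumptions, not part of any proposal text).

```python
def bid(t, start_bid, fee_cap, time_preference):
    # Linear escalator: ramp from start_bid to fee_cap over time_preference blocks.
    return min(fee_cap, start_bid + (fee_cap - start_bid) * t / time_preference)

def satisfied(clearing_prices, start_bid, fee_cap, time_preference):
    # "Satisfactory outcome" here means: included no later than the user's
    # own deadline, expressed in blocks by time_preference.
    for t, price in enumerate(clearing_prices):
        if bid(t, start_bid, fee_cap, time_preference) >= price:
            return t <= time_preference
    return False  # never included

print(satisfied([12, 11, 10], start_bid=8, fee_cap=11, time_preference=2))
```

Running this over sampled block conditions would give a rough answer to the question above, for whatever frequency assumptions one is willing to make.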

Anyways, thanks again for such an interesting analysis. It's been very fun to read, and I hope more protocol proposals get cadCAD analyses like these.

I really like all the cases you have isolated, both here and in your EIP. This is very helpful. I agree that getting a sense of what normal conditions are, or what the “usual” mix of transactions is, would be sensible. I am hoping to collect data on this and attempt that analysis.

I'd like to be clearer on what the transaction parameters are in the escalator. As I understand it currently, there is some price level p. For now, say I only set two parameters, feecap and time_preference.

I submit my transaction at t=0, with fee p and a hash of the block at t.

If it is not included, then at time t my transaction fee increases by (feecap - p) * t / time_preference. Meaning, if I really want to be included no later than 1 block from now, I set time_preference to 1 and my bid jumps to my feecap in the next slot.
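A minimal sketch of this linear escalator bid, assuming the fee is capped at feecap once the ramp completes (the function name is mine):

```python
def escalator_bid(t, p, feecap, time_preference):
    # Starting from price level p at t=0, the bid ramps linearly to feecap
    # after time_preference blocks, then stays there.
    return min(feecap, p + (feecap - p) * t / time_preference)

# With time_preference=1 the bid reaches feecap in the very next slot;
# with time_preference=4 the ramp is gentler.
print([escalator_bid(t, p=10, feecap=20, time_preference=1) for t in range(3)])
print([escalator_bid(t, p=10, feecap=20, time_preference=4) for t in range(5)])
```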

Is there a need for something like the initial_bid of the Agoric papers? In your own ethresear.ch thread you let users decide on the initial_bid. Using some kind of ambient price p (sort of like the basefee), this would be one less parameter to specify (if we expect users to specify both feecap and time_preference). Of course, it opens more questions on how the dynamics of p affect the algorithm.

Segueing to a more formal model that could help compare the alternatives at hand. The presentation is EIP 1559-specific, but the representation of users and producers, metrics like social welfare, etc. can be taken as-is for a different mechanism; the only thing that changes would be the strategies of users and producers.

In the last section I also give a result showing that under certain conditions, producer behaviour can be individually rational but not socially optimal. This is not unexpected, however; these are the kinds of things that were discussed on other threads, though I haven't necessarily seen them written up.

EIP 1559 System dynamics

Given

b(0): the basefee at t=0,

c: a target gas per block,

G: the max gas limit per block, and

d: the basefee max change denominator,

we observe two dynamical processes:

b(t) is the basefee at time t \in \mathbb{N}.

g(t) is the total gas used by the block at time t, g(t) \in [0, G].
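The update rule linking the two processes can be sketched as follows (per my understanding of the EIP: the basefee moves in proportion to how far gas used is from the target, by at most a factor 1/d per block):

```python
def next_basefee(b, g, c, d):
    # b(t+1) = b(t) * (1 + (g(t) - c) / (c * d)): when g(t) = c the basefee
    # is unchanged; an empty block moves it down by b/d, a maximally full
    # one moves it up by at most b/d (taking G = 2c here as an assumption).
    return b * (1 + (g - c) / (c * d))

b = 1_000_000_000  # 1 gwei, arbitrary starting point
print(next_basefee(b, g=0, c=10_000_000, d=8))           # empty block: b falls by b/d
print(next_basefee(b, g=20_000_000, c=10_000_000, d=8))  # double the target: b rises by b/d
```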

Probably WLOG, we can assume that one user = one transaction[^1]. k runs over a countable (possibly infinite) set of users K, and t denotes time steps, which we can assume discrete since “events” happen one block at a time. For some k, v_k: t \mapsto v_k(t) is the value function of the user, mapping time indices to values. For all k, there exists t_k such that v_k(t) = 0 for all t < t_k (t_k is k's wake-up time, when k starts caring about their transaction being included). After t_k, we assume v_k is nonincreasing (the longer the transaction is delayed, the lower the utility for the user who submitted it). The value is assumed to be in the same unit as transaction fees, i.e., additive with respect to fees.

As an example, a simple time preference would have a single parameter \delta (the discount rate) and an initial value v. When the user awakens at t_k and the transaction is included at t_k, the utility is v. If the transaction is included at t_k + 1, the utility is (1-\delta) v, etc.

Another example is users who care for their transaction to be included immediately, or not at all. You could then have v_k(t_k) = v for some v > 0 and v_k(t_k + 1) = 0.
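The two example value functions above can be sketched as follows (the geometric extension of the discounting beyond one block is my own reading of the “etc.”):

```python
def discounted_value(t, t_k, v, delta):
    # Zero before the wake-up time t_k, then geometric discounting: v at t_k,
    # (1 - delta) * v at t_k + 1, and so on.
    if t < t_k:
        return 0.0
    return v * (1 - delta) ** (t - t_k)

def all_or_nothing_value(t, t_k, v):
    # The user only values inclusion in the very first block after waking up.
    return v if t == t_k else 0.0
```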

User k places their transaction a_k in the mempool at time t_k (i.e., earliest possible inclusion is in block at height t_k) and sets the transaction premium (\texttt{premium}(a_k)) and fee cap (\texttt{feecap}(a_k)).

Users observe the basefee b(t) at time t-1, i.e., they know b(t_k) once the block at height t_k - 1 is produced. The basefee is Markovian, i.e., it only depends on its value at the previous block height. This makes it easier to reason about strategies for users and producers alike, as either can base their strategy on the value of b(t) alone (producers must also consider the set of transactions in the mempool at t). The strategy space of user k is thus the set of mappings from the basefee and their value function to their transaction attributes, s_k : (b(t_k), v_k) \mapsto (\texttt{premium}(a_k), \texttt{feecap}(a_k)).

Let i : K \times \mathbb{N} \to \{ 0, 1 \} be the inclusion function, with i(k, t) equal to 1 if and only if the transaction of user k was included at time t. Note that i(k, t) = 1 \Rightarrow b(t) \leq \texttt{feecap}(a_k).

We have N block producers, commanding respectively fraction \alpha_n of the mining/staking power. The probability producer n is selected to create a block is \alpha_n.

Producers learn whether they will produce the block at height t right after the block at height t-1 is produced. The producer at time t selects from the mempool at time t a set of transactions respecting the block limit. We can get the state of the mempool at time t with:

M(t) = \{ a_k : t_k \leq t \text{ and } i(k, t') = 0 \;\; \forall t' < t \},

i.e., transactions from users who are awake and who weren't previously included. Given M(t) and b(t), the producer for time t chooses a set of transactions A(t) \subseteq M(t) to include in their block.

The producer function is p : N \times \mathbb{N} \to \{ 0, 1 \}, with p(n, t) equal to 1 if and only if producer n produced the block at time t in the canonical version of the chain. The long-term payoff of block producer n is given by:

\sum_{t} p(n, t) \sum_{a_k \in A(t)} \texttt{gasused}(a_k) \cdot \min(\texttt{premium}(a_k), \texttt{feecap}(a_k) - b(t))

All else being equal, the social welfare is optimised whenever transactions get in as early as possible and when basefee is lower.

The concern that producers can “manipulate” the fee has been expressed a few times, so we now give a simple example showing that if a producer is chosen to produce two successive blocks, they can artificially delay inclusion to maximise their payoff. This is a prototype deviation, and clearly a bit contrived.

A single producer

We have two time steps, 0 and 1. Let's normalise the gas target to 1. The basefee is set at b for the first block and moves to b' for the second block. Transactions in W(0) awaken at time t=0; write \mu = \sum_{a \in W(0)} \texttt{gasused}(a) for their total gas. Assume

All transactions are identical (this is a strong assumption, but the example works as long as there is money on the table).

\mu < 1 (all transactions fit in one block).

\texttt{feecap}(a) = f and \texttt{premium}(a) = \epsilon > f - b (the producer cannot get the whole premium at time step 0, since it is larger than the difference between the feecap and the basefee).

The payoff of the block producer who includes all transactions from W(0) at time step 0 is \mu \cdot (f - b).

If the producer does not include anything, the basefee drops to b' = b - \frac{1}{d}b = \frac{d-1}{d}b. Assume that b' + \epsilon < f, i.e., the basefee has dropped enough that the producer would get the whole premium instead of the difference between the feecap and the basefee. Including the transactions at time step 1 now yields payoff \mu \cdot \epsilon > \mu \cdot (f - b). In other words, it is more profitable for the producer to wait out another step. This is individually rational, but not socially optimal.

This holds true as long as the opportunity cost of not including transactions at t = 0 is lower than the profit from delaying them and decreasing the basefee.
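A numeric instance of this deviation, with values chosen (arbitrarily) to satisfy the two constraints \epsilon > f - b and b' + \epsilon < f:

```python
mu = 0.8      # total gas of W(0), below the target of 1
b = 100.0     # basefee at t = 0
d = 8         # basefee max change denominator
f = 105.0     # feecap of every transaction
eps = 6.0     # premium; note eps > f - b = 5

# Including everything now: the producer is capped at feecap - basefee per gas.
payoff_include_now = mu * (f - b)

# Producing an empty block first: basefee drops to b * (d-1)/d = 87.5,
# after which b' + eps < f and the producer collects the full premium.
b_next = b * (d - 1) / d
assert b_next + eps < f
payoff_delay = mu * eps

print(payoff_include_now, payoff_delay)  # delaying is strictly more profitable
```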

I have spent some time analyzing various sources about EIP-1559. This may help you with defining the simulations. Give me a shout if the Nethermind team can help in any other way.

It's amazing, I was looking at the PDF after the call yesterday. Your experience on the client side is invaluable! For instance, transaction pool dynamics seem to matter more than I anticipated (this was also pointed out by @veox in an issue in our repo).

I cannot click on the links, though. Specifically, I was curious to check out “Measuring Eth1 network metrics” if it is public, but I couldn't find it.