Another simple Gas fee model: The "Escalator Algorithm" from the Agoric Papers

Sorry if I’m missing something, but seems like there’s a hard ceiling, the blockchain bandwidth, which prevents endless gas limit escalation.


It’s these sorts of assumptions that seem extremely dangerous to me. Without the capacity to do more than hand-wave about the impact of the changes, I don’t think it much matters which proposal we use. I’m far more concerned about having a sane model to test our assumptions against than about what the proposed changes are… these changes seem extremely difficult to reason about without simulations of some sort.


I can definitely see the need for / value of “opting in” to defer transactions for applications/use cases that don’t necessarily need things to be mined ASAP. What about a hybrid approach where we combine the good parts of each? Allow for specifying those custom fields for applications/users that want to, and for the rest have the dynamically adjusting value like in 1559.

It’s gas price escalation, not gas limit escalation that I am talking about here.

these changes seem extremely difficult to reason about without simulations of some sort.

What kind of simulations would help here? Getting a reading of the probability distribution of transactions that get included in the chain and their gas limits, so that we can try running it against EIP 1559 and see how often blocks are full? I would certainly love to see the output of that!


Oh right, I can’t believe I still make that mistake sometimes.

Yeah, that may be very helpful, although it’s hard to know what people would have been willing to pay. Maybe someone can suggest a reasonable way of approximating that from what was paid.

I guess I may be an outlier here where gas price efficiency/volatility seems like a feature to me, because it allows prices to reflect actual demand, but I’m happy to separate opinions like that from the things we can build concrete data on, like possible simulations.

because it allows prices to reflect actual demand

I disagree that EIP 1559 makes this no longer the case! The BASEFEE continues to give readings of what the demand level is. In fact, I think EIP 1559 strengthens the extent to which observed prices reflect demand, because it solves the attack where miners manipulate observed fees by including high-gasprice transactions to themselves (as with EIP 1559 that attack becomes very costly). Plugging that hole also allows smart contracts to automatically use gasprice info, enabling eg. gas price derivatives.

Probably the one piece of information we lose is how much people are willing to pay to get their tx included 1 min earlier, but then delaying people’s transactions by 1 min is almost always a pure social waste in the first place so that doesn’t seem like too big a deal.
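For concreteness, here is a sketch of the BASEFEE adjustment rule being discussed, as I read the draft proposal (the 1/8 maximum per-block change and the 10m gas target match my understanding of the draft; the exact integer arithmetic here is illustrative):

```python
# Sketch of the EIP-1559 basefee update rule (my reading of the draft;
# constants are illustrative, the 1/8 denominator matches the proposal).

TARGET_GAS = 10_000_000          # target gas per block
MAX_CHANGE_DENOMINATOR = 8       # basefee moves by at most 1/8 per block

def next_basefee(basefee: int, gas_used: int) -> int:
    """BASEFEE rises when blocks are above target, falls when below."""
    delta = basefee * (gas_used - TARGET_GAS) // (TARGET_GAS * MAX_CHANGE_DENOMINATOR)
    return basefee + delta

# A completely full block (2x target) raises the basefee by 12.5%:
print(next_basefee(1_000_000_000, 20_000_000))  # 1125000000
```

Because the update depends only on observed gas used, a miner stuffing blocks with self-paying transactions raises the basefee they themselves burn, which is the costly-attack point above.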

Maybe someone can suggest a reasonable way of approximating that from what was paid.

If you want estimates of how the demand curve works, there is this analysis from exogenous shocks to the gas limit: Estimating cryptocurrency transaction demand elasticity from natural experiments


Just to jump in here: My impression is that the misunderstanding comes from two different views of the capacity of blockchains. @danfinlay assumes that the main capacity is a hard limit, say the bandwidth/computation limit of nodes at any instant in time. However, EIP1559 assumes that the limit has more to do with the overall size/computation of the chain, and that a temporary increase of resource usage, even by a factor of 3-4, is acceptable if the average bears this out. These two viewpoints impose different cost models (instant vs. average prices).

So EIP1559 will ensure a good pricing model for all those participants that want their transactions included immediately. This is great as long as it does not become too costly. In the context of Eth2.0, where more fixed block sizes (at least in terms of the data availability/custody story) will mean that some resources will be wasted, it would be very interesting if there is a model in which you can allow for cheaper transactions that would only be included in “block waste”, and are likely to be included somewhat delayed. My gut feeling is that it’s probably impossible as it would break the incentives of EIP1559, but maybe there is a clever way to circumvent this.

In the context of Eth2.0, where more fixed block sizes (at least in terms of the data availability/custody story) will mean that some resources will be wasted

This is why the block roots are stored as lists of chunks representing 128 kB each, which removes most of this inefficiency.

I’m talking about the partially filled 128 kB chunks. If we could fill them, that would be almost “free”. Of course that’s difficult. Maybe one idea would be to have a transaction type that is always executed with a 10 block delay, at half the price. But how to get the rewards for block proposers right, that is the problem.

Revisiting this claim, as I think we talked past each other before, and I’ve re-read all your content on this issue:

I’ll concede that “most of the time” many algorithms may work fine, and so I think much of my motivation comes from how the protocol would perform under various types of strain (I recommend talking to wallet devs who supported users during the various 2017 booms).

If we imagine a large backlog of transactions whose submitters have widely varying preferences of the highest price they would pay, then I think it should be simple to prove that:

Under 1559:

  • The very highest bidders’ transactions may wait a number of blocks until the BASEFEE increases to a level that excludes other transactions, making the tip a sort of single-price auction within each block, reproducing all the problems of the current market with this proposal’s added complexity on top.
  • Once the price has increased to a point where the highest price transactions are cleared, and the BASEFEE is going to lower again, the lower-priced transactions remaining will need to wait additional blocks for the price to fall to be eligible for inclusion.

Under the Escalator algorithm:

  • The highest bidders’ transactions would be cleared as quickly as possible, and then the lower-priced transactions would be processed. No block lag to discover price, no unfull-blocks-with-pending-transactions, and only one or two parameters would need to ever be shown to the user (cap and max_duration).

Miners would still accept transactions with the highest tip first before the BASEFEE rises to new-equilibrium levels, so I don’t think the property that highest paid transactions get included first gets sacrificed.
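For reference, the escalator bid can be sketched as a simple linear ramp (this assumes the linear form from the escalator EIP draft; the field names here are illustrative, not the proposal’s exact ones):

```python
# Minimal sketch of an escalating bid, assuming a linear ramp from a start
# price up to the user's cap over max_duration blocks (field names are
# illustrative).

def escalator_bid(start_price: int, cap: int, start_block: int,
                  max_duration: int, current_block: int) -> int:
    """Current gas price offered by an escalating transaction."""
    elapsed = min(max(current_block - start_block, 0), max_duration)
    return start_price + (cap - start_price) * elapsed // max_duration

# Halfway through the window the bid is halfway between start and cap:
print(escalator_bid(10, 110, 0, 100, 50))  # 60
```

The bid never exceeds `cap`, so the two user-facing parameters mentioned above (cap and max_duration) fully bound what a transaction can pay.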

  • Once the price has increased to a point where the highest price transactions are cleared, and the BASEFEE is going to lower again, the lower-priced transactions remaining will need to wait additional blocks for the price to fall to be eligible for inclusion.

What’s the model here? An instantaneous spike of transactions, way too large to fit in one block, with a wide distribution of fee levels? It’s definitely true that lower-priced transactions will have to wait, though I think you have to be careful with the analysis: while the gas/block will be <10m as the basefee climbs down, the gas/block will be 20m while the basefee is climbing up, and that would reduce the wait time for lower-fee txs by as much as if not more than the extra delay on the way down.
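To make that dynamic concrete, here is a toy simulation of the spike scenario: an instantaneous backlog with a wide spread of fee caps, cleared under a 1559-style basefee. All parameters (target, hard cap, backlog shape) are my illustrative assumptions, not numbers from the spec:

```python
# Toy model: a 60m-gas backlog lands at once with fee caps spread from
# 1 to 60 gwei, and is cleared under a 1559-style basefee. Blocks run at
# the 20m hard cap while the basefee climbs, thinning out as it decays.

TARGET, HARD_CAP, DENOM = 10_000_000, 20_000_000, 8

def simulate(backlog, basefee):
    """backlog: list of (fee_cap, gas) tuples; returns (basefee, gas_used) per block."""
    history = []
    while backlog:
        # include eligible txs (fee_cap >= basefee), highest caps first
        eligible = sorted((tx for tx in backlog if tx[0] >= basefee), reverse=True)
        used, included = 0, []
        for tx in eligible:
            if used + tx[1] <= HARD_CAP:
                used += tx[1]
                included.append(tx)
        for tx in included:
            backlog.remove(tx)
        history.append((basefee, used))
        # basefee rises above target, decays below target (1559-style rule)
        basefee += basefee * (used - TARGET) // (TARGET * DENOM)
    return history

backlog = [(gwei * 10**9, 1_000_000) for gwei in range(1, 61)]
history = simulate(backlog, 10 * 10**9)
print(history[0])      # (10000000000, 20000000): hard cap while climbing
print(len(history))    # blocks needed to clear the whole backlog
```

In this toy run the early blocks do run at the 20m cap on the way up, while the tail of low-fee transactions waits for the basefee to decay, which is the trade-off being discussed.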

This feels like a salient point. What value does the base fee add over completely user-specified gas prices? The fee already dynamically adjusts via supply (block gas limit) and demand. The user just can’t adequately express the fee they wish to pay with a fixed price.

It seems like this desired stretch capacity could be achieved in some other way. E.g.: You could allow miners to include transactions in excess of the block gas limit by burning the fee. This would effectively compensate other miners for their CPU by deflating ETH.


By the way, I’ve posted this as an EIP here:

Yeah, my original paper that proposed EIP 1559 was very explicit about this: it had as a key assumption that the costs of shifting some capacity between T and T + 5 minutes or T - 5 minutes are not high. The main argument for this is basically:

  1. Uncle rates are at <10% and block processing times are well under 1 second, so we have a lot of room to go higher; the reason we don’t is because of other costs (eg. centralization risk of higher uncle rates, state size…), that are largely linear in block size.
  2. The Poisson distribution of blocks creates accidental usage spikes already; eg. we get a doubling of capacity for 5 minutes something like once a week. And we survive those just fine.
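Out of curiosity, the second point can be sanity-checked with a quick Poisson tail computation. The 13-second average block time and the 5-minute window are my assumptions, and this is only a sketch, not a reproduction of the original estimate:

```python
# Back-of-the-envelope check of the Poisson spike claim. Assumes a 13s
# average block time; the "once a week" figure is the post's, not mine.
from math import exp

def poisson_tail(lam: float, k: int) -> float:
    """P(N >= k) for N ~ Poisson(lam)."""
    if k <= 0:
        return 1.0
    p = total = exp(-lam)          # P(N = 0)
    for i in range(1, k):
        p *= lam / i               # P(N = i) from P(N = i - 1)
        total += p
    return 1.0 - total

lam = 300 / 13                     # expected blocks in a 5-minute window
p = poisson_tail(lam, round(2 * lam))   # chance of 2x the expected blocks
windows_per_week = 7 * 24 * 12
print(p * windows_per_week)        # expected 2x-capacity windows per week
```

How often "a doubling" occurs depends heavily on the window length chosen; shorter windows make 2x bursts far more common, which may be what the weekly figure refers to.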

E.g.: You could allow miners to include transactions in excess of the block gas limit by burning the fee.

So basically have an EIP-1559-style basefee that only starts kicking in for blocks with >=10,000,001 gas? That seems unlikely to be correct; what’s the rationale for why the marginal cost to the network of adding one unit of gas is zero below 10m but suddenly spikes up above 10m?

The user just can’t adequately express the fee they wish to pay with a fixed price.

The user definitely can express the fee they wish to pay; that’s what the fee cap is for.


I wonder if it’s even true if it’s not about minutes but hours or days. At least at the moment, I don’t think that the bottleneck is current bandwidth at all; the bottleneck seems to be trying to sync nodes that have been offline for a long time. And they don’t even care if load is shifted by a day or two.

The reason I’m bringing it up is that there is the narrative that EIP1559 is useless because it would not have prevented fees spiking on “Black Thursday”. Of course, that’s not at all an argument against EIP1559 in its current form – but could we have allowed larger blocks for a few hours or even days in order to keep fees much lower?

In other words, is there a possibility to adapt the time constants in EIP1559 to make it smooth over longer timescales?

Oh hi, I didn’t address this. I think this is a mischaracterization of my concerns with EIP 1559. Rather than implying I believe in a fixed capacity blockchain, I think you could more accurately say I believe in a finite-capacity blockchain with occasional dramatic spikes in usage that exceed that capacity, during which users have transactions of varying value and varying urgency.

In my opinion, EIP 1559 does not gracefully account for varying urgency, and so during a high usage spike that exceeds the network capacity for a number of blocks (a scenario that is rare but happens, and when it happens there is heightened importance), some users with high urgency transactions will find themselves waiting for transactions with much lower urgency, unless they begin manipulating the tip parameter, which re-introduces much of the complexity that EIP-1559 is designed to avoid.

I cover this comparison in detail in the EIP I opened.

But you seem to engineer for this one very specific use case (getting high-urgency transactions in) without addressing everything else that EIP1559 does. EIP1559 actually makes sure that the number of times blocks are full is very small, so this should be an exceedingly rare situation, whereas with your algorithm blocks would still be full most of the time (assuming a future network that is well used, so that there are always low-value transactions that could fill up any “cheap” capacity).

It seems that your idea would be more of a possible extension of EIP1559: If blocks are full (as in actually 100% full), then you can add an escalating tip to your transaction to indicate its urgency. This seems much better than your proposal (though I don’t really know if it’s worth it, because blocks being full for more than a few blocks can’t really happen in EIP1559 unless someone wants to pay through the nose for it).

Just to clarify: I did not author the escalator algorithm, it was authored in 1988 by Mark Miller and Eric Drexler. Miller is cited in the original Ethereum Yellow Paper for being a pioneer of the concept of smart contracts. So this algorithm is not authored in response to eip-1559.

In the EIP I wrote suggesting an Ethereum-version of the escalator algorithm I compare 1559, current gas auction, and the escalator algorithm under a variety of conditions, not just block fullness under volatile conditions. I responded in this case about volatile conditions because I was responding to a post that suggested block capacity was my primary motivation.

Just because there aren’t any transactions left offering the current price tier doesn’t mean there isn’t a huge backlog of transactions at a more common price tier. And at each price tier, all transactions offering it or higher are treated as peers, and no priority is given to any transactions for a higher price other than the tip, which is meant to be unrelated to urgency.

To me this seems like more than “just one situation”, this is about the fundamental role of transaction pricing in blockchains.

Well the problem is it doesn’t. In a future where Ethereum has grown to a large amount of usage, and with first price auctions (and the escalator algorithm is still a first price auction, just with automatic bidding), there is just no such situation as “blocks being half full”. There is always someone who would fill blocks up at a low enough transaction price, no matter what.

Compared to that, EIP1559 guarantees that blocks will almost always be half full and the marginal cost of another transaction is almost perfectly predictable. However, in this case, you write about EIP1559: “User can wait for a better price, but may need to resubmit re-signed transactions.” – this is wrong! The user quotes the maximum basefee they are willing to pay, but they only expect to pay the current fee determined by consensus. This means they do not have to resubmit in the case where the basefee rises.
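To spell that out, here is a sketch of how the fee cap interacts with the basefee, based on my reading of the draft (the field names `fee_cap` and `gas_premium`, i.e. the tip, are from my understanding of the proposal):

```python
# Sketch of how a 1559-style fee cap works: the user signs once with a cap,
# and pays only basefee + tip, bounded by the cap. No resubmission needed.

def effective_gas_price(fee_cap: int, gas_premium: int, basefee: int):
    """Returns (total price paid, miner's share), or None if the tx must wait."""
    if basefee > fee_cap:
        return None  # tx simply waits for the basefee to fall; no resubmit
    miner_share = min(gas_premium, fee_cap - basefee)
    return basefee + miner_share, miner_share

# Cap of 100 gwei, 2 gwei tip, basefee at 60 gwei: the user pays 62, not 100.
print(effective_gas_price(100, 2, 60))  # (62, 2)
```

Note that while the basefee is below the cap, the user pays the going rate rather than their maximum, which is the key difference from a first-price auction bid.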

So it seems to me that in the normal operating mode, EIP1559 is far superior to your suggestion. The only advantage your algorithm has is if you have a spike of many completely full blocks (should never really be more than 10-20 blocks in EIP1559, actually, because the price would rise very fast with it), and you have a transaction that absolutely cannot wait for those 10-20 blocks. Then you may have to bid a higher tip, and I can see how it would be an improvement to use the escalator algorithm to automate increasing the tip.

I meant for transactions that have cumulativeGasUsed >= the voted-on block gas limit, rather than entire blocks. This could be fair because the miners have voted on the block gas limit. A single miner adding transactions beyond this voted-upon limit increases the resources required for other miners to catch up, and thus should burn the fees to reimburse all other miners that must download and validate these larger-than-normal blocks.

This was just an idea about how to handle the surge volume, not a fully thought out proposal. I think surge volume should be addressed after this EIP, because it may not even be an issue if people can adequately express the urgency of their transactions.

By “the user can’t adequately express the desired gas price” I’m referring to the current state. Because a user cannot express the min/max fee they wish to pay and how long they would wait today, they often pick the highest fee they would pay in times of congestion to avoid getting into a stuck transaction state or having to resubmit transactions. They end up either hugely overpaying, or it’s too late by the time the transaction is included. This is the most pertinent UX issue that needs to be solved IMO.

First price auction already works great when there is less than spikey usage and for non-urgent transactions. The UX isn’t bad either–wallets either hide it completely and choose a reasonable gas price, or give you a slider. It’s only when transaction volume is spiking that the UX degrades into something unusable (arbitrary inclusion times, dropping from mempool, resubmitting with different gas prices). So it seems wise to try to address this particular situation with a simpler (less different) solution before completely changing how gas prices work for a bevy of other improvements which may or may not work how we expect.