A practical proposal for Multidimensional Gas Metering

This proposal is co-authored by @dcrapis and me.

We would like to thank @soispoke, @Julian, @adietrichs, and @vbuterin for the discussion, comments, and review.

Motivation

In Ethereum, we use gas as a measure of two important concepts for the EVM. On one hand, gas measures the resources consumed by transactions: the more gas a transaction uses, the more it pays in transaction fees. On the other hand, gas is also used to enforce resource limits and ensure that blocks do not overload the network. Currently, validators enforce a limit of 36 million gas units per block; if a block uses more gas than this limit, it is considered invalid.

We can think of the first use of gas as “transaction pricing” and the second as “block metering”. Because the same metric has always represented both concepts, it is natural to think of them as interchangeable. However, we argue that we can consider them separately, and in fact, there are gains to be had by doing so. More concretely, we can introduce a multidimensional metering scheme that accounts for the different EVM resources while maintaining the pricing model unchanged.

But what is the benefit of doing this? First, using a multidimensional scheme to meter resources allows us to pack blocks more efficiently. In these schemes, even if a block has already reached the limit of one resource, we can still add more transactions to that block if they do not use the bottleneck resource. For example, a block that is already “full” from call data could still include computation-intensive transactions that do not spend gas on call data. This blog post from Vitalik explains well why the current one-dimensional scheme is not optimal.

Based on our previous empirical analysis, a four-dimensional metering scheme that separates compute, state growth, state access, and all other resources would allow an increase of ~240% in transaction throughput, assuming infinite demand with the historical mix of transactions.

Second, even though full multidimensional pricing enables more flexible pricing, this flexibility comes at the cost of a worse experience for end users and developers (who now have to deal with multiple base fees and gas limits) and the risk of added incentives to bundle transactions to save on transaction fees. Moreover, EVM constraints, such as those involving sub-calls, make the implementation of multidimensional pricing technically challenging. In other words, it is unclear whether the advantages of multidimensional pricing outweigh the potential issues. The tradeoff is much clearer when changing the metering scheme alone.

The New Metering Scheme

We propose multidimensional metering as a change to the way we account for gas used in a block. This lets us fully utilize the gas limit for each individual resource while still respecting the safety limit, and it yields significant throughput gains without changing the gas limit. Moreover, the transaction UX and the structure of the fees charged to users remain unchanged.

The new metering scheme introduces a new variable called block.gas_metered. During transaction execution we meter the gas used along each resource dimension (compute, state, access, memory, etc), say (r_1, ... r_k). Then we compute

block.gas_metered = max(r_1, ... r_k),

while the formula for the current definition of gas used is

block.gas_used = sum(r_1, ... r_k).

From the user’s perspective, everything stays the same. A transaction still has a single tx.gas_limit and pays according to the actual tx.gas_used. The tx.gas_used and tx.gas_limit are still used to check the transaction’s “out-of-gas” condition: if, during execution, tx.gas_used exceeds tx.gas_limit, the transaction is reverted.

At the block level, block.gas_metered replaces block.gas_used in (1) block validity condition and (2) EIP-1559 fee update calculation.

LIMIT = 36_000_000
TARGET = LIMIT // 2

# sender is charged based on the sum of resources (pricing unchanged)
def compute_price_for_usage(tx_bundle, basefee):
    return basefee * sum(tx_bundle)

# block limit is enforced on the highest individual resource
def is_valid_consumption(block_bundle):
    return max(block_bundle) <= LIMIT

# base fee is updated using the highest individual resource
def compute_new_excess_gas(block_bundle, excess_gas):
    return max(0, excess_gas + max(block_bundle) - TARGET)
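To make the r_i concrete, here is a minimal sketch of how the per-resource bundle could be accumulated during execution. The opcode cost split below is a hypothetical illustration, not a proposed schedule:

```python
# Hypothetical sketch of per-resource metering during execution.
# The resource names and the per-opcode gas split are illustrative only.
RESOURCES = ["compute", "state_growth", "state_access", "other"]

# Each opcode's total gas is split across resources (numbers are made up).
COST_SPLIT = {
    "ADD":    {"compute": 3},
    "SLOAD":  {"compute": 100, "state_access": 2_000},
    "SSTORE": {"compute": 100, "state_growth": 19_900},
}

def meter_block(opcodes):
    """Return the per-resource gas bundle (r_1, ..., r_k) for an opcode trace."""
    bundle = {r: 0 for r in RESOURCES}
    for op in opcodes:
        for resource, gas in COST_SPLIT[op].items():
            bundle[resource] += gas
    return bundle

bundle = meter_block(["ADD", "SLOAD", "SSTORE"])
gas_used = sum(bundle.values())     # what the sender pays for (sum)
gas_metered = max(bundle.values())  # what counts against the block limit (max)
```

Note how the same bundle feeds both definitions: pricing keeps the sum, metering takes the max.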

This proposal has the following properties:

  • Increase resource utilization
  • Maintain safety limit for each resource
  • No changes to UX

This change is relatively simple compared to other multidimensional pricing approaches, and it yields significant improvements with a modest increase in complexity. In particular, optimal block building becomes more difficult, but simple heuristics can still be used to produce blocks. Protocol changes involve (i) introducing a gas cost schedule for resources other than compute and (ii) metering the gas used per resource. Note that since resources other than compute are used by a relatively small number of opcodes, this will only involve increasing the number of gas cost parameters from ~100 today to ~150 to account for all other resources.

Beyond yielding significant gains directly, this improvement is also an important stepping stone to unlock future gains from multidimensional pricing.

Example

The block.gas_target and block.gas_limit stay unchanged at 18M and 36M respectively. Suppose we get a block where the demand profile across resources, measured in millions of gas units, is (18, 9, 9, 6, 3), where each dimension of the vector is the gas attributed to a single resource. This block would be invalid under the current specification, since sum(18, 9, 9, 6, 3) = 45 exceeds the gas limit by 9 million gas units. With the new proposal, the gas metered is max(18, 9, 9, 6, 3) = 18, which makes the block valid and also right at target, so the base fee will not change. Suppose we then get a block with high load on the second resource, (18, 30, 9, 6, 3), so block.gas_metered = 30 million gas units. The block is still valid, since it is below the limit, but the base fee will increase because the metered gas is above the target.
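The arithmetic above can be double-checked with a few lines of Python (all values in millions of gas units):

```python
LIMIT = 36   # block gas limit, millions of gas units
TARGET = 18  # block gas target

block_a = (18, 9, 9, 6, 3)
gas_used_a = sum(block_a)     # 45: invalid today, exceeds the 36M limit
gas_metered_a = max(block_a)  # 18: valid under the proposal, exactly at target

block_b = (18, 30, 9, 6, 3)
gas_metered_b = max(block_b)  # 30: valid (<= 36) but above target, base fee rises
```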

Next steps

Two key questions need to be answered to specify this proposal fully. First, we need to define the resources we want to track. The original gas model was designed to account for the following resources:

  • Compute: the execution/CPU cost, representing the computational work performed during contract execution.
  • Memory: A transient, expandable area used during execution for temporary data storage, cleared after the transaction completes.
  • State: The current snapshot of all account balances, contract storage, and code maintained in a Merkle Patricia Trie.
  • History: The complete record of transactions and state transitions stored on-chain, which enables nodes to reconstruct past states. History can be pruned, which is a key difference from state.
  • Read / Write Access: The amount of data (proof components) required to verify a state read / write from the Merkle trie, impacting verification cost and efficiency.
  • Bandwidth: The cost of propagating the block contents, i.e. block size in kB.
  • Bloom Topic: A 32-byte hashed value from event topics incorporated into a 2048-bit bloom filter for efficient log filtering and query acceleration.

The question is whether the new metering system should track these same resources or not. Should any other resources be added (e.g. proving costs)? Should some resources be combined to simplify metering? Based on our previous analysis, there is a clear gain from separating at least compute, state, and access, as these are the bottleneck resources in our data. However, we may want to isolate more resources to future-proof this proposal.

The second question is how to properly split the total gas cost of each EVM operation into the various resources. Once we have defined the resources, we can run a benchmark across the various EL clients to measure the resources used by each EVM operation (opcodes, precompiles, etc.). Of course, this requires defining how to measure the usage of each resource. For instance, for compute, we can use execution time, while for bandwidth, we can track block size in kB. Once we have the resource usage for each operation, we can set the safety limits for each resource and then convert them to gas units.
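As a toy illustration of the last step, here is one way benchmark results could be converted into gas units. The nanosecond figures and the choice of ADD as the calibration anchor are purely hypothetical:

```python
# Illustrative sketch: turning benchmarked compute times into gas costs.
# The timings and the calibration anchor are hypothetical, not real results.
bench_ns = {"ADD": 2.0, "MUL": 3.5, "KECCAK256": 60.0}

# Calibrate so that ADD keeps its current cost of 3 gas.
gas_per_ns = 3 / bench_ns["ADD"]  # 1.5 gas per nanosecond

# Convert every benchmarked operation to compute gas.
compute_gas = {op: round(ns * gas_per_ns) for op, ns in bench_ns.items()}
```

In practice the anchor and the aggregation across clients (e.g. taking the slowest client) are exactly the design choices the benchmarking effort needs to settle.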

These benchmarks connect well with current efforts to increase the gas limit. Here, client teams are already setting up benchmarks and tests to analyze network performance. A good example is the Gas Cost Estimator Project, which implements a comprehensive benchmark across different clients focused on computing costs. In addition, this work is closely tied to recent repricing efforts, such as EIP-7904 and EIP-7883.


If I understand correctly there will still be a single gas market price but four different block-level gas limits. Isn’t there a risk that the network will start preferring transactions with ~even resource usage and discarding transactions with imbalanced resource usage? That way nodes can fill the four gas limits more efficiently and earn more in gas fees.

Simplistic example: let

  • L_A = 100 be the block-level gas limit for resource A,
  • L_B = 150 the limit for resource B,
  • U_A the amount of gas consumed by a transaction for resource A,
  • U_B the amount of gas consumed by a transaction for resource B.

Won’t all nodes of the network be encouraged to prefer transactions that are as close as possible to U_B = U_A * 1.5? Or rather, why would a block proposer be okay with including many transactions with U_B = 0, for example? If it does that, there’s a risk it won’t be able to fill up the limit for resource B.

Unless I’m missing something, it looks to me like this scheme would encourage the network to prioritize transactions unfairly (there’s no reason why imbalanced transactions should be preferred over more balanced ones).

OP Stack uses a similar approach to meter demands on L1 data availability throughput. The biggest downside is that when the meter’s limit is reached, our blocks fall below the gas target.

These blocks don’t currently optimize for profit beyond ordering by priority fee, so it’s possible to get closer to the gas target with heuristics for metered resources. But even for profit-maximizing proposers, optimal blocks can fall below the gas target for extended periods.

When this happens, users must outbid each other’s priority fees in a first-price auction. Many applications do not expect to actively participate in priority fee auctions because EIP 1559 has been so successful at eliminating them, so they experience these periods as a denial of service. (The base fee also plummets to zero during these periods, so even when the meter is no longer binding, it takes time for the priority fee auction to end.)

It sounds like this proposal differs from OP Stack’s metering by including the meter in the 1559 calculation to avoid affecting the fee market. The most congested resource would determine whether gas usage is above or below target, which would prevent the priority fee auctions we see. Does that sound right?

Even though you got the idea mostly right, @71104, I need to make a clarification: there is still a single block-level gas limit in place. In other words, in your example, L_A = L_B. However, this does not change the effect you describe.

There are two types of blocks to consider here. First, there are the “normal-load” blocks, where block utilization is approximately 50%, and the “low-load” blocks, where utilization is significantly below 50%. Together, these account for the majority of blocks (approximately 90%, based on our historical analysis). In these blocks, there is no concern about transactions optimally balancing the various resources, as block builders can include all valid transactions.

Second, there are the “high-load” blocks, where utilization is close to 100%. From the same empirical analysis, these blocks occur ~10% of the time. Here, since block builders need to decide between certain transactions to include versus others, there may be an incentive to select transactions that better complement those already included. However, block packing may not be the primary criterion to make this decision. They also need to balance MEV profits and fees collected in this complex optimization problem.

So, even though we could see some incentives for more efficient block packing, these will not be prevalent enough to materially change the block composition. Interestingly, we also found that the resources used by high-load blocks are already different from the other blocks today. You can see this in the same empirical analysis.

Correct, @niran. In this proposal, we use the same formula for the block validity condition (i.e., how full a block is) and the EIP-1559 update rule. In both cases, we use the gas spent by the bottleneck resource. So, if the bottleneck resource exceeds the target, the base fee will increase in the next block. For the base fee to decrease, all resources need to be below the target.
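A minimal sketch of this update direction (values and the target in millions of gas units, purely illustrative):

```python
TARGET = 18  # block gas target, millions of gas units (illustrative)

def basefee_direction(bundle):
    # Base fee rises iff the bottleneck (max) resource is above target,
    # and falls only when every resource is below target.
    m = max(bundle)
    return "up" if m > TARGET else ("down" if m < TARGET else "flat")

basefee_direction((18, 30, 9, 6, 3))  # "up": one resource exceeds the target
basefee_direction((17, 10, 5, 2, 1))  # "down": all resources below the target
```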

But nobody knows whether the current block will be a high-load one until it’s proposed, so it would be reasonable for all nodes to prioritize evenly balanced transactions regardless.

But even if the nodes can somehow predict whether or not the transaction rate will spike in the current slot I don’t think it’s okay to prioritize transactions unfairly ~10% of the time.

The fact that the optimization problem is further complicated by MEV is not a good excuse. If anything, this proposal creates a new MEV criterion to optimize for, making the problem worse.

Thank you for this proposal, I have 2 questions:

  1. For performance, state access is currently the most detrimental resource for block building. With this new design, we could potentially allow a block whose entire 36M gas (the block limit) is consumed by state access, which is drastically more state access than before (since the block can still carry other resource consumption as well). Curious whether there are any mitigations for this other than raising the state access gas cost drastically.
  2. With the new design, somebody could potentially crowd out all other activity by swarming the block with a single mispriced resource group. For example, if compute is priced too cheaply, somebody could send lots of compute-heavy transactions, causing the base fee to increase and potentially pricing out other types of transactions. Are there any potential mitigations before having true multidimensional pricing?

The per-resource block limit will be similar to today’s, not drastically higher.

The same problem exists today, and this proposal does not make it worse. The full mitigation is full multidimensional pricing, as you say.

So the day this proposal goes live, all Ethereum users will suddenly be able to consume four times as much gas? Is that safe?