On Ethereum Prover Market Design
^lol… also, there is one easter egg link to a movie franchise in the body of the post – happy clicking



\cdot
tl;dr: The current Ethereum scaling roadmap depends on ZK proofs to enable consensus under higher gas limits. If proofs become part of the validity condition on blocks, proposers who are unable to generate the proofs themselves must purchase them from a third party. We present a model of this proof-acquisition game and compare the performance of two intuitive candidate mechanisms: proof lotteries and staked allocations. This post introduces the problem statement and model, analyzes the tradeoffs in the market design space, and concludes by asking higher-level questions.
\cdot
by maryam & mike – august 13, 2025.
\cdot
Thanks to Akilesh and Julian for helpful comments on the draft of this post.
Contents
1. Why should we care about prover market design?
\;\;1.1. Model
2. Proof lotteries
3. Staked allocations
4. Comparing lottery and staked allocations
5. Concluding thoughts
1. Why should we care about prover market design?
Ethereum’s stated scaling endgame is to rely on ZK proof verification to replace transaction execution in the block validation flow. With the recent advances in proving speed (see ethproofs.org for current blocks being proved), the “endgame” discussion is being pulled forward into the present. Quoting from a recent EF blog post:
“The fastest and safest way to ship an L1 zkEVM is to start by giving validators the option to run clients that, rather than re-executing execution payloads, statelessly verify multiple (let’s say three) proofs generated by different zkVMs [emphasis ours] each proving different EVM implementations. Because proof verification is so fast and proof size so succinct, downloading and verifying multiple proofs is very reasonable and allows us to apply the same defense in depth as existing client diversity to zkVMs.”
– Sophia Gold. July 10, 2025.
With this goal, the process of procuring these proofs becomes a critical piece of the roadmap. Quoting from a recap of the recent Berlin interop,
“the biggest questions are censorship-resistant state sources and how to pay/align provers.”
– Nixo. June 19, 2025.
This post aims to start “feeling the elephant” of what an L1 prover market could look like and follows on from Maryam’s recent EthCC talk. Section 1.1 motivates and introduces a simple model. Sections 2 & 3 examine “proof lotteries” and “staked allocations” as two candidate mechanisms with different tradeoffs. Section 4 compares the two mechanisms and analyzes factors outside of cost that may favor one over the other. Section 5 zooms out and asks many high-level questions to conclude.
1.1 Model
To model an Ethereum prover market, we consider the perspective of the block proposer who has acquired (e.g., through MEV-boost) or built a block, but doesn’t have the hardware to generate the ZK proof themselves. Let V denote the economic value of this block to the proposer, which is a combination of the MEV, priority fees, and issuance they will earn if the block becomes canonical. The proposer thus faces the goal of acquiring a proof. Let p denote the payment the proposer makes to purchase a proof. They want to pay as little as possible, but if no proof is generated, they earn no rewards because the proof is a validity condition of the block.^1 The utility function of the proposer is then,

U = V \cdot \mathbb{1}\{\text{a proof is delivered}\} - p.
Because the MEV component of block rewards can be immense (e.g., a 212 ETH block from July 11), we consider V to be large (n.b., most MEV payments are relatively small, but for this post we use large V because we believe these blocks are the most likely to be attacked). The proposer then runs a procurement auction to get a proof. They can solicit bids, use randomness, and potentially slash provers who don’t fulfill their promise (e.g., by requiring provers to post collateral à la optimistic relays).
Formally, we consider a general two-stage game with n players (which we sometimes refer to as provers), each with a unit cost of generating the proof, c_i = 1. The first stage comprises bidding and allocation; the second comprises delivery and payment. We then have the following phases of the game:
- Bidding: Each player submits a bid b_i to the proposer.
- Allocation: The proposer-specified allocation function x(\mathbf{b}) selects which bidders can deliver the proof.
- Delivery: Each allocated player decides on an action a_i \in \{0,1\}, which indicates if they generate the proof and deliver it to the proposer.
- Payment: The proposer-specified function p(\mathbf{b}, \mathbf{a}) defines the payment rule for each player, depending on their bids and actions.
The allocation and payment rules are chosen a priori and announced to the provers, who will accordingly act in the game; these rules and the resulting prover strategies may be randomized.
Importantly, we consider the case where the payment rule could be negative (to represent slashing) if a player is allocated but does not deliver. Players can choose not to participate by bidding infinity and ensuring they will not be allocated. The proposer’s utility function is then,

U(\mathbf{b}, \mathbf{a}) = V \cdot \mathbb{1}\{\exists\, i : a_i = 1\} - \sum_{i=1}^{n} p_i(\mathbf{b}, \mathbf{a}).
In words, the proposer makes a set of payments to each player and earns the reward V as long as a_i=1 for some i (at least one prover delivers). The proposer is trying to maximize this utility function. We now examine two candidate mechanisms and evaluate how they perform in terms of proposer utility.
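As a concrete sketch, the proposer's objective above can be expressed as a small Python function. This is our own illustrative code, not part of any protocol specification; the actions and payments would be produced by whichever allocation and payment rules the mechanism designer chooses.

```python
from typing import Sequence

def proposer_utility(V: float, actions: Sequence[int], payments: Sequence[float]) -> float:
    """Proposer utility: earn the block value V iff at least one prover
    delivers (some a_i = 1), minus the net payments to all players.
    Negative entries in `payments` model slashing flowing to the proposer."""
    delivered = any(a == 1 for a in actions)
    return (V if delivered else 0.0) - sum(payments)

# One prover delivers and is paid 1: utility is V - 1.
print(proposer_utility(100.0, [1, 0], [1.0, 0.0]))  # 99.0
# Nobody delivers but two staked provers are slashed 5 each: utility is 10.
print(proposer_utility(100.0, [0, 0], [-5.0, -5.0]))  # 10.0
```

The second example shows why allowing negative payments matters: even a liveness failure leaves the proposer partially compensated.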
2. Proof lotteries
The first mechanism is a lottery, which is exactly the model we studied in this recent post (where we consider the shifted objective and protocol cost minimization). Let each prover have the same cost of generating the proof, c_i = 1. We then have the following phases of the game:
- Bidding: There is no bidding phase.
- Allocation: There is no allocation phase (equivalently x \equiv 1).
- Delivery: Each player decides on an action a_i \in \{0,1\}.
- Payment: The proposer pays a single random deliverer a fixed prize X:

p_i = \begin{cases} X & \text{if } i \text{ is drawn uniformly at random from } \{j : a_j = 1\}, \\ 0 & \text{otherwise.} \end{cases}
This payment rule is a lottery because one winner is selected at random. No stake is required because the mechanism only awards the prize if the work is delivered. Note that the mechanism can run in a single round, where the proposer pre-commits a prize and any prover can submit within a time window. As derived, with large n and analyzing the asymptotics in V, the proposer’s utility is maximized by setting a prize of size

X^* \approx \ln V,

which results in an optimal proposer utility of

U^* \approx V - \ln V.
Note that this utility is achieved when all participants play the mixed strategy of delivering the proof with probability p^* \approx \ln V/n. That is, the proposer realizes this value in the symmetric Bayes-Nash Equilibrium. By setting a prize of about \ln V, the lottery mechanism pushes the probability of no participation down to about zero. That means with high probability, it succeeds at acquiring at least one proof and collecting V.
These results paint a relatively positive outlook for lotteries. Without requiring any slashing, the proposer can achieve a utility that only degrades by \ln V in expectation, regardless of the number of provers n. Recall that V is large and (V-\ln V) \overset{V\to\infty}{\sim} V; most of the value is kept by the proposer instead of paying the provers! Julian’s recent post gives a sketch of how the lottery could be enshrined in Ethereum and how the protocol facilitates the lottery on behalf of the proposer. But a lottery isn’t the end of the story; we now consider an alternative mechanism that relies on staking and slashing to provide economic security for proof acquisition.
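To sanity-check these asymptotics numerically, here is a minimal Monte Carlo sketch of the lottery under the symmetric mixed strategy; the parameter choices (V, n, the trial count) are our own illustrative assumptions.

```python
import math
import random

def lottery_trial(V: float, n: int, rng: random.Random) -> float:
    """One lottery round: each of n provers delivers independently with
    probability p* = ln(V)/n (the symmetric mixed strategy); if anyone
    delivers, one uniformly random deliverer is paid the prize X = ln(V)."""
    X = math.log(V)
    deliverers = [i for i in range(n) if rng.random() < X / n]
    if not deliverers:
        return 0.0          # liveness failure: no proof, no reward, no payment
    return V - X            # block is proven; exactly one winner is paid X

rng = random.Random(0)
V, n, trials = 1000.0, 100, 20_000
avg = sum(lottery_trial(V, n, rng) for _ in range(trials)) / trials
print(f"average proposer utility ~ {avg:.1f} vs V - ln V = {V - math.log(V):.1f}")
```

With V = 1000 and n = 100 the empirical average lands close to V - \ln V \approx 993, and the no-delivery probability is roughly (1 - \ln V/n)^n \approx 1/V, matching the claim that the lottery pushes the failure probability to about zero.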
3. Staked allocations
The model outlined in Section 1.1 is intentionally general and has a few features that the lottery mechanism doesn’t take advantage of. In particular,
- The proposer doesn’t use the bidding stage. Instead of allowing anyone to deliver a proof, the proposer can use the bidding phase as collateralized “registration,” and only allocate to a subset of registrants. This could reduce the amount of redundant work done.
- The proposer \rightarrow prover payments can be negative. This could be captured by slashing the collateral a prover posts if that prover refuses to deliver.
By taking advantage of these features, the proposer can run a “staked procurement auction” to achieve a higher utility. Succinct proposed an instantiation of this auction in their whitepaper.^2 Continuing with the model of c_i=1, consider this new game:
- Bidding: Bids are “quotes” that specify the payment the prover requires to generate a proof. Placing a bid requires a staked collateral of magnitude B (budget) escrowed by the protocol on behalf of the proposer.
- Allocation: The proposer allocates to a uniformly random subset A of k staked bidders.
- Delivery: Each allocated player in A decides on an action a_i \in \{0,1\}.
- Payment: The proposer pays each deliverer a constant prize of 1,^3 slashes each allocated non-deliverer B, and pays everyone else 0:

p_i = \begin{cases} 1 & \text{if } i \in A \text{ and } a_i = 1, \\ -B & \text{if } i \in A \text{ and } a_i = 0, \\ 0 & \text{if } i \notin A. \end{cases}
Note that bid values are still not necessary in this case because each prover has a known cost of 1. Still, the bidding stage serves an essential role as bids provide collateral that represents the provers’ stake in the system. The analysis of the performance of these mechanisms under information asymmetry (e.g., c_i \stackrel{iid}{\sim} Q for some distribution Q) is the topic of further work we are conducting (see Section 5 for discussion).
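The three-case payment rule above can be sketched in a few lines of Python (an illustrative encoding of the cases for clarity, not an implementation of any production market):

```python
def staked_payment(i: int, A: set[int], a: dict[int, int], B: float) -> float:
    """Payment to player i under the staked allocation: deliverers in A
    earn the constant prize 1, allocated non-deliverers are slashed B,
    and unallocated players receive 0."""
    if i not in A:
        return 0.0
    return 1.0 if a.get(i, 0) == 1 else -B

# Example: provers 0 and 1 are allocated; 0 delivers, 1 refuses.
A, a = {0, 1}, {0: 1, 1: 0}
print([staked_payment(i, A, a, B=10.0) for i in range(3)])  # [1.0, -10.0, 0.0]
```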
Let’s consider the proposer’s utility under the staked allocation mechanism. Recall that the following is our original utility function for the general mechanism,

U(\mathbf{b}, \mathbf{a}) = V \cdot \mathbb{1}\{\exists\, i : a_i = 1\} - \sum_{i=1}^{n} p_i(\mathbf{b}, \mathbf{a}).
The proposer allocates to k players, and we consider two outcomes:^4
- all k deliver,
- none of the k deliver.
For case 1, the proposer pays each of the k deliverers 1 and realizes the value of their block V, so U = V-k. For case 2, the proposer slashes each of the k non-deliverers B, so U = k\cdot B.^5 The proposer can maximize its worst-case utility by equating the utilities under the two branches,

V - k = k \cdot B \implies k^* = \frac{V}{B+1}.

Using this equation, we can be sure that the worst-case proposer utility is exactly U = V-k^*.^6 As expected, this utility is highly dependent on the amount, B, that the proposer can ask each player to stake. Note that if B \geq V-1, then the proposer can allocate to k^* = 1 prover and guarantee a worst-case utility of V-1. Compared to the V-\ln V utility of the lottery, this could be a considerable improvement. Staking isn’t a silver bullet, however, because as we mentioned earlier, V could be extremely large. Asking provers to stake on the order of 1mm USD to participate is likely too capital-intensive to hope for any prover decentralization. With lower values of B, the utility of running a staked allocation mechanism could deteriorate dramatically (e.g., if B = O(1), then k^* = \Theta(V) and the proposer retains only a constant fraction of V, far worse scaling than the lottery’s V-\ln V).
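A small script (ours, purely for illustration) makes the tradeoff between the collateral budget B and the allocation size k^* concrete:

```python
import math

def staked_worst_case(V: float, B: float) -> tuple[int, float]:
    """Equate the two branches V - k = k*B to get k* = V/(B+1), then
    return (k*, worst-case utility V - k*). k* is rounded up to a whole
    number of provers, so the utility is a slightly conservative bound."""
    k_star = math.ceil(V / (B + 1))
    return k_star, V - k_star

V = 1000.0
for B in (V - 1, 100.0, 1.0):   # deep, moderate, and shallow collateral budgets
    k, u = staked_worst_case(V, B)
    print(f"B = {B:6.0f} -> k* = {k:3d}, worst-case utility = {u:.0f}")
```

With V = 1000, a budget of B = V-1 yields k^* = 1 and utility 999, while B = 1 forces k^* = 500 and leaves the proposer with only half of V.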
4. Comparing lottery and staked allocations
OK, let’s take stock of the two mechanisms. The primary metric we are interested in is the worst-case proposer utility in each. Using the expressions derived, we take the maximum of the two utilities to decide which is better for the proposer:

\max\Big(\underbrace{V - \ln V}_{\text{lottery}},\; \underbrace{V - \tfrac{V}{B+1}}_{\text{staked allocation}}\Big).
Thus, we derive the simple outcome:
If the honest provers’ budget for staking satisfies B < \frac{V}{\ln V}, use a lottery; if B > \frac{V}{\ln V}, use a staked allocation; if B \approx \frac{V}{\ln V}, both achieve approximately the same utility V-\ln V.
This is intuitive. If the staking budget is big enough relative to V, it is worth asking provers to stake and using slashing to provide insurance for the delivery. If the staking budget is small, then it is better just to run the lottery. Beyond worst-case proposer utility, there may be other design considerations that push in favor of one mechanism or the other.
- Capital cost: We don’t explicitly model capital costs. In practice, the staked allocations would probably require a larger payment to cover the cost of capital to collateralize the allocation fully. If capital costs are high, the lottery might be preferable.
- Permissionless vs. quasi-permissionless: The lottery is fully permissionless; at any time, a new prover can come online and deliver a proof to earn a prize. Staked allocation is quasi-permissionless (using the language of Andy and Tim’s paper) as the list of identities is known and fixed before the allocation. The lottery might be preferable to maintain full permissionlessness.
- Latency: The lottery requires a single round of communication because the proposer sets the prize before and deliverers are rewarded immediately after doing the work. Staked allocation requires two rounds of communication as the bidding/allocation phase needs to precede the delivery decision. Further, the lottery can be handled explicitly by the protocol in one consensus round. In contrast, the staked allocation would require either a trusted third party or a second consensus round to handle the collateral and slashing. The lottery might be preferable if latency, measured in rounds, is important.
- Price signal: Our model so far assumes prover costs are constant and known. In a more general setting, proving costs vary across provers and across blocks. Under prover heterogeneity, allocating to the most efficient provers may significantly increase the proposer’s expected utility. The lottery achieves this only indirectly, since the price of a “ticket” (the cost of generating a proof) is smaller for lower-cost provers, and it can adjust the prize size only based on the number of proofs submitted for past blocks (e.g., through a dynamic controller à la EIP-1559). The staked auction, on the other hand, can use bids as an explicit price signal for the current block, which is particularly useful when proving costs vary widely across blocks. A staked auction might be preferable to facilitate price discovery under highly variable proving costs.
This list shows that there are many non-trivial real-world considerations to account for when designing prover markets.
5. Concluding thoughts
As alluded to throughout the post, the most general version of this model is our next immediate area of study. In particular, we aim to move beyond just these two candidate mechanisms to prove upper bounds on the proposer’s utility under a general adversarial model and the setting where prover costs are heterogeneous and sampled from a known random distribution c_i \overset{iid}{\sim} Q. We believe this theoretical work can serve as a meaningful contribution to the procurement auction literature under the threat of a liveness attack.
Beyond theoretical interest, there remain many practical considerations for prover markets:
- Do we even need prover markets? Because ZK proofs have a 1 of N honesty assumption and are cheap to verify, the necessity of a decentralized market of provers is called into question. This is similar in spirit to Dankrad’s call to relax short-term prover requirements. For example, if the endgame set of consensus roles separates attesting and block building (see Barnabé’s recent post), then the sophisticated builders will be able to prove their own blocks without introducing new trust assumptions. Alternatively, in a world with local building, relying on a set of centralized (highly reliable) cloud-based providers may be sufficient.
- Does enshrining ZK in the L1 actually scale the chain? We consider two distinct pieces of the question: the proof generation latency and the attester bandwidth. With regards to proof generation latency, “realtime proving”, which we don’t definitively have yet for 12s slots, is a prerequisite for enshrining ZK. On the other hand, shortening slot times is a competing proposal to improve Ethereum’s UX and reduce MEV. If proving speed is exactly linear in the gas size of a block and Ethereum maintains a constant gas per second, then the slot duration isn’t relevant. In practice, however, the proof generation time will probably have a constant overhead and a sublinear scaling in the size of the block (e.g., cutting the block size in half may only reduce the proof time by one-third). Continuing to shorten slot times (or increasing the gas per second of the chain) would require further compounding advances in ZK proving speed, which will become harder moving forward. Secondly, attesters will require extra bandwidth to download and verify the proofs, even if they benefit from no longer needing to download and run the full transactions themselves. The current estimates of a 300 KB proof of an Ethereum block imply that downloading the proof will consume non-trivial resources (especially if multiple proofs need to be verified per slot). While improvements in cryptography and consensus could reduce the bandwidth overhead for attesters, there still remains a tradeoff between downloading the proof and downloading the transactions.
- How are proofs paid for? Provers must be compensated for the work they do. If we enshrine proofs and keep the transaction fee mechanism intact, the proving cost will be split among users and validators in equilibrium. Each transaction will only be included if its MEV + priority fee exceeds its proving cost.^8 This means non-MEV-carrying transactions will have to pay their full proving costs, whereas high-MEV transactions might get subsidized, with the proving cost borne by the validator. As an improvement, we can update the opcode costs to reflect their different proving costs, the hope being that the increase in gas costs (multiplied by the number of proofs required to satisfy ZK prover diversity requirements) is offset by a lower base fee resulting from the higher supply. Most controversially, if such a transaction fee increase is not acceptable, proofs could be subsidized through issuance, pushing the cost of proving onto ETH holders at large. Things could get political here and there are many competing interests; this would require lots of attention and care.
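The sublinear proof-time scaling mentioned above (a constant overhead plus a per-gas term) can be illustrated numerically. The constants below are our own assumptions, chosen only to reproduce the one-third example:

```python
# Toy proof-time model: T(g) = c0 + a*g, a fixed overhead plus a term
# linear in block gas. All constants here are illustrative assumptions.
c0 = 4.0        # seconds of fixed proving overhead
a = 0.5         # seconds per Mgas (assumed)
g = 16          # block size in Mgas (assumed)

T_full = c0 + a * g          # 4 + 8 = 12.0 s for the full block
T_half = c0 + a * (g / 2)    # 4 + 4 =  8.0 s for half the block
print(f"halving the block cuts proof time by {1 - T_half / T_full:.0%}")  # 33%
```

Because the fixed overhead c0 does not shrink with the block, halving the gas only cuts proof time by one-third here, which is why shorter slots demand compounding improvements in proving speed rather than just smaller blocks.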
.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.
❈ made with and ✎⭣ ☵ thanks for reading! – maryam & mike ❈
.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.
^1 Note that other participants in the block production pipeline may also be incentivized to create a proof. The obvious candidate is the block builders themselves, who will not realize any of their profit from making a block if it doesn’t become canonical. This incentive might be sufficient for builders to prove their own MEV-boost blocks directly (or perhaps handle the outsourcing themselves – see Kshitij’s post on this). It remains that the proposer has the most to lose from not acquiring a proof, leading us to model them as the acquirer.
^2 While they do propose a particular staked procurement auction, they don’t specify how to set the staking budget B, nor do they analyze how the value of B impacts the protocol’s utility under an adversary. Further, they propose a proportional allocation rule that explicitly trades off efficiency with breadth of participation.
^3 The prize of 1 is the minimum prize needed to make participation in the game ex-post individually rational, i.e., each player has a strategy with non-negative expected utility. With the prize at exactly 1, an honest participant who delivers is guaranteed a prize of at least 1, ensuring their utility is at least 0; if they tie-break in favor of participation, this is sufficient. Alternatively, a prize of 1+\epsilon would make participating strictly better than sitting out of the game.
^4 It can be shown that one of these two outcomes results in the worst-case proposer utility, so we can ignore other outcomes without loss when calculating the lower bound. The proof depends on the adversarial model, which we exclude from this post for simplicity.
^5 This requires that the proposer slashes the non-deliverers and keeps that capital for themselves. This confiscatory action is slightly different from some slashings, which destroy/burn the tokens. Here, the collateral serves as “insurance” against an attack; the proposer should be financially compensated for the lack of participation by the provers who claimed to be available. This modality of slashing is similar in spirit to that of StakeSure.
^6 Under the payment rule specified, we consider the proposer’s utility assuming worst-case prover behavior. One way to model this is as an attacker that controls a subset of provers and chooses their actions to minimize the protocol’s utility. The most damaging strategy this attacker can play is: deliver if there is at least one honest player in the set A; don’t deliver if there are no honest players in the set A.^7 This is a slightly apples-to-oranges comparison to the symmetric equilibrium we study in the lottery mechanism, because in that model, we assume all players are rational. With staking (and thus registration) and without an attacker, the proposer could achieve a cost of 1 by simply allocating to a single registered player and paying them 1 upon delivery; unfortunately, this allocation rule trivially fails under simple adversarial models. The most general version of this participation analysis squares the circle between these two equilibrium notions by unifying the model of the adversary and the resulting equilibria. That work is still in progress and much deeper mathematically, but we expect it to mirror the results presented here.
^7 Even further, this “full insurance” allocation rule achieves the same utility lower bound even if the attacker controls every prover and not just the complete subset A. In our follow-up work, we achieve tighter bounds on the performance of staked allocation mechanisms by explicitly modeling the adversary’s power.
^8 This might not be strictly true; it is possible that provers can exploit structure and redundancy between transactions, resulting in the cost of generating a proof being sublinear in the gas used by the transactions it proves. Thus, while in aggregate a set of transactions must cover proving costs, each transaction may not need to cover its proving cost in isolation (h/t Akilesh for this point). Still, for this discussion, assume it is approximately true.