ZK API Usage Credits: LLMs and Beyond

Davide Crapis and Vitalik Buterin

A core challenge in API metering is achieving privacy, security, and efficiency simultaneously. This is particularly critical for AI inference with LLMs, where users submit highly sensitive personal data, but applies generally to any high-frequency digital service. Currently, API providers are forced to choose between two suboptimal paths:

  1. Web2 Identity: Require authentication (email/credit card), which links every request to a real-world identity, creating massive privacy leaks and profiling risks.
  2. On-Chain Payments: Require a transaction per request, which is prohibitively slow and expensive, and makes it difficult to obfuscate the user’s full transaction graph.

We need a system where a user can deposit funds once and make thousands of API calls anonymously, securely, and efficiently. The provider must be guaranteed payment and protection against spam, while the user must be guaranteed that their requests cannot be linked to their identity or to each other. We focus on LLM inference as the motivating use case, but the approach is general: it also applies to Ethereum RPC calls, image generation, cloud computing services, VPNs, data APIs, or any other fixed-cost or variable-cost API.

Examples:

  1. LLM inference: A user deposits 100 USDC into a smart contract and makes 500 queries to a hosted LLM. The provider receives 500 valid, paid requests but cannot link them to the same depositor (or to each other), while the user’s prompts remain unlinkable to the user identity.
  2. Ethereum RPC: A user deposits 10 USDC and makes 10,000 requests to an Ethereum RPC node (e.g., eth_call / eth_getLogs) to power a wallet, indexer, or a bot. The RPC provider is protected against spam and guaranteed payment, but cannot correlate the requests into a persistent user profile.

Proposal Overview: We leverage Rate-Limit Nullifiers (RLN) to bind anonymity to a financial stake: honest users who stay within protocol limits remain unlinkable, while users who double-spend (or otherwise exceed their allowed capacity) cryptographically reveal their secret key, enabling slashing. We design the protocol to work when API usage incurs variable costs, but it also directly supports the simpler fixed-cost-per-call as a special case.

We use a flexible accounting protocol in which each request sets a maximum cost per call up front; once the actual cost is determined at the end of the call, the server issues a refund. Users privately accumulate signed refund tickets to reclaim unused credits and unlock future capacity even when the actual per-call cost is only known after execution. A Dual Staking mechanism lets the server enforce compliance policies while remaining publicly accountable.

ZK API Usage Credit Protocol

The protocol utilizes server refunds paired with refund accumulation and a proof-of-solvency on the client side. The model enforces solvency by requiring the user to prove that their cumulative spending—represented by their current ticket index—remains strictly within the bounds of their initial deposit and their verified refund history.

Anti-spam protection is enforced economically: a user’s throughput is naturally capped by their available deposit buffer, while any attempt to reuse a specific ticket index (double-spending) is prevented by the Rate-Limit Nullifier.

Primitives

  • k: User’s Secret Key.
  • D: Initial Deposit.
  • C_{max}: The maximum cost per request (deducted upfront).
  • i: The Ticket Index (a strictly increasing counter: 0, 1, 2, \dots).
  • \{r_1, r_2, \dots, r_n\}: A private collection of signed Refund Tickets received from the server.

Protocol Flow

Registration
The user generates secret k, derives an identity commitment ID = Hash(k), and deposits D into the smart contract. The contract inserts ID into the on-chain Merkle Tree.
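A minimal sketch of the registration step, using SHA-256 as a stand-in for the circuit-friendly hash (e.g., Poseidon) that a real deployment would use:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    # Stand-in for a circuit-friendly hash; a production system would use Poseidon or similar
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

k = secrets.token_bytes(32)     # user's secret key
identity_commitment = h(k)      # ID = Hash(k), inserted into the on-chain Merkle tree
```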

Refund Collection (Asynchronous)
After a request is processed, the Server provides a signed Refund Ticket r = \{v, \text{sig}\}, where v is the refund value and \text{sig} is the Server’s signature over v (and potentially a unique request ID). The user stores these locally.
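A sketch of ticket issuance and verification. HMAC stands in here for the server’s public-key signature, and the field names are illustrative:

```python
import hashlib
import hmac
import json

# Stand-in for the server's signing keypair (a real server would use a public-key signature)
SERVER_KEY = b"server-signing-key"

def issue_refund_ticket(request_id: str, refund_value: int) -> dict:
    # r = {v, sig}: the signature binds the refund value v to a unique request id
    msg = json.dumps({"req": request_id, "v": refund_value}, sort_keys=True).encode()
    sig = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return {"req": request_id, "v": refund_value, "sig": sig}

def verify_refund_ticket(ticket: dict) -> bool:
    # The ZK circuit performs the analogous check on each private refund input
    msg = json.dumps({"req": ticket["req"], "v": ticket["v"]}, sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(ticket["sig"], expected)
```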

Request Generation (Parallelizable)
The user picks the next available Ticket Index i. They can generate multiple requests (e.g., tickets i, i+1, i+2) simultaneously.

The user generates a ZK-STARK \pi_{req} proving:

  1. Membership: ID \in MerkleRoot.

  2. Refund Summation:
    The circuit takes the list of refund tickets \{r_1, \dots, r_n\} as private inputs.

    • For each ticket, the circuit:
      • Verifies the Server’s signature.
      • Extracts the value v_j.
    • The circuit calculates the sum: R = \sum_{j=1}^{n} v_j.
  3. Solvency (The Credit Check): The total potential spend at index i is covered by the deposit plus the sum of all verified refunds:

    (i + 1) \cdot C_{max} \le D + R.

  4. RLN Share & Nullifier:

    • Slope: a = Hash(k, i).
    • Signal:
      • x= Hash(M),
      • y = k + a \cdot x.
    • Nullifier: Nullifier = Hash(a).
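The RLN share and nullifier above can be sketched as follows. The field modulus and hash are illustrative stand-ins (real RLN circuits work over the BN254 scalar field with a ZK-friendly hash):

```python
import hashlib

# Illustrative prime field modulus; an assumption, not the production field
P = 2**255 - 19

def h_int(*parts) -> int:
    # Stand-in field-element hash
    d = hashlib.sha256()
    for p in parts:
        d.update(str(p).encode())
    return int.from_bytes(d.digest(), "big") % P

def rln_signal(k: int, i: int, message: bytes):
    a = h_int(k, i)            # slope: a = Hash(k, i)
    x = h_int(message)         # x = Hash(M)
    y = (k + a * x) % P        # one point on the line y = k + a*x (intercept is the secret k)
    nullifier = h_int(a)       # Nullifier = Hash(a), shared by all signals at index i
    return x, y, nullifier
```

Because each ticket index fixes the slope a, two signals at the same index are two points on the same line, which is what makes double-spending reveal k.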

Submission

User sends: Payload (M) + Nullifier + Signal (x, y) + Proof.

Verification & Slashing
The Server checks the Nullifier in its “Spent Tickets” database:

  • Fork/Double-Spend Check: If the Nullifier exists with a different x (Message), the user tried to spend the same ticket on two different requests. Solve for k and SLASH.
  • Solvency Check: Verify \pi_{req} to ensure the ticket index i is authorized by the user’s current funding level.
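The "solve for k" step is ordinary line interpolation: two signals with the same slope a are two points on one line y = k + a·x. A sketch (same illustrative field modulus as above, an assumption rather than the production field):

```python
# Illustrative prime field modulus (assumption)
P = 2**255 - 19

def recover_secret(x1: int, y1: int, x2: int, y2: int, p: int = P) -> int:
    # Interpolate the line through the two leaked points:
    # slope a = (y1 - y2) / (x1 - x2), then intercept k = y1 - a*x1
    a = ((y1 - y2) * pow(x1 - x2, -1, p)) % p
    return (y1 - a * x1) % p
```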

Settlement

  • Server executes request.
  • Refund: Server issues a signed Refund Ticket with value v = C_{max} - C_{actual}.
  • User adds r to their accumulator to “unlock” higher ticket indices for future use.
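To make the accounting concrete, a minimal sketch (helper names are illustrative) of how refunds unlock higher ticket indices under the solvency check (i + 1)·C_max ≤ D + R:

```python
def refund_value(c_max: int, c_actual: int) -> int:
    # Value of the refund ticket issued at settlement
    return c_max - c_actual

def max_ticket_index(deposit: int, refund_values: list, c_max: int) -> int:
    # Largest index i satisfying (i + 1) * C_max <= D + R
    return (deposit + sum(refund_values)) // c_max - 1
```

For example, a 100 USDC deposit with C_max = 10 authorizes indices 0 through 9; collecting 10 USDC of refunds unlocks index 10.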

Server-Side Accountability (Dual Staking)

To deter API abuse beyond simple rate-limiting (e.g., violating Terms of Service, generating illegal content, or attempting jailbreaks), we introduce a secondary staking layer. For example, a user might submit a prompt asking the model to generate instructions for building a weapon or to help them bypass security controls – requests that would violate many providers’ usage policies. We would like a way to enforce such policies without giving the provider a straightforward way to profit from false positives.

Concretely:

The user deposits a total sum Total = D + S.

  • D (RLN Stake): Governed by the math of the protocol. Can be claimed by anyone (including the Server) who provides mathematical proof of double-signaling (revealed secret k).
  • S (Policy Stake): Governed by Server Policy. Can be slashed (burned), but not claimed, by the Server if the user violates usage policies.

The purpose of splitting the stake, instead of simply setting D higher, is to remove the server’s incentive to fraudulently seize users’ deposits, an incentive that would otherwise grow with the size of the deposit.

Slashing Mechanism for S

If a user submits a valid RLN request that violates policy (but does not trigger the mathematical double-spend trap):

  1. Violation: Server detects policy violation in the request payload (e.g., prohibited content).
  2. Burn Transaction: The Server calls a slashPolicyStake() function on the smart contract.
    • Input: The Nullifier of the offending request and the ViolationEvidence (optional hash/reason).
    • Action: The contract burns amount S from the user’s deposit.
    • Constraint: The Server cannot claim S for itself; it is sent to a burn address. This prevents the server from being incentivized to falsely ban users for profit.
  3. Public Accountability: The slashing event is recorded on-chain with the associated Nullifier. While the user’s identity remains hidden, the community can audit the rate at which the Server burns stakes and the posted evidence for these burns.
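A minimal sketch of the public audit trail this creates (the class and method names are hypothetical; the real mechanism lives in the smart contract):

```python
class PolicySlashLog:
    """Sketch of the on-chain record of policy burns, keyed by nullifier."""

    def __init__(self):
        self.events = []

    def slash_policy_stake(self, nullifier: str, evidence_hash: str, s: int) -> None:
        # S is burned, never transferred to the server, removing the profit motive
        self.events.append({"nullifier": nullifier, "evidence": evidence_hash, "burned": s})

    def total_burned(self) -> int:
        # Anyone can audit the server's aggregate burn rate and the posted evidence
        return sum(e["burned"] for e in self.events)
```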

Really cool approach. Was looking at similar directions but got blocked by cost and efficiency.

The system above is nice CS, but there is really no market for it.

It would be useful to conduct a basic market study to determine who actually needs this, otherwise it is a textbook example of a solution in search of a problem.

People who truly want privacy will likely choose on-site compute. There is little point in hiding identity if the prompts are not protected.

Inference costs per call are decreasing rapidly due to ASICs, so metering per request is unlikely to be something people will care about in the long term.


I would prefer a service with ZK payment over one without.

I hate monthly billing for AI inference and refuse to use anything that isn’t metered. Electricity has scaled by many orders of magnitude, and metering still makes sense at every step along the way.

I have looked into on-site compute for LLMs, and while you can easily buy hardware that is fast for consumer prices, you cannot fit modern top-tier LLMs into memory on a budget (OOM: $50,000 to host a modern SOTA LLM) and memory requirements are trending up, not down. I have a high end machine I use for local inference, but the capabilities of my local model pale in comparison to what I can do on $200,000 worth of VRAM. I don’t use it enough to fully utilize such a machine, so even if I could afford a heavy hitter like that, it would be a waste of money because the hardware will go out of date before I get my money’s worth compared to a shared resource.

I feel like pay-per-request is a notably different use case from one that needs rate limiting. Most providers will happily accept as much spam as you want to send as long as you are paying for every request. They can set up auto-scaling so more requests just means more money for them, and it doesn’t negatively impact other users.

Separately, free services benefit greatly from a bonded rate limit, where if a user wants to spam they have to lock up a very large amount of money because spamming too much with a single bond will result in slashing, so spammers have to bond millions of accounts to spam (thus requiring millions in lockup).

It is unclear to me what the scenarios are where you need both of these at the same time. Either you have a pay-per-request service where spam isn’t an issue, or you have a free service where there is no payment per request.

Could these be two separate protocols, each optimized for their relevant use case?

Two observations from the inference serving side (I used to work as a machine learning engineer at Hugging Face; @omarespejel on X):

1. The refund mechanism leaks more than refund values.

The server doesn’t just observe C_max - C_actual. In production LLM inference, each request produces a rich feature vector: output token count, time-to-first-token (which correlates with input length and KV cache state; vLLM’s automatic prefix caching is the canonical example), generation latency, and if the server uses speculative decoding, the draft-model acceptance rate, which varies systematically by prompt domain and task type (see “The Disparate Impacts of Speculative Decoding,” arXiv:2510.02128). Over N requests, a straightforward clustering algorithm on these features re-links anonymous requests to the same user, even with perfect nullifier unlinkability. This is traffic analysis, same class of attack as flow correlation in Tor (DeepCorr achieves ~96% correlation accuracy from flow metadata, arXiv:1808.07285).

This isn’t theoretical. vLLM’s chunk-based prefix caching had a documented timing side channel (CVE-2025-46570, GHSA-4qjh-9fv9-r85r), where cache-hit timing differences achieved an AUC of 0.99 with 8-token prefixes, enough to verify whether two requests share context. Patched in vLLM 0.9.0, but the fundamental issue is architectural: any shared-cache inference server leaks request similarity through timing unless explicitly mitigated.

2. We can eliminate the refund circuit entirely, and probably should.

Instead of the server issuing signed refund tickets for C_max - C_actual (which requires the ZK circuit to verify server signatures and sum refund accumulators), have the user commit to an output token budget T_out from a small set of fixed classes (e.g., 256 / 512 / 1024 / 2048 tokens). The server generates up to T_out tokens and charges a flat price(T_in_class) + price(T_out). Users select from the same fixed set of input-length classes. Each (input class × output class) cell provides k-anonymity; every request in a cell looks identical from a billing perspective.

No refund, no variable signal, no server-signed tickets, no accumulator, no refund summation circuit. The protocol gets dramatically simpler. The trade-off is ~20-40% cost overhead due to unused token budget, but inference costs are dropping fast enough that this is tolerable, and it’s a strict improvement on the privacy/complexity Pareto frontier.
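The class-based billing described above can be sketched in a few lines. The token-budget classes come from the post; the per-token rates are hypothetical placeholders:

```python
# Token-budget classes from the post; the same set is used for inputs and outputs
CLASSES = [256, 512, 1024, 2048]

def quantize(tokens: int, classes=CLASSES) -> int:
    # Round up to the smallest class that fits; every request in a class bills identically
    for c in classes:
        if tokens <= c:
            return c
    raise ValueError("request exceeds the largest class")

def flat_price(t_in: int, t_out_budget: int, rate_in: int = 1, rate_out: int = 3) -> int:
    # price(T_in_class) + price(T_out): fixed per cell, so there is nothing to misreport
    return quantize(t_in) * rate_in + quantize(t_out_budget) * rate_out
```

The overhead the post mentions is exactly the gap between the actual token counts and the class boundaries they round up to.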

This also resolves a trust assumption in the current design: with variable-cost refunds, the server reports C_actual and the anonymous user cannot dispute without deanonymizing. A malicious server can under-report refunds to extract surplus. With flat pricing per class, there’s nothing to misreport.

For the remaining timing side channels (TTFT, generation latency), the combination of quantized input classes and padded output results in the server seeing approximately the same resource profile for every request in a given cell.