ZK API Usage Credits: LLMs and Beyond

Davide Crapis and Vitalik Buterin

A core challenge in API metering is achieving privacy, security, and efficiency simultaneously. This is particularly critical for AI inference with LLMs, where users submit highly sensitive personal data, but applies generally to any high-frequency digital service. Currently, API providers are forced to choose between two suboptimal paths:

  1. Web2 Identity: Require authentication (email/credit card), which links every request to a real-world identity, creating massive privacy leaks and profiling risks.
  2. On-Chain Payments: Require a transaction per request, which is prohibitively slow and expensive, and makes it difficult to obfuscate the user’s full transaction graph.

We need a system where a user can deposit funds once and make thousands of API calls anonymously, securely, and efficiently. The provider must be guaranteed payment and protection against spam, while the user must be guaranteed that their requests cannot be linked to their identity or to each other. We focus on LLM inference as the motivating use case, but the approach is general: it also applies to RPC calls and other fixed-cost APIs, image generation, cloud computing services, VPNs, data APIs, and similar services.

Examples:

  1. LLM inference: A user deposits 100 USDC into a smart contract and makes 500 queries to a hosted LLM. The provider receives 500 valid, paid requests but cannot link them to the same depositor (or to each other), while the user’s prompts remain unlinkable to the user identity.
  2. Ethereum RPC: A user deposits 10 USDC and makes 10,000 requests to an Ethereum RPC node (e.g., eth_call / eth_getLogs) to power a wallet, indexer, or a bot. The RPC provider is protected against spam and guaranteed payment, but cannot correlate the requests into a persistent user profile.

Proposal Overview: We leverage Rate-Limit Nullifiers (RLN) to bind anonymity to a financial stake: honest users who stay within protocol limits remain unlinkable, while users who double-spend (or otherwise exceed their allowed capacity) cryptographically reveal their secret key, enabling slashing. We design the protocol to work when API usage incurs variable costs, but it also directly supports the simpler fixed-cost-per-call as a special case.

We use a flexible accounting protocol in which each request reserves a maximum cost per call up front; once the actual cost is determined at the end of the call, the server issues a refund. Users privately accumulate signed refund tickets to reclaim unused credits and unlock future capacity, even when the actual per-call cost is only known after execution. A dual-staking mechanism lets the server enforce compliance policies while remaining publicly accountable.

ZK API Usage Credit Protocol

The protocol utilizes server refunds paired with refund accumulation and a proof-of-solvency on the client side. The model enforces solvency by requiring the user to prove that their cumulative spending—represented by their current ticket index—remains strictly within the bounds of their initial deposit and their verified refund history.

Anti-spam protection is enforced economically: a user’s throughput is naturally capped by their available deposit buffer, while any attempt to reuse a specific ticket index (double-spending) is prevented by the Rate-Limit Nullifier.

Primitives

  • k: User’s Secret Key.
  • D: Initial Deposit.
  • C_{max}: The maximum cost per request (deducted upfront).
  • i: The Ticket Index (A strictly increasing counter: 0, 1, 2, \dots).
  • \{r_1, r_2, \dots, r_n\}: A private collection of signed Refund Tickets received from the server.

Protocol Flow

Registration
The user generates secret k, derives an identity commitment ID = Hash(k), and deposits D into the smart contract. The contract inserts ID into the on-chain Merkle Tree.
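A minimal sketch of registration, assuming a toy append-only Merkle tree. SHA-256 stands in for the ZK-friendly hash (e.g., Poseidon) a real deployment would use, and `ToyMerkleTree` is a hypothetical helper, not part of the protocol spec:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    # SHA-256 stands in for a ZK-friendly hash (e.g., Poseidon)
    return hashlib.sha256(data).digest()

class ToyMerkleTree:
    """Minimal append-only Merkle tree over identity commitments (illustration only)."""
    def __init__(self, depth: int = 4):
        self.depth = depth
        self.leaves = []

    def insert(self, leaf: bytes) -> int:
        self.leaves.append(leaf)
        return len(self.leaves) - 1  # leaf index

    def root(self) -> bytes:
        zero = b"\x00" * 32
        level = self.leaves + [zero] * (2**self.depth - len(self.leaves))
        while len(level) > 1:
            level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        return level[0]

# Registration: derive ID = Hash(k) and insert it into the on-chain tree
k = secrets.token_bytes(32)   # user's secret key
identity = H(k)               # identity commitment
tree = ToyMerkleTree()
index = tree.insert(identity)
```

The deposit D would accompany the `insert` call in the actual smart contract; membership at request time is then proven against `tree.root()` in zero knowledge.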

Refund Collection (Asynchronous)
After a request is processed, the Server provides a signed Refund Ticket r = \{v, \text{sig}\}, where v is the refund value and \text{sig} is the Server’s signature over v (and potentially a unique request ID). The user stores these locally.

Request Generation (Parallelizable)
The user picks the next available Ticket Index i. They can generate multiple requests (e.g., tickets i, i+1, i+2) simultaneously.

The user generates a ZK-STARK \pi_{req} proving:

  1. Membership: ID \in MerkleRoot.

  2. Refund Summation:
    The circuit takes the list of refund tickets \{r_1, \dots, r_n\} as private inputs.

    • For each ticket, the circuit:
      • Verifies the Server’s signature.
      • Extracts the value v_j.
    • The circuit calculates the sum: R = \sum_{j=1}^{n} v_j.
  3. Solvency (The Credit Check): The total potential spend at index i is covered by the deposit plus the sum of all verified refunds:

    (i + 1) \cdot C_{max} \le D + R.

  4. RLN Share & Nullifier:

    • Slope: a = Hash(k, i).
    • Signal:
      • x= Hash(M),
      • y = k + a \cdot x.
    • Nullifier: Nullifier = Hash(a).
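The share and nullifier construction above can be sketched in a few lines. This is an illustrative model only: SHA-256 over a toy Mersenne-prime field stands in for the ZK-friendly hash and scalar field a real RLN deployment would use:

```python
import hashlib

# Toy prime field; a real deployment uses the ZK system's scalar field.
P = 2**127 - 1  # Mersenne prime, for illustration only

def H(*parts) -> int:
    """Hash arbitrary inputs into the field (stand-in for Poseidon or similar)."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def rln_share(k: int, i: int, message: bytes):
    """Produce the RLN signal (x, y) and nullifier for ticket index i."""
    a = H(k, i)          # slope, unique per (key, index)
    x = H(message)       # evaluation point bound to the payload
    y = (k + a * x) % P  # Shamir-style share of k on the line y = k + a*x
    nullifier = H(a)     # public tag that detects reuse of index i
    return x, y, nullifier
```

Note that each (k, i) pair fixes one line; signing a single message per index reveals only one point on it, which leaks nothing about k.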

Submission

User sends: Payload (M) + Nullifier + Signal (x, y) + Proof.

Verification & Slashing
The Server checks the Nullifier in its “Spent Tickets” database:

  • Fork/Double-Spend Check: If the Nullifier exists with a different x (Message), the user tried to spend the same ticket on two different requests. Solve for k and SLASH.
  • Solvency Check: Verify \pi_{req} to ensure the ticket index i is authorized by the user’s current funding level.
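Two signals under the same nullifier are two points on the same line y = k + a·x, so the server can interpolate and recover k. A sketch of that recovery over a toy prime field (illustrative only):

```python
def recover_key(x1: int, y1: int, x2: int, y2: int, P: int = 2**127 - 1) -> int:
    """Given two RLN signals that share a nullifier (same slope a),
    interpolate the line y = k + a*x and return the secret k."""
    assert x1 != x2, "distinct messages give distinct evaluation points"
    a = (y1 - y2) * pow(x1 - x2, -1, P) % P  # slope from the two points
    return (y1 - a * x1) % P                 # intercept = secret key k
```

With k in hand, anyone can submit it on-chain to claim the double-spender's stake.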

Settlement

  • Server executes request.
  • Refund: Server issues a signed Refund Ticket r with value v = C_{max} - C_{actual}.
  • User adds r to their accumulator to “unlock” higher ticket indices for future use.
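The settlement loop pairs with a simple client-side view of the solvency rule. A minimal sketch, using a hypothetical `CreditLedger` helper; in the real protocol this check is proven inside \pi_{req} rather than computed in the clear:

```python
class CreditLedger:
    """Client-side accounting sketch: deposit, refunds, and next ticket index."""
    def __init__(self, deposit: int, c_max: int):
        self.D = deposit
        self.C_max = c_max
        self.refunds = []  # refund ticket values (signatures omitted in this sketch)
        self.i = 0         # next unused ticket index

    def can_spend(self) -> bool:
        # Solvency: (i + 1) * C_max <= D + sum of verified refunds
        R = sum(self.refunds)
        return (self.i + 1) * self.C_max <= self.D + R

    def spend(self) -> int:
        """Consume the next ticket index for a new request."""
        assert self.can_spend(), "insufficient credit for ticket index"
        i, self.i = self.i, self.i + 1
        return i

    def add_refund(self, c_actual: int):
        """Record the server's refund after settlement, unlocking future indices."""
        self.refunds.append(self.C_max - c_actual)
```

For example, a 100 USDC deposit with C_max = 10 authorizes indices 0 through 9 immediately; each refund of 5 unlocks half of a further request's worth of credit.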

Server-Side Accountability (Dual Staking)

To deter API abuse beyond simple rate-limiting (e.g., violating Terms of Service, generating illegal content, or jailbreaking attempts), we introduce a secondary staking layer. For example, a user might submit a prompt asking the model to generate instructions for building a weapon or to help them bypass security controls – requests that would violate many providers’ usage policies. We would like a way to enforce such policies without giving the provider a straightforward way to profit from false positives.

Concretely:

The user deposits a total sum Total = D + S.

  • D (RLN Stake): Governed by the math of the protocol. Can be claimed by anyone (including the Server) who provides mathematical proof of double-signaling (revealed secret k).
  • S (Policy Stake): Governed by Server Policy. Can be slashed (burned), but not claimed, by the Server if the user violates usage policies.

The purpose of doing this, instead of simply setting D higher, is to remove the server’s incentive to fraudulently seize users’ deposits; that incentive grows with the size of the deposit.

Slashing Mechanism for S

If a user submits a valid RLN request that violates policy (but does not trigger the mathematical double-spend trap):

  1. Violation: Server detects policy violation in the request payload (e.g., prohibited content).
  2. Burn Transaction: The Server calls a slashPolicyStake() function on the smart contract.
    • Input: The Nullifier of the offending request and the ViolationEvidence (optional hash/reason).
    • Action: The contract burns amount S from the user’s deposit.
    • Constraint: The Server cannot claim S for itself; the funds are sent to a burn address. This prevents the server from being incentivized to falsely ban users for profit.
  3. Public Accountability: The slashing event is recorded on-chain with the associated Nullifier. While the user’s identity remains hidden, the community can audit the rate at which the Server burns stakes and the posted evidence for these burns.
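The dual-stake rules can be modeled as a small state machine. This is an illustrative Python model of the contract logic, not contract code; `StakeAccount` and its method names are hypothetical:

```python
class StakeAccount:
    """Toy model of the dual-stake contract logic (illustration only)."""
    def __init__(self, rln_stake: int, policy_stake: int):
        self.D = rln_stake     # claimable on proof of double-signaling
        self.S = policy_stake  # burnable on policy violation, never claimable
        self.burn_log = []     # public on-chain record: (nullifier, evidence, amount)

    def claim_rln_stake(self, revealed_k_valid: bool, claimant: str) -> int:
        # Anyone with a valid double-signal proof (revealed k) can claim D
        assert revealed_k_valid, "requires proof of double-signaling"
        amount, self.D = self.D, 0
        return amount  # paid out to claimant

    def slash_policy_stake(self, nullifier: str, evidence: str) -> None:
        # Server burns S; it cannot redirect the funds to itself
        amount, self.S = self.S, 0
        self.burn_log.append((nullifier, evidence, amount))
```

The `burn_log` is what makes the server auditable: observers can compare the burn rate and posted evidence against the server's stated policy.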

Alternative Logic: Homomorphic Refund Accumulation

For the updated spec that presents only the new logic throughout, see here.

As an alternative to maintaining an ever-growing list of refund tickets, we can use additively homomorphic encryption (e.g., Pedersen Commitments or Lattice-based HE for post-quantum security). This design allows the server to update a user’s total credits without learning the total balance, while keeping the client-side data and ZK circuit complexity constant.

Primitives

  • E(R): A homomorphic encryption of the user’s total refunds (R) received so far.
  • \sigma_{srv}: A server-issued signature over the current encrypted total E(R).
  • r: The specific refund value for the current request (where r = C_{max} - C_{actual}).

Updated Logic Flow

Passive Collection (Server-Side Update)
Instead of the user storing individual receipts, the server performs the credit update homomorphically during the settlement phase:

  • Computation: The server homomorphically adds the new refund r to the user’s provided commitment: E(R_{new}) = E(R) \oplus E(r).
  • Attestation: The server signs the new ciphertext: \sigma_{new} = \text{Sign}_{srv}(E(R_{new})).
  • Delivery: The user receives (E(R_{new}), \sigma_{new}, r) and updates their local plaintext balance R and blinding factors accordingly.
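The homomorphic update step can be illustrated with a Pedersen-style commitment. The group below (multiplicative, mod a Mersenne prime, with fixed generators) is a toy stand-in; production systems would use an elliptic-curve group with independently sampled generators:

```python
import secrets

P = 2**127 - 1  # toy modulus, illustration only
G, Hh = 2, 3    # assume independently chosen generators in production

def commit(value: int, blind: int) -> int:
    """Pedersen-style commitment: hides `value` under blinding factor `blind`."""
    return pow(G, value, P) * pow(Hh, blind, P) % P

# Additive homomorphism: E(R) (+) E(r) = E(R + r).
# The server multiplies ciphertexts; blinding factors add on the user side.
R, sR = 40, secrets.randbelow(P)
r, sr = 15, secrets.randbelow(P)
combined = commit(R, sR) * commit(r, sr) % P
assert combined == commit(R + r, sR + sr)
```

The server only ever sees the ciphertexts, yet the user can later open `combined` inside the ZK circuit by proving knowledge of R + r and sR + sr.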

Request Generation (Privacy Wrapping)
To maintain anonymity, the user does not reveal the signature or commitment directly to the server. Instead, they are “wrapped” as private witnesses within the ZK-STARK \pi_{req}:

  • Signature Verification: The circuit verifies that the provided \sigma_{srv} is a valid signature from the server’s public key over E(R).
  • Commitment Opening: The circuit verifies the user knows the secret opening (the value R and blinding factors) for the commitment E(R).
  • Solvency Constraint: Same as before, the circuit asserts (i + 1) \cdot C_{max} \le D + R.
16 Likes

Really cool approach. Was looking at similar directions but got blocked by cost and efficiency.

1 Like

The system above is nice CS, but there is really no market for it.

It would be useful to conduct a basic market study to determine who actually needs this, otherwise it is a textbook example of a solution in search of a problem.

People who truly want privacy will likely choose on-site compute. There is little point in hiding identity if the prompts are not protected.

Inference costs per call are decreasing rapidly due to ASICs, so metering per request is unlikely to be something people will care about in the long term.

3 Likes

I would prefer a service with ZK payment over one without.

I hate monthly billing for AI inference and refuse to use anything that isn’t metered. Electricity has scaled by many orders of magnitude, and metering still makes sense at every step along the way.

I have looked into on-site compute for LLMs, and while you can easily buy fast hardware at consumer prices, you cannot fit modern top-tier LLMs into memory on a budget (on the order of $50,000 to host a modern SOTA LLM), and memory requirements are trending up, not down. I have a high-end machine I use for local inference, but the capabilities of my local model pale in comparison to what I can do on $200,000 worth of VRAM. I don’t use it enough to fully utilize such a machine, so even if I could afford a heavy hitter like that, it would be a waste of money: the hardware would go out of date before I got my money’s worth compared to a shared resource.

2 Likes

I feel like pay-per-request is a notably different use case from one that needs rate limiting. Most providers will happily accept as much spam as you want to send as long as you are paying for every request. They can set up auto-scaling, so more requests just mean more money for them, and it doesn’t negatively impact other users.

Separately, free services benefit greatly from a bonded rate limit, where if a user wants to spam they have to lock up a very large amount of money because spamming too much with a single bond will result in slashing, so spammers have to bond millions of accounts to spam (thus requiring millions in lockup).

It is unclear to me what the scenarios are where you need both of these at the same time. Either you have a pay-per-request service where spam isn’t an issue, or you have a free service where there is no payment per request.

Could these be two separate protocols, each optimized for their relevant use case?

4 Likes

Two observations from the inference serving side (I used to work as a machine learning engineer at Hugging Face, @omarespejel in X):

1. The refund mechanism leaks more than refund values.

The server doesn’t just observe C_max - C_actual. In production LLM inference, each request produces a rich feature vector: output token count, time-to-first-token (which correlates with input length and KV cache state; vLLM’s automatic prefix caching is the canonical example), generation latency, and if the server uses speculative decoding, the draft-model acceptance rate, which varies systematically by prompt domain and task type (see “The Disparate Impacts of Speculative Decoding,” arXiv:2510.02128). Over N requests, a straightforward clustering algorithm on these features re-links anonymous requests to the same user, even with perfect nullifier unlinkability. This is traffic analysis, same class of attack as flow correlation in Tor (DeepCorr achieves ~96% correlation accuracy from flow metadata, arXiv:1808.07285).

This isn’t theoretical. vLLM’s chunk-based prefix caching had a documented timing side channel (CVE-2025-46570, GHSA-4qjh-9fv9-r85r), where cache-hit timing differences achieved an AUC of 0.99 with 8-token prefixes, enough to verify whether two requests share context. Patched in vLLM 0.9.0, but the fundamental issue is architectural: any shared-cache inference server leaks request similarity through timing unless explicitly mitigated.

2. We can eliminate the refund circuit entirely, and probably should.

Instead of the server issuing signed refund tickets for C_max - C_actual (which requires the ZK circuit to verify server signatures and sum refund accumulators), have the user commit to an output token budget T_out from a small set of fixed classes (e.g., 256 / 512 / 1024 / 2048 tokens). The server generates up to T_out tokens and charges a flat price(T_in_class) + price(T_out). Users select from the same fixed set of input-length classes. Each (input class Ă— output class) cell provides k-anonymity; every request in a cell looks identical from a billing perspective.

No refund, no variable signal, no server-signed tickets, no accumulator, no refund summation circuit. The protocol gets dramatically simpler. The trade-off is ~20-40% cost overhead due to unused token budget, but inference costs are dropping fast enough that this is tolerable, and it’s a strict improvement on the privacy/complexity Pareto frontier.

This also resolves a trust assumption in the current design: with variable-cost refunds, the server reports C_actual and the anonymous user cannot dispute without deanonymizing. A malicious server can under-report refunds to extract surplus. With flat pricing per class, there’s nothing to misreport.

For the remaining timing side channels (TTFT, generation latency), the combination of quantized input classes and padded output results in the server seeing approximately the same resource profile for every request in a given cell.
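A minimal sketch of the class-based pricing suggested above, with hypothetical class sizes and prices; every request in the same (input class, output class) cell pays the same flat amount:

```python
import bisect

IN_CLASSES = [256, 512, 1024, 2048]   # illustrative token-budget classes
OUT_CLASSES = [256, 512, 1024, 2048]
PRICE = {256: 1, 512: 2, 1024: 4, 2048: 8}  # hypothetical flat price per class

def pick_class(tokens: int, classes) -> int:
    """Round a token count up to the smallest class that covers it."""
    idx = bisect.bisect_left(classes, tokens)
    if idx == len(classes):
        raise ValueError("request exceeds largest class")
    return classes[idx]

def quote(t_in: int, t_out_budget: int) -> int:
    """Flat price: the billing signal reveals only the (input, output) cell."""
    return PRICE[pick_class(t_in, IN_CLASSES)] + PRICE[pick_class(t_out_budget, OUT_CLASSES)]
```

The cost overhead mentioned above is exactly the gap between a request's true token count and the class it rounds up to.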

1 Like

Well, even if it is metered, there is little need to use blockchain or ZK. You simply pay OpenAI monthly. OpenAI already sells credits without blockchain. I do not personally see why OpenAI needs to switch to blockchain; in fact, they will not be able to do so because of KYC/AML.

As far as anonymous agents selling services to other anonymous agents on blockchain, this will probably not be something society will allow; the moment this happens, you get lots of really bad agents. You do need some sort of KYC/AML, whether people like it or not, both for agents and for users.

IMHO the problem is unsolvable, in a sense that if someone uses this service for a long time the context and the prompts will de-anonymize the user anyway.

Most large corps use Gemini and Co-Pilot and could not care less about privacy since they anyway have their data on Google Cloud or Microsoft Cloud, so adding Gemini or Co-Pilot to the equation does not change things much for them.

In cases where privacy is needed (like government) they will just run their own models. Once models settle, the inference costs on specialized ASICs will be tiny, even compared to GPUs of today.

1 Like

If you can deposit/withdraw into a contract anonymously (and therefore break linkability), and you trust the server providing the API, could a simpler design be a state channel between the client and the server? At least this allows you to break the direct link to a web2 identity, and avoids limits around a transaction per request.

A state channel would certainly be simpler, but it would still correlate requests with each other. The big win from de-correlating requests is that the vendor cannot build a profile of you over time. The profile they have is limited to each individual request.

Pragmatically this means that if you ask an AI about a local restaurant, and then in another context window you ask it how to overthrow the Iranian government, the LLM provider cannot connect the two, so they don’t know that the person trying to overthrow the Iranian government is also a person who likes to eat at that particular restaurant.

1 Like

I don’t think this proposal involves a blockchain? Perhaps I missed that in the technical details though. More generally though, I don’t expect OpenAI, Anthropic, or Google to immediately adopt something like this. The hope would be that more privacy focused providers like PPQ.ai and venice.ai adopt it, followed by traditional third party providers like together.ai, and then maybe in a bright future it gets normalized to the point where OpenAI, Anthropic, and Google are forced to adopt it to avoid losing all of their customers.

1 Like

Great post. The core pattern here (deposit once, then transact instantly many times with privacy, unlinkability, and cryptographic double-spend protection) is something we published in 2022-2023 during my PhD at COSIC, KU Leuven.

“Nirvana: Instant and Anonymous Payment-Guarantees” (ePrint 2022/872) and “Reusable, Instant and Private Payment Guarantees for Cryptocurrencies” (ACISP 2023, ePrint 2023/583).

Your RLN approach and our randomness-reusable threshold encryption (RRTE) solve the same problem with different primitives. Honest users stay anonymous and unlinkable, double-spenders get revealed. Same guarantees, different crypto machinery.

This research became the basis for 4Mica (check out 4mica.xyz), a credit clearing layer for autonomous agents. We’re live on testnet with SDKs in Rust, Python, and TypeScript. The deposit, cryptographic payment guarantee, instant execution, batched settlement flow is already working, and I will be releasing a demo very soon on X.

The variable-cost refund model and dual staking for policy enforcement are useful additions, especially the burn-not-claim separation which removes the server’s incentive to fabricate violations. These are composable on top of existing credit infrastructure like ours since the privacy layer and the settlement layer are architecturally separable.

Curious how you see this extending to agent-to-agent payments where both sides are autonomous. That’s where we’ve been focused and where the policy violation framing gets less clear.

Prompts will deanonymize the user anyway

Fair point. LLM-based stylometric attribution is real and improving (see “De-Anonymization at Scale,” arXiv:2601.12407, tournament-style authorship matching across tens of thousands of candidates). But this protocol solves payment unlinkability, not content anonymity. Those are separable layers. You don’t reject HTTPS because browser fingerprinting exists.

No point hiding identity if prompts aren’t protected

Agreed, but again, separable layers. GPU TEE enclaves handle prompt confidentiality; this protocol handles payment unlinkability. Neither alone is sufficient.

1 Like

Nice writeup. We’ve been building this on the Zeko rollup, so we will add in the credits model and share it here as soon as it’s available.

But one question: what is the UX and proving-speed tradeoff of RLN vs. letting the models handle rate-limiting? Is it the design intent, or user demand, to be slashed and still remain anonymous?

1 Like

This was a very interesting read, and I really like how elegantly the accounting model and rate-limiting can be combined. Thanks!

Curious to know if you have any thoughts on how to prevent users from self-slashing? In other words, I assume the servers will periodically claim payment by submitting some batch of proofs. What is to prevent the user from “slashing” themselves and recovering D after getting some utility from the server and before the latter has claimed any payments? One solution might be to time-lock slashed funds to allow for a claims dispute, but you might have had some thoughts about this already.

1 Like

Added a solution for this; it doesn’t require fixing T_out and generalizes to different APIs.

1 Like

@dcrapis, thanks for the reply

Two observations on the v2 flow:

1. E(R) linkability. The server computes E(R_new) = E(R) + E(r) at settlement, so it necessarily sees E(R). Since it signed that exact E(R) in the previous round, it can correlate submissions across requests. The spec states E(R) is “wrapped in the ZK proof to ensure unlinking” (Request Generation, step 3), but the Submission payload lists the current E(R) as sent to the server in the clear. I wrote a short simulation (gist.github.com/omarespejel/c3f4f2aa12b1de10467601d77d0e6232) showing full per-user chain recovery from the settlement log alone. Re-randomizing E(R) before each submission and proving equivalence inside the ZK proof would resolve this. Pedersen commitments support re-randomization natively, but the server signature over E(R) must also survive re-blinding, requiring something like BBS+ or blind signatures. Worth specifying in the protocol flow.

2. Parallelization. v1 allows generating tickets i, i+1, i+2 simultaneously, but in v2 E(R) only updates after settlement. Parallel requests therefore carry the same stale E(R). Does the solvency check still hold, or does the deposit need to be over-provisioned by batch_size * C_max?
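The re-randomization suggested in point 1 can be sketched with a toy Pedersen-style commitment: multiplying by Hh^delta yields a fresh ciphertext for the same value, with the user folding delta into their blinding factor. The group and generators are illustrative; in production this would be an elliptic-curve Pedersen commitment plus a signature scheme (e.g., BBS+) that survives re-blinding:

```python
import secrets

P = 2**127 - 1  # toy modulus, illustration only
G, Hh = 2, 3    # assume independently chosen generators in production

def commit(value: int, blind: int) -> int:
    return pow(G, value, P) * pow(Hh, blind, P) % P

def rerandomize(c: int, delta: int) -> tuple[int, int]:
    """Return a fresh-looking commitment to the same value; caller keeps delta
    so its opening becomes (value, blind + delta)."""
    return c * pow(Hh, delta, P) % P, delta

value, s = 77, secrets.randbelow(P)
c = commit(value, s)
c2, delta = rerandomize(c, secrets.randbelow(P))
assert c2 == commit(value, s + delta)  # same committed value, new blinding
```

The ZK circuit then proves the re-randomized ciphertext opens to the same R that the server signed, without revealing the signed ciphertext itself.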

I have bandwidth to contribute beyond the simulation above. Would be glad to work on a PoC implementation of either variant, or help formalize the re-randomization step if that would be useful. The padded-class approach from my earlier comment and the homomorphic E(R) approach could work well as complementary protocol modes depending on the use case

Thank you.

Explicitly added re-randomization to the v2 note for unlinkability.

For parallelization in the HE case, over-provisioning for parallel requests is already forced by the need to pick incremental ticket indices (so there is no need for batch_size * C_max). However, if we want to limit refund-aggregation operations on the user side, parallelization is not straightforward. I removed parallelization from the v2 note for now; we’ll need to think through the trade-offs.

1 Like

How would you handle W-9 forms in this case? I would imagine most service providers are simply trying to run a legal business and be compliant regarding taxes.

@dcrapis, here is the demo with most of your proposal implemented on Zeko. What do you think?

on github: zeko-labs/developer_demos

there’s also a UI I can share

Note: it doesn’t include the rate-limiting and slashing anonymization (it can be done, but with greater complexity and slower proof generation). Some other decisions around hosting and UI admin were made for demo expediency (they can be improved later for production by others).