The road to Post-Quantum Ethereum transactions is paved with Account Abstraction (AA)

Thanks to Nicolas Bacca, Vitalik Buterin, Nicolas Consigny, Renaud Dubois, Simon Masson, Dror Tirosh, Yoav Weiss and Zhenfei Zhang for fruitful discussions.

This is Part 3 of our series exploring the feasibility of implementing a post-quantum signature scheme for Ethereum. In Part 1, we discussed the fundamental challenges and considerations involved in transitioning Ethereum to a quantum-resistant future. In Part 2, we took a deep dive into Falcon, analyzing its strengths, weaknesses, and the practical hurdles of integrating it into Ethereum’s transaction framework. In this installment, we build on that foundation by exploring how account abstraction (AA) can be leveraged to integrate Falcon into Ethereum. We’ll examine the architectural changes required, the benefits of using AA for post-quantum security, and the potential challenges in making this approach viable.

Did you say ERC-4337?

When discussing account abstraction (AA), the natural conclusion is to think about ERC-4337, as it is currently the most prominent and widely adopted approach to enabling AA on Ethereum. ERC-4337 provides a way to implement smart contract wallets without requiring changes to the Ethereum protocol, making it a strong candidate for integrating post-quantum signature schemes like Falcon.
In particular, we can take inspiration from the SimpleWallet smart contract or from smart contracts leveraging RIP-7212 to explore how Falcon can be efficiently integrated within the ERC-4337 framework.

SimpleWallet

The SimpleWallet is a smart contract-based wallet designed to implement Account Abstraction on Ethereum. Instead of using traditional private keys for transactions, a SimpleWallet smart contract allows for greater flexibility by enabling custom validation logic and potentially supporting new cryptographic signature schemes like Falcon. For instance, in the context of post-quantum Ethereum, the SimpleWallet could be adapted to work with Falcon signatures, allowing for more flexible, secure, and future-proof transaction processing. This smart contract approach would allow Ethereum accounts to evolve and support post-quantum cryptography without requiring changes to the underlying Ethereum protocol.

FalconSimpleWallet

A FalconSimpleWallet would be a modified version of SimpleWallet that replaces ECDSA with Falcon-based cryptography. Unlike ECDSA, “plain” Falcon does not support public key recovery from a signature—meaning that ecrecover cannot be used. Instead, a Falcon-based wallet must verify signatures directly against a stored public key.
However, as Renaud Dubois pointed out, Section 3.12 of the Falcon paper introduces a key recovery mode. This method allows for public key recovery, but it comes at the cost of doubling the signature size. While this could provide a potential workaround for ecrecover-like functionality, the increased signature size presents additional considerations for on-chain efficiency.

This difference means that Falcon-based wallets need an explicit mapping of Ethereum addresses to public keys, requiring a different approach to authorization. Rather than relying on ecrecover to derive the signer’s identity, a FalconSimpleWallet would explicitly store and reference public keys for verification.
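To make the difference concrete, here is a minimal sketch of that stored-key validation path. It is illustration only, not the actual FalconSimpleWallet code: the IFalconVerifier interface and the FalconWalletSketch contract are hypothetical stand-ins for a Solidity Falcon library or a future precompile.

pragma solidity ^0.8.20;

// Hypothetical verifier interface; in practice this would be a Solidity
// Falcon library (e.g. the ZKNox implementation) or a future precompile.
interface IFalconVerifier {
    function verify(bytes calldata publicKey, bytes32 messageHash, bytes calldata signature)
        external view returns (bool);
}

contract FalconWalletSketch {
    IFalconVerifier public immutable verifier;
    // The Falcon public key is committed at deployment: with no
    // ecrecover-style recovery, the key itself must be referenced on-chain.
    bytes public falconPublicKey;

    constructor(IFalconVerifier _verifier, bytes memory pubKey) {
        verifier = _verifier;
        falconPublicKey = pubKey;
    }

    // Would be called from validateUserOp in an ERC-4337 account.
    function _validateSignature(bytes32 userOpHash, bytes calldata signature)
        internal view returns (bool)
    {
        // Verify directly against the stored key instead of deriving a
        // signer address from the signature.
        return verifier.verify(falconPublicKey, userOpHash, signature);
    }
}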

Additionally, integrating Falcon into the Ethereum Virtual Machine (EVM) requires deviating from the NIST standard implementation. Falcon relies on SHAKE for hashing, but since SHAKE is not natively supported in the EVM, we need to use a more EVM-friendly hash function, such as Keccak. This ensures compatibility and efficiency when verifying Falcon signatures on-chain.

Kudos to Zhenfei Zhang, who contributed a Keccak256-based PRNG implementation for Falcon, further bridging the gap between Falcon and Ethereum’s cryptographic stack.
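For intuition, a SHAKE-like squeeze can be emulated on-chain with counter-mode Keccak256. The sketch below (a hypothetical KeccakXofSketch library) only illustrates the expansion pattern; the exact domain separation and rejection sampling have to match the off-chain signer byte-for-byte to be interoperable.

pragma solidity ^0.8.20;

// Counter-mode Keccak256 expansion standing in for SHAKE when hashing the
// (salt, message) pair to Falcon coefficients on-chain. Illustration only.
library KeccakXofSketch {
    function expand(bytes memory salt, bytes32 messageHash, uint256 outWords)
        internal pure returns (bytes32[] memory out)
    {
        out = new bytes32[](outWords);
        // Absorb salt and message once...
        bytes32 seed = keccak256(abi.encodePacked(salt, messageHash));
        for (uint256 i = 0; i < outWords; i++) {
            // ...then squeeze 32 bytes per counter value, SHAKE-style.
            out[i] = keccak256(abi.encodePacked(seed, i));
        }
    }
}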

Show Me the Demo!

You can find the demo in FalconSimpleWallet on GitHub. This project showcases a wallet that replaces traditional ECDSA with Falcon-based verification, tailored for Ethereum’s evolving security needs.

A special shout-out to ZKNox—their exceptional work on the Falcon Solidity implementation has dramatically cut verification costs from 24M gas down to 3.6M gas. This impressive gas optimization brings post-quantum security a step closer to practical deployment on the blockchain. Kudos to ZKNox for their remarkable contribution!

The elephant in the room

While we have successfully transitioned the smart wallet signature to be post-quantum (PQ) resistant, there remains a critical issue: the bundler transaction still relies on the traditional ECDSA signature scheme. This means that even though individual user operations (UserOps) within the account abstraction framework can use Falcon, the final transaction submitted to the Ethereum mempool is still signed with ECDSA by the bundler.

To fully remove ECDSA from the transaction pipeline, changes at the L1 protocol level will likely be required, specifically via EIP-7701/RIP-7560.

(Bonus part) Batching

As mentioned in the “Gnarly” section of Part 2, there has been ongoing research into efficiently aggregating Falcon signatures, including work involving Labrador. If this approach proves efficient, we could leverage EIP-7766 (Signature Aggregation for ERC-4337) to optimize Falcon signature aggregation within the AA framework—similar to how BLS signatures are aggregated in this VerificationGateway contract.
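As a rough illustration of what that could look like, here is a schematic gateway in the spirit of the BLS VerificationGateway. Everything here is hypothetical: ILabradorVerifier and the aggregate-proof format are placeholders for whatever an efficient Falcon aggregation scheme would actually expose, and the real ERC-4337 aggregator interface differs in its details.

pragma solidity ^0.8.20;

// Hypothetical interface for a Labrador-style aggregate verifier.
interface ILabradorVerifier {
    function verifyAggregate(
        bytes[] calldata publicKeys,
        bytes32[] calldata messageHashes,
        bytes calldata aggregateProof
    ) external view returns (bool);
}

contract FalconAggregationGatewaySketch {
    ILabradorVerifier public immutable verifier;

    constructor(ILabradorVerifier _verifier) {
        verifier = _verifier;
    }

    // One aggregate proof replaces N individual Falcon verifications,
    // amortizing the verification gas across a whole bundle of UserOps.
    function validateBundle(
        bytes[] calldata publicKeys,
        bytes32[] calldata userOpHashes,
        bytes calldata aggregateProof
    ) external view {
        require(
            verifier.verifyAggregate(publicKeys, userOpHashes, aggregateProof),
            "invalid aggregate signature"
        );
    }
}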

No soup (EIP-7702) for you!

As discussed in the context of EIP-7702, the proposal might allow turning an account into an ERC-4337 account and adding Falcon support, but it still retains the ECDSA key. The problem with EIP-7702 is that the ECDSA key remains valid within this framework, which introduces a potential security risk. Even if the account starts using Falcon after setting the code, the presence of the ECDSA key leaves the account exposed. An attacker could potentially recover and misuse the ECDSA key to compromise the account.

This is why EIP-7702 is problematic from a quantum-resilience perspective: it enshrines ECDSA, which is vulnerable to quantum attacks. Instead, the focus should be on native Account Abstraction (AA), which removes any reliance on ECDSA and offers a more robust, quantum-resistant approach through smart contract wallets like the SimpleWallet solution described above.

Conclusion

In this installment, we’ve explored how Account Abstraction (AA) can be leveraged to integrate Falcon, a post-quantum signature scheme, into Ethereum. By transitioning to a Falcon-based smart wallet signature, we can ensure a future-proof, quantum-resistant approach to Ethereum transactions.

While the adoption of Falcon-based wallets within the AA framework is a promising step, the ongoing reliance on ECDSA signatures for bundler transactions still presents a challenge. Overcoming this requires protocol-level changes, likely through EIP-7701 or RIP-7560, to fully eliminate ECDSA from the transaction pipeline.

Additionally, research into signature aggregation for Falcon, as discussed in the “Gnarly” section of Part 2, presents an opportunity to further optimize Falcon’s integration in the Ethereum network, particularly with the potential adoption of EIP-7766 for ERC-4337.

However, since we are still using a smart contract for Falcon, which currently costs about 3.7M gas per transaction, the next logical step is to move toward a RIP for Falcon, which would aim to optimize its integration and bring gas costs down for practical, on-chain use.

In conclusion, while we’ve made significant progress in integrating post-quantum security into Ethereum, there are still key challenges to address at both the bundler and protocol levels to ensure a complete transition to a quantum-resistant future.


Looking at key recovery: if the public key is transmitted along with the signature for verification (whereas it is implicit in recovery mode), then the comparison between the recovery and original schemes favors the recovery version, because a public key (an incompressible polynomial, 896 bytes) is replaced by the s2 field of Falcon (which can be compressed to 630 bytes) and a hash (32 bytes).

Of course, the current experiments do not take advantage of this implicit key, but we could imagine storing a hash of this public key in the smart account’s storage, verified during deployment.
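A minimal sketch of that hash-commitment pattern (hypothetical HashCommittedKeySketch contract): only the keccak256 of the key lives in account storage, while the full key is supplied in calldata and checked against the commitment before the Falcon verification routine runs.

pragma solidity ^0.8.20;

contract HashCommittedKeySketch {
    bytes32 public immutable publicKeyHash; // committed at deployment

    constructor(bytes32 _publicKeyHash) {
        publicKeyHash = _publicKeyHash;
    }

    function checkKey(bytes calldata publicKey) public view returns (bool) {
        // Bind the calldata key to the stored commitment; the bound key is
        // then handed to the Falcon verification routine.
        return keccak256(publicKey) == publicKeyHash;
    }
}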


There are two EIPs proposing a precompile for the current NIST Falcon variant.

Of the two, EIP-7619 appears to be closest to what AA would need and the most versatile, as it accepts the entire message rather than a pre-hashed message (note that Falcon salts its messages prior to signing, with a signature-specific salt). It is the EIP I am attempting to revive for a precompile.


Hi @asanso, thank you for this excellent series exploring post-quantum Ethereum and Falcon integration via account abstraction.

I’m working on a hybrid RNG architecture designed to bridge physical entropy sources, chaotic amplification, and cryptographic extraction (SHAKE/Keccak), compatible with Ethereum’s Keccak-based commitments and intended for seeding PQ signatures like Falcon or providing randomness for ERC-4337 workflows.

I’ve published a sanitized specification + demo repo (no private parameters): hybrid-chaos-quantum-rng. I’d be happy to share the full implementation under NDA or collaborate on integrating it with your FalconSimpleWallet designs.

Looking forward to feedback and discussion.

Nomadu27


This is excellent work. I’d like to add one angle from the perspective of entropy, attestation surfaces and protocol-level PQ readiness.

Even if the signature layer inside the AA wallet becomes post-quantum (e.g., Falcon, Dilithium, ML-DSA), the overall execution path is still constrained by several classical components:

  1. Bundler ECDSA Envelope
    As noted, the ECDSA-signed bundler transaction currently becomes the lowest-security assumption. A quantum-capable adversary can break the bundler envelope long before they can break the Falcon-based UserOp. This creates a dual-layer security asymmetry where the weaker L1 signature dominates the trust model.

  2. Entropy + VRF Dependencies
    A large part of Ethereum’s security — sequencing, ordering, randomness, L2 attestation, MEV relay protocols — critically depends on classical entropy sources and ECDSA-based attestations. Without PQ-secure randomness or VRF-like mechanisms, PQ wallets still operate in a classical entropy domain with known long-term vulnerabilities.

  3. Lack of PQ-compatible commitments
    ERC-4337 relies heavily on hashing + signature verification for replay protection, domain separation and UserOp authentication. Until the comms layer (hashing + attestation) becomes PQ-safe, the end-to-end pipeline cannot be considered quantum-resistant, regardless of the user wallet scheme.

  4. Protocol alignment
    This is where RIP-7560 / EIP-7701 really matter. Native PQ envelopes on L1 would allow AA wallets + bundlers + sequencing to align under consistent PQ assumptions. Without this, the strongest primitive (Falcon) is still bottlenecked by the classical envelope (ECDSA).

Given the impressive results from the FalconSimpleWallet demo (particularly ZKNox’s gas optimizations), it feels natural to explore:

  • precompile-level Falcon verification (to eliminate the 3.7M gas cost),
  • PQ VRF constructions suitable for AA + bundlers,
  • signature aggregation benchmarks (EIP-7766),
  • and ultimately, a pathway toward RIP-level Falcon integration.

Great article — looking forward to deeper exploration of PQ attestation and entropy surfaces in future parts.


Bundler ECDSA Envelope

This is exactly why we need something like EIP-7701, i.e., AA as a protocol-level feature. We need to fully de-enshrine ECDSA from the protocol.

sequencing, ordering

BLS-based RANDAO can easily be replaced with hash-based, in fact hash-based was the original proposal. It’s just somewhat less efficient because you need to update the RANDAO value every time there’s a proposal (but that’s fine).
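For intuition, a toy hash-onion sketch of that idea (hypothetical HashOnionRandaoSketch contract, not the beacon-chain spec): each validator commits to the tip of a hash chain and reveals one preimage per proposal, which is mixed into the accumulator.

pragma solidity ^0.8.20;

// Toy hash-onion RANDAO contribution; illustration only.
contract HashOnionRandaoSketch {
    mapping(address => bytes32) public commitment; // current tip of each validator's onion
    bytes32 public randao;                         // running accumulator

    function register(bytes32 onionTip) external {
        // onionTip = keccak256 applied n times to a secret seed
        commitment[msg.sender] = onionTip;
    }

    function reveal(bytes32 preimage) external {
        // The revealed preimage must hash to the stored tip.
        require(keccak256(abi.encodePacked(preimage)) == commitment[msg.sender], "bad preimage");
        commitment[msg.sender] = preimage; // next reveal peels one more layer
        randao ^= preimage;                // hash-based mix, no signatures involved
    }
}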

L2 attestation

We need off-chain proof aggregation to make STARKs truly viable for this. See here for how it can be implemented.

MEV relay protocols

I don’t see why this can’t be quantum-resistant? E.g., ePBS can easily be made quantum-resistant.

So all of these problems have solutions, but yes they do require building out a few important components.


BLS-based RANDAO can easily be replaced with hash-based

The original hash-based RANDAO (outlined here by V) has exactly the same biasability problems (last-revealer manipulation attacks, aka selfish mixing, and forking attacks) as the current BLS-based construction. Swapping out the cryptographic component for a post-quantum secure one does not automatically solve the randomness beacon’s biasability issues. If this is a concern (I’d argue it is quite a concern), then we also need to redesign the beacon protocol itself.

Regarding the hash-based replacement for RANDAO: do you see room for hybrid constructions that mix entropy commitments with PQ signatures during the transition away from ECDSA? I’m exploring this direction conceptually and am curious about your perspective.

Following on this (and thanks for the clarification above):
one thing I am still trying to understand is whether any lightweight accountability layer during the transition could meaningfully raise the cost of last-revealer manipulation.

More concretely:

If we keep the RANDAO structure unchanged, but add a “binding reveal” step — e.g., hash-commit + PQ signature tied to the slot/epoch — does this reduce the short-term biasability at all, or does it remain essentially the same game-theoretically?

I’m not thinking about a final beacon design, just whether hybrid “commitment-plus-accountability” approaches buy us anything until a proper redesign (VDF / threshold randomness / APS-ready beacon) is ready.

Curious if you think this line of thinking has any merit,
or if the bias comes entirely from structural optionality and no signature layer can meaningfully change it.

There’s one angle I’m still trying to sanity-check:

If the biasability comes primarily from withholding optionality, then am I right that any cryptographic binding layer (PQ signatures, slot-bound commitments, etc.) is irrelevant unless it changes the payoff matrix?

In other words:

– If a validator can still conditionally reveal after seeing parts of the fork-choice state,
– and if the cost of non-reveal is indistinguishable from a missed attestation,

then no “binding reveal” construction changes anything, because the strategic surface is unchanged.

But is there any known construction where adding accountability does shift incentives even without a full VDF/threshold redesign?

I’m trying to determine whether hybrid approaches are strictly useless,
or whether there exists a narrow regime where added accountability (even off-chain evidence of selective reveals) meaningfully alters expected value for an attacker.

Curious if you’ve seen anything in that space — or if the consensus is that nothing short of redesigning the beacon protocol can influence biasability.

One aspect that I don’t think is being fully explored in this thread is the state-transition validity problem under heterogeneous signature environments once Ethereum begins introducing PQC-capable account types (whether via EIP-7701 or deeper AA enshrinement).

Even if we de-enshrine ECDSA and migrate to a PQC-first AA environment, there’s still a missing analysis for the following:

1. Hybrid-Epoch Safety Under Mixed Signature Regimes

During the transition period, block proposers will need to simultaneously validate:

  • legacy ECDSA-based transactions

  • PQC-based AA wallets (SPHINCS+, Dilithium, Picnic, SLH-DSA, etc.)

  • aggregation commitments for PQC-based attestations

  • signature-object equivalence proofs to maintain deterministic state root construction

This exposes a nontrivial state-transition race condition.
Specifically:

Ethereum has not yet defined a canonical mechanism for multi-scheme signature admission rules in the transition epoch, which means a quantum adversary could selectively target only the legacy paths and still cause proposer-level reorg leverage.

Even with ePBS + PQC upgrades, this remains unaddressed.

2. PQC-Friendly State Witness Design Is Not Defined

PQC signatures (hash-based or lattice-based) have:

  • larger public keys

  • larger signatures

  • higher verification cost variability

  • non-unique signature structures

But Ethereum’s state witness format (Verkle transition) is not yet adapted for:

  • PQ key-object encoding

  • deterministic format for PQ signature lists in bundled AA ops

  • state witness compaction under PQC objects (since SPHINCS+ can be 8–20 KB per signature)

Meaning:

Under current designs, PQC transactions will inflate witness proofs in a way that breaks the expected Verkle node size budget, unless the protocol introduces a specialized PQC-witness leaf type.

Probably we should also look at everything that relies on the recoverable-signature property.
Even Falcon’s Section 3.12 “recoverable mode” requires either:

  1. transmitting s₂ + signature hash + deterministic PRNG seed

  2. or precommitting the public key hash in contract storage

which cannot be lifted into consensus without native format standardization.

Yeah, you’re right.

There’s a huge literature on unbiasable randomness beacons (VDFs and threshold redesigns are part of that literature, as you point out). The question is which one would be suitable for Ethereum’s unique setting, with its very specific latency and efficiency requirements. This is a wide-open research and engineering question IMHO. Also part of the unbiasable randomness beacon question is the setting in which we want to solve this problem: dishonest majority? Honest majority? A recent paper shows that if you want an unbiased randomness beacon in the dishonest majority setting, then the only way to solve this problem is to use VDFs. See it here. I don’t know much about pq-secure VDFs…

Not sure how accountability could solve the withholding/selfish mixing manipulation attacks in the current RANDAO design.

  1. Offline validators: There are legitimate reasons why a validator did not publish its block. Conversely, a RANDAO-manipulating validator could always say that it just happened to be offline or was DoS-ed, and that is why it did not publish its block and the corresponding RANDAO randomness contribution. From the outside world, these two scenarios are indistinguishable.
  2. Impossibility of issuing “manipulation proofs”: Even if you could prove to a smart contract that XYZ did not publish their block, it’s not obvious whether they did so in order to manipulate the beacon. Since the public does not see the hidden, non-published RANDAO contribution(s), it cannot recompute the beacon state with the hidden RANDAO contribution. See Section 3.4 here, where I explain this better.

Thanks for the pointer — I read the paper you linked.

A takeaway that clicked for me is the following:

even if a commitment is fully binding (hash-based or PQ-signed), the proposer still faces the same binary branch — reveal vs withhold.

Because this decision sits inside the consensus flow itself, the optionality — and therefore the bias — survives regardless of how strong the commitment layer is.

The signature layer can certify what was committed,

but it cannot remove the fact that skipping a reveal produces a different state transition and therefore a different proposer-selection outcome.

That structural branch seems to be the real source of bias, not the cryptographic primitive.

So hybrid “commitment + PQ attestations” may raise accountability, but it does not meaningfully change the game-theoretic incentive that the paper describes.


Thanks — this clarification helps frame the transition path much more clearly.

I’m still trying to make sure I’m reasoning correctly about the entropy side of the story, especially in light of what @seresistvanandras wrote about biasability being structural rather than cryptographic.

When you say that BLS-based RANDAO can be replaced with hash-based, is it fair to think of it as:

  • from a post-quantum point of view, the randomness generation itself can already be made PQ-safe with hashes, while

  • the remaining risk is mostly about structural optionality (last-revealer, selfish mixing, forking attacks), not about the hardness assumption?

If that’s the right mental model, then my earlier concern about the “classical attestation layer limiting the security of the entropy pipeline” may be misplaced, because hash-based RANDAO would already be PQ-sound at the primitive level — and the remaining problems are economic / structural rather than quantum.

On the other hand, if there is still an intermediate phase where RANDAO contributions and/or their inclusion are effectively governed by classical signatures and incentives (e.g. during validator / client migration), I’m wondering whether you see any value in lightweight accountability layers during that window — for example:

  • dual-signed commitments (classical + PQ) bound to slot / epoch, or

  • slot-scoped VRF-style proofs that make manipulation more forensically evident, even if they don’t fix biasability.

Not as a final beacon design, just as transitional “entropy hardening” modules that raise the cost of abuse until a full APS-ready / VDF / threshold randomness redesign lands.

Curious whether you think this line of thinking is conceptually useful, or whether the structural optionality of current RANDAO makes such intermediate layers essentially ineffective regardless.


I pretty much agree with everything you wrote in your last two comments.

I would frame the two problems (pq-security and beacon unbiasability) as orthogonal problems. Pq-security is a theoretical-cryptographic problem of the constituent cryptographic algorithms (signatures, randomness contributions, commitments, (verifiable) random functions, etc.), while unbiasability is a protocol-level problem that already assumes the above-mentioned pre- or post-quantum secure building blocks.

PQ-security of the beacon is easy to solve, as Vitalik pointed out above, by swapping out BLS signatures as randomness contributions for preimages in a validator-generated hash-chain. This is likely even faster than the pre-quantum BLS-based RANDAO construction!

Unbiasability is a completely different beast. There are already proposals to try to minimize the biasability of the RANDAO. See, e.g., this great ethresear.ch post.

With regards to a lightweight accountability layer. Honestly, I don’t see much value in it.
Pragmatically, one would correct the design of Ethereum’s distributed randomness beacon once and for all. I don’t see much value in incremental patchwork-style approaches on this matter. These are my two arguments to back this up:

  1. Dual-signed commitments: the addition of dual commitments (pre- and post-quantum) does not solve any of the biasability issues (selfish mixing, forking attacks), but it makes the beacon less space- and time-efficient due to the increased cryptographic workload.
  2. “Proving” beacon manipulation: again, this is not a pq-security issue. As I argued above and also in our paper, forking attacks are provable and evident to the public, while selfish mixing cannot be made accountable, as there is missing information on-chain, i.e., the withheld RANDAO randomness contributions that would allow us to recompute the necessary counterfactual RANDAO states that only the manipulating adversary sees, given her hidden randomness contributions. Thus, selfish mixing cannot be made accountable in a publicly verifiable manner (unless all the RANDAO contributions are visible to everyone, which is not the case in selfish mixing by definition).

Thanks — and thanks for pointing to the Unpredictable RANDAO post.
That’s exactly the kind of structural redesign I had in mind for L1.

If I understood correctly, their committee-based DKG + threshold beacon cuts the adversary’s choice space from 2^k down to roughly k+1 states, but they explicitly leave two things open:

  • Section 5.1: quantitative economic analysis of the residual bias

  • Section 5.2: liveness concerns if <2/3 of the committee is online (proposals may halt)

That’s precisely why I’m asking about off-L1 environments.

For L2 sequencers, AA bundlers, and prover assignment in zk-rollups:

  • operator sets are small (often single operator or 3–5 nodes),

  • there is no strongly adversarial multi-party setting that justifies full DKG,

  • latency and liveness are critical (you can’t afford to block while a committee syncs),

  • but long-term PQ soundness and auditability still matter.

In those settings, a simpler VRF-style primitive seems more practical than a full DKG+threshold construction:

  • single-signer (or 2-of-3) randomness,

  • dual-signed (ECDSA + ML-DSA-65) for PQ robustness during the migration window,

  • slot / batch-bound commitments for basic accountability,

  • sub-millisecond generation and trivial integration on L2 or in bundlers.

This would not try to solve L1 beacon unbiasability – it would just be “good enough + PQ-ready randomness” for off-L1 components that already trust a small operator set.

So my concrete question is:

  • Do you see value in such a lightweight, PQ-ready VRF as a complementary building block for L2 sequencers / bundlers / provers, where the threat model is different from L1 beacon randomness?

Or, in your view, should even these off-L1 components eventually converge on full DKG+threshold designs (like Unpredictable RANDAO), despite the added complexity and liveness surface?

Happy to be told I’m over-indexing on “lightweight VRF” here – just trying to understand where you’d actually draw the line between “we need a full threshold beacon” and “a simple PQ-VRF is enough”.

Sure! Such a lightweight beacon may make sense for L2s, rollups, etc. It’s a free market, right? :smiley: Everybody is welcome to deploy their own randomness beacon that fits their adversarial model, latency, efficiency, and security requirements.

But at the end of the day, mostly for composability and interoperability, I’d assume that even L2s would want to have access to a global, unbiasable randomness beacon on the L1 for certain applications. (Obviously the L1 must have a source of randomness for selecting the block proposers from the validator set in a fair manner. The L1 needs randomness, as there is no deterministic and secure decentralized consensus protocol (even in synchrony), as was shown by Lewis-Pye and Roughgarden.)


Thanks — your framing helps draw the boundary between local randomness beacons and the global, unbiasable L1 beacon much more clearly.

I fully agree that for composability and interoperability, the L1 eventually needs a randomness construction with the “minimal adversarial choice space” possible. The Unpredictable RANDAO proposal gets this down from 2^k (choice among all possible contributions) to roughly k+1 states, which is a significant structural improvement.

Where my question becomes more concrete is in the off-L1 roles — L2 sequencers, AA bundlers, and zk-prover assignment — where the adversarial setting is fundamentally different:

  • operator sets are typically single signer or small committees (3–5 nodes)

  • there is no multi-party DKG requirement

  • latency budgets are sub-millisecond, not seconds

  • sequential consistency is critical (batch → proof → settlement)

  • liveness failures are catastrophic (e.g., sequencer rotation deadlock)

From this viewpoint, I have been exploring a lightweight PQ-ready VRF that intentionally does not attempt to address the L1 beacon problem. Instead, it tries to minimize complexity while providing explicit PQ migration for components that already trust the operator.

Below is the formal model to clarify what I mean.


1. Biasability and Choice-Space Considerations

For L1 beacons, the adversarial choice-space is typically defined as:

Choices_L1 = { all possible withheld contributions }
            ≈ 2^k  (committee of size k, full optionality)

Unpredictable RANDAO reduces this to something like:

Choices_threshold ≈ k+1

This reduction is meaningful only when:

  1. There is a multi-party adversary,

  2. The protocol is permissionless,

  3. Withholding is economically rational and hard to detect.

But for L2 sequencers, bundlers, or prover rotation:

  • the adversarial choice-space is effectively 1 (single operator),

  • the operator is already trusted to produce ordered batches,

  • and sequential consistency dominates over biasability.

In other words:

Choices_L2 ≈ 1

So investing in a k-of-n threshold protocol may not reduce adversarial capability relative to operator trust assumptions — but will increase latency, fragility, and liveness exposure.

This is the main reason I have been exploring the lightweight PQ-VRF direction.


2. Formal Lightweight PQ-VRF Construction (sketch)

This construction is deliberately minimal, curve-free, and deterministic.

Generation:

seed          ← sealed entropy seed (Kyber-encrypted)
msg           ← domain-separated slot / batch identifier
C             ← keccak256(seed || msg)
k1            ← SHAKE256(C, 32)
k2            ← BLAKE2s(k1 || msg)
k3            ← keccak512(k2)
Y             ← FisherYates(k3)      // final VRF output
π             ← (C, k1, k2, timestamp, metadata)
σ_ECDSA       ← Sign_secp256k1(Y)
σ_MLDSA       ← Sign_ML-DSA-65(Y || π)

Verification:

1. Verify PQ signature:
   ML-DSA-65.Verify(pub_pq, Y || π, σ_MLDSA) = 1

2. Recompute deterministic path:
   C'  = π.commitment
   k1' = SHAKE256(C', 32)
   k2' = BLAKE2s(k1' || msg)
   k3' = keccak512(k2')
   Y'  = FisherYates(k3')

3. Verify classical signature if needed:
   recovered = ecrecover(keccak256(Y), σ_ECDSA)
   recovered == trustedSigner

4. Accept iff:
   (Y' == Y) AND ML-DSA-65 valid

No curves, no DLOG problems, no hash-to-curve, no pairings — only hash functions + lattice signatures.

This minimizes structural attack surface in the post-quantum period.


3. PQ Migration Model

Instead of a single cutover, the migration is explicitly staged:

Phase 0  — Classical compatibility
           Y, σ_ECDSA

Phase 1  — Dual signatures
           Y, σ_ECDSA, σ_MLDSA

Phase 2  — PQ-first verification
           σ_MLDSA verified on-chain (precompile or contract lib)

Phase 3  — PQ-only
           Y, σ_MLDSA

This maintains auditability of historical randomness even after ECDSA becomes breakable.

I think this matters for cross-rollup reproducibility and long-term proof verification.


4. Why I’m Asking: Where Does This Fit?

Based on your reply, it sounds like:

  • Yes: A lightweight beacon / VRF makes perfect sense for many L2 and AA use cases.

  • Yes: The L1 beacon must remain global, unbiasable, and economically consistent.

  • Yes: The two problem domains are orthogonal (biasability vs. PQ-security).

What I’m trying to establish is whether the ecosystem sees value in a clean, minimal PQ-ready VRF for:

  • L2 sequencer rotation

  • Bundler selection

  • zk-prover assignment

  • N-of-1 operator domains where DKG is unjustified

  • Long-term auditability (dual-signed VRF outputs)

And whether this should be pursued as a complementary building block, or whether even these domains are expected to eventually converge to threshold/DKG constructions as well.

For context, the prototype I am working on already has:

  • dual ECDSA + ML-DSA-65 signatures

  • Kyber-based seed sealing

  • deterministic curve-free VRF construction

  • <20ms generation latency

  • on-chain verifiers for the classical part

But again — I see it as complementary to L1 randomness work, not competing with it.

Would love your view on whether this direction is useful, or if you think the long-term trajectory of the ecosystem pulls everything — even small trust domains — toward threshold protocols.

I wish I had found this thread sooner, would have saved me so much time lol. I wrote a Solidity contract that does Falcon-1024 verification and I even had a transfer on mainnet; the transaction id is 0x22d89bb12e9f50b1c8b890733b5eda50f1be2ebcd8e4c598ba5bdbea73cbd520 (it was not optimized for AA gas since it was a quick POC). My original contract used 40M gas but I got it down to just below 10M gas (though that includes signature extraction).

Why did you store the public key as a uint256 array and not a uint16 array? The storage costs are dramatically reduced that way.

I also went with a keccak variant but mine iterates keccak functions with the userOpHash, a domain, and the salt. I might look to replace what I have with your implementation though, I do know there’s room for improvement on what I’ve done.

One thing I did to reduce gas was to do the NTT transformation on the public key when it is first loaded, instead of every time a transaction is verified; that might reduce your gas by about half a million.
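A minimal sketch of that caching pattern, assuming a hypothetical IFalconNTT helper that exposes the forward transform (the concrete ZKNox or custom NTT routine would take its place):

pragma solidity ^0.8.20;

// Pay the forward NTT once at key registration so per-verification calls
// consume the already-transformed key. Illustration only.
interface IFalconNTT {
    // Hypothetical forward NTT over Z_q[x]/(x^n + 1), q = 12289.
    function forward(uint256[] calldata coeffs) external pure returns (uint256[] memory);
}

contract CachedNttKeySketch {
    uint256[] public nttPublicKey; // public key stored already in the NTT domain

    constructor(IFalconNTT ntt, uint256[] memory pubKeyCoeffs) {
        // One-time cost here instead of on every verified transaction.
        nttPublicKey = ntt.forward(pubKeyCoeffs);
    }
}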

I was going to wait until about April before I made my github public but maybe I’ll just do it sooner. It would be nice to compare notes.


Thanks for sharing this — genuinely impressive work getting Falcon-1024 verification down to the ~10M gas range. That’s far below what I’ve seen in most experimental PQ-verification attempts on EVM. The NTT precomputation on first load is a smart idea — I hadn’t considered caching the transformed pubkey to avoid repeated polynomial domain conversions. That alone explains a huge part of your delta.

Re: public key layout
For ML-DSA-65 I’m currently only passing keccak256(pubkey) into the verification path (the full key never appears in calldata), so I didn’t have to store the raw array on-chain. But if I end up experimenting with Falcon, your point about uint16 vs uint256 is spot-on — the packing inefficiency becomes brutal otherwise.

Re: hashing
Right now I use a single domain-separated keccak256 hash for the control message because my use case is “proof-of-control for validator recovery” rather than signature aggregation, so the hashing pressure isn’t high. But I’m definitely interested in alternative absorb/mixing patterns — especially if they help with AA userOp flows or reduce calldata footprint.

If you’re planning to publish your implementation, I’d absolutely like to compare gas breakdowns and possibly benchmark ML-DSA vs Falcon under similar conditions. I can share test vectors, my verification flow, and some of the calldata minimization tricks I’ve been using.

Let me know when you push the repo — would be great to exchange notes.

P.S. I’ll take a look at tx 0x22d89bb12e9f50b1c8b890733b5eda50f1be2ebcd8… — curious how you structured the signature extraction and public-key loading path. If you have any pointers on where to look in the trace, that would be helpful.