Native rollups—superpowers from L1 execution

RISC-V native execution

Given today’s de facto convergence towards RISC-V zkVMs, there may be an opportunity to expose RISC-V state transitions natively to the EVM (similarly to WASM in the context of Arbitrum Stylus) while remaining SNARK-friendly.

I would be quite wary of that: RISC-V (and WASM and MIPS) are poor ISAs for bigint operations and elliptic curve cryptography. For example, emulating add-with-carry in those ISAs requires 5x more operations than on x86 or ARM:

result_tmp = a + b
carry_tmp1 = result_tmp < a
result_out = result_tmp + carry_in
carry_tmp2 = result_out < result_tmp
carry_out = carry_tmp1 OR carry_tmp2
return (carry_out, result_out)
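
A minimal Python sketch of this emulation, using 64-bit limbs (the masking stands in for the hardware’s wrapping arithmetic):

```python
MASK64 = (1 << 64) - 1  # simulate 64-bit machine words

def adc_emulated(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add-with-carry emulated with comparisons only, as a carry-less
    ISA (RISC-V / WASM / MIPS) must do it."""
    result_tmp = (a + b) & MASK64
    carry_tmp1 = int(result_tmp < a)            # did a + b wrap?
    result_out = (result_tmp + carry_in) & MASK64
    carry_tmp2 = int(result_out < result_tmp)   # did adding carry_in wrap?
    return carry_tmp1 | carry_tmp2, result_out
```

A 256-bit addition chains four such limb additions, so the per-limb overhead compounds quickly in bigint-heavy code.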

Due to this inherent inefficiency, all RISC-V zkVMs implement a “dialect” (to reuse an MLIR compiler term) of RISC-V, modified with native uint256 and elliptic curve ISA extensions.
As there are different approaches to this with different tradeoffs, we should leave the complexity to the zkVMs; it’s fine to have a RISC-V + EVM dialect (or MIPS + EVM for zkMIPS, Valida + EVM for Valida, WASM + EVM for zkWASM).

5 Likes

ZK fraud proofs can alternatively replace traditional fraud proofs, obviating the need for bisection games.

1 Like

It’s worth looking into the new Polkadot VM, which runs RISC-V programs natively and could offer performance and arbitrary support for native or custom EVMs running on top of it, without the complications of a zkVM.

I think that an EXECUTE precompile that enforces that all native rollups are Mainnet EVM makes sense, but it’s not for the benefit of the ecosystem. Rollups should be empowered to experiment and push the performance envelope without needing this backwards compatibility. Having a substrate in RISC-V or WASM on which you can run an EXECUTE for different types of STFs (Mainnet EVM, Arbitrum WASM, OP EVM, etc.) would be a superpower for the ecosystem.

6 Likes

This makes sense in the longer term but this won’t work with a simple re-execution EXECUTE opcode.

1 Like

Thought I would share here, for reference in the conversation, what this could look like in practice. I’ve implemented the most naive version of the EXECUTE precompile for the ethereumjs EVM at github dot com/ethereumjs/ethereumjs-monorepo/pull/3865 (sorry, can’t post links), using the binary Merkle tree structure from EIP-7864 as the backing tree representation of the state.

State transition verification here is implemented by constructing a sparse state tree from the pre-state Merkle proof and then re-executing the transactions. The traces are currently stored as SSZ-serialized bytestrings in local memory, accessible to the EVM via a hash provided to the precompile.
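
In spirit (not the actual ethereumjs code), the flow reduces to: rebuild a sparse pre-state from the proof, re-execute, and compare roots. A toy Python sketch, where `state_root` stands in for the EIP-7864 binary Merkle tree commitment and transactions are simplified to value transfers:

```python
import hashlib

def state_root(state: dict) -> bytes:
    # Toy commitment over sorted (key, value) pairs; the real
    # implementation commits via the EIP-7864 binary Merkle tree.
    h = hashlib.sha256()
    for key in sorted(state):
        h.update(f"{key}:{state[key]};".encode())
    return h.digest()

def verify_execution(pre_state: dict, txs: list, claimed_post_root: bytes) -> bool:
    # Re-execute simple value transfers over the sparse pre-state and
    # check the resulting root against the claimed post-state root.
    state = dict(pre_state)
    for sender, recipient, amount in txs:
        if state.get(sender, 0) < amount:
            return False  # invalid trace
        state[sender] -= amount
        state[recipient] = state.get(recipient, 0) + amount
    return state_root(state) == claimed_post_root
```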

4 Likes

Well, I think it is incorrect to say that this will have the same security as mainnet. The state will be stored in a centralized way. If the centralized entity that stores the state decides to withhold it, users will lose everything.

I think the rollup idea has reached the grotesque point where one buzzword comes after another. The reality is users don’t want this. They want Ethereum mainnet to be faster; it is incredibly slow.

2 Likes

The native rollup proposal is technically flawed and impractical.

The core issue is composability: Ethereum’s strength is that any contract can call any other, across layers and extensions. Splitting the STF between “vanilla” and “extension” modes fractures this. Contracts would lose the ability to call seamlessly across execution environments, undermining both developer ergonomics and user experience.

Some specific examples:

  1. Bridge flows: Trustless deposits and messages from L1 to L2 rely on transactions originating from L1 contracts, which are unsigned. The bridge contract provides the authentication, not a signature.
  2. Gas accounting: Rollups must surcharge transactions for calldata costs (DA fees) and often use dynamic pricing models tied to timestamps or block production, not just gas markets like Ethereum L1.
  3. Precompiles: Rollups rely on precompiles for critical functions like reading L1 state, determining DA gas prices, and processing batch metadata. These are invoked mid-execution, not just as top-level calls.
  4. Custom transactions: Users regularly interact with L2s via L1-originated messages, a pattern that requires STF-level handling beyond what vanilla Ethereum offers.
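
Example 1 in miniature: an L2 deposit transaction is accepted without a signature because the L1 bridge contract is the authenticator. The names below are illustrative, not from any specific rollup:

```python
from dataclasses import dataclass
from typing import Optional

TRUSTED_BRIDGE = "0xBridgeContract"  # illustrative L1 bridge address

@dataclass
class DepositTx:
    l1_sender: str                     # L1 contract that originated the message
    l2_recipient: str
    amount: int
    signature: Optional[bytes] = None  # unsigned by design

def l2_accepts(tx: DepositTx) -> bool:
    # A vanilla-EVM STF would reject any unsigned transaction; a rollup
    # STF instead authenticates by provenance from the bridge contract.
    if tx.signature is not None:
        return False  # deposits carry no signature at all
    return tx.l1_sender == TRUSTED_BRIDGE
```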

If rollups were forced to use Ethereum’s native STF, they would lose these capabilities:

• No trustless bridging.
• No native DA price discovery.
• No composability across vanilla/extension STFs.
• Poor developer and user experience.

Composability cannot be compromised. Ethereum’s strength has always been the “everything is a contract” model, where contracts can call each other freely and nest without restrictions. Splitting the STF into “vanilla” and “extension” modes breaks this guarantee, creating isolated execution environments that cannot interoperate seamlessly. This violates one of Ethereum’s core principles.

| Issue | Proposal assumption | Technical reality |
| --- | --- | --- |
| Transaction types | Rollups can use Ethereum-native transactions | Rollups need special transaction types (e.g., unsigned bridge transactions) |
| Gas accounting | Ethereum’s gas model suffices | Rollups require surcharges and dynamic models for DA |
| Precompiles | Native EVM precompiles are enough | Rollups depend on custom precompiles for DA pricing, batch metadata, and L1 state access |
| STF structure | Split STF is fine: vanilla vs. extension mode | Split STF breaks composability between contracts |
| Composability | L2 contracts don’t need full composability with L1 | Composability is essential for UX and safe, flexible contract interactions |

This reflects a broader pattern: the EF seems to prioritize theoretical ideas over practical designs for devs and users. Meanwhile, ecosystems like Solana prioritize clear messaging, usability, dev support, and fast iteration — and the results speak for themselves, even if much of the activity revolves around low-quality applications like meme coins.

1 Like

Most of these (except your 2nd example) could be implemented with a single extension on top of Ethereum’s native STF, the L1ContextPrecompile, which can be used to pass arbitrary data from the L1 rollup contract to the L2 EVM. See Native Rollup Deposits by Passing L1 Context.
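
The shape of that idea, in a toy sketch: the rollup’s L1 contract posts arbitrary context bytes per batch, and an L2 precompile exposes them to contracts mid-execution. All names here are illustrative, not from the linked proposal:

```python
class L1ContextPrecompile:
    """Toy model: L1 posts per-batch context, L2 contracts read it."""

    def __init__(self) -> None:
        self._context: dict[int, bytes] = {}

    def post(self, batch: int, data: bytes) -> None:
        # Called on the L1 side when a batch is committed.
        self._context[batch] = data

    def read(self, batch: int) -> bytes:
        # Called from inside the L2 EVM, e.g. to process a deposit.
        return self._context.get(batch, b"")
```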

1 Like

Hmm, ok I need to think about that. @edfelten do you have any thoughts on this L1ContextPrecompile idea? Inna

I don’t think it’s enough to let the L2 see the L1 state. That covers L1-to-L2 messaging functionalities, but it doesn’t come close to supporting the range of L2 customizations and differences from L1 that already exist on production rollups today.

3 Likes

Totally agree. It’s ironic that increasingly complex solutions are being discussed when users neither need nor ask for them.

Ethereum Layer 1 could be scaled to over 1,000 TPS simply by improving the consensus and execution layers. The state growth problem can be trivially solved by using parallel state database shards across multiple SSDs and VM clusters.

Maybe Justin can clarify what specific user problem he’s targeting? L1 users want a faster L1. Rollup users are happy with Base. Is it really expected that they’ll switch from Base to yet another new system—just because of technical details like precompiles or ZK?

Usually, when people propose a solution, they start by explaining the problem. The proposal above is ironically exactly what Paul Graham calls a solution in search of a problem.

“The way to get startup ideas is not to try to think of startup ideas. It’s to look for problems, preferably problems you have yourself.”

Let’s start with the real problems. The Ethereum mainnet is dying—people are moving to other networks. Transaction fees are collapsing. “Ultrasound money” is gone. Rollups, except for Base, are dying. Base is generating significant revenue but contributes nothing back to Ethereum mainnet. Base is 100% centralized. These are real problems, not academic ones. How are native rollups going to help? Are they just going to end up like ZK-rollups—promising tech that no one is actually using?

“Native rollups – superpowers.” What exactly is the superpower? Weren’t previous things also supposed to be superpowers? Maybe it makes sense to do a retrospective and see why so many things in the past failed?

1 Like

Thanks! That is what I thought as well, and what I tried to capture in my post.

I agree with you. I don’t think that native rollups work, for the reasons that I set out in my post above. I’m glad that @edfelten agrees. Maybe @JustinDrake can give his thoughts?

I agree with you @kladkogex about a solution in search of a problem. For me, Ethereum is about security and privacy. The best product-market fit is stablecoins, and Ethereum captures this best of all chains. In the future, I think that Tether users with larger amounts will move over to Ethereum. Ethereum security == kind of like what you want from a bank. No one in their right mind is going to put a lot of money somewhere you need to trust Justin and that Italian plastic surgeon guy.

Base is a vampire chain. The problem is that Ethereum is run by geeks and nice guys. Basically, Ethereum creates the best Web3 infra and then gets incentives wrong and people take advantage of it. Like basically free Web2 infra and then Facebook gets the foundation for free.

I know that you shouldn’t love something that can’t love you back, but I’m kind of in love with zk, for the same reasons as Ethereum: security and privacy. I think that moving to zk is inevitable, especially if PQ secure. I like @vbuterin’s idea here: Long-term L1 execution layer proposal: replace the EVM with RISC-V - #30 - Primordial Soup - Fellowship of Ethereum Magicians

But we need to get the incentives right, or we’re just building for others, as you say.

@jdetychey’s value capture idea here is great: Addressing Ethereum value capture - Primordial Soup - Fellowship of Ethereum Magicians

But, voluntary. If Base wanted to contribute voluntarily, there is nothing stopping them.

I run an EMI in the EU and I’m working on a zk-based L2 that will merge TradFi and crypto and actually bring users and money to Ethereum. It uses stablecoins in 27 currencies and the API we are building will allow Ethereum devs to connect seamlessly into real bank accounts, issue cards for their apps, etc. Real problem = send money cheaper, faster, safer than TradFi.
Creating junk like Fartcoin or pump.fun != solving a real-world problem.
And, since L2s and their apps can make so much money, it is very reasonable to pay a fair share to Ethereum.

1 Like

Regarding the gas accounting issue, this is a problem that rollups incur because of their desire/need to spread batches across L1 blocks and manage costs in self-determined ways. Native rollup gas accounting could be straightforward in this native rollup future:

  1. For DA costs, with upcoming features such as blob sharing and higher blob and rollup counts, we can start to assume blobs will be full and that the per-byte DA cost of transactions within blobs is the same. A per-transaction DA fee can conceivably be incorporated into blobs.
  2. All native rollups are calling the same execution function, which should make the execution costs standardized across all rollups and rollup transactions. The respective rollup extension/derivation functions might create slightly different overheads here, but these should be small constants per batch, or calculable multipliers per-transaction.
  3. Proving costs are baked into execution costs in this native rollup future.

All of this is to say, native rollup transaction gas accounting has a viable path to being equivalent to L1 gas accounting.
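
Under the full-blob assumption in point 1, per-transaction DA pricing collapses to a pro-rata byte charge. A minimal sketch (EIP-4844 prices one blob gas per byte, so the per-byte cost equals the blob base fee):

```python
BLOB_BYTES = 131_072  # 4096 field elements x 32 bytes per blob (EIP-4844)

def da_fee(tx_da_bytes: int, blob_base_fee: int) -> int:
    # With full, shared blobs every byte costs the same, so each
    # transaction simply pays for the bytes it occupies.
    return tx_da_bytes * blob_base_fee
```

A useful sanity check of the pro-rata property: the fees of the transactions filling a blob sum to exactly that blob’s cost.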

Dedicated non-based sequencer rollups offering preconfirmations are the main losers in this scenario. A few non-exclusive options, if gas accounting in line with what is done for current centralized-sequencer rollups can’t be added:

  1. Leverage FOCIL and inclusion preconfirmations to the max.
  2. Have a mechanism for filtering out transactions not paying enough if gas costs change because an L1 slot was missed, i.e. full or partial reorgs of the preconfirmed chain.
  3. Don’t be native.

1 Like

For DA and proving costs, the tricky part is correctly attributing them to specific L2 transactions. But assuming constant per-byte DA cost and per-gas proving cost in the near future seems reasonable to me. (Compression makes this more difficult but that’s a different topic.)

Do you have a rough idea in mind you could share?

As you pointed out, with blob sharing we do not need to care about the overhead of blob underutilization. And for based native rollups, the L1 blob base fee is predictable, so the L2 sequencer could maybe derive the L2 base fee from it.

But what about data-heavy vs. computation-heavy transactions on L2? If one transaction would take up 50% of the blob, it should pay a proportional fee (a function of the L1 blob base fee). The current L1 fee mechanism (assuming it is adopted by native rollups as is) seems insufficient for this, though EIP-7623 is a step in this direction.
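
One way to express this is a two-dimensional fee that prices execution gas and DA bytes independently — a sketch of the shape, not a concrete mechanism:

```python
def l2_tx_fee(exec_gas: int, da_bytes: int,
              l2_base_fee: int, blob_base_fee: int) -> int:
    # Hypothetical two-dimensional fee: a data-heavy transaction
    # occupying half a blob pays roughly half that blob's cost even if
    # its execution gas is tiny, and a computation-heavy transaction
    # pays mostly for execution.
    return exec_gas * l2_base_fee + da_bytes * blob_base_fee
```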

1 Like

Thank you for this lovely writeup! I realize I’m a bit late in responding. My investigation of this topic was largely inspired by discussions with @mteam88 – but all mistakes here are of course my own.

What exactly do you mean by “synchronous composability”? If you mean “synchronous, atomic cross-contract calls” (as, e.g., in Spire’s docs), then I don’t see how EXECUTE provides that, at least without drastic changes to the execution layer (namely, real execution sharding).

Let’s first fix a definition of “synchronous composability” which is sufficiently clear that we can actually reason about it. By “synchronous composability” [over some state], I mean that I can make transactions over that state which involve cross-contract calls, where the state changes made by one contract may depend on the results of a synchronous call to another contract (as, e.g. the EVM itself allows for smart contracts just running on the L1). Synchronous calls without the possibility of state dependency don’t seem very useful to me (and wouldn’t e.g. allow the kind of composability which exists in Ethereum DeFi), so I’m assuming that this is the capability which is sought – but if a different definition is being used here, please correct me.

In order to provide this kind of synchronous composability, the actual state changes must be able to depend on all of the state which could be involved (from the perspective of a single contract: the other contracts to which you could send synchronous calls to, the results of which might change your own state changes). As defined, EXECUTE takes a fixed post-state-root (post_state_root) and execution trace (trace), which means that either:

  1. This trace and post-state-root are computed beforehand by some party who does not know the final transaction ordering or results (e.g. the user making the transaction), and thus cannot definitively determine what the results of synchronous calls would be → no synchronous composability.
  2. This trace and post-state-root are computed in the EVM itself, which would require simulating the EVM in the EVM – not computationally feasible, and I’m assuming not the intention here.
  3. This trace is computed somehow after the ordering of transactions has been determined, after the results of the previous transactions in the block have been computed, outside of the EVM, but before the results of the next transactions in the block are computed. This would allow for synchronous composability, but L1 execution – at least as it exists today – has no facility to do this, or at least none that I could see described here. Furthermore, scaling this – so that execution is not bottlenecked by a single validator re-executing transactions – requires, well, execution sharding, which you seem to want to avoid here.

What am I missing?

4 Likes

I believe it’s easiest to reason with 3.

If a property we at Spire call coordinated sequencing is present, the shared block builder can run a simulation and determine the correct post-state root. This likely only applies to a based rollup, so there are absolutely theoretical native rollup implementations that do not have sync composability with L1.

Does that clarify one potential way to achieve this?

It’s worth noting that this block builder doesn’t necessarily need to be a single centralized entity and could be distributed. A SUAVE-like construction comes to mind for me.

If the shared block builder can run a simulation and determine the correct post state root, then the shared block builder must either be the L1 proposer (who makes the final ordering decision) or have some sufficient guarantee that the call results will be such-and-such – namely, an execution preconfirmation, which requires that the L1 proposer do execution (which is fine, but then there are no scaling benefits), so I don’t really see how that helps.

Distributed block building where different parts of the blocks make synchronous calls across different parts of state is certainly possible, but this is just yet another rebranding of execution sharding (mixed in with ordering decisions) – in order for the system to be efficient, you have to coordinate which ordering and execution happens where so that state which needs a lot of synchronous composability is colocated.

2 Likes

I think there’s a distinction to make here between two scenarios:

  1. You know for sure that your transaction will execute successfully and that it’ll successfully handle atomic interop
  2. You know for sure that your transaction will either fully revert or that it’ll successfully handle atomic interop

For many interop use cases, the gas cost of a failed transaction is negligible - especially if it is ‘gas abstracted’ and can be replayed successfully later at no extra gas cost to the user. The bigger issue - and what many people are focused on - is that if the interop doesn’t revert then it must behave as expected… otherwise you get exploit potential, double-spend, etc.

Obviously point 1 above is a better goal than point 2. But point 2 is quite achievable today, and with gas-abstracted UX being the new normal for apps in the EVM landscape, most users won’t be able to perceive a difference between 1 and 2.

3 Likes

That’s a fair distinction to make! But if (2) is what you’re looking for, you don’t need native rollups, based rollups, or really any specific sequencer construction whatsoever (although different ones might make reversion more or less likely) – you can just simulate all the calls beforehand (without any real synchrony) and revert on execution if they fail.
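
The pattern is just snapshot, simulate, commit-or-revert. A minimal, sequencer-agnostic sketch (illustrative names):

```python
from typing import Callable

def atomic_interop(state: dict, calls: list[Callable[[dict], bool]]) -> dict:
    # Scenario (2) in miniature: run every cross-domain call against a
    # copy of the state and commit only if all succeed; otherwise the
    # whole bundle reverts and the original state is untouched.
    snapshot = dict(state)
    for call in calls:
        if not call(snapshot):   # each call mutates snapshot, returns success
            return state         # full revert: discard the snapshot
    return snapshot              # all succeeded: commit
```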

3 Likes