Synchronous Composability Between Rollups via Realtime Proving

Excellent post!

I absolutely love this mechanism because it would greatly increase the censorship resistance and security of interacting with rollups. It would offer a trustless avenue to interact with an otherwise trust-dependent rollup: as an example, I could issue an L1 transaction that atomically bridges funds to the L2, mints an NFT, then bridges my NFT and funds back to L1. No security council could ever rug me: if they attempt to change the root or verification key, my precheck would fail and my transaction bundle would simply revert. This is incredibly powerful.
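The precheck-then-revert pattern can be sketched as a small Python simulation (conceptual only; names like `RollupBridge` and `AtomicBundle` are hypothetical, and in practice the check would be part of the L1 transaction bundle itself):

```python
# Hypothetical sketch of the atomic "precheck" pattern: pin the expected
# state root and verification key, and revert the whole bundle if a
# security council has changed either before the bundle executes.

class RollupBridge:
    """Stand-in for the rollup's L1 bridge contract state."""
    def __init__(self, state_root: bytes, vkey: bytes):
        self.state_root = state_root
        self.vkey = vkey

class AtomicBundle:
    """All-or-nothing: the precheck gates every subsequent step."""
    def __init__(self, expected_root: bytes, expected_vkey: bytes):
        self.expected_root = expected_root
        self.expected_vkey = expected_vkey

    def run(self, bridge: RollupBridge, steps) -> bool:
        # Precheck: if the root or verification key was swapped out,
        # fail before any funds move.
        if bridge.state_root != self.expected_root:
            return False  # entire bundle reverts
        if bridge.vkey != self.expected_vkey:
            return False  # entire bundle reverts
        for step in steps:
            step()  # bridge in, mint, bridge back, etc.
        return True
```

The point is that safety comes from the caller pinning expectations, not from trusting the council's good behavior.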

That said, I have some questions.

Shouldn’t the caller need to be the one to submit a valid proof, and not the block builder?

Problem one: I am a completely vanilla, non-MEV-Boost, local self-building proposer with no proving infrastructure. Presumably, with no proofs provided to me, there would simply be no synchronous composability in my slot and everyone would need to wait.

Problem two: if we go with some sort of intents system where a builder generates the proof for the user, the builder could censor the user. It’s not like FOCIL could compel a malicious builder to generate a proof.

What I would hate more than anything is for this mechanism to turn into one where synchronous composability is left out of the public mempool and ends up getting privately routed to the builder duopoly.

Or even worse, where that happens and this mechanism replaces the kinds of forced inclusion mechanics that L2s currently support (I can foresee a lot of complication in making this work with conventionally-force-included transactions). That would be a dark future where forced inclusion becomes much more expensive for a caller.

There is the separate problem that some provers are closed source, which might further restrict who is actually able to sequence some rollups in practice. That said, it's limited to the specific rollups in question, so this mechanism doesn't really need to take a position on how to solve it.


I have been through both @jbaylina’s reference implementation and @mkoeppelmann’s POC. Some observations on the mechanism design and code.

1. On Execution Table Privacy and Builder MEV: Why commit-reveal doesn’t help (yet)

@TimTinkers raises a critical concern: if the builder is the one generating proofs, doesn't synchronous composability become another private channel to the builder duopoly? @tbrannt's DoS question converges on the same structural issue.

My first instinct was that a commit-reveal scheme could help: force the execution table submitter to commit to a hash before revealing the content, preventing builders from inspecting and reordering user execution tables for MEV extraction.

But after tracing through both implementations, I realized this doesn’t work in the current architecture, and the reason is instructive:

The builder is the execution table constructor. In Rollups.sol, loadL2Executions() takes pre-computed executions with a ZK proof. In koeppelmann’s NativeRollupCore.sol, registerIncomingCall() and processSingleTxOnL2() serve the same role. In both cases, constructing the execution table requires:

  1. Knowing the exact L1 state the transaction will execute against
  2. Simulating the full L2 execution
  3. Generating a validity proof

Only the block builder/proposer has (1): they know their pending block's state because they're building it. So the builder must construct the table, which means they already know its contents. Commit-reveal would protect the submitter from the builder, but the submitter and the builder are the same entity.

This becomes a problem when synchronous composability enables cross-domain MEV that doesn’t exist today. In single-chain MEV, the builder’s advantage is limited to transaction ordering within one domain. With execution tables, the builder can atomically arbitrage price discrepancies across rollups, something currently requiring multi-block bridge strategies with settlement risk. The execution table is strictly more valuable than a single-domain mempool.

Commit-reveal would help under a decentralized prover market where users or third parties can independently construct and submit proven execution tables. This is architecturally possible (loadL2Executions() is permissionless), but it requires proving infrastructure to be widely available, which brings us back to @jbaylina’s point about the hardware requirements being non-trivial.

The open question is: what is the path from “builder constructs everything” to “users can submit their own proven execution tables”? Proof delegation protocols (where users can outsource proving without revealing execution intent) might be the missing primitive.

2. Execution Lookup Cost at Scale

In Rollups.sol, _findAndApplyExecution() does a linear backward scan over all executions stored under a given actionHash. In production, different builders would load speculative executions for the same action across different state snapshots. This makes lookup O(n) per execution.

Keying executions by keccak256(actionHash, currentStateRoots[]) would make lookup O(1) at the cost of more complex key construction during loadL2Executions(). The execution-time savings would dominate.
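The two lookup schemes can be contrasted in a short Python sketch (conceptual, not Solidity; `find_linear` and `find_keyed` are hypothetical names, and `sha3_256` here is a stand-in for keccak256, which hashlib does not provide):

```python
# Composite-key lookup vs. linear backward scan for stored executions.
from hashlib import sha3_256  # stand-in for keccak256 (NIST SHA3, not keccak)

def exec_key(action_hash: bytes, state_roots: list[bytes]) -> bytes:
    """Composite key in the spirit of keccak256(actionHash, currentStateRoots[])."""
    h = sha3_256(action_hash)
    for root in state_roots:
        h.update(root)
    return h.digest()

# Current scheme: all executions stored under one actionHash,
# matched by an O(n) backward scan over their state snapshots.
def find_linear(executions: list[dict], state_roots: list[bytes]):
    for e in reversed(executions):
        if e["state_roots"] == state_roots:
            return e
    return None

# Proposed scheme: executions keyed by (actionHash, stateRoots),
# giving O(1) lookup at execution time.
def find_keyed(table: dict, action_hash: bytes, state_roots: list[bytes]):
    return table.get(exec_key(action_hash, state_roots))
```

The extra hashing happens once at load time, while the scan cost is paid on every execution, which is why the savings should dominate.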

3. Stale Execution Cleanup

Pre-loaded executions that can no longer match (because rollup state has advanced past their currentState snapshots) persist in storage indefinitely. Without an expiry or cleanup mechanism, the _executions mapping accumulates dead entries. Adding a blockLoaded field and allowing permissionless cleanup after N blocks would mitigate this.
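A minimal sketch of the proposed expiry rule, in Python (all names, including `blockLoaded`/`block_loaded` and the expiry constant, are hypothetical additions, not part of the current contracts):

```python
# Permissionless cleanup of stale pre-loaded executions after N blocks.
EXPIRY_BLOCKS = 64  # illustrative N; a real value needs tuning

class StoredExecution:
    def __init__(self, payload, block_loaded: int):
        self.payload = payload
        self.block_loaded = block_loaded  # the proposed new field

def cleanup(executions: dict, current_block: int) -> int:
    """Anyone may prune executions loaded more than EXPIRY_BLOCKS ago.
    Returns the number of entries removed."""
    stale = [key for key, e in executions.items()
             if current_block - e.block_loaded > EXPIRY_BLOCKS]
    for key in stale:
        del executions[key]
    return len(stale)
```

Making the call permissionless (perhaps with a small gas refund, as self-destruct-style cleanups once offered) avoids relying on any one party to keep storage tidy.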

4. Failure Mode Granularity

Looking at _handleScopeRevert(), the revert propagation cleanly restores rollup state roots and continues via REVERT_CONTINUE. This gives all-or-nothing semantics at each scope boundary, which is correct for the general case.

For multi-domain settlement operations (oracle read on L1, compliance check on L1, payout on L2), strict atomicity means the entire operation fails if any single rollup is temporarily unavailable. A TRY_CALL action type, where the caller gets (success, returnData) back and can branch rather than forcing a revert up the scope chain, would enable graceful degradation. The execution table would need to encode both success and failure paths, which increases proof complexity but hopefully enables partial atomicity boundaries.
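The difference between the two semantics can be sketched in Python (a conceptual model; `TRY_CALL` is the proposal above, and `ScopeRevert`, `try_call`, and `settlement` are illustrative names):

```python
# Strict CALL semantics vs. the proposed TRY_CALL semantics.

class ScopeRevert(Exception):
    """Models a revert propagating up the scope chain."""
    pass

def try_call(fn):
    """TRY_CALL: failure is reified as (success, return_data),
    so the caller can branch instead of reverting the whole scope."""
    try:
        return True, fn()
    except ScopeRevert:
        return False, None

def settlement(oracle_read, payout):
    # Graceful degradation: if the oracle leg fails, take the fallback
    # branch rather than reverting the entire multi-domain operation.
    ok, price = try_call(oracle_read)
    if not ok:
        return "deferred"  # the failure path, pre-encoded in the table
    return payout(price)
```

Under strict semantics the oracle failure would unwind everything via the scope chain; here it degrades to a deferred settlement.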

5. Cross-domain Test Coverage

I found that the test suite thoroughly covers single-rollup execution flows, but the core value proposition (atomic multi-rollup cross-domain calls) isn't tested yet. A test like "Rollup A calls L1, which calls Rollup B; the result propagates back and influences Rollup A's state transition" would be the canonical integration test. Happy to contribute one.
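The shape of that test, as a Python simulation (the real test would target Rollups.sol in Solidity; every name here is illustrative):

```python
# Simulated canonical integration test: Rollup A -> L1 -> Rollup B,
# with the result propagating back into Rollup A's state transition.

def test_cross_domain_atomicity():
    state = {"A": 0, "B": 0, "log": []}

    def rollup_b_handler():
        # Rollup B's state transition, driven synchronously from L1.
        state["B"] += 1
        return state["B"]

    def l1_handler():
        # L1 forwards the call into Rollup B within the same scope.
        result = rollup_b_handler()
        state["log"].append(("L1", result))
        return result

    # Rollup A initiates: the returned value must influence A's own
    # state transition, proving the round trip is truly synchronous.
    result = l1_handler()
    state["A"] = result * 10

    assert state == {"A": 10, "B": 1, "log": [("L1", 1)]}
    return state
```

A Solidity version would additionally assert that a revert anywhere in the chain rolls back all three domains' state roots.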

For context, I have been analyzing this from the perspective of cross-chain parametric settlement and risk assessment: digging into the specifics of how this would work for things like insurance and derivatives, specifically atomic cross-chain settlement and real-time compliance verification. I shared some thoughts on this in a recent Gnosis forum response.
