I bet a lot of former miners and hobbyists would happily run rigs dedicated to this (me included).
excellent post, very clear!
it can benefit from L1 preconfirmations once they are available
The month before this post, Ethereum had its ATH of preconfs, with 37% of blocks containing at least one preconf on Jan 19th. So given that preconfs are available, hopefully we can make them a powerful primitive for the Economic Zone and for Synchronous Composability in general.
We at Primev have applied for early membership to work more closely with this initiative!
I have a few questions. Sorry for any misunderstandings.
1. These state transitions
stateRoot₀ → stateRoot₁
are different from the state transitions induced by the standard Ethereum State Transition Function (STF), and the intermediate states à la stateRoot₁ aren't proper L2 state roots. For one, one would need to log things not directly related to the L2's state, like the accrued state (address and storage warmth, …), the machine state (gas, pc, size in words of memory, …), the call stack and the stacks of all parent frames at the point when the call to C takes place, as well as a state snapshot "proper", right?
2. Is it true that the "state" transition proofs for the various L2s would be very short-lived, in the sense that they are only valid for a given "next L2 block height"?
3. If different operators were to create such transactions, I imagine they would be incompatible for inclusion in a given block? How do operators cooperate to produce these L2 ←→ L1 transactions coherently, i.e. in a time-ordered fashion and with the correct starting states?
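Regarding question 1, the extra bookkeeping described there (warmth, machine state, call stack) could be sketched as a data structure. This is purely illustrative; every field name below is my assumption, not part of any spec in the thread.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pc: int                 # program counter within this call frame
    gas: int                # gas remaining in this frame
    memory_words: int       # current memory size, in 32-byte words
    stack: list[int]        # the frame's operand stack

@dataclass
class IntermediateSnapshot:
    state_root: bytes                   # the state snapshot "proper"
    warm_addresses: set[bytes]          # accrued address warmth (EIP-2929 style)
    warm_slots: set[tuple[bytes, int]]  # (address, storage key) warmth
    call_stack: list[Frame]             # current frame plus all parent frames

# A snapshot taken mid-execution, at the point the call to C is made.
snap = IntermediateSnapshot(
    state_root=b"\x00" * 32,
    warm_addresses={b"\xaa" * 20},
    warm_slots={(b"\xaa" * 20, 0)},
    call_stack=[Frame(pc=0, gas=1_000_000, memory_words=0, stack=[])],
)
```

The point of the sketch is just that such a snapshot carries strictly more than a state root, which is why it can't stand in for a canonical L2 state root.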
While the technical roadmap for real-time proving is a feat of engineering, we must address the structural dilemma this creates for the Based model specifically:
1. The SBP (Searcher-Builder-Proposer) Syndicate:
Based rollups are designed to inherit L1’s decentralization, but synchronous composability across a Based cluster (e.g. EEZ) shifts the burden of multi-state awareness and real-time ZK-generation onto L1 Builders.
- The Risk: We are inadvertently creating a high-performance consensus cartel. If only a handful of elite builders can handle the 12-second race to simulate multi-rollup execution and generate validity proofs, we haven't truly "based" the rollup; the L1 is simply captured through the builder. This creates a Social Layer Imbalance where unelected builders become the de facto governors of L2 state.
2. The Capital Security Parasite:
If Based clusters (EEZ) create a “Synchronous Premium” that keeps capital and high-velocity activity permanently in the secondary layer, we face an economic decoupling.
- The Erosion: The Based rollups inherit L1 security but potentially siphon off the MEV and transaction fees that fund the L1 security budget. If L1 is reduced to a low-revenue DA pipe while the "Synchronous Syndicate" captures the premium, the host (L1) may eventually become too weak to secure its own parasites.
3. The Nash Equilibrium Failure: We shouldn’t rely on social alignment or staff time to manage these risks. If a based system requires a syndicate to maintain its vibe, it has failed the Walkaway Test. We need an L1 that is Aggressively Neutral, where the protocol itself enforces a balance of power between L1 security and L2 efficiency, rather than letting a coordination layer dictate terms.
We are building a Sanctuary Technology, not a digital gated community managed by a new class of professional builders.
This is a great set of questions — and the follow-up concern about builder centralization is especially important in this model.
Reading both threads together, there are really two layers of issues emerging:
- Technical constraints
  - ephemeral, slot-bound proofs
  - non-canonical intermediate “state” representations
  - coordination requirements between operators
- Structural risks
  - concentration of power in high-performance builders
  - reliance on real-time proving infrastructure
  - coupling between execution, proving, and inclusion
A common thread
Both sets of concerns seem to converge on a deeper question:
what is the canonical, independently verifiable reference for what actually happened?
Right now, that role is effectively played by:
- the validity proof
- the execution table
- and the sequencing context
But all three are:
- tightly coupled
- time-sensitive
- and dependent on specialized infrastructure
A possible separation
One way to reduce this coupling is to separate:
- proof of correctness (zk / execution layer)
from
- commitment to resulting state (reference layer)
The execution table already represents:
a complete, ordered record of cross-domain state transitions
Instead of existing only inside the proving pipeline, it could also be treated as a commitment artifact:
- hash the execution table (or its root)
- anchor that digest on-chain at the relevant slot
- make it independently referenceable
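The three steps above could be sketched as follows. This is a minimal sketch assuming a canonical JSON encoding of the execution table; the actual table format and hashing scheme are not fixed in this thread, so treat every name here as hypothetical.

```python
import hashlib
import json

def execution_table_digest(rows: list[dict]) -> str:
    """Digest an ordered execution table into one referenceable hash."""
    # sort_keys plus compact separators give a canonical byte encoding,
    # so any verifier hashing the same rows obtains the same digest.
    encoded = json.dumps(rows, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(encoded).hexdigest()

# Two observed cross-domain transitions in one slot (made-up values).
table = [
    {"slot": 100, "domain": "L2-A", "pre": "0xa0", "post": "0xa1"},
    {"slot": 100, "domain": "L2-B", "pre": "0xb0", "post": "0xb1"},
]

digest = execution_table_digest(table)
# 'digest' is the value that would be anchored on-chain at the relevant
# slot, giving the table a stable identity outside the proving pipeline.
```

Note that the digest is order-sensitive: reordering the rows changes it, which is what makes it a commitment to the ordered record, not just to its contents.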
Why this matters for the concerns raised
For the technical questions
- Ephemeral proofs
  → paired with persistent, slot-anchored commitments
- Non-standard intermediate state
  → commitment does not require canonical STF compatibility
  → only a stable digest of observed transitions
- Operator coordination
  → coordination produces a single committed artifact
  → not just a proof tied to a specific builder
For the structural risks (builder centralization)
If:
- only a small set of builders can produce valid proofs in time
then:
they effectively control what becomes “truth” in the system
A commitment layer introduces a subtle but important shift:
- builders may still compete to produce proofs
- but the resulting state becomes a publicly anchored artifact
- independently referenceable and verifiable
This reduces reliance on:
- who produced the proof
and strengthens:
- what was actually committed
Framing
This does not replace real-time proving or shared sequencing.
It introduces a complementary layer:
commitment to observed state as a first-class primitive
I have been exploring this more formally here:
Observation Commitment Protocol (OCP) v1.0.0 - #3 by DamonZwicker
At a high level:
- execution → produces state
- proof → establishes correctness
- commitment → anchors the result
Open question
If execution tables already represent the full cross-domain state transition:
should they also be treated as canonical commitment objects, not just inputs to a proving system?
It seems this separation could help reduce both:
- technical fragility (slot dependence, replay complexity)
- structural risk (builder concentration as a gatekeeper of truth)
Curious how others thinking about real-time proving view this distinction.
A band-aid on the gaping open wound called Proof of Stake. Or in other words: the rich getting more powerful and richer.
This design gets very close to something important, but there’s a missing layer that becomes visible at this level of abstraction.
The execution table here is already doing more than it’s being treated as.
It isn’t just an internal mechanism for proving. It is implicitly:
the complete, ordered representation of what happened across domains — but it is never given a stable identity outside the system that produced it.
If that’s the case, then the natural question becomes:
what is the stable reference for that object?
Right now, the execution table only exists:
- inside the proving pipeline
- inside L1 state for proxy resolution
- tied to the system that produced it
Which means there is no independent reference to the execution itself.
Verification requires:
- access to the execution table in state
- understanding the proxy mechanism
- reliance on the proving system
So even though correctness is established, verification remains:
- system-bound
- non-portable
- dependent on the original execution context
But the table already contains all the information required to define a stable reference (not just the resulting state root, but the execution trace itself).
At that point, a minimal step follows directly:
- hash the execution table (or its root)
- anchor that digest on-chain at the slot
- treat that digest as the reference for the execution
Now verification reduces to:
- recompute the digest
- compare to the committed value
- confirm inclusion
independent of:
- the proving system
- the proxy contracts
- or the builder that produced it
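The recompute / compare / confirm-inclusion loop could look like the sketch below. It is hedged: the on-chain commitment store is modeled as a plain dict from slot to a set of anchored digests, where in practice this would be a contract read or an inclusion proof against a commitment root, and the table encoding is an assumed canonical JSON form.

```python
import hashlib
import json

def table_digest(rows: list[dict]) -> str:
    # Canonical encoding so every verifier recomputes the same digest.
    encoded = json.dumps(rows, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(encoded).hexdigest()

def verify_execution(rows: list[dict], slot: int,
                     anchored: dict[int, set[str]]) -> bool:
    # 1. recompute the digest from the claimed execution table
    digest = table_digest(rows)
    # 2. compare against what was committed at that slot, and
    # 3. confirm the digest was actually included there
    return digest in anchored.get(slot, set())

# Usage: one committed table at slot 7, checked against the anchor.
rows = [{"slot": 7, "domain": "L2-A", "pre": "0x00", "post": "0x01"}]
anchored = {7: {table_digest(rows)}}
```

Nothing in `verify_execution` touches a prover, a proxy contract, or the builder that produced the table, which is the independence being claimed.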
Without this, the system proves execution but never produces a stable, referenceable artifact of that execution.
With it, you get a clean separation between:
- proving correctness
- and referencing what actually happened
This becomes especially important in a model where execution, proving, and inclusion are increasingly coupled—because without a stable reference, all three remain tied to the same system boundary.
One thing to make explicit here:
What I’m pointing at with the execution table is essentially a missing, referenceable artifact for execution itself.
This is the layer I’m trying to formalize with OCP (Observation Commitment Protocol) — a minimal way to bind:
- the observed execution (or its representation)
- its digest
- and an on-chain commitment
so that verification reduces to:
recompute → compare → confirm inclusion
without depending on the proving system or execution environment that produced it.
The goal isn’t to change how execution is proven, but to make what was executed independently referenceable and verifiable across systems.
There’s a minimal reference implementation of this verification model here for context:
https://github.com/damonzwicker/observation-commitment-protocol