Based preconfirmations

For the past couple of weeks, a couple of my team members and I have been diving into everything required to build a PoC of based preconfirmations (EigenLayer, PBS, based rollups and more). It is probably obvious from this thread that based preconfirmations are a complex matter. We’d like to share our thoughts and the further research avenues we ended up with.

Obstacle 1: How do you even talk with preconfirmers

Preconfirmations require users to be able to discover upcoming preconfirmers and ask them to commit to a preconfirmation. As preconfirmers are validators: “You don’t talk to them. They talk to you.” Think of mev-boost and relays. Builders don’t talk directly to validators; they submit their blocks to relays, and validators ask the relays for blocks. This adds security and privacy properties that [I guess] validators would not want to lose.
So if users are to talk with validators for preconfirmations, a pull mechanism (similar to relays) needs to be designed, whereby validators pull preconfirmation requests and possibly honour them.
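As a rough illustration of such a pull mechanism, the sketch below models a relay-style mailbox that users push requests into and the upcoming preconfirmer polls; all names (`PreconfRelay`, `submit`, `pull`) are hypothetical, purely to show the direction of communication:

```python
import time
from dataclasses import dataclass, field

@dataclass
class PreconfRequest:
    tx_hash: str          # hash of the L2 transaction to be preconfirmed
    tip: int              # fee offered to the preconfirmer, in wei
    received_at: float = field(default_factory=time.time)

class PreconfRelay:
    """Hypothetical relay: users push requests in, validators pull them out.
    The validator never exposes an inbound endpoint, mirroring how mev-boost
    relays shield validators from builders."""

    def __init__(self) -> None:
        self._queue: list[PreconfRequest] = []

    def submit(self, request: PreconfRequest) -> None:
        # Called by users: the relay simply buffers requests.
        self._queue.append(request)

    def pull(self, max_requests: int) -> list[PreconfRequest]:
        # Called by the upcoming preconfirmer: highest-tip requests first.
        self._queue.sort(key=lambda r: r.tip, reverse=True)
        batch, self._queue = self._queue[:max_requests], self._queue[max_requests:]
        return batch

relay = PreconfRelay()
relay.submit(PreconfRequest("0xaaa", tip=2))
relay.submit(PreconfRequest("0xbbb", tip=5))
best = relay.pull(max_requests=1)   # the preconfirmer sees "0xbbb" first
```

Note that the relay, not the user, decides when the validator sees a request, which preserves the mev-boost-style privacy properties at the cost of a new trusted intermediary.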

Obstacle 2: Who do you even talk to

Let’s ignore the previous obstacle for a while. The general thought process is one where the user connects to a preconfirmer - but which preconfirmer exactly? One idea would be to talk to the next available preconfirmer, but then you are at the mercy of their response. What if this preconfirmer censors you, or is down? You can wait for a certain timeout and try the next one. And the next one. You get the idea. With every subsequent timeout the UX gets worse.
Another approach would be to talk to the next X (let’s say 16) preconfirmers in parallel and wait for the first commitment. This approach, however, is wasteful, as all but one of these commitments will go unused - and waste drives prices up.

Obstacle 3: Preconfirmations validity rules

Spoiler - we looked at PEPC for some inspiration and ideas for solutions. Preconfirmations look like a really good fit for the “generalized mev-boost” - PEPC. However, the PEPC doc only briefly mentions something called a payload template. While a payload template likely makes sense in many use cases, its details are going to be crucial for preconfirmations.
While many generalised use cases might only require certain types of transactions to be part of the transaction list, with based preconfirmations it is trickier. You need to enforce/validate that an L2 transaction is included in the sequencing transaction. This means a subsection of the calldata of a single transaction must contain the preconfirmed transaction.

Add to this:

  • Data compression - the payload is even harder to enforce
  • Danksharding - what data? It’s not even here
  • Unknown permissionless sequencer - “I don’t even know who to expect the sequencing transaction from”
  • Complex sequencing pipeline through multiple contracts - “I need to look into the trace”
  • More than one preconfirmation per sequence - order matters
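Even ignoring those complications, the core check is already awkward: the preconfirmed L2 transaction bytes must appear somewhere inside the sequencing transaction’s calldata. A deliberately naive sketch of that rule (function name hypothetical), which the bullet points above immediately break:

```python
def naive_preconf_validity(sequencing_calldata: bytes,
                           preconfed_txs: list[bytes]) -> bool:
    """Naive validity rule: every preconfirmed L2 tx must appear as a
    contiguous byte-slice of the sequencing transaction's calldata, in
    the promised order. This sketch fails as soon as the calldata is
    compressed, posted as a blob (Danksharding), or assembled through a
    multi-contract pipeline -- exactly the obstacles listed above."""
    cursor = 0
    for tx in preconfed_txs:
        idx = sequencing_calldata.find(tx, cursor)
        if idx == -1:
            return False          # tx missing, or out of the promised order
        cursor = idx + len(tx)    # order matters: keep searching forward
    return True

calldata = b"\x00" + b"TX_A" + b"\xff" + b"TX_B"
ok = naive_preconf_validity(calldata, [b"TX_A", b"TX_B"])          # True
bad_order = naive_preconf_validity(calldata, [b"TX_B", b"TX_A"])   # False
```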

In a PBS world where the preconfirmers are separate from the block builders, the proposers would need a mechanism to pass very complex templates for builders to honour.

Some of our thoughts

These are just some of the nasty edge cases of the current state of preconfirmations and the hardship of implementing them in the current state of Ethereum.

Obstacle #1 led us to think of an architecture looking like a generalized mev-boost. Fortunately, EF researchers are some months (years? :smiley:) ahead, and PEPC is a generalized ePBS architecture. With the PEPC architecture one can have a communication channel and validity rules in the protocol itself. Without PEPC, numerous trust assumptions need to be introduced even if you are building a simple PoC.

Obstacle #2 led us to think about the efficiency of talking to preconfirmers. Is it even viable to have preconfirmers? Various practical issues can lead to worse UX for the user.

Obstacle #3 led us to think about the complexity of enforcing a sub-section of a transaction - a responsibility that will likely be passed to builders.

Research avenues

We believe the following research avenues need to be explored further in search of possible preconfirmation solutions.

  1. The intersection of the pre-PEPC state of PBS (optimistic relays) and preconfirmations. Is an optimistic preconfirmations-enabled relay feasible? The intersection of the pre-PEPC state of PBS with EigenLayer.
  2. The current state of research on inclusion lists and their ability to enforce transactions based on a complex template - required for forcing inclusion of a sequencing transaction containing one or multiple preconfirmations.
  3. How can a complex template be designed and implemented?
  4. Based on the previous findings - how does one design based preconfirmations?
4 Likes

This makes it seem like we are expecting pre-confirmations to be requested by some sort of aggregator, who requests pre-confirmations for entire state changes. I would have thought “the weakest form of promise” of only pre-confirming one transaction would be the most common use case.

transaction validity is recommended to be tied to the preconf condition

What does this mean?

1 Like

Tokenlessness is not a feature, it’s a bug. To quote from this post on restaking (it’s worth a read):

One of the main selling points of tokens is to bootstrap something which doesn’t have network effects or a business model yet. By stripping away your token and using base ETH instead, you get rid of the bootstrapping effect because you remove the ability of the protocol to print inflationary rewards in exchange for security / activity.

Besides this, creating a token is the only way many blockchain startups have of raising capital - which you need in order to actually build a product.

On the point about economic security, as was said by other posters, the security of my preconf is secured only by the stake of the preconfirmer. It’s irrelevant if that preconfirmer is staking ETH or something else.

On the point about economic security, as was said by other posters, the security of my preconf is secured only by the stake of the preconfirmer. It’s irrelevant if that preconfirmer is staking ETH or something else.

The point about economic security is related to censorship resistance of sequencing, not security of preconfirmations. Using a non-ETH token for sequencing will dramatically lower real-time censorship resistance.

1 Like

Apologies if it’s obvious, but I don’t see how using ETH for sequencing will improve censorship resistance. Could you explain in more detail?

I don’t see how using ETH for sequencing will improve censorship resistance. Could you explain in more detail?

The L1 serves as an inclusion list for rollups, providing censorship resistance. Without based sequencing, the best you can have is delayed forced transactions (aka an “escape hatch”). For example, on Arbitrum there’s a 24h delay between when a transaction is included on L1 (bypassing the Arbitrum sequencer) and when that transaction is forcefully executed in the Arbitrum execution environment.
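As a toy illustration of what that delay means for a force-included transaction, compared with a based rollup where the next sequencer’s slot suffices (constants and the function name are hypothetical; the 24h figure is the Arbitrum example above):

```python
SECONDS_PER_SLOT = 12            # Ethereum L1 slot time
ESCAPE_HATCH_DELAY = 24 * 3600   # Arbitrum-style forced-inclusion delay

def earliest_forced_execution(l1_inclusion_time: int, based: bool) -> int:
    """Earliest time a force-included L1 transaction is executed on the L2.
    With based sequencing, the next sequencer's slot suffices; with an
    escape hatch, the user must wait out the full delay."""
    if based:
        return l1_inclusion_time + SECONDS_PER_SLOT
    return l1_inclusion_time + ESCAPE_HATCH_DELAY

t0 = 1_700_000_000
based_delay = earliest_forced_execution(t0, based=True) - t0    # 12 seconds
hatch_delay = earliest_forced_execution(t0, based=False) - t0   # 86400 seconds
```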

With based sequencing there’s no need for a delay: if a transaction goes on L1, it’s safe to compel the next sequencer to execute the transaction by their slot, without delay. I explain in the original based rollup post (see here) why escape hatches are bad design.

Without based sequencing the best you can have is delayed forced transactions (aka an “escape hatch”).

That’s not always true: OP stack chains include forced txs via L1 immediately in the next L2 block in the order they appear on L1 (in particular, in the first portion of the first L2 block, right before L2 sequenced txs).

You could say that Bedrock already uses based sequencing but just for forced transactions.
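A minimal sketch of that ordering rule, heavily simplified from the actual derivation spec (function name hypothetical):

```python
def derive_l2_block(forced_txs: list[str], sequenced_txs: list[str]) -> list[str]:
    """OP-Bedrock-style ordering as described above: forced transactions
    from L1 go at the front of the L2 block, in L1 order, followed by the
    sequencer's own transactions. (Simplified: the real derivation spec
    also handles epochs, batches and a derivation window.)"""
    return list(forced_txs) + list(sequenced_txs)

block = derive_l2_block(["forcedA", "forcedB"], ["seq1", "seq2"])
# Forced txs occupy the first portion of the block, before sequenced txs.
```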

I agree with @donnoh here. There’s no reason for an escape hatch to have a long timeout; that timeout can be arbitrarily small. The disadvantages you mention are true of older escape hatch designs, but they have evolved quite a bit in the meantime (and it is a big design space; there’s more room for improvement).

I don’t see how this can be compatible with unconditional preconfirmations on state. Let’s assume that a transaction T was force-included at L1 at slot n and the OP sequencer starts giving out unconditional transaction preconfirmations (on post-execution state, not just inclusion) which assume the execution of T at slot n. Now if slot n is reorged (e.g. via a depth 1 reorg) the execution of T may change, itself potentially invalidating the unconditional transaction preconfirmations.

@donnoh: Do you have a link to how OP does immediate forced inclusions, and how they make them compatible with preconfirmations?

I guess OP preconfirmations are weaker than Arbitrum’s. The spec of the transactions list derivation can be found here.

@bruno_f: I think I concede your point—it’s possible for non-based rollups to enjoy the full censorship resistance benefits of Ethereum L1, all while enjoying preconfirmations :slight_smile: Thank you for bringing this up—a significant update to my mental model! If liveness is not one of the fundamental advantages of based sequencing then I guess the two big remaining advantages are a) credible neutrality, which is critical for a shared sequencer and b) L1 compatibility, necessary to have synchronous composability with L1 contracts and $0.5T of assets.

I think my favourite preconfirmation design so far is preconfirmations that are conditional on the L1 state. That is, if the L1 reorgs then the corresponding preconfirmations no longer apply. I believe these L1-conditional preconfs work fine because a preconfirmer at slot n can offer two types of preconfirmations in parallel: preconfirmations assuming the block at slot n-1 doesn’t get reorged, and preconfirmations assuming the block at slot n-1 does get reorged (e.g. falling back to the block at slot n-2 being the parent block). There can also be an insurance market to hedge users against reorgs (which should be rare, especially with single slot finality).
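One way to model such L1-conditional preconfirmations is as promises that explicitly name the L1 parent block they assume, so a reorg simply voids the affected branch. A minimal sketch (types and field names hypothetical, not a real protocol message):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConditionalPreconf:
    """A preconfirmation binding only on a particular L1 fork: it names
    the parent block hash it assumes. If that block is reorged out, the
    promise no longer applies."""
    l2_tx_hash: str
    assumed_parent: str   # L1 block hash at slot n-1 this preconf assumes

    def binding_on(self, actual_parent: str) -> bool:
        return self.assumed_parent == actual_parent

# The preconfirmer at slot n issues both branches in parallel:
preconf_no_reorg = ConditionalPreconf("0xtx", assumed_parent="0xblock_n_minus_1")
preconf_reorg = ConditionalPreconf("0xtx", assumed_parent="0xblock_n_minus_2")

# Once slot n-1 resolves, exactly one branch is binding:
binding = preconf_no_reorg.binding_on("0xblock_n_minus_1")       # True
not_binding = preconf_reorg.binding_on("0xblock_n_minus_1")      # False
```

An insurance market, as suggested, would then price the (rare) event that the branch a user relied on turns out not to be the binding one.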

1 Like

Actually I take that back :see_no_evil: If an L2 transaction is force-included at L1 and the L2 sequencer has been compromised (e.g. so that the sequencer is no longer settling anything) then that L2 transaction can’t execute until after some timeout. This timeout can’t be made arbitrarily small because otherwise that would completely break preconfirmations. To summarise, based sequencing gives the same settlement guarantees as the L1 (no need for a timeout to force settle).

1 Like

Yes, while you can have preconfirmations on inclusion with based forced transactions and L2 sequencing, if the sequencer halts or doesn’t post batches in order (OP derivation window is 12h!) then preconfirmations on post state break.

It’s not just preconfirmations that break. You also suffer a liveness failure during the derivation window.

1 Like

@JustinDrake I just saw your recent episode on Bankless. I understand your vision on this better now. Yes, if we have shared sequencing and real-time settlement then we can have synchronous composability. And that would be amazing.
The problem is that real-time settlement is only theoretical for now. AFAIK it is 5+ years away, maybe more. With shared sequencing alone, we can only do atomic inclusion, not atomic execution. So no synchronous composability. All of this is known to you.
Shared sequencing does come with some big drawbacks (loss of sovereignty, loss of sequencing revenue, more complexity). But by itself it only has one advantage: no need to code your own consensus and maintain a validator set. To be honest, though, a based rollup (the one described in the original proposal) is a better option if someone really doesn’t have the resources to build/maintain a consensus layer.
It doesn’t make sense for a decently-funded rollup team today to use a shared sequencer, in the hope that it will pay off 5+ years down the road. Better to build your own consensus today, have asynchronous composability with bridges and if real-time settlement becomes a reality then change to shared sequencing.
I really think you’re underestimating the importance users/devs place on sovereignty. Two examples. First: Polkadot and Cosmos. Polkadot’s model has all chains sharing the same validators, while Cosmos has sovereign chains and just handles inter-chain communication. If you compare the two ecosystems, Cosmos has bigger projects and more users. Second: Celo has signaled its intention to migrate to an L2, and one of its hard requirements is exactly the ability to keep its existing validator set.

2 Likes

How should the case where the previous preconfirmer’s block is re-orged be handled?

E.g. block X was the L1 block of the previous preconfirmer, the next preconfirmer is at X+5 and has issued post-execution state root preconfirmations. Now block X gets re-orged, and the preconfirmer at X+5 is left with a bunch of preconfirmations that they cannot honour, through no fault of their own.

How should the case where the previous preconfirmer’s block is re-orged be handled?

I think we don’t need to do anything special to handle reorgs? The next preconfer will have a monopoly over the sequencing of the L2 txs, so even if there is a reorg, no other entity can insert L2 txs that violate the state root preconfirmations.

I don’t think this is correct, unless you are implying that the next preconfer is forced not to reorg the previous preconfer, which is impossible without something akin to SSF. Imagine a world in which every L1 validator is a preconfer: then either the L2 has reorgs, or the L1 does not have reorgs.

Hmm, it is still unclear to me how a safety slashing attack, where the preconfed state is invalidated, could happen via reorgs. For a reorg to invalidate the preconfirmed state of a preconfer, either the L2 tx ordering in that preconfer’s assigned slots changes, which won’t happen since the preconfer has monopoly over ordering in their slots, or the tx ordering in some slot prior to the preconfer changes. But for a previous slot’s preconfer to change their L2 tx ordering, they must have signed two conflicting txs at the same position. And I think the slashing mechanism would/should only slash the earliest invalid preconf in terms of L2 tx ordering.

But if we are talking about a liveness slashing attack, where a preconfer’s slot is forced to be missed via a reorg, I agree it is possible. I think it can be mitigated to some extent by making liveness slashing less severe than safety slashing (which should be the case anyway, as missing a slot can happen by accident). And there is also the below idea from @JustinDrake of having an insurance market:

Liveness and safety slashing offenses are completely equivalent: if I hold two consecutive slots as a preconfer, I can “be offline” for the first one and do whatever I want in the second one, just paying the liveness slashing on the first.