Based rollups—superpowers from L1 sequencing

And we call it native rollup.

2 Likes

This is a really interesting idea. One question here is where L2 clients send transactions, so that searchers / builders can then make up L2 blobs. Are they sent to the L1 mempool, with some special metadata that “informed” searchers can interpret? Is this going to significantly increase load on the L1 mempool (if L2 transaction load is going to be 100x L1 transaction load)? Another possible approach is to let these L2 transactions live in new p2p mempools (one for each L2) that “informed” searchers can then fish transactions out of.

9 Likes

This is super cool as a concept. Thanks for sharing.
I do have a question though.
You mentioned L1 searchers and builders will collaboratively include the rollup block with the next L1 proposer. Does that mean they will also be in charge of sequencing and compressing the sequenced transactions, which then get included in the next L1 block? I am mainly curious about how block compression happens without introducing additional roles, because without proper compression, L2s will not achieve further scalability in throughput and gas savings, which is what they are built for.

2 Likes

Daniel from Arbitrum gave an answer to the compression question here for anyone interested:

2 Likes

Here is also a comment on ZKP for compression with Based sequencing:

1 Like

Fun fact: I actually thought this was how all rollups worked until about a year ago :smiley: I guess because it’s quite an intuitive solution.

I have a question about the “anarchy” part. You said that wasted computation can be avoided with based sequencing, but this is not obvious to me. Even in PBS, wouldn’t you have many builders competing to build the next L1 block, and hence all repeating the effort of making the next L2 block as well?

I’m thinking specifically that all of them would do the compression work, and all of them would have to compute the validity proofs. (Am I missing any other work?) You said that the latter can be avoided if the L1 builder can validity-prove the previous block, but how does that change anything? The competition for proving block n would simply move to block n+1; it wouldn’t disappear.

Thank you

10 Likes

We discussed a solution for this (multiple concurrent block proposers) in a previous post. Tl;dr: you can add a validity condition requiring signed payloads with additional txs from >2/3 of the other validators for a block to be accepted. @sreeramkannan also made a post about it the other day with a good summary of how this could help based rollup sequencing.

3 Likes

Imho that’s not a big deal; it’s the same as builders/searchers today doing overlapping work. The bigger issue would arise if there were complete anarchy in what actually ends up on chain, e.g. if you have a “naive” proposer (not connected to some builder network) and these L2 bundles are all sent over the L1 mempool as normal txs, because that would lead to wasted gas and wasted blockspace.

One approach to remove this redundancy in proof computation could be to reinstate the “centralized sequencing + open inbox” model, but for validity proof submission (instead of actual sequencing). Basically, there’s a centralized, whitelisted prover (or, even better, the role is auctioned periodically) which is the only one allowed to submit proofs for some time (e.g. it is the only one which can submit a proof for a sequence of batches that was fully included by block n before block n + k), but after that time anyone can submit, ensuring liveness. The reason why this is better than the same model for sequencing is that there’s nothing bad the whitelisted prover can do with their power, other than delaying the on-chain finality of some L2 batches. Perhaps one could even allow anyone to force through a proof; they just wouldn’t get compensated for it?
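A minimal sketch of what this could look like as an L1 contract; the names and parameters (ProofInbox, EXCLUSIVITY_BLOCKS, etc.) are hypothetical and only illustrate the exclusive-window-plus-permissionless-fallback idea:

pragma solidity ^0.8.0;

// Hypothetical sketch: the whitelisted prover gets an exclusive window of k L1 blocks
// to prove a batch after it is included; afterwards anyone may submit the proof.
contract ProofInbox {
    address public whitelistedProver;                 // could be rotated via a periodic auction
    uint256 public constant EXCLUSIVITY_BLOCKS = 50;  // the "k" from the comment above

    struct Batch {
        uint256 includedAtBlock; // L1 block in which the batch data was included
        bool proven;
    }
    mapping(bytes32 => Batch) public batches; // how batches get registered is elided here

    function submitProof(bytes32 batchHash, bytes calldata proof) external {
        Batch storage b = batches[batchHash];
        require(b.includedAtBlock != 0 && !b.proven, "unknown or already proven");
        // During the exclusivity window only the whitelisted prover may submit.
        if (block.number < b.includedAtBlock + EXCLUSIVITY_BLOCKS) {
            require(msg.sender == whitelistedProver, "exclusivity window");
        }
        require(verifyProof(batchHash, proof), "invalid proof");
        b.proven = true;
        // Prover compensation is omitted; a "forced through" proof could simply skip it.
    }

    function verifyProof(bytes32, bytes calldata) internal pure returns (bool) {
        return true; // stand-in for the actual SNARK verifier
    }
}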

4 Likes

Awesome post @JustinDrake! Really great summary of the pros and cons of based vs non-based sequencing for rollups. Also love the term “based”; I have just been calling this native vs non-native sequencing, and native is an overloaded term :smiley:.

One question on the social consensus point quoted above – what precisely is the issue that you are referring to? For example, if a non-based sequencing solution is implemented with an external consensus protocol whose stake table (i.e., participation set) is managed in an L1 contract, why wouldn’t you say that social consensus among the L1 nodes could be used to recover from failures?

Or at least to the same extent that it can be used to recover from failures with based sequencing? (I am not sure to what extent we can rely on social consensus in the first place, as it is somewhat of a magic hammer for circumventing impossibilities in consensus, recovering liveness despite the conditions that made it impossible in the given model, whether due to the network or to corruption thresholds being exceeded.)

Is the assumption that L1 validators do not have the incentive to do any form of social recovery for non-based sequencing because they aren’t running it and aren’t deriving sufficient benefit from it? Would you say this changes with the re-staking that protocols like Eigenlayer are enabling, because with re-staking the L1 validators do have the option to participate and get more exposure to the value generated by the rollup? If so, would you say this still remains true when some form of dual-staking is used, which allows for participation both of Ethereum nodes (L1 stakers) and of other nodes that are directly staking (some other token) for the non-based sequencing protocol? If not, where would you say the threshold that makes social consensus no longer viable is crossed, going from pure based sequencing, to sequencing exclusively run by L1 re-stakers, to some hybrid dual-staking?

4 Likes

Why do you believe that the rollup landscape is plausibly ‘winner-take-most’?

1 Like

Discv5 supports topics and is used on both the CL and the EL, so we could create separate p2p channels at any granularity we want (e.g. “shared_sequenced_chain1_chain2_slot5”; see e.g. how EIP-4844 blobs are each on separate topics).

8 Likes

I’m going to take the opposite view here and say that the advantages of based rollups are exaggerated. I’ll go one by one:

  • liveness: If there’s a liveness failure, rollups can recover by having a new set of sequencers (or even just one) be chosen on L1. There’s no reason to stop the chain and force people to do a mass exit. We can always find new people to continue building blocks.
  • decentralisation: We can still use the L1 searcher-builder-proposer infrastructure to include rollup blocks on L1. We create the L2 blocks using L2 sequencers and then allow anyone to submit them to L1 (in exchange for a small reward, of course). L1 builders will take that MEV opportunity and include the L2 blocks in their L1 blocks.
  • simplicity: True, it’s indeed simpler.
  • cost: True, but verifying signatures is not that expensive, and it can be done optimistically, resulting in very low overhead. Regarding tokens, they are a regulatory burden, but are necessary to fund development.
  • L1 economic alignment: Indeed, MEV would flow to L1. But that’s more of an advantage to L1, not really to the rollup.
  • sovereignty: Evidently, non-based rollups can do all of this too. And more.

Regarding the disadvantages, latency is a big one. With L2 sequencers you can have L2 block finality in 1s, which, while much weaker than L1 block finality for sure, still provides a decent guarantee for most use cases. And as soon as a valid L2 block is included in L1, it is in effect final. There’s no way that that L2 block will be reverted. The proof can be submitted later; it doesn’t matter.
But with based rollups, we always need to wait for the L2 block to be proved. Each block will include the SNARK proof for the previous block, so no new block can be created until the proof is generated. With current technology that takes several minutes. Until then, there’s no guarantee that a user transaction will complete.

So, based rollups are indeed simpler and send their MEV to L1, but have much higher latency.

But with based rollups, we always need to wait for the L2 block to be proved.

Why? Isn’t the definition of finalized that the transaction cannot be reverted? When you post a rollup block on-chain, you cannot revert it, since you can’t produce a valid proof for an invalid rollup block.

1 Like

It has to do with whether you allow several block proposals (by this I mean L2 blocks that are included in L1 but not proven yet) at the same height, or whether you lock the contract.
Assume that you allow people to keep sending block proposals until one of them is proven. In that case, you clearly cannot be sure whether a particular block proposal will be finalized because you might have several valid block proposals and any of them could get proven.
So, that doesn’t work. The other option is to “lock” the contract: when someone sends a block proposal, you optimistically assume that it’s correct and don’t accept any other block proposals at that height. You cannot prove that a given block is invalid, so you need a timeout. If the block proposal doesn’t get proven within that timeout, you revert it and let other people propose a block for that height.
This is where the problems start. You cannot let anyone send block proposals; otherwise a malicious actor can just continuously send invalid block proposals. That’s a liveness risk. You can try to fix it by having smaller timeouts, but then a valid block proposal might get reverted if it doesn’t get proven in time. So you need a timeout that is long enough, and you need some kind of bond for the block proposals (which you can slash if the block is invalid).
At this point we are starting to deviate from a based rollup. The other problem is that the bond needs to be well priced. If it’s too low you open yourself to liveness attacks again; if it’s too high no one will send block proposals (there’s a capital cost to having ETH locked, and sending block proposals needs to be competitive with other activities like staking or LPing). The natural solution is to let the market decide, so you pick some number of available block-proposer slots and auction them off.
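To make that concrete, here is a hedged sketch (hypothetical contract and numbers, not from the post above) of the lock + timeout + bond mechanism just described, assuming a zk-rollup where only validity proofs exist:

pragma solidity ^0.8.0;

// Hypothetical sketch: one unproven proposal per height, backed by a bond,
// replaceable (with the bond slashed) if it is not proven before the timeout.
contract LockedRollup {
    uint256 public constant BOND = 1 ether;     // needs careful pricing, as discussed above
    uint256 public constant TIMEOUT = 1 hours;  // must be long enough to generate a proof

    struct Proposal {
        bytes32 blockHash;
        address proposer;
        uint256 proposedAt;
        bool proven;
    }
    mapping(uint256 => Proposal) public proposals; // height => pending proposal

    function propose(uint256 height, bytes32 blockHash) external payable {
        require(msg.value == BOND, "bond required");
        Proposal storage p = proposals[height];
        // The height is "locked" unless it is empty or the previous proposal timed out unproven.
        require(
            p.proposer == address(0) ||
                (!p.proven && block.timestamp > p.proposedAt + TIMEOUT),
            "height is locked"
        );
        // A timed-out proposer's bond simply stays in the contract, i.e. is slashed.
        proposals[height] = Proposal(blockHash, msg.sender, block.timestamp, false);
    }

    function prove(uint256 height, bytes calldata proof) external {
        Proposal storage p = proposals[height];
        require(p.proposer != address(0) && !p.proven, "nothing to prove");
        require(block.timestamp <= p.proposedAt + TIMEOUT, "proposal timed out");
        require(verifyProof(p.blockHash, proof), "invalid proof");
        p.proven = true;
        payable(p.proposer).transfer(BOND); // return the bond to the honest proposer
    }

    function verifyProof(bytes32, bytes calldata) internal pure returns (bool) {
        return true; // stand-in for the actual SNARK verifier
    }
}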
Now we are way off the original based rollup design; this is almost PoS. It’s certainly not as simple as a based rollup. It’s in fact similar to another rollup design where we auction slots to be block proposers and then let them propose blocks round robin. The only real difference is that in our based rollup variant every block proposer competes to send the next block proposal, which will make the MEV flow to L1, while if we let the proposers go round robin the MEV stays in L2. Evidently rollups are going to prefer to keep the MEV, since that means higher rewards for the block proposers, which means higher bonds/stakes, which means higher security. So we end up with L2 sequencing.
And all of this only gets you to a 12s latency. With full blown PoS you can get down to 1s.

1 Like

Thanks for the reply.

Yeah, I forgot that zk-rollups don’t need to include enough data to be able to make a fraud proof. I guess you could include all transaction data with the rollup block, like in optimistic rollups, and then you avoid the complexities with timeouts, but then you lose some of the efficiency of zk-rollups.

True, I was thinking about zk-rollups since the original post is about zk-rollups. But for optimistic rollups I think the situation would be similar. You probably still need some bond otherwise it would be too cheap to attack the chain. And then the rest follows from there.

The most naive tx supply chain isn’t through an L1 mempool IMO; it’s still through sequencers (but permissionless), which may or may not spin up an L2 mempool.

Something like, to become a sequencer you create a sequencer contract on L1:

// Pay the builder the "top of block" fee if this blob is the first thing to touch
// the rollup contract in the target L1 block, and a smaller fee otherwise.
if (rollupContract.lastTouchedBlock() < targetBlock) {
  rollupContract.postBlob(blob);
  block.coinbase.transfer(topOfBlockFee);
} else {
  rollupContract.postBlob(blob);
  block.coinbase.transfer(midOfBlockFee);
}

and L2 txns can similarly define L2fee(timesIncludedInBlockBefore, timesIncludedInBlockTotal) for censorship resistance. Then sequencers just submit Flashbots bundles.
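Purely as an illustration (the exact semantics of L2fee are not spelled out above, so the reading below is my assumption): the fee could be defined so that only the first blob to include a tx within a given L1 block collects it, letting multiple permissionless sequencers include the same otherwise-censored tx without the user paying twice.

// Hypothetical fee schedule: only the first inclusion of the tx in this L1 block pays.
function L2fee(uint256 timesIncludedInBlockBefore, uint256 timesIncludedInBlockTotal)
    internal
    pure
    returns (uint256)
{
    uint256 maxFee = 1 gwei; // illustrative number
    if (timesIncludedInBlockBefore > 0) {
        return 0; // the tx was already included earlier in this L1 block
    }
    // timesIncludedInBlockTotal could instead be used to split maxFee across
    // every blob that includes the tx in this L1 block.
    return maxFee;
}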

This could potentially be better than going through the L1 crList alone, which I assume would be the only option in the “at most one rollup block per L1 block” approach (assuming that restriction does not apply to crLists, i.e. crLists are allowed to add an nth rollup block per L1 block).

Of course, I’m not sure if anyone would spin up an alternative sequencer just to batch otherwise-censored transactions. Maybe in practice you’d have to go through the L1 crList with a singleton batch regardless.

Yes, “Each block will include the SNARK proof for the previous block” should be removed (I’m not sure what benefit it would have anyway). Doing it fully asynchronously would probably be better.

Since it takes time for the L1 chain to finalize, the time between rollup blocks will need to be longer than the L1 finalization time. This is because to build a valid block one needs to know the previous block, which needs to be final if one wants the operator software to be simple. Otherwise, the operator will need to keep making multiple proposals corresponding to non-finalized branch forks.

So let’s say you wait 5 minutes for finalization between blocks, you submit a 100 KB block to mainnet every five minutes, and each transaction is 100 bytes: you get on the order of 4 TPS.

If you do not wait for blocks to finalize, then there is still network propagation time.

So if operator2 gets the previous block from operator1 one minute after operator1 creates it, then in the example above you get on the order of 20 TPS.
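Spelling out the arithmetic behind these ballpark figures (100 KB blocks and 100-byte transactions, i.e. 1000 transactions per rollup block):

$$\frac{1000\ \text{tx}}{300\ \text{s}} \approx 3.3\ \text{TPS} \qquad \text{vs.} \qquad \frac{1000\ \text{tx}}{60\ \text{s}} \approx 16.7\ \text{TPS},$$

i.e. a few TPS when waiting ~5 minutes for L1 finality between rollup blocks, and roughly 20 TPS when waiting only ~1 minute for propagation.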

I think this shows that if someone wants a really fast decentralized rollup, the sequencer needs to be a fast-finalizing PoS chain.

If the operators run a fast-finalizing PoS chain to order transactions, then blocks on that chain can be finalized in seconds.

1 Like

The L2 contract can enforce this imo.