Thank you for the post! It’s actually a very close design to what “eth2 phase 2” was proposing for sharding: sample validator committees and assign them to shards, which they validate (i.e., run the state transition function and attest to validity if everything checks out). There too it would have made sense to rely on as much statelessness as possible, given that validators were expected to rotate (infrequently) between shards. It’s also pretty close to the idea of collators in Polkadot, I think.
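For intuition, here is a minimal sketch (in Python, with illustrative names rather than the actual phase 2 spec) of that committee idea: deterministically shuffle the validator set and partition it into per-shard committees, whose members would then run their shard’s state transition function.

```python
import random

def sample_committees(validators: list[int], num_shards: int, seed: int) -> dict[int, list[int]]:
    """Shuffle the validator set and partition it into one committee per shard.

    Illustrative only: the real protocol derives its seed from RANDAO and uses
    a swap-or-not shuffle, but the shape of the assignment is the same.
    """
    rng = random.Random(seed)
    shuffled = validators[:]
    rng.shuffle(shuffled)
    # Stride-partition so every validator lands in exactly one committee.
    return {shard: shuffled[shard::num_shards] for shard in range(num_shards)}

committees = sample_committees(list(range(1_000)), num_shards=64, seed=42)
# Each committee validates its assigned shard: run the state transition
# function on the shard block and attest to validity if everything checks out.
print(len(committees[0]))  # roughly 1000 / 64 validators per shard
```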
There are differences, of course: in your design, it looks more like assigning validators as decentralised sequencers of native rollups (in the sense of rollups using the EXECUTE opcode, as described here). To some extent, getting validators involved in the construction of L2 blocks also has a flavour of based rollups, but here you propose to go further and involve them, e.g., in FOCIL committees too.
I still don’t think it’s an either/or when it comes to keeping throughput at local building limits. In your design, it’s still critical to use blobs to make the L2 data available. These blobs are provided by the L1 validator set as a whole, in that all validators are expected to participate in disseminating a blob, albeit perhaps not in sampling it if they are not attesters for that slot. So it may still be desirable to push blob throughput beyond what some L1 validators, as local builders, can achieve.

You also assume that L2s in your design are secured by zk proofs. These proofs need to be computed, and you may not want throughput to be limited by the proving capacity of the weakest validator assigned to the L2.
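To make that last point concrete, a toy back-of-the-envelope (all numbers made up): if proofs must come from the validators assigned to an L2, its sustained throughput is bounded by the slowest assigned prover, whereas an open proving market is bounded by the best prover anyone can field.

```python
# Toy numbers, purely hypothetical, to show the shape of the bottleneck.
assigned_prover_capacity = {"validator_a": 20, "validator_b": 5, "validator_c": 50}  # Mgas/s
best_market_prover = 200  # Mgas/s, a hypothetical dedicated proving service

# If each assigned validator must prove its own slots, sustained L2 throughput
# cannot exceed the weakest assigned prover without that validator falling behind.
in_validator_bound = min(assigned_prover_capacity.values())  # -> 5 Mgas/s
market_bound = best_market_prover                            # -> 200 Mgas/s

print(f"in-validator proving bound: {in_validator_bound} Mgas/s")
print(f"open prover market bound:   {market_bound} Mgas/s")
```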