Do you mean δ here? Somehow C seems to have changed from a measure of capacity to a measure of time, and I didn’t quite follow.
My bad, fixed.
Regarding availability, I’m very strongly pro-liveness, for these reasons:
- In an extreme scenario where >1/3 of nodes go offline for an extended duration, chances are there’s something wrong with the social layer too, and so it’s ideal for the chain to be able to go through an inactivity leak and then kick back into full finalization mode all on its own.
- If there is a 51% censorship attack, then it needs to be maximally easy for a minority to counterattack and make a minority soft fork, without having to coordinate too much. Being able to start an alternate chain with only a few people and then gather more and more momentum over time is socially much easier than having to hard fork.
- If the chain splits because of a client bug, then historically it becomes clear which chain is correct long before all clients are updated and most or all nodes are once again following that chain. In such a split, allowing both chains to keep running lets people individually switch to the correct chain much faster than the whole network can, so many services can resume very quickly.
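To make the first bullet concrete, here’s a toy simulation of an inactivity leak. All constants (the leak rate, the stake numbers) are illustrative stand-ins of my own, not mainnet parameters; the real mechanism tracks per-validator inactivity scores, but the shape is the same: offline validators’ balances leak with a penalty that grows the longer finality is lost, so the online share of stake eventually climbs back above 2/3 and finalization resumes without any social-layer intervention.

```python
def epochs_until_finality(online_stake, offline_stake, leak_rate=0.001):
    """Count epochs until online stake exceeds 2/3 of the total.

    Quadratic-style leak: the per-epoch penalty on offline stake
    grows in proportion to how many epochs finality has been lost.
    (Illustrative model, not the actual spec formula.)
    """
    epochs = 0
    while online_stake / (online_stake + offline_stake) < 2 / 3:
        epochs += 1
        offline_stake -= offline_stake * leak_rate * epochs
        offline_stake = max(offline_stake, 0.0)
    return epochs

# 40% of stake goes offline (> 1/3), so finality halts until the
# leak restores a 2/3 online majority.
print(epochs_until_finality(online_stake=60.0, offline_stake=40.0))
```

The key property is that recovery is automatic: no coordination is needed beyond the online validators continuing to follow the protocol.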
Most fundamentally, I start from the principle that it’s better to give people as much information about the future state of the chain as possible. This is the reason why shorter finality times are good, why finality is better than only having probabilistic fork choice, etc. From that same principle, if finality guarantees stop, it’s better to give people some information about the future state of the chain (via an available chain) than no information at all (via stalling).
In an ideal system, if the chain splits because of a bug, then it would even be able to finalize the idea of “either A or B” first, and then finalize one of the two later. This would give people whose transactions were included in both chains a strong assurance that their transaction would not be reverted. For sequencer-driven L2s, if the sequencer submits to both, then this effectively means that the L2 finalizes (with full security) even while the underlying L1 is going through a chain split. In fact, simple LMD GHOST already gives people synchronous confirmation of “A or B” in such a case: if A and B are both getting a large share of votes, it’s basically impossible for some other chain C to come in and become the head, unless suddenly almost all validators decide to change their view.
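The “A or B” point can be sketched with a toy LMD-GHOST weight walk (the names and data layout here are my own simplification, not the spec’s): at each fork, the head selection descends into the child whose subtree holds the most latest-message weight, so if votes are split heavily between A and B, a third branch C with a small share can never be chosen as head.

```python
from collections import Counter

def ghost_head(children_of, latest_votes, root):
    """Walk from root, at each fork picking the child whose subtree
    carries the most latest-message (one-per-validator) vote weight."""
    weights = Counter(latest_votes.values())

    def subtree_weight(block):
        return weights[block] + sum(
            subtree_weight(c) for c in children_of.get(block, []))

    head = root
    while children_of.get(head):
        head = max(children_of[head], key=subtree_weight)
    return head

# The chain splits at genesis into A and B; a latecomer C also branches off.
children = {"genesis": ["A", "B", "C"]}
# 46 validators vote A, 44 vote B, 10 vote C: the head is A, and C
# cannot become head unless nearly all of one side switches to it.
votes = {f"v{i}": ("A" if i < 46 else "B" if i < 90 else "C")
         for i in range(100)}
print(ghost_head(children, votes, "genesis"))
```

So anyone whose transaction is included in both A and B already has a GHOST-level assurance that one of the two will win, even before the split is resolved.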
Also, more philosophically, I think we should be clear that Ethereum is striving to be “bitcoin-like”, and NOT “high-throughput-fast-chain-like”. The latter type of chain, of which there are many, will likely gravitate toward standard BFT with heavy implicit reliance on social consensus, because if you’re not prioritizing decentralization that approach is optimal. But if we are prioritizing decentralization, then we can’t count on a clean social layer that can easily make rapid decisions that everyone will agree on (as if such a gadget existed, it itself would be an unacceptable centralization risk), and so we need the chain to be able to proceed in ways that are social-layer-minimized even in extreme cases. Zcash has already taken the tradeoff of being more reliant on a powerful social consensus layer (eg. to determine recipients of the 20% dev issuance share), so it’s a very different system in that regard.