Imho that’s not a big deal; it’s the same as builders/searchers today doing overlapping work. The bigger issue would arise if there were complete anarchy in what actually ends up on chain, e.g. if you have a “naive” proposer (not connected to some builder network) and these L2 bundles are all sent over the L1 mempool as normal txs, because then redundant or conflicting bundles would actually land on chain, wasting gas and blockspace.
One approach to remove this redundancy in proof computation could be to reinstate the “centralized sequencing + open inbox” model, but for validity proof submission instead of actual sequencing. Basically, there’s a centralized, whitelisted prover (or even better, the role is auctioned off periodically) which is the only party allowed to submit proofs for some window of time, e.g. it is the only one that, before block n + k, can submit a proof for a sequence of batches which was fully included by block n. After that window, anyone can submit, ensuring liveness. The reason this is better than the same model for sequencing is that there’s nothing harmful the whitelisted prover can do with its power other than delaying the on-chain finality of some L2 batches. Perhaps one could even allow anyone to force through a proof at any time, they just wouldn’t get compensated for it?
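For concreteness, here’s a minimal Python sketch of what the “exclusive window, then open” acceptance rule could look like, including the uncompensated force-through variant. Everything here is hypothetical (the names `EXCLUSIVITY_WINDOW_K`, `verify_proof`, `ProofInbox`, and the fee bookkeeping are made up for illustration, not taken from any real rollup’s contracts); a real version would live in an L1 contract:

```python
# Illustrative sketch only: all names and parameters are hypothetical.

EXCLUSIVITY_WINDOW_K = 100  # k: blocks of prover exclusivity after inclusion


def verify_proof(proof: bytes) -> bool:
    """Stand-in for the actual validity-proof verifier."""
    return len(proof) > 0


class ProofInbox:
    """Accepts validity proofs for batch sequences already included on L1."""

    def __init__(self, designated_prover: str):
        self.designated_prover = designated_prover  # whitelisted or auctioned role
        self.finalized_up_to_block = 0              # highest proven inclusion block
        self.fees_owed: dict[str, int] = {}         # toy fee accounting

    def submit_proof(self, sender: str, inclusion_block: int,
                     current_block: int, proof: bytes) -> None:
        if not verify_proof(proof):
            raise ValueError("invalid proof")

        in_exclusive_window = current_block < inclusion_block + EXCLUSIVITY_WINDOW_K

        # Force-through variant: anyone may push a valid proof through at any
        # time (so liveness is never at risk), but only the designated prover
        # is compensated while its exclusivity window is still open.
        if not in_exclusive_window or sender == self.designated_prover:
            self.fees_owed[sender] = self.fees_owed.get(sender, 0) + 1

        self.finalized_up_to_block = max(self.finalized_up_to_block, inclusion_block)
```

Note that in this sketch the worst case really is just delayed finality (and a missed fee): a lazy or offline designated prover can’t censor anything past block n + k, and an honest third party can always finalize immediately if they’re willing to forgo compensation.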