This seems not too different from our general sharding strategy: split the state and transactions into shards, and use random shuffling to assign validators to shards, which defeats slowly adaptive adversaries but not very quickly adaptive ones. The significant differences that I see are:
- OmniLedger uses a VRF-based scheme to generate random numbers, whereas the 1.0 sharding scheme uses PoW blocks, and the 2.0 scheme will likely use a RANDAO-based scheme, possibly with majority functions on top.
- OmniLedger reconfigures shards once per day; we reconfigure continuously (the stateless client model allows us to do this). This also means that we do not need to worry about maintaining operability during transitions.
- OmniLedger uses a BFT protocol to achieve consensus within the shards; we use a chain-based (essentially PPCoin-like) PoS.
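For concreteness, the random-sampling core that both designs share can be sketched as follows. This is a minimal illustration, not either protocol's actual assignment logic: it assumes a shared random seed (e.g. the output of a VRF- or RANDAO-based beacon), and the function name and parameters are hypothetical.

```python
import hashlib
import random

def assign_validators_to_shards(validators, num_shards, seed):
    """Deterministically shuffle the validator set using a shared random
    seed (hypothetically, the output of a VRF or RANDAO beacon), then
    deal validators out round-robin so each shard receives a random,
    near-equal-sized sample of the global set."""
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = list(validators)
    rng.shuffle(shuffled)
    return [shuffled[i::num_shards] for i in range(num_shards)]

# Illustrative use: 1000 validators, 8 shards, an assumed beacon output.
# With large enough samples per shard, an attacker controlling a minority
# of validators globally is overwhelmingly unlikely to capture any shard.
shards = assign_validators_to_shards(range(1000), 8, b"beacon_output")
```

The security argument is the same in both systems: as long as the seed is unbiasable and the adversary cannot adapt faster than the reshuffling interval, each shard's committee is an honest-majority sample with high probability.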
Still, those are relatively small details; the fundamental core is basically the same (which makes sense, as there are basically only two ways we know of to shard securely: random sampling, and fraud proofs/snarks/starks + data availability proofs).
As a side note, I dislike this focus on writing papers that try to describe complete systems. I feel it would be much better if we focused separately on specific problems: improving cross-shard transaction capability, increasing the efficiency of validator rotation, in-shard consensus algorithms, etc. “One big idea per paper, parametrize everything else” should be the norm IMO.