You’re very right @kladkogex.
Generally, any effective large-scale governance system needs some mechanism that allows small groups to make decisions on behalf of the greater majority, in a way that is guaranteed to correlate well with the majority’s will.
The first mechanism that I’m aware of is analogous to off-chain computation (agents can stake tokens against the outcome of a certain proposal), on which I will expand in the second, upcoming blog post.
The second way that I’m aware of is indeed the one you mention, which is analogous to dynamic sharding: random sets of voters are chosen and a supermajority is required accordingly, just as you describe.
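To make the sampling idea concrete, here is a minimal sketch in Python. It assumes a simplified model: a flat list of voter ids and a boolean vote per voter. The function name, the committee size of 100, and the 80% threshold are all illustrative assumptions, not parameters from any actual protocol.

```python
import random

def committee_passes(voters, supports, committee_size=100,
                     supermajority=0.8, rng=random):
    """Sample a random committee of voters and require a supermajority
    of the committee to approve the proposal.

    `voters` is a list of agent ids; `supports` maps each id to a
    boolean vote. All names and thresholds here are illustrative.
    """
    committee = rng.sample(voters, committee_size)
    in_favor = sum(supports[v] for v in committee)
    return in_favor >= supermajority * committee_size
```

The security of this scheme rests entirely on the sampling in `rng.sample` being unpredictable and unbiasable by the attacker, which is exactly the randomness subtlety discussed below.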
However, let me point to two weaknesses of the second approach (and thus my current focus on the first, although I believe eventually we might have both in conjunction):

As mentioned above, randomness is subtle. I’m not claiming it’s unsolvable, but I would at least say that secure randomness is critical here, and it is not a trivial issue (although perhaps solvable, as argued).

More importantly, note that this second approach relies on proposal-agnostic statistics, which is problematic. Let me try to explain:
If there’s a certain fixed probability of “attacking the system” (= succeeding in passing a proposal that does not correlate with the greater-majority will), and a certain fixed price for submitting a proposal, then I can easily submit enough proposals, each beneficial enough to me (i.e. sending me enough money), to make the attack profitable.
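A back-of-the-envelope expected-value calculation makes this concrete. All the numbers below are made up purely for illustration:

```python
def attack_expected_profit(p_success, submission_cost, payout, n_proposals):
    """Expected profit for an attacker who submits n independent
    malicious proposals, each passing with probability p_success.

    All parameters are illustrative, not actual protocol values.
    """
    return n_proposals * (p_success * payout - submission_cost)

# Even a tiny 0.1% success probability is enough: with a payout of
# 10,000 tokens per passed proposal and a 5-token submission fee,
# each proposal is worth +5 tokens in expectation, so 1,000 spam
# proposals yield an expected profit of 5,000 tokens.
profit = attack_expected_profit(0.001, 5, 10_000, 1_000)  # → 5000.0
```

The point is that the attacker controls `payout` and `n_proposals`, so any fixed `p_success` greater than zero can be pushed into profitability.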
The point is that in a fully decentralized governance system you cannot allow a “small probability” of “very large mistakes”. You may be OK with a “small probability” of “small mistakes”. The problem/subtlety is to programmatically weigh the “size of a mistake”. For token transactions that is perhaps easier, but what if contracts can do other things, such as assigning reputation (what counts as “small”? it depends on several factors) or changing the protocol itself (which is definitely not small)?
Not unsolvable, but just pointing out the subtlety.
The advantage of the first method (to be expanded on next time) is that you use cryptoeconomics to bound mistakes. In other words, whenever there is potential for a mistake/attack to take place, there is a clear and well-defined profit opportunity for whoever identifies the mistake. That guarantees a market-like, dynamic resistance to attacks (so that people, rather than programs, weigh the criticality of mistakes).
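Just to illustrate the incentive shape (this is not the actual mechanism, which is deferred to the next post), a challenger’s payoff might look like the following. Everything here, including the reward multiple, is a hypothetical placeholder:

```python
def challenge_payoff(stake, proposal_was_mistake, reward_multiple=2.0):
    """Payoff for a challenger who stakes tokens against a proposal.

    If the proposal turns out to be a mistake, the challenger earns a
    multiple of the stake; otherwise the stake is lost. Illustrative
    only: the real mechanism is described in the follow-up post.
    """
    return stake * reward_multiple if proposal_was_mistake else -stake
```

Because the reward scales with the stake, challengers are incentivized to stake more against larger potential mistakes, which is how the market, rather than a fixed program, ends up weighing mistake sizes.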
But really good point made above, and great discussion.
Thanks