I just had the chance to read @gluk64’s proposal and see that we were not on the same page. Now I get it.
As far as I understand, a PGP-word-list-like scheme would protect against
- transposition of two consecutive words,
- duplicate words,
- or omitted words,

but, as far as I can tell, not against
- transposition of two non-adjacent words (two words an even distance apart stay on their original lists), or
- accidentally typing another word from the same list (which may very well happen. Look at BIP39: trip-trim, aim-air, awake-aware, etc.)
AFAIU the first 1024 words would be used at even positions and the last 1024 at odd positions. The bad thing is that we lose the alternating two- and three-syllable pattern, which lets the listener know immediately that something is wrong. If we wanted to build such a list, we would have a hard time finding words that are easy to spell and pronounce for laymen. (Also, I still think we should use separate word lists for public and private stuff, which would make the task even harder.)
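To make the parity idea concrete, here is a minimal sketch of such an alternating-lists check. The `EVEN_WORDS`/`ODD_WORDS` lists are hypothetical placeholders for the two 1024-word halves, not the real PGP lists:

```python
# Sketch of a PGP-word-list-style parity check. EVEN_WORDS and ODD_WORDS are
# hypothetical placeholder lists standing in for the two 1024-word halves.
EVEN_WORDS = {f"even{i}" for i in range(1024)}  # allowed at positions 0, 2, 4, ...
ODD_WORDS = {f"odd{i}" for i in range(1024)}    # allowed at positions 1, 3, 5, ...

def parity_ok(words):
    """True iff every word comes from the list matching its position's parity."""
    return all(
        (w in EVEN_WORDS) if pos % 2 == 0 else (w in ODD_WORDS)
        for pos, w in enumerate(words)
    )

good = ["even7", "odd3", "even0", "odd9"]
assert parity_ok(good)
# Swapping two consecutive words is caught (both land on the wrong list):
assert not parity_ok(["odd3", "even7", "even0", "odd9"])
# But swapping two words of the same parity goes unnoticed:
assert parity_ok(["even0", "odd3", "even7", "odd9"])
```

The last assertion is exactly the weakness above: words an even distance apart can be swapped without the listener's lists flagging anything.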
What I was suggesting: don't partition the bits as in @gluk64's proposal; treat the whole thing as a single number. A 20-byte address is 160 bits, and 15 words of 11 bits each give 165 bits, so if we use a check word at the end, we get away with 15 words + 1 check word.
I propose something like ISBN-10. 2053 is the nearest prime to 2048, so we could add 5 more words to the list, to be used only in the check word. Then we would have a scalable error-detecting scheme that works not only for 20-byte addresses but for any mnemonic shorter than 2053 words. It would detect 100% of single-word mistyping errors and 100% of transpositions of any two words.
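A minimal sketch of what I mean, assuming an 11-bit word list indexed 0..2047 plus the 5 extra check-only indices (the function names are mine, purely illustrative):

```python
# ISBN-10-style check word over word indices, mod the prime 2053.
# Indices 0..2047 come from the normal list; the check word may need
# indices up to 2052, hence the 5 extra words.
P = 2053

def check_word(indices):
    """Index of the check word: makes the weighted sum over all positions 0 mod P."""
    s = sum(pos * idx for pos, idx in enumerate(indices, start=1)) % P
    check_pos = len(indices) + 1
    # Solve check * check_pos ≡ -s (mod P); P is prime, so the inverse exists.
    return (-s * pow(check_pos, -1, P)) % P

def verify(indices):
    """True iff the weighted sum, check word included, is 0 mod P."""
    return sum(pos * idx for pos, idx in enumerate(indices, start=1)) % P == 0

mnemonic = [5, 100, 2047, 0, 33]
full = mnemonic + [check_word(mnemonic)]
assert verify(full)
# A single mistyped word shifts the sum by pos * delta, nonzero mod a prime:
assert not verify([6, 100, 2047, 0, 33, full[-1]])
# A transposition shifts it by (pos1 - pos2) * (idx1 - idx2), also nonzero:
assert not verify([100, 5, 2047, 0, 33, full[-1]])
```

Because P is prime and every position weight is below P, both kinds of error change the weighted sum by a nonzero amount mod P, which is why they are always detected as long as the mnemonic stays shorter than P words.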
Slight disadvantage: implementations will have to use big-integer libraries (unless integers are arbitrary-precision by default, as in Python).
Do let me know which one you think is better and why.