I’m afraid this won’t cut it; or I don’t understand the purpose of including this.
A JSON schema is sufficient to describe a JSON object, which contains a collection of unordered key-value pairs. This schema could provide type/size annotations for the fields in `raw_data` (the consensus-layer serialised object nested within the P2P-layer serialised object), but not their positions (or potentially even their sizes, if dynamic-length fields are permitted).
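For concreteness, a minimal sketch of what such a schema can and cannot say (the field names and sizes below are hypothetical, not taken from any actual spec):

```python
# A JSON-schema-style description of two fields inside raw_data,
# written out as a Python dict for brevity.
schema = {
    "type": "object",
    "properties": {
        # Type and size annotations are expressible...
        "slot":       {"type": "string", "maxLength": 16},  # uint64, hex-encoded
        "state_root": {"type": "string", "maxLength": 64},  # 256-bit hash, hex-encoded
    },
}
# ...but nothing above says at which byte offset either field starts:
# JSON objects are unordered, so positions simply cannot be derived.
```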
If the order of fields in `raw_data` was “known by way of external agreement”, then there is little need for a JSON schema, as the same “external agreement” could be used to agree on just about anything else.
If instead the order of keys in `schema` was made to match the order of fields in `raw_data`, then reading a field from `raw_data` would require having deserialised and parsed the `schema` first, in its entirety. This is not the same as having to deserialise the entire `raw_data`, but it is a step back in that direction. Also, relying on key order is something of an “extension” to JSON schemas, not necessarily available in all JSON-handling libraries…
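A sketch of what that costs, assuming (non-standardly) a parser that preserves key order and schema entries that carry a fixed byte "size" (both are exactly the “extension” mentioned above); the helper name is hypothetical:

```python
import json

def field_offset(schema_bytes: bytes, wanted: str) -> int:
    # The whole schema must be parsed before any single field can be located...
    schema = json.loads(schema_bytes)  # Python dicts preserve insertion order
    offset = 0
    # ...and every earlier field must be walked to sum up its size.
    for name, entry in schema.items():
        if name == wanted:
            return offset
        offset += entry["size"]
    raise KeyError(wanted)

# field_offset(b'{"slot": {"size": 8}, "state_root": {"size": 32}}', "state_root") == 8
```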
This thread seems to have shifted from discussing P2P message serialisation to discussing both P2P and consensus object serialisations.
Personally, I see no problem using different serialisation standards for the two (the latter object nested in the former), if the infra-simple NIH one is reserved for mostly-homogeneous consensus-layer objects.
Cap’n Proto looks promising for P2P-layer messages, as its schema provides both position and size information for an object’s fields within the serialised representation. This should make it relatively simple to use `mmap` to improve sync times, something I saw work wonders in `libbitcoin` many years ago (gist: it makes the network ↔ memory ↔ drive pipe nearly transparent, with a few caveats).
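To illustrate the `mmap` point (this is not Cap’n Proto’s actual wire format, which adds segment tables and pointers; the offsets below are made up): knowing each field’s position and size up front lets a reader pull one field out of a memory-mapped file without decoding anything else.

```python
import mmap

# (offset, size) pairs of the kind a position-bearing schema would pin down
FIELDS = {
    "msg_type":   (0, 1),
    "block_hash": (1, 32),
}

with open("messages.bin", "rb") as f:
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    off, size = FIELDS["block_hash"]
    block_hash = bytes(buf[off:off + size])  # touches one page; no full-message decode
    buf.close()
```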
It’s probably not the best choice for consensus-layer objects, though, as Cap’n Proto’s built-in types do not include some of the widths commonly used in Ethereum (160-bit addresses, 256-bit hashes and integers), so “some assembly required”.
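The “assembly” in question is just packing and unpacking; a sketch of the two obvious routes for a 256-bit value (Cap’n Proto’s `Data` type is real; everything else here is illustrative):

```python
value = 0xDEADBEEF << 224  # some 256-bit quantity (hash, balance, ...)

# Route 1: carry it as an opaque 32-byte blob (a Data field).
raw = value.to_bytes(32, "big")
assert int.from_bytes(raw, "big") == value

# Route 2: split it into four native UInt64 words.
words = [(value >> (64 * i)) & (2**64 - 1) for i in range(4)]
assert sum(w << (64 * i) for i, w in enumerate(words)) == value
```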