Data availability proof-friendly state tree transitions

But isn’t that just making a Merkle tree on top of a Merkle tree, which is basically the same thing as having a bigger Merkle tree?

If the goal is to be able to do erasure coding checks on rows and columns then, as I say in the post, it can work, but it requires Merkle roots for both rows and columns, and a fraud proof mechanism for capturing inconsistencies between the two.

I think, instead of using a 2-d erasure code, you can just split a file into root(n) pieces and erasure code each piece separately. There is no real need to have a 2-d erasure code for columns and rows and to capture consistency between them.

Yes, but the point is that the number of Merkle roots a light client needs for rows and columns is 2\sqrt{n} for n transactions, because the length of a row (the side of the square) is \sqrt{n}. Hence O(\sqrt{n}) complexity.
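
To spell out the arithmetic (a rough sketch, assuming the block is laid out as a square of n chunks): the square has \sqrt{n} rows and \sqrt{n} columns, so a light client needs \sqrt{n} + \sqrt{n} = 2\sqrt{n} Merkle roots, i.e. O(\sqrt{n}). With the extended square produced by the erasure code the constant doubles to 4\sqrt{n}, as in the docs quoted further down, but the asymptotics are unchanged.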

I don’t follow what you mean about a Merkle tree on top of a Merkle tree. Each intermediate state tree is a single tree, constructed from the previous intermediate state tree and the next transactions in the block, in order. This creates “snapshots” of the tree that can be used as proof that the state tree transitions were applied correctly, allowing full nodes to generate fault proofs.
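
As a rough sketch of the idea (hypothetical, assuming a sparse Merkle tree API along the lines of the Go library shared later in this thread; the Tx type and helper are mine for illustration):

package main

import (
    "crypto/sha256"
    "fmt"

    "github.com/musalbas/smt"
)

// Tx is a hypothetical transaction: a single key/value write to the state.
type Tx struct {
    Key   []byte
    Value []byte
}

// applyWithSnapshots applies each transaction to the state tree in order and
// records the intermediate root after each one. Any consecutive pair of
// roots, together with Merkle witnesses for the keys touched in between,
// scopes a fault proof to that single transition.
func applyWithSnapshots(tree *smt.SparseMerkleTree, txs []Tx) [][]byte {
    roots := make([][]byte, 0, len(txs))
    for _, tx := range txs {
        tree.Update(tx.Key, tx.Value)
        roots = append(roots, tree.Root())
    }
    return roots
}

func main() {
    tree := smt.NewSparseMerkleTree(smt.NewSimpleMap(), sha256.New())
    txs := []Tx{
        {Key: []byte("alice"), Value: []byte("10")},
        {Key: []byte("bob"), Value: []byte("20")},
    }
    for i, root := range applyWithSnapshots(tree, txs) {
        fmt.Printf("intermediate root %d: %x\n", i, root)
    }
}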

I don’t think that results in the same properties, because instead of randomly sampling chunks from the whole square, you’d need to sample some chunks from each of the root(n) pieces, to have some assurance that the data behind every piece is available.

Even if you randomly sample chunks from the whole square, to generate a succinct fraud proof from a 2d erasure code, you need root(n) values from a row or column. So if you want to guarantee succinct fraud proofs, you still need to check the availability of each row and column.

Generating a fraud proof and probabilistically checking the availability of the square are two different things though.

If you don’t have rows and columns, and only root(n) pieces, all you’re really doing is splitting up the block into multiple microblocks with different erasure codes. If you want to hide any data from the entire block, instead of needing to hide 50% of the block, you only need to hide a few % of the block (i.e. half a microblock), so you need to randomly sample a lot more chunks.
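
To put rough numbers on it (a back-of-the-envelope sketch, assuming a rate-1/2 code and root(n) microblocks of root(n) chunks each): hiding just over half of one coded microblock means withholding on the order of \sqrt{n} chunks, a roughly 1/(2\sqrt{n}) fraction of the block. A client that samples s chunks uniformly from the whole block then misses the attack with probability about (1 - 1/(2\sqrt{n}))^s, so s has to grow on the order of \sqrt{n} to keep the same soundness, rather than staying constant.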

Hmm, I’m not so sure.

Generating the fraud proof is one side of the coin; checking availability is the other. The point is that you can’t reduce the fraud proof size without increasing the amount of checking.

Two cases:
(Worst case analysis, assuming an adversary will try to force the largest fraud proofs)
(1) You check all O(root(n)) Merkle roots for the 2-D erasure code, and only then does the code offer the same guarantees as having a Merkle root for each microblock. => root(n)-sized fraud proofs.
(2) You don’t check all roots, in which case a 2-D code gives the same guarantee on fraud proof size as checking just a single Merkle root for the whole block. => O(n)-sized fraud proofs, and you may as well just use a 1-D code for the block.

You don’t have to randomly sample from all O(root(n)) rows and columns in a 2D erasure code though, you randomly sample from the entire square, and you can miss rows and columns while randomly sampling. According to my rough calculations, if you want to hide a single chunk in the square, you’d have to hide about 25% of the square (draw it out to see what I mean). With microblocks, you only have to hide half a microblock (a few % of the entire block).
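
To sketch where the 25% figure comes from (rough reasoning, assuming a k x k data square extended to 2k x 2k by the 2D code): to make a single chunk unrecoverable you must withhold more than half of its row and more than half of its column, and because any missing row can be rebuilt from the columns (and vice versa), you must also withhold more than half of every row and column that could be used to rebuild those. That forces on the order of (k+1)^2 withheld chunks out of (2k)^2, which is just over a quarter of the extended square.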

For clarity, could you let me know what value you are calculating for the (i, j)-th entry of your 2-d erasure code? What you suggest seems to be contrary to the docs:

In such a model, we add some further complexity to the data structures involved. First of all, instead of having a single Merkle root, we now have 4 * sqrt(M) Merkle roots, one for each row and one for each column. Light clients would need to download all of this data as part of a light client proof…

A major benefit of this is that the size of a fraud proof is much lower: a fraud proof now consists of M values in a single row or column plus Merkle proofs for these values

See the “Going Multidimensional: The Self-Healing Cube” section in https://blog.ethereum.org/2014/08/16/secret-sharing-erasure-coding-guide-aspiring-dropbox-decentralizer/.

The fraud proofs consist of a single row or column, but that doesn’t mean you have to sample from every single one. Even if an entire single row is missing, you can recompute it from the columns.

Thanks for sharing. That is very interesting but you’d still need root(n) columns to reconstruct the row, so wouldn’t the fraud proof still have O(n) size?

Might be possible with crypto-economics, i.e. I claim that the values in m = root(n) columns are x_1, ..., x_m, which forces a row to take some value, s. Then the erasure code creator can challenge any one of these claims, say x_1 w.l.o.g. In the event of a challenge, I then have to produce the signed Merkle root for the values y_1, ..., y_m producing x_1?

This process can go on and on until termination. The question is how many rounds would be needed on average.

Good question. If an entire row is missing, I’m not sure how you’d construct a fraud proof to show that a specific row is inconsistent with its Merkle root, without having to include all of the columns in the fraud proof so the light client can reconstruct that row for themselves.

Should be possible crypto-economically, but I’d like to look into it in more depth.

Oh actually, it’s quite simple. If an entire row is missing, the fraud proof consists of M Merkle proofs, one for each chunk of the missing row as recomputed from the columns, against the column Merkle roots. The light client then computes the whole row from these column Merkle proofs and checks whether it matches the given Merkle root of the row. There’s quite an overhead in having to provide a Merkle proof for every chunk, though.
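
For concreteness, here’s a minimal sketch of that check (assuming plain binary Merkle trees over sha256; the helpers are hypothetical, not the API of any particular library):

package main

import (
    "bytes"
    "crypto/sha256"
    "fmt"
)

// merkleRoot computes the root of a plain binary Merkle tree over chunks.
// Assumes len(chunks) is a non-zero power of two; a real tree would pad.
func merkleRoot(chunks [][]byte) []byte {
    level := make([][]byte, len(chunks))
    for i, c := range chunks {
        h := sha256.Sum256(c)
        level[i] = h[:]
    }
    for len(level) > 1 {
        next := make([][]byte, len(level)/2)
        for i := 0; i < len(level); i += 2 {
            h := sha256.Sum256(append(level[i], level[i+1]...))
            next[i/2] = h[:]
        }
        level = next
    }
    return level[0]
}

// verifyProof checks a Merkle proof for a chunk at position index against root.
func verifyProof(chunk []byte, index int, proof [][]byte, root []byte) bool {
    h := sha256.Sum256(chunk)
    node := h[:]
    for _, sibling := range proof {
        if index%2 == 0 {
            h = sha256.Sum256(append(node, sibling...))
        } else {
            h = sha256.Sum256(append(sibling, node...))
        }
        node = h[:]
        index /= 2
    }
    return bytes.Equal(node, root)
}

// checkMissingRowFraudProof is the check described above: every chunk of the
// missing row is verified against its column root via a Merkle proof, the row
// root is recomputed from those chunks, and the result is compared with the
// row root the block producer committed to. A mismatch proves fraud.
func checkMissingRowFraudProof(rowRoot []byte, columnRoots [][]byte, rowIndex int,
    rowChunks [][]byte, columnProofs [][][]byte) bool {
    for j, chunk := range rowChunks {
        // Chunk (rowIndex, j) sits at position rowIndex within column j.
        if !verifyProof(chunk, rowIndex, columnProofs[j], columnRoots[j]) {
            return false // malformed fraud proof, reject it
        }
    }
    return !bytes.Equal(merkleRoot(rowChunks), rowRoot)
}

func main() {
    // Tiny sanity check of the Merkle helpers on a 4-chunk "column".
    chunks := [][]byte{[]byte("a"), []byte("b"), []byte("c"), []byte("d")}
    root := merkleRoot(chunks)
    hb := sha256.Sum256([]byte("b"))
    hc := sha256.Sum256([]byte("c"))
    hd := sha256.Sum256([]byte("d"))
    hcd := sha256.Sum256(append(hc[:], hd[:]...))
    proof := [][]byte{hb[:], hcd[:]} // siblings on the path of chunk 0
    fmt.Println(verifyProof([]byte("a"), 0, proof, root)) // true
}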

You could also make it so that when a client randomly samples an (x, y) chunk in the square, they must receive a Merkle proof from both the column and row roots, which makes inconsistencies between rows and columns more difficult to get away with.
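
In code terms, a sample response could look something like this (a hypothetical shape, reusing the verifyProof helper from the sketch above); the client rejects the chunk unless it verifies against both roots:

// SampleResponse is a hypothetical response to a random (x, y) sample:
// the chunk plus Merkle proofs against both the row root and the column root.
type SampleResponse struct {
    Chunk    []byte
    RowProof [][]byte // proof of the chunk at position y within row x
    ColProof [][]byte // proof of the chunk at position x within column y
}

// verifySample accepts the chunk only if it is consistent with both roots, so
// a producer can't commit to a row and a column that disagree at (x, y).
func verifySample(r SampleResponse, x, y int, rowRoot, colRoot []byte) bool {
    return verifyProof(r.Chunk, y, r.RowProof, rowRoot) &&
        verifyProof(r.Chunk, x, r.ColProof, colRoot)
}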

Yeah, you can do it that way, nice. (y)

Has anyone been able to make an implementation of the Sparse Merkle tree yet?

I’ve implemented a Sparse Merkle tree library in Go. It also allows for verifying the “Merkle change proofs” I discussed in the post, which are in fact just standard Merkle proofs (i.e. stateless client witnesses). This lets a light client verify the new root of the tree using only the relevant sub-tree of the state.

The API works by first creating a new empty subtree, adding proofs to it, updating leaves, and then calculating the new root. Example usage:

package main

import (
    "bytes"
    "crypto/sha256"
    "fmt"

    "github.com/musalbas/smt"
)

func main() {
    // Initialise the tree
    tree := smt.NewSparseMerkleTree(smt.NewSimpleMap(), sha256.New())

    // Set some keys
    tree.Update([]byte("key1"), []byte("value1"))
    tree.Update([]byte("key2"), []byte("value2"))
    tree.Update([]byte("key3"), []byte("value3"))
    tree.Update([]byte("key4"), []byte("value4"))

    // Generate a Merkle proof for key1=value1 and key2=value2
    proof1, _ := tree.Prove([]byte("key1"))
    proof2, _ := tree.Prove([]byte("key2"))

    // Initialise the deep subtree
    subtree := smt.NewDeepSparseMerkleSubTree(smt.NewSimpleMap(), sha256.New())

    // Add the branches to the subtree
    subtree.AddBranches(proof1, []byte("key1"), []byte("value1"), true)
    subtree.AddBranches(proof2, []byte("key2"), []byte("value2"), true)

    // Update key1 and key2 in both trees
    subtree.Update([]byte("key1"), []byte("value5"))
    tree.Update([]byte("key1"), []byte("value5"))
    subtree.Update([]byte("key2"), []byte("value6"))
    tree.Update([]byte("key2"), []byte("value6"))

    // Verify that both roots are equal
    if bytes.Equal(tree.Root(), subtree.Root()) {
        fmt.Println("Success")
    } else {
        fmt.Println("Failure")
    }
}

I’m curious, what are the advantages/disadvantages of sparse Merkle Trees as compared to Merklix Trees (described here: https://www.deadalnix.me/2016/09/24/introducing-merklix-tree-as-an-unordered-merkle-tree-on-steroid and here: https://www.deadalnix.me/2016/09/29/using-merklix-tree-to-checkpoint-an-utxo-set ). It seems like at least in concept they try to accomplish the same thing.

Sparse Merkle trees are incredibly simple.
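
To illustrate the simplicity (a minimal sketch of the core trick: an empty subtree at any height has a precomputable default hash, so the untouched parts of a 2^256-leaf tree never need to be materialised; the empty-leaf convention here is one of several in use):

package main

import (
    "crypto/sha256"
    "fmt"
)

// defaultNodes[h] is the hash of an empty subtree of height h. Because almost
// every leaf in a sparse Merkle tree is empty, these defaults are all that is
// needed to represent the untouched parts of the tree.
func defaultNodes(depth int) [][]byte {
    nodes := make([][]byte, depth+1)
    empty := sha256.Sum256(nil) // one common convention for the empty leaf
    nodes[0] = empty[:]
    for h := 1; h <= depth; h++ {
        // An empty node at height h is the hash of two empty children.
        n := sha256.Sum256(append(nodes[h-1], nodes[h-1]...))
        nodes[h] = n[:]
    }
    return nodes
}

func main() {
    nodes := defaultNodes(256)
    fmt.Printf("root of a completely empty depth-256 SMT: %x\n", nodes[256])
}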

I am a bit confused. I thought that when non-mining nodes receive a block from a miner, they re-execute the smart contracts and verify the new state tree root. If a miner creates a block with a fraudulent state root, wouldn’t it be rejected by the other nodes? How can an invalid state root end up at a light client, unless the light client is connected to the mining node itself?

The invalid state root can end up at the light client if any full node it’s connected to feeds it the invalid block, because light clients simply assume that the chain with the most work is valid. Under normal circumstances this shouldn’t happen, but in the rare case that the majority of the consensus is malicious, it can. And even if people decided to fork the chain for sincere reasons, e.g. due to disagreements, light clients wouldn’t be able to distinguish between the forks if the consensus algorithms are the same.
