FRI as erasure code fraud proof

Are you sure that’s needed? If we do Fiat-Shamir only on f^{(0)}, then throughout the commit phase the prover can predict where an interpolated polynomial will be evaluated, but they can’t predict which chunk of 2^η points will be interpolated. So it doesn’t seem obvious to me that soundness would break down. Wouldn’t each invalid “coset reduction” (from 2^η points in f^{(i)} to a point in f^{(i + 1)}) still have a certain probability of being detected?

To cheat a FRI with a 4x expansion rate, you start on f^{(0)} with, in the worst case, 25% of the points on the right polynomial and 75% on the wrong one. For the 75% that are wrong, you can cheat half of them on f^{(1)} such that all checks on f^{(2)} and higher pass. So instead of a success rate of 25% we get a success rate of 62.5%. By cheating on multiple levels you can push this even higher. This is only possible because the column is known in advance; otherwise you would not know how to cheat on f^{(1)}.
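The arithmetic above can be sketched in a few lines (a worked example, assuming the worst case of 25% good points at 4x expansion and that half of the bad points can be repaired one layer down):

```python
# Sketch of the cheating arithmetic: 4x expansion means in the worst case
# only 25% of f^{(0)} points lie on the right polynomial. Because the query
# columns are known in advance, the prover can "repair" half of the bad
# cosets on f^{(1)} so that every check on f^{(2)} and higher passes.

good_f0 = 0.25          # fraction of f^{(0)} points on the right polynomial
bad_f0 = 1 - good_f0    # 75% of points are wrong

repaired_on_f1 = bad_f0 / 2   # half the bad points fixed one layer down

success_rate = good_f0 + repaired_on_f1
print(success_rate)  # 0.625 -> 62.5% instead of 25%
```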

We could use the optimizations below to shrink both the light clients' sampling and the fraud proof. This results in a light-client sampling size of 116 kB with 30 bits of security and a 272 kB fraud proof with 128 bits of security.


  • Instead of an expansion rate of 4, we use an expansion rate of 16.
  • We use 30 bits of security for the sampling and 128 bits for the FRI proof.
  • Instead of sending the middle layer of the expanded merkle tree, we send the middle layer of the original merkle tree of the block data. This works because a subset of the actual block data can always be used for the fraud proof. For 32 MB of data this middle layer has a size of 32 kB.
  • We can stop the FRI early by sending a partial layer. For example, on a layer where 256 values have to lie on a polynomial of degree <16, we can just send the first 16 values and expand them to 512 values, so we don’t need any merkle branches.
  • Instead of using only cubic interpolation in the FRI, we can vary the degree depending on the remaining depth of the FRI. Depending on the number of FRI samples there is an optimal degree for every layer, so the optimal degrees for the client sampling will differ from those for the fraud proof.
  • For the fraud proof we can add extra security with 20 bits of proof of work: before the samples are chosen, the FRI builder has to find a nonce such that the hash of the FRI roots and the nonce starts with 20 zero bits. This hash is then used as the seed for the sampling.
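The proof-of-work seed in the last bullet can be sketched as follows (names, hash function, and encodings are illustrative assumptions, not the actual protocol format):

```python
import hashlib

def find_nonce(fri_roots: bytes, bits: int) -> int:
    """Grind a nonce until sha256(fri_roots || nonce) starts with `bits` zero bits."""
    nonce = 0
    while True:
        h = hashlib.sha256(fri_roots + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >> (256 - bits) == 0:
            return nonce
        nonce += 1

roots = b"example fri layer roots"   # placeholder for the concatenated FRI roots
nonce = find_nonce(roots, 12)        # the post uses 20 bits; 12 keeps the demo fast
seed = hashlib.sha256(roots + nonce.to_bytes(8, "big")).digest()
# `seed` would then drive the selection of the sample indexes.
```

On average this takes 2^bits hashes, which is what makes grinding for favorable samples 2^20 times more expensive.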

Light clients
The light clients have to download the 32 kB middle layer, and 15 FRI traces of the following form:
Depth 24: degree 15 (16+20=36 chunks)
Depth 20: degree 7 (8+17=25 chunks)
Depth 17: degree 7 (8+14=22 chunks)
Depth 14: degree 7 (8+11=19 chunks)
Depth 11: 128 values of the polynomial (128 chunks)
For 15 samples the light client has to download (36+25+22+19)*15+128 chunks = 84 kB; together with the middle layer this will be 116 kB.
When we assume a fraud proof can be built once 25% of the samples have been collected, the security is 30 bits.
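The per-trace chunk counts above can be derived from the coset size and the merkle branch length (a sketch; it assumes each layer at depth d checked on a coset of 2^k points contributes 2^k value chunks plus a branch of d-k chunks, which reproduces the numbers in the layer list):

```python
# Per-layer chunk accounting for one light-client FRI trace.
def layer_chunks(depth: int, coset_log2: int) -> int:
    values = 2 ** coset_log2      # points of the coset being reduced
    branch = depth - coset_log2   # merkle branch down to the coset
    return values + branch

trace = [
    layer_chunks(24, 4),  # depth 24, degree 15: 16 + 20 = 36
    layer_chunks(20, 3),  # depth 20, degree 7:   8 + 17 = 25
    layer_chunks(17, 3),  # depth 17, degree 7:   8 + 14 = 22
    layer_chunks(14, 3),  # depth 14, degree 7:   8 + 11 = 19
]
final_layer = 128  # 128 values of the last polynomial, no branches needed

total = sum(trace) * 15 + final_layer
print(trace, total)  # [36, 25, 22, 19] 1658
```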

Fraud proof
For the fraud proof we need 54 FRI traces of the following form:
Depth 24: degree 15 (16+20=36 chunks)
Depth 20: degree 15 (16+16=32 chunks)
Depth 16: degree 7 (8+13=21 chunks)
Depth 13: 512 values of the polynomial (512 chunks)
We need to publish the 32 kB of the block data that is invalid. This data can be rolled up to 4 values at depth 16. To validate these values we need one branch of 21-4=17 chunks.
We also need merkle branches showing the indexes that were sampled but not collected. On average this will be 54*3=162 branches of 21 chunks, i.e. 3402 chunks. By publishing the level of this merkle tree with 256 values instead of just the root, this can be reduced to 2352 chunks.
This will total a fraud proof of 272kB.
With 54 FRI samples at a 4x expansion, this results in 108 bits of security. Adding 20 bits of PoW results in 128 bits of security.
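Taking the post's per-sample rate at face value (each sample contributing log2(4) = 2 bits at the assumed 4x rate), the security arithmetic is:

```python
import math

# Security budget sketch: 54 FRI samples, 2 bits each, plus 20 bits of PoW.
samples = 54
bits_per_sample = math.log2(4)   # 2 bits per sample at the assumed 4x rate
pow_bits = 20

fri_bits = samples * bits_per_sample
print(fri_bits, fri_bits + pow_bits)  # 108.0 128.0
```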


The problem is that the fraud proof might be huge. Imagine one complete quarter of the data missing up to a second-layer Merkle root: you would have to provide that whole chunk of data!

That sounds like a cool technique, is there a description of this with security proof?
Isn’t there a problem that now the prover can commit to some data and then get lots of different proofs of work on it to manipulate the samples?

The problem here is that the 30 samples only guarantee a very low degree of proximity. So 25% is not the right stopping rate; we will need a much higher number of samples so that the fraud prover has enough good samples to construct a FRI.

There is indeed a problem when the block data is available and the merkle tree for the block data is correct, but there exists a withheld merkle branch in the extension part that is not on the polynomial. In that case it is not possible to construct a fraud proof (because it would be too large), yet we cannot accept the block as valid either, because the withheld merkle branch could later be used to construct a fraud proof that rolls back the block.
I see three ways to tackle this problem:

  • Just add the full middle layer, so we can build a fraud proof. (480kB extra for the light clients)
  • Only allow merkle tree branches that are valid for the data availability FRI to be used in the fraud proof.
    The FRI traces of the data availability FRI have to be included in the fraud proof. Now the withheld merkle branch cannot be used in a proof, because it is invalid for the data availability FRI. Because no fraud proof is possible, we can accept the block. The downside is that some light clients might see the block as unavailable. (115kB extra for 54 traces in the proof)
  • Do an extra data availability sampling round on the full middle layer of 512kB. I estimate this to be 35kB for the light-client, but we no longer need to download the 32kB partial middle layer. Fraud proofs for this new data availability sampling round will be a little smaller than for the full data availability round, because the merkle trees are shorter. (3kB extra for the light client)

Option 3 seems preferable.

I have not found a formal description of it; I first saw it in this post. Without PoW it is also possible to try multiple versions of the FRI to get a new sampling by changing one value on one level (not trivial, but doable). Besides, if one FRI sample has a success chance of 2^{-x}, you will need on average 2^{x+20} hashes to crack the FRI.

When 25% of the samples are available, only a small fraction (2^{-30}) of the light clients will succeed in collecting 15 samples (even with infinitely many light clients).
So there will be at least 25% of the samples available for the proof. These samples are all on the same polynomial, because they passed the data availability FRI. The builder only needs to prove it has more than 6.25% of the samples, so the 25% will be sufficient.
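The 2^{-30} figure follows directly from the 15 independent samples (a quick check under the stated 25% availability assumption):

```python
import math

# With only 25% of samples available, one light client collects all 15 of
# its samples with probability 0.25^15, i.e. about 2^-30 -- matching the
# 30-bit sampling security claimed above.
p = 0.25 ** 15
bits = -math.log2(p)
print(round(bits))  # 30
```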