@vbuterin thanks for the code; finally had a chance to look at it and think about how it compares with pollarding.

Given that the chance of removing any given hash depends on the ratio of proofs to data values (plus the level in the tree, of course), it's no surprise that the more proofs there are, the better the overall reduction. At relatively low ratios (1:256, or 64:16384 as actually tested) there is little difference, and depending on the encoding method for the data it can be less efficient than pollarding (as the empty placeholders still take up some space). But stepping up to higher ratios brings higher efficiencies: at 1:32 (512:16384), for example, it is about 30% more efficient than pollarding (which is itself 60% more efficient than simple proofs). A nice increase for many real-world cases.
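To make the shared-ancestor effect concrete, here is a toy sketch (my own illustrative code, not the tested implementation, and the counts will not exactly match the figures above) that counts how many sibling hashes a combined multiproof needs versus sending each proof individually:

```python
# Toy model: count the sibling hashes Merkle proofs need for k leaves in a
# tree of n leaves (n a power of two). In the combined multiproof, shared
# ancestors are counted once. Illustrative sketch only.
import random

def multiproof_hashes(n, leaves):
    # Walk up level by level; a sibling hash must be supplied only when the
    # sibling subtree contains none of the proven leaves.
    needed = 0
    level = set(leaves)
    while n > 1:
        for idx in level:
            if idx ^ 1 not in level:
                needed += 1
        level = {idx // 2 for idx in level}
        n //= 2
    return needed

random.seed(1)
n = 16384
for k in (64, 512):
    leaves = random.sample(range(n), k)
    solo = k * 14  # each standalone proof needs log2(16384) = 14 hashes
    multi = multiproof_hashes(n, leaves)
    print(k, solo, multi)
```

The gap between `solo` and `multi` widens as `k` grows, which is the ratio effect described above.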

An interesting difference is that both generation and verification of proofs require all of the individual proofs to be present: for the former this maximises the gain, and for the latter it guarantees all values are in place to enable verification. This makes pollarding potentially better for interactive proofs, where the pollard can be sent once and subsequent proofs can refer to it.

The ultimate solution would be a progressive, interactive version of your system in which both sides remember the hashes sent to date, so that over time proofs become more and more efficient as they require fewer new hashes to verify. Probably not a common requirement, however (and it comes with its own trade-offs).
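Purely as a thought experiment, the bookkeeping for that progressive version could look something like this (a hypothetical sketch; `ProgressiveChannel` and the proof representation are made up for illustration):

```python
# Hypothetical sketch: each side remembers which tree-node hashes have
# already crossed the wire, so later proofs can omit them. Node positions
# are (level, index) pairs; hashes are opaque strings here.
class ProgressiveChannel:
    def __init__(self):
        self.seen = set()  # positions whose hashes the peer already holds

    def compress(self, proof):
        """proof: dict {position: hash}. Returns only the unseen entries."""
        fresh = {pos: h for pos, h in proof.items() if pos not in self.seen}
        self.seen.update(fresh)
        return fresh

# The prover filters each proof before sending; the verifier keeps a
# mirror cache and merges each delta back in before verifying as usual.
prover = ProgressiveChannel()
first = prover.compress({(1, 0): "aa", (1, 1): "bb"})
second = prover.compress({(1, 1): "bb", (2, 3): "cc"})
print(len(first), len(second))  # 2 1
```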

If you don’t mind I’d like to write up the solution (thinking of calling it “spare tree proofs” unless I can come up with something catchier) as a companion piece to the one I wrote on pollarding.

I agree with @daria that Reverse MiMC sounds problematic: As an example, if you specifically want the VDF for collaborative randomness, then an adversary could use the faster reverse computation. You could easily fix this with another hash function anywhere in the process, but if you want VDF proofs to be owned to reward the evaluators then the fast reverse evaluation still breaks that, and you cannot fix that so easily.
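For anyone who wants to poke at the asymmetry being discussed, here is a toy MiMC round trip over a 64-bit prime field (the round constants, round count, and modulus are illustrative, not production parameters): cubing costs two multiplications per round, while the inverse cube root is a full modular exponentiation with a field-sized exponent, and that gap between the two directions is exactly what the VDF construction leans on.

```python
# Toy MiMC permutation over F_p with p % 3 == 2, so x -> x^3 is a bijection
# and its inverse is x -> x^((2p-1)/3). Parameters are illustrative only.
p = 2**64 - 59                     # prime, and p % 3 == 2
CUBE_ROOT_EXP = (2 * p - 1) // 3   # inverse exponent for cubing mod p
ROUNDS = 64
CONSTS = [pow(i, 7, p) for i in range(ROUNDS)]  # arbitrary round constants

def mimc_forward(x, k=0):
    # Cheap direction: two multiplications per round.
    for c in CONSTS:
        x = pow((x + k + c) % p, 3, p)
    return x

def mimc_backward(y, k=0):
    # Expensive direction: a ~64-bit modular exponentiation per round.
    for c in reversed(CONSTS):
        y = (pow(y, CUBE_ROOT_EXP, p) - k - c) % p
    return y
```

Whichever direction is declared the "slow" VDF evaluation, the other remains cheap, which is the root of the concern above.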

Following the values sampled at the base level through the recursion (as I described at the end of my previous post), instead of re-sampling at every step, could unlock some extra optimizations to shrink the proof further:

Every Merkle tree is sampled only once, and we always collect four values. So we could put these four values together in a single leaf of the tree. This would lower the depth of the tree by two levels and save some hashes.

Increasing the degree of the polynomial on the rows from 4 to 8 or even 16 does not come at a high cost and could save a level of recursion. The number of values per level will increase, but the Merkle part would shrink. Especially for levels with deep trees, the trade-offs will be favorable.

At the final level of the recursion, we currently publish all 256 values of the column so the verifier can check the values of the previous level and confirm they lie on the same low-degree (max 16) polynomial.
I don't think we need to publish these 256 values at all. The verifier could just calculate the 40 values that should be in that column based on the values from the previous level, and check whether these 40 values lie on a polynomial of degree at most 16.
Because we are sampling 40 values, we might even be able to stop a level earlier and check for a polynomial of degree at most 32.
For this to work we need to make sure that we end up with enough distinct values at the last level of the recursion, so the sampling process has to be adjusted a little.
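The "calculate and check" step the verifier would do can be sketched like this (the field size and the 40/16 counts are placeholders for whatever the real parameters are): interpolate a polynomial through 17 of the 40 values and confirm the remaining 23 agree.

```python
# Sketch of the verifier-side check: do 40 field values lie on a single
# polynomial of degree <= 16? Interpolate through 17 of them and test the
# remaining 23. Field size and counts here are toy placeholders.
import random

P = 337  # small illustrative prime field

def lagrange_eval(xs, ys, x, p):
    # Evaluate the unique degree < len(xs) interpolant through (xs, ys) at x.
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def on_low_degree(points, max_degree, p):
    # points: list of (x, y) pairs with distinct x values.
    base = points[: max_degree + 1]
    xs, ys = [x for x, _ in base], [y for _, y in base]
    return all(lagrange_eval(xs, ys, x, p) == y
               for x, y in points[max_degree + 1:])

random.seed(0)
coeffs = [random.randrange(P) for _ in range(17)]  # degree <= 16 polynomial
f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
pts = [(x, f(x)) for x in range(40)]
bad = pts[:-1] + [(pts[-1][0], (pts[-1][1] + 1) % P)]

print(on_low_degree(pts, 16, P), on_low_degree(bad, 16, P))  # True False
```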

Increasing the number of values at the base layer while lowering the number of samples could shrink the size of the proof at the same level of security. This is of course more computationally intensive, but the trade-off could be positive.
Note that this is not possible for the original version: it would make the attack described in my previous post easier, and so lower the security of the proof.

@vbuterin - quick question: in your tutorial (STARKs, Part 3: Into the Weeds), to reduce the size of FRI proofs, you merge P(x), B(x), and D(x) polynomials into a single polynomial using random linear combination of P, P^{steps}, B, B^{steps}, and D like this:

LC = k_1 * P + k_2 * P^{steps} + k_3 * B + k_4 * B^{steps} + D

Is there a reason we can't skip the P and B parts and compute the combination like this?

LC = k_1 * P^{steps} + k_2 * B^{steps} + D

Ah yes, this is a very subtle point. P and B are degree < n polynomials, and D is a degree < 2n polynomial. Hence, for a degree < 2n check to properly bound the degree of both polynomials, we need to multiply P and B by x^n. However, if we do just that, then a value like P(x) = \frac{1}{x^n} would also pass, which is not what we want, so we need to take the linear combination P'(x) = P(x) * k_1 + P(x) * x^n * k_2, which does successfully ensure that if deg(P) \ge n then deg(P') \ge 2n.
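A quick numeric way to see the point, with polynomials as coefficient lists (toy code with n = 4 and arbitrary k values): as long as deg(P) < n, the combination k_1 * P + k_2 * x^n * P stays below degree 2n, but as soon as deg(P) reaches n, the x^n term pushes it to 2n or above and the degree < 2n check fails.

```python
# Polynomials as coefficient lists, lowest degree first. Toy illustration of
# why k1*P + k2*x^n*P enforces deg(P) < n via a deg < 2n check.
def shift(poly, n):
    return [0] * n + poly            # multiply by x^n

def add(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

def scale(poly, k):
    return [k * c for c in poly]

def deg(poly):
    d = -1
    for i, c in enumerate(poly):
        if c != 0:
            d = i
    return d

n, k1, k2 = 4, 3, 5                  # toy size and coefficients

def combine(p):
    return add(scale(p, k1), scale(shift(p, n), k2))

good = [1, 2, 0, 7]                  # deg 3 < n
bad = [1, 2, 0, 7, 9]                # deg 4 == n

print(deg(combine(good)))            # 7: < 2n, passes a deg < 8 check
print(deg(combine(bad)))             # 8: >= 2n, fails the check
```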

Special-purpose hardware could be built to compute the inverse cube root quite efficiently, and that would open the door to many more attack options.
To give an example: RandomX is an algorithm designed to resist special-purpose hardware. The MiMC approach, built around the cube function, does not address the problem of special hardware at all and does nothing to counter it.

Very nice tutorial! I have been playing around with it, and I found it very easy to understand, with a minimal but useful application. One further application I am thinking of is to use MiMC and STARKs to prove storage of a large dataset that is dynamic (rather than static, as in the Filecoin setup).

A few questions:

If we want to use MiMC to encode data in \mathbb{F}_q^k, k > 1, in a similar sequential way, how could we extend the current k = 1 version?

Do you know of any progress on a GPU implementation of MiMC that is optimized for throughput?