Open problem: ideal vector commitment

I think we can do better than O(n) with RSA accumulators in the case where we do not need the full set of witnesses after every update, but only a small subset of the witnesses. I suspect a lot of applications fall into this category.
The trick is to split the values into \sqrt{n} groups and calculate a base witness per group. This is the part that all witnesses in the group have in common. For every update we only need to update these O(\sqrt{n}) base witnesses. To generate a witness for a set of values from a single group we need O(\sqrt{n}) time, because we can use the group's base witness as a starting point.
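
A minimal sketch of this grouping idea in Python, with toy parameters (a tiny RSA modulus, elements already mapped to primes, hard-coded groups) purely for illustration:

```python
# Toy sketch of the sqrt(n)-grouping trick: keep one "base witness" per group,
# touch only those O(sqrt(n)) values on an update, and derive an individual
# membership witness from its group's base witness in O(sqrt(n)) work.
N = 3233  # toy RSA modulus; a real accumulator uses a 2048+ bit modulus
g = 2     # accumulator base

# elements are assumed to already be mapped to primes, split into sqrt(n) groups
groups = [[3, 5, 7], [11, 13, 17], [19, 23, 29]]

def product(xs):
    out = 1
    for x in xs:
        out *= x
    return out

def base_witness(i):
    # base witness for group i = g raised to the product of all primes *outside* group i
    others = [p for j, grp in enumerate(groups) if j != i for p in grp]
    return pow(g, product(others), N)

base = [base_witness(i) for i in range(len(groups))]

def add_element(y, h):
    # adding a new prime y to group h: only the other groups' base witnesses change,
    # so each update costs O(sqrt(n)) exponentiations instead of O(n)
    groups[h].append(y)
    for i in range(len(base)):
        if i != h:
            base[i] = pow(base[i], y, N)

def witness(x, i):
    # witness for x in group i: start from the base witness and fold in the
    # remaining O(sqrt(n)) members of the group
    rest = [p for p in groups[i] if p != x]
    return pow(base[i], product(rest), N)

add_element(31, 1)
acc = pow(g, product([p for grp in groups for p in grp]), N)
w = witness(13, 1)
assert pow(w, 13, N) == acc  # membership check: w^x == accumulator
```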


@vbuterin I think we are missing a pretty fundamental requirement: I shouldn’t need linear O(k) data to update my witness, or someone else should be able to calculate all such updates in less than O(k^2) time and give me the witness.

Otherwise, it seems that if we scale Eth2 to, say, all of Amazon EC2, then for me to update my toy ERC20 contract, I’d need to go through all O(k) updates that happened on the entirety of EC2. That’s way too much data for my client to handle. In fact that’s too much for most end users in Eth1 today. We’ve created validators that don’t need to store blocks, but at the cost of making end users see and process every block.

As far as I know, Merkle trees are the only thing that stops this, requiring roughly log(k) data to update a witness. This seems to scale well. But they fail requirement 3).

Is 3) a necessary requirement? For account-based payments, Alice should be able to pay Bob (i.e. update his account) while only knowing the witness to her account and her balance. Merkle trees fail this requirement. But for a smart contract call from contract A to B, we need to know the state of B to execute the contract. So why do we care?

I think that’s covered by the other requirements; you can just ping one of the nodes that has the “auxiliary information” (which by the other requirements is required to be efficiently updatable) and ask it for an updated proof. But yeah, the ability to keep a smaller amount of auxiliary info for a partial state and keep it updated in less time would be nice.

I actually think this should be taken seriously: the proof size for this “KZG verkle tree” construction is not sublinear, but still way smaller than for Merkle trees. All the KZG proofs can actually be batched together into a single one; the only thing that needs to be transmitted is the intermediate KZG commitments that these need to be checked against. Proof size is therefore something like \log_r n - 1 group elements per proven position, where r is the verkle branching factor. In practice, e.g. r=2^{16} and n=2^{30}, so we realistically only need one additional group element per proven position. That’s about 20x better than Merkle trees!
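
To spell out the arithmetic behind that estimate (assuming 32-byte hashes and ~48-byte compressed group elements, e.g. BLS12-381 \mathbb{G}_1): with r=2^{16} and n=2^{30}, \log_r n - 1 = \frac{30}{16} - 1 = 0.875, i.e. about one extra group element (~48 bytes) per proven position, whereas a binary Merkle proof needs about \log_2 n = 30 hashes (~960 bytes) per position – which is roughly where the 20x factor comes from.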

In terms of updating witnesses, an update does not touch all the commitments, so for each update only k \log_r n commitments have to be updated. So using the KZG precomputation/updating techniques from this paper would allow efficient proofs and updates as per 5.(ii)

Even though they don’t fulfill all the criteria, I actually now feel that verkle trees are better than what we have considered so far. In addition, they provide another elegant construction for key-value stores in the form of “verkle tries”.

Thanks to @Pratyush for pointing out these properties on the twitter thread!

[Edit: I noticed I was using k to describe two different values. I now changed it so that k is the same as previously (number of reveals), and r is the verkle branching factor]


What do you mean by “verkle branching factor” here? The number of KZG commitments? The branching factor of the top-level Merkle tree? I’m confused as to how such a small proof at the top is possible; it seems to me like if there are 2^{30} elements in 2^{16} trees, then the tree proof for k accounts would still be of size k * (16 - log(k)).

Sorry I think I might have confused this by using k to mean two different things – corrected above.

The proof for k accounts would be

  1. The KZG commitments of all the intermediate layers to be proven – that’s k (\log_r(n) - 1) group elements
  2. One single proof that all elements and intermediate commitments were decommitted correctly – this can be aggregated into a single KZG proof, so another group element.
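
So in total that's k (\log_r(n) - 1) + 1 group elements. Taking the earlier numbers r=2^{16}, n=2^{30} and, say, k=1000 reveals (a purely illustrative value), that's 1000 \cdot 0.875 + 1 = 876 group elements, or roughly 42 KB assuming ~48-byte compressed group elements.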

I’m still confused by (1). Is there a Merkle tree at the top, or something else? If it’s a Merkle tree, where are the hashes? Or is it just a commitment to a set of values each of which are themselves commitments?

The whole thing is an r-ary Merkle tree, but instead of a hash of its children, each node is a KZG commitment of its children.

Ah, I see. So it functions like a Merkle tree, except instead of proofs having length (r-1) * log_r(n) it’s just log_r(n). Got it. Witness update costs would be on the order of r * log_r(n) though, correct?

Yes, that’s correct. But that’s already much less than O(n) for full KZG.
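
With the numbers used above (r = 2^{16}, n = 2^{30}), that's roughly r \cdot \log_r n = 2^{16} \cdot \frac{30}{16} \approx 1.2 \times 10^5 group operations per update, compared to on the order of n = 2^{30} for maintaining all witnesses of a single flat KZG commitment – about four orders of magnitude cheaper.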

If n ~= 2**28 then maybe we can encode a compact vector commitment using prime number indices and an RSA accumulator.
Suppose there’s a function p(k) returning the k-th prime for k<2^{32} (e.g. via the Meissel–Lehmer algorithm). Then we can encode a pair (key, value) into a single prime number:
pair = p( key * p(value + 2**28) ).
(This assumes the value is also in the ballpark \text{value} < 2^{28}).

We can update an accumulator as usual: A1 = A0^t where t = pair_1 * pair_2 * pair_3 * ... . The difference is that now the pairs are about 4 bytes instead of the 32 bytes used by the conventional approach of hashing into primes. That makes updating witnesses 8 times more efficient. The scheme also becomes simpler than a regular RSA VC.
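
A toy sketch of this encoding and the accumulator update in Python, using sympy's prime(k) as the p-function and a small domain bound B standing in for 2^{28} (the bounds need adjusting anyway, as discussed below):

```python
# Toy sketch: encode (key, value) pairs as primes via the k-th-prime function,
# then fold them into an RSA accumulator. Parameters are tiny for illustration;
# at ~2^28-sized domains p(k) would need something like Meissel-Lehmer.
from sympy import prime  # prime(k) returns the k-th prime (1-indexed)

B = 2**8   # toy stand-in for the 2**28 domain bound in the post

def encode_pair(key, value):
    # pair = p(key * p(value + B)); the bounds must keep the inner index
    # within the range where the k-th prime is still computable
    return prime(key * prime(value + B))

N = 3233   # toy RSA modulus (61 * 53); a real accumulator uses 2048+ bits
A0 = 2     # initial accumulator value

# accumulate two pairs: A1 = A0^t with t the product of the encoded primes
t = encode_pair(1, 5) * encode_pair(2, 7)
A1 = pow(A0, t, N)

# membership witness for the first pair: the accumulator without that prime
w = pow(A0, encode_pair(2, 7), N)
assert pow(w, encode_pair(1, 5), N) == A1
```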

Further techniques regarding batching and efficient witness updates have been mentioned here by @ben-a-fisch .

How fast are the witness update techniques? Is there a way to adjust the construction so that you can update all the data needed to compute any witness after one update in log(n) time?

I do not understand how the pair encoding works. If both key and value are 28 bits, then key * p(value + 2^{28}) will be larger than 2^{32}, so you cannot apply the p-function to it again.

You’re right, I was sloppy with the bounds; that’s an error and you have to adjust it. Either you’re able to calculate p(k) for higher k, or you have to reduce the number of different values or keys. Nevertheless, the overall concept stays the same.

In the section “Distributed accumulator updates” the paper of Boneh et al. says about updating membership witnesses: “Using the RootFactor algorithm this time can be reduced to O(n log(n)) operations or amortized O(log(n)) operations per witness.”

Why are you using the p function twice? Wouldn’t pair=p(key+value*2^{28}) make more efficient use of the available primes?

How far can we stretch the p-function such that it still is practical? Because if we want to store hashes we need at least 120 bits.

Yes, Boneh et al.’s paper looks awesome.

I was also wondering why @vbuterin, in Using polynomial commitments to replace state roots, picked polynomial commitments over vector commitments. Is there any trade-off?

BTW, I guess the “ideal” in the title means meeting the 5 properties you list, @vbuterin, rather than using an “ideal group/ring, etc.” (in the mathematical sense)?

Yes, ideal is meant as an adjective :slight_smile:

Indeed, what we’re looking for is vector commitments – it just happens that polynomial commitments, at this stage, seem to make some of the best vector commitments we know of.

The polynomial commitment primitive does have the additional advantage that it would naturally expand to data availability roots.

We recently proposed Hyperproofs to address this open problem by Vitalik.

Hyperproofs can be regarded as an algebraic generalization of Merkle trees, using a hash function based on commitments to multivariate polynomials (see [PST13e]).

Benefits:

  1. Aggregating Hyperproofs is 10 to 100 times faster than for SNARK-friendly Merkle trees built from Poseidon or Pedersen hashes
  2. Individual Hyperproofs are as large as Merkle proofs
  3. Updating a tree of Hyperproofs is competitive with updating a Poseidon-hashed Merkle tree: 2.6 millisecs / update.
  4. Hyperproofs are homomorphic.

Room for improvement / future work:

  1. Aggregated Hyperproofs are a bit large: 52 KiB
  2. Verifying an aggregated proof is rather slow: 18 seconds
  3. Verifying an individual Hyperproof is slower than a Merkle proof: 11 millisecs
  4. Trusted setup with linear-sized public parameters

Key ideas:

  • Represent a vector \textbf{a} = [a_0,\dots, a_{n-1}], where n=2^\ell, as a multilinear-extension polynomial f(x_\ell, \dots, x_1) such that for all positions i with binary expansion i_\ell, i_{\ell-1},\dots, i_1 we have f(i_\ell, \dots, i_1) = a_i.
  • Commit to this polynomial using PST (multivariate) polynomial commitments: i.e., the generalization of KZG to multivariate polynomials
    • Specifically, the vector commitment is c=g^{f(s_\ell, \dots, s_1)}, where s_\ell, \dots, s_1 are unknown trapdoors. (See paper for details on full public parameters.)
  • The proof for a_i consists of \ell PST commitments to quotient polynomials q_\ell,\dots,q_1 such that f = a_i + \sum_{k = 1,\dots,\ell} q_k \cdot (x_k - i_k) (a small worked example follows after this list)
    • q_\ell is computed by dividing f by x_\ell - i_\ell
    • q_{\ell-1} is computed by dividing r_\ell by x_{\ell-1} - i_{\ell-1}, where r_\ell is the remainder from the division above
    • …and so on
    • q_1 is computed by dividing r_2 by x_1-i_1, which yields remainder r_1 = f(i_\ell, \dots, i_1) = a_i.
  • Verifying a proof involves checking f = a_i + \sum_{k = 1,\dots,\ell} q_k \cdot (x_k - i_k) holds, but doing so against the PST commitment to f and to the quotients.
    • This is done via \ell+1 pairings / bilinear maps.
    • Let c be the vector commitment to \textbf{a}
    • Let \pi_i = (w_1,\dots,w_\ell) be the proof for a_i, where w_k's are commitments to the relevant quotient polynomials q_k.
    • Then, a proof is valid iff the following pairing equation holds:
      • e(c/g^{a_i}, g) \stackrel{?}{=} \prod_{k=1,\dots,\ell} e(w_k, g^{s_k - i_k})
  • Unfortunately, computing all n PST proofs as above is too slow, taking O(n^2) time. Instead, we observe that, if we compute proofs in the canonical order above, then proofs often “intersect” and actually determine a tree, which we call a multilinear tree (MLT). Computing this tree only takes O(n\log{n}) time.
  • To aggregate many proofs \{\pi_i\}_{i\in I}, where \pi_i = (w_{i,\ell},\dots, w_{i,1}), we prove knowledge of w_{i,k}'s that satisfy the pairing equation for each i\in I. For this, we use the recent generalization of Bulletproofs to pairing equations by Bunz et al. [BMM+19]
  • Our MLT is homomorphic: two trees for two vectors \textbf{a} and \textbf{b} can be homomorphically combined into a tree for their sum \textbf{a}+\textbf{b}.
    • This leads to very straightforward proof updates in the MLT, after the vector changes
    • It also has other applications, such as providing unstealability of proofs, a property which can be used to incentivize proof computation.
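
A small worked example of the quotient decomposition referenced above, for \ell = 2 and \textbf{a} = [a_0, a_1, a_2, a_3]: the multilinear extension is f(x_2, x_1) = a_0(1-x_2)(1-x_1) + a_1(1-x_2)x_1 + a_2 x_2(1-x_1) + a_3 x_2 x_1. To open position i = 2 (so i_2 = 1, i_1 = 0), dividing f by x_2 - 1 gives q_2 = (a_2 - a_0)(1-x_1) + (a_3 - a_1)x_1 with remainder r_2 = f(1, x_1) = a_2(1-x_1) + a_3 x_1; dividing r_2 by x_1 gives q_1 = a_3 - a_2 with remainder r_1 = a_2 = a_i. One can check directly that f = a_2 + q_2(x_2 - 1) + q_1 \cdot x_1, which is exactly the relation the pairing equation verifies against the commitments.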

