Accumulators, scalability of UTXO blockchains, and data availability


#21

I didn’t get an email notification for your post. It probably wasn’t successfully posted (as opposed to getting deleted).

I’m a little bit confused by the trustless setup you are proposing. Would you mind editing your post to add details and clean up the exposition? (I think there are several typos that make things hard for me to follow.) Below are specific questions.

Is this actually a trustless setup, or are you proposing an N-party trusted setup?

What do you mean by “first primes”? Also, do you really mean M' = 2^{\frac{n}{2-1}} = 2^n?

Would you mind pointing to a specific construction/paper? I’m new to All-or-Nothing Disclosure of Secrets protocols.

Is this the same f as defined above, i.e. f(i_a, i_b) = (p_a + p_b) \times (q_a + q_b)?

What is \beta?

What is the B&F division test?


#22

Sorry, I didn’t mean to accuse anyone; most probably a mistake then.

No, it was a mistake:
let M be the smallest product of the first primes greater than 2^n, and as an efficiency improvement we can assume M' \approx 2^{n/2-1}.
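
For illustration, a small sketch of picking M that way (the naive is_prime helper is just for the example):

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def smallest_primorial_above(n):
    # smallest product of the first primes that exceeds 2^n
    M, p = 1, 2
    while M <= 2 ** n:
        if is_prime(p):
            M *= p
        p += 1
    return M

# e.g. smallest_primorial_above(8) == 2 * 3 * 5 * 7 * 11 == 2310 > 256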

In general: https://link.springer.com/content/pdf/10.1007%2F3-540-47721-7_17.pdf
A specific construction: https://link.springer.com/content/pdf/10.1007%2F3-540-49649-1_28.pdf

No, this one is for the ANDOS selling protocol.

See the paper above; it’s the “merchandise” you obtain without letting the other party know what you chose.

The paper above has the test in section 3.
Note: the protocol above involves a third party overall, but not for the primary test.


#23

For all these accumulators, every UTXO needs to update its witness or proof when a new block is added. This means that every coin holder needs to track the blockchain at all times, which is not practical at all.
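
For concreteness, a small sketch of the per-element maintenance this implies (hypothetical names; A is the accumulator, N the modulus, w a coin holder’s witness):

def update_witness(w, new_elem, N):
    # a witness stays valid only if it absorbs every element added after it
    return pow(w, new_elem, N)

# on every new block, for each element it accumulates:
#     w = update_witness(w, new_elem, N)
#     A = pow(A, new_elem, N)
# otherwise pow(w, my_elem, N) == A no longer holds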

Is there any good solution to this issue? One option I can see is that miners could provide a witness service for normal users, but that’s not an ideal solution.


#24

Is there an implementation available for the RSA accumulators described here?


#25

An interesting possibility for RSA accumulators, or even bilinear accumulators, is to make the proofs for STARKs much smaller by reducing the size and complexity of the proof for each item from \Theta(\log n) to \Theta(1). It doesn’t affect the number of proofs required, only the size of each proof.

In this case we don’t necessarily care about the trusted-third-party problem, or about whether the accumulator is updated and referenced by multiple people. All that is necessary is to prove that specific members exist within a set in a way that is computationally binding: modifying the set to meet a malicious goal by forging one proof is as difficult as forging all proofs or finding a collision in the underlying hash algorithm.

e.g. with a Merkle tree, the entropy that selects which items from the computation trace need to be verified is taken from the tree root, so if any one item is changed the set of randomly chosen items changes too. The proof of each item must be consistent with every other item, that is, their paths share common hashes, and modifying any one item requires re-computation of the whole tree, which in turn changes the items that are selected (making a successful forgery computationally intractable).
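
For example, a sketch of that selection step (the exact hashing scheme here is an assumption, not any particular STARK implementation):

import hashlib

def sample_indices(root, set_size, n_samples):
    # derive pseudo-random sample indices from the Merkle root, so changing
    # any leaf changes the root and hence which items get checked
    indices, seed = [], root
    for _ in range(n_samples):
        seed = hashlib.sha256(seed).digest()
        indices.append(int.from_bytes(seed, 'big') % set_size)
    return indices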

Trusted setup doesn’t matter here: one person is providing a set of their own data to be verified by another person. Even if there were a Merkle tree that required its creator to be trusted to set up the parameters, that wouldn’t affect the computationally binding property of pseudo-random item selection.

So, the accumulator and witnesses for a set of items S are:

N = P * Q   # RSA modulus; P and Q are secret primes
A = g       # accumulator, starting from a public base g (starting from 1 would leave every power equal to 1)
W = []      # one witness per item, in insertion order
for s in S:
    # every existing witness absorbs the new item; the new item's
    # witness is the accumulator value before the item was added
    W = [pow(w, s, N) for w in W] + [A]
    A = pow(A, s, N)

Note that for every subsequent item added, all previous witnesses need updating, so that for any given (Witness, Value) pair you can do assert pow(Witness, Value, N) == A. With the naive loop above, creating the accumulator costs roughly a quadratic number of modular exponentiations in n (each new item re-exponentiates all previous witnesses), while only 2n items are stored. The computational overhead is greater than that of a Merkle tree, but the memory overhead is about the same (or possibly greater, given the larger size of a witness compared to the size of each hash).
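
For example, verifying every (witness, value) pair against the finished accumulator would look like:

# each stored witness must verify against the final accumulator value
for value, witness in zip(S, W):
    assert pow(witness, value, N) == A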

Then to select the items:

X = []   # deterministically chosen samples of the computation trace polynomial
B = A
for _ in range(n_samples):
    B = HASH(B)        # HASH: any collision-resistant hash returning an integer
    i = B % len(S)     # index into the set (not modulo N)
    X.append((S[i], W[i]))

return (A, N, X)   # accumulator, modulus and pseudo-random samples

Is there a mechanism, when creating an accumulator from a fixed-size batch of items, to reduce the computational cost of updating all previous witnesses at each step without increasing the memory required? (e.g. you can use cheaper multiplications of exponents and then a single pow at the end, but the aggregate cost is about the same.)
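
For the batch case hinted at above, a minimal sketch of the multiply-then-single-pow variant (same S, g, N as before; an illustration, not a claim of lower aggregate cost):

from math import prod

def batch_witnesses(S, g, N):
    # build each witness with one pow over the product of all the other items
    total = prod(S)
    A = pow(g, total, N)                      # final accumulator
    W = [pow(g, total // s, N) for s in S]    # witness for s omits s from its own exponent
    return A, W

The exponents grow to roughly the combined bit-length of the whole batch, so, as noted, the aggregate cost stays about the same; only the shape of the work changes.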

Ah… Hold on…

What this doesn’t do is guarantee that any item is the randomly selected item within the set: the accumulator has no ordering, and without guaranteed ordering there is no security, since you can provide any items you want as proof, as long as the set contains them.

To add ordering you’d require another level of structure, and I’m not sure how to achieve this.

For accumulators to be more usable they would need strong ordering, with the ordering verifiable using only the witnesses. This would give them qualities similar to a Merkle tree, but is that possible with less data than Merkle tree proofs, e.g. (\log n) \times m \times 32 bytes where m is the number of paths, versus (\log_2 N) \times m bits where N is the modulus, assuming only \log_2 N bits are required for each witness?
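
As a rough back-of-the-envelope comparison under assumed parameters (2^{20} items, 80 queried paths, a 2048-bit modulus):

n, m = 2 ** 20, 80          # set size, number of queried items
merkle_bytes = 20 * m * 32  # log2(n) sibling hashes of 32 bytes per path = 51,200 bytes
rsa_bytes = m * 2048 // 8   # one 2048-bit witness per item = 20,480 bytes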


#26

The post RSA Accumulators for Plasma Cash history reduction describes how we can build both proofs of membership and proofs of non-membership. But the stored elements have to be primes or powers of a prime to avoid interaction.
I think that with this we could store an arbitrary number of ordered values (also non-primes) by accumulating p_1^{x_1}, \dots, p_n^{x_n}, with p_1, \dots, p_n being the first n primes and x_1, \dots, x_n being the values to store.
By proving membership of p_i^{x_i} and non-membership of p_i^{x_i+1} we show that x_i is at the i-th position in the set.
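
A small sketch of that encoding (g and N as in the earlier code; the naive prime generator is only for illustration, and the non-membership proof of p_i^{x_i+1} follows the linked post and isn’t shown):

def first_primes(n):
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes

def accumulate_ordered(values, g, N):
    # encode the i-th value x_i as p_i ** x_i and accumulate it
    A = g
    for p, x in zip(first_primes(len(values)), values):
        A = pow(A, p ** x, N)   # includes p_i^{x_i}; p_i^{x_i + 1} stays outside
    return A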