An interesting possibility for RSA accumulators, or even bilinear accumulators, is to make the proofs for STARKs much smaller by reducing the size and complexity of the proof for each item from \Theta(\log n) to \Theta(1). It doesn’t affect the number of proofs required, only the size of each proof.
In this case we don’t necessarily care about the trusted-third-party problem that arises when the accumulator is updated and referenced by multiple people. All that is necessary is to prove that specific members exist within a set, in a way that is computationally binding against modifying the set to meet a malicious goal: forging one proof should be as difficult as forging all proofs, or as finding a collision in the underlying hash algorithm.
e.g. with a Merkle tree, the entropy that selects which items from the computation trace need to be verified is taken from the tree root; if any one item is changed, the set of pseudo-randomly chosen items changes. The proof of each item must be consistent with every other item - their paths share common hashes - so the modification of any one item requires re-computation of the whole tree, which in turn changes which items are selected (making targeted modification computationally intractable).
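The root-seeded selection described above can be sketched as follows (a hypothetical helper, assuming SHA-256 with a simple counter; any collision-resistant hash would do):

```python
import hashlib

def sample_indices(root: bytes, trace_len: int, n_samples: int) -> list:
    """Derive pseudo-random trace indices from a Merkle root.

    Because the indices are a function of the root, changing any
    committed item changes the root and hence every index chosen here.
    """
    indices = []
    counter = 0
    while len(indices) < n_samples:
        digest = hashlib.sha256(root + counter.to_bytes(8, "big")).digest()
        indices.append(int.from_bytes(digest, "big") % trace_len)
        counter += 1
    return indices
```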
Trusted setup doesn’t matter here: one person is providing a set of their own data to be verified by another person. Even if a Merkle tree required its creator to be trusted to set up the parameters, that wouldn’t affect the computational binding property of pseudo-random item selection.
So, the accumulator and witnesses, for a set of items S:
N = P * Q  # RSA modulus, P and Q secret primes
A = G      # public base, e.g. a quadratic residue mod N - starting from 1 would collapse every power to 1
W = []     # W[i] is the witness for S[i]
for s in S:
    W = [pow(w, s, N) for w in W] + [A]
    A = pow(A, s, N)
Note that for every subsequent item added, all previous witnesses need updating, so that for any given (Witness, Value) pair you can do
assert pow(Witness, Value, N) == A. This makes the cost of creating the accumulator about \Theta(n^2) modular exponentiations (the i-th insertion updates the i-1 existing witnesses), where only 2n values are stored. The computational overhead is greater than that of a Merkle tree, and the memory overhead per element is also greater, since a witness is \log_2 N bits (256 bytes for a 2048-bit modulus) versus 32 bytes for a hash.
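The construction and the verification check can be exercised end to end with toy parameters (tiny primes for illustration only; a real deployment needs a 2048-bit-plus modulus and items mapped to primes):

```python
# Hypothetical toy parameters - NOT cryptographically meaningful.
P, Q = 61, 53
N = P * Q          # 3233
G = 4              # public base, a quadratic residue mod N
S = [3, 5, 7, 11]  # items, assumed already mapped to primes

A, W = G, []
for s in S:
    # raise every existing witness to the new item, then append the
    # pre-update accumulator as the new item's witness
    W = [pow(w, s, N) for w in W] + [A]
    A = pow(A, s, N)

# every (Witness, Value) pair closes back to the accumulator
for s, w in zip(S, W):
    assert pow(w, s, N) == A
```

Each witness is G raised to the product of all items except its own, so raising it to its own item recovers A.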
Then to select the items:
X = []  # deterministically chosen samples of the computation trace polynomial
B = A
for _ in range(n_samples):
    B = HASH(B)
    i = B % len(S)  # index into the set - reducing mod the modulus N would overrun S
    X.append((S[i], W[i]))
return (A, N, X)  # accumulator, modulus and pseudo-random samples
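The selection loop can be made concrete with, say, SHA-256 standing in for HASH (an assumed choice; any collision-resistant hash works):

```python
import hashlib

def select_samples(A: int, N: int, S: list, W: list, n_samples: int):
    """Fiat-Shamir-style sampling: the accumulator value seeds the
    choice of which (item, witness) pairs are revealed."""
    X = []
    b = A
    for _ in range(n_samples):
        # hash the running value to get the next pseudo-random index
        b = int.from_bytes(hashlib.sha256(b.to_bytes(32, "big")).digest(), "big")
        i = b % len(S)
        X.append((S[i], W[i]))
    return (A, N, X)
```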
Is there a mechanism whereby, if you’re creating an accumulator from a fixed-size batch of items, the computational cost of updating all previous witnesses at each step can be reduced without increasing the memory required? (e.g. you can use a cheaper multiply operation per step, then a single pow at the end, but the aggregate cost is about the same)
Ah… Hold on…
What this doesn’t do is guarantee that any item is the randomly selected item within the set: the accumulator has no ordering, and without guaranteed ordering there is no security, since a malicious prover can present any members it likes as the “selected” proofs, as long as the set contains them.
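A minimal sketch of the problem, reusing the toy parameters from above: a witness proves membership only, so a prover challenged on one index can answer with a different member’s pair and the check still passes.

```python
# Assumed toy parameters, illustration only.
P, Q, G = 61, 53, 4
N = P * Q
S = [3, 5, 7, 11]

A, W = G, []
for s in S:
    W = [pow(w, s, N) for w in W] + [A]
    A = pow(A, s, N)

challenged_index = 0         # verifier asks for the item at index 0
value, witness = S[2], W[2]  # prover answers with a different member
# passes: the check proves membership, not position
assert pow(witness, value, N) == A
```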
To add ordering you’d require another level of structure, and this… I’m not sure how to achieve.
For accumulators to be more usable they would need strong ordering, with verifiability of ordering using only the witnesses. This would give them similar qualities to a Merkle tree, but is that possible with less data than Merkle tree proofs, e.g. (\log_2 n) \cdot m \cdot 32 bytes, where m is the number of paths, versus (\log_2 N / 8) \cdot m bytes, where N is the modulus, assuming only \log_2 N bits are required for each witness?
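As a rough worked comparison under assumed parameters (2^20 committed items, 40 opened paths, 32-byte hashes, a 2048-bit modulus):

```python
import math

n = 2 ** 20            # assumed number of committed items
m = 40                 # assumed number of opened paths / witnesses
hash_bytes = 32        # SHA-256 sized digests
modulus_bits = 2048    # assumed RSA modulus size

merkle_proof_bytes = int(math.log2(n)) * m * hash_bytes  # 20 * 40 * 32
accumulator_proof_bytes = (modulus_bits // 8) * m        # 256 * 40

print(merkle_proof_bytes)       # 25600
print(accumulator_proof_bytes)  # 10240
```

So under these assumptions the accumulator openings would be a few times smaller, but only if the ordering problem above can be solved without extra per-item data.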