On-chain scaling to potentially ~500 tx/sec through mass tx validation

Is it possible to also structure the layer-2 execution engine’s transaction logging on the base chain using a zk-SNARK? This also seems like a promising direction.

Have you considered using aggregated signatures like this http://crypto.stanford.edu/~dabo/papers/aggreg.pdf

I have a feeling you could aggregate signatures first, and then do a SNARK on the aggregated signature (in your scheme you are doing SNARK on all ECDSA signatures of all transactions, correct me if I am wrong). The SNARK generation time could be faster for the aggregated signature verification vs verification of each ECDSA.
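To make the cost structure concrete, here is a toy model of aggregation over N distinct messages. Scalar arithmetic mod a prime stands in for the BLS pairing group, so this is not secure crypto, just an illustration: aggregation is one addition per signature, but verification still does one group operation per distinct message (mirroring the N+1 pairings of the real scheme).

```python
# Toy model of BLS-style aggregation over a prime field. Scalar
# multiplication stands in for pairings; NOT cryptographically secure.
import hashlib

P = 2**127 - 1            # toy field modulus (assumption)
G = 7                     # toy "generator" (assumption)

def h(msg: bytes) -> int:
    # hash a message to a field scalar
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def keygen(seed: int):
    sk = seed % P
    return sk, (sk * G) % P          # pk = sk * G

def sign(sk: int, msg: bytes) -> int:
    return (sk * h(msg)) % P         # sig = sk * H(m)

def aggregate(sigs):
    return sum(sigs) % P             # one addition per signature

def verify_aggregate(pks, msgs, agg_sig) -> bool:
    # Mirrors the pairing check e(sig, G) == prod e(H(m_i), pk_i):
    # still one "pairing-like" term per distinct message.
    lhs = (agg_sig * G) % P
    rhs = sum(h(m) * pk for pk, m in zip(pks, msgs)) % P
    return lhs == rhs

keys = [keygen(s) for s in (11, 22, 33)]
msgs = [b"tx-a", b"tx-b", b"tx-c"]
agg = aggregate(sign(sk, m) for (sk, _), m in zip(keys, msgs))
assert verify_aggregate([pk for _, pk in keys], msgs, agg)
```

The aggregate is a single field element, but the verifier's work stays linear in the number of distinct messages.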

Another potential improvement is to combine relayers with miners. If miners are relayers, then it seems that you do not need to reserve slots, because mining already provides a way to resolve concurrent access.
In other words, only the miner of a particular block can be a relayer for the transactions batched in this block.

Yet another potential improvement is to count zksnark proof computation performed by the miner towards the PoW of the block. In other words the PoW of a block would be the regular PoW + ZKSnark PoW. In this way you essentially get the ZKSnark computation for free, since the miner would correspondingly decrease the regular PoW part.

ZKSnarks can be parallelized as described here

Since the miner anyway gets paid several thousand USD for each block, the miner could spend money on a highly parallel computation, so the proof would be generated well within the block time.

If I’m not mistaken, this requires L1 support. I’m wondering if there are other L2 solutions for the collision problem. Various L2 execution market protocols I’m aware of face the same challenge (Ethereum Alarm Clock, meta-txes, batched NFTs, event oracle, etc.).

In particular, I wonder if the recent research into RANDAO and VDFs could offer insight here, e.g. for randomly selecting a batch tx proposer for a short period.


I have. However, an N-message N-signer aggregate signature takes N+1 pairings to verify, so it’s just as hard (taking constant factors into account, even harder) to verify as this, and it also cannot prove Merkle tree updates.

The other low-tech solution here is to use an aggregate signature and just publish a new Merkle root with a security deposit, allow any relay transaction to specify any previous relay transaction as a parent, and have the contract maintain a kind of “internal blockchain” of relay transactions. The fork choice rule is “recursive first published”, i.e. if any parent has multiple children, the child that was published earlier wins, but the fork choice would only be “finalized” after two weeks; in the meantime, anyone can challenge any computation that was made and claim a reward, in which case the entire subtree of that computation would be thrown out. I think that’s worth looking into as well, though it does have a liveness assumption.
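A minimal sketch of that internal chain with the recursive-first-published rule and subtree challenges. All names and data structures here are illustrative, not a contract design:

```python
# Sketch of the "internal blockchain" of relay transactions: the head is
# found by recursively following the earliest-published valid child, and a
# successful challenge throws out the whole subtree.

class Relay:
    def __init__(self, rid, parent, published_at):
        self.rid, self.parent, self.published_at = rid, parent, published_at
        self.valid = True

def head(relays):
    """Follow the earliest-published valid child from the root."""
    current = None  # None = the contract's genesis state
    while True:
        children = [r for r in relays
                    if r.parent == (current.rid if current else None) and r.valid]
        if not children:
            return current
        current = min(children, key=lambda r: r.published_at)

def challenge(relays, bad_rid):
    """A successful fraud proof invalidates the relay and all descendants."""
    doomed, changed = {bad_rid}, True
    while changed:
        changed = False
        for r in relays:
            if r.valid and r.parent in doomed and r.rid not in doomed:
                doomed.add(r.rid)
                changed = True
    for r in relays:
        if r.rid in doomed:
            r.valid = False

relays = [Relay("a", None, 1), Relay("b", "a", 2), Relay("c", "a", 3)]
assert head(relays).rid == "b"     # earlier-published child wins
challenge(relays, "b")
assert head(relays).rid == "c"     # subtree of "b" thrown out
```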

In this case miners probably would not mind an L1 update, since it would potentially bring them additional money. Also, if miners are doing wasteful computations anyway and zk-SNARKs require lots of computation, it seems good to replace waste with zk-SNARKs. I think one should not be totally scared of changing the main chain …

Interesting thread! I believe this direction has lots of merit for scaling blockchains. It could be made more efficient (2 orders of magnitude) if you use STARKs instead of SNARKs:

  1. The gas-cost-per-public-input of STARK is ~ 100X smaller than that of SNARK, meaning one can go with Vitalik’s initial, simpler, solution, rather than using a more complex NP statement in which the inputs are treated as part of the private input of the SNARK. (explanation: If there are N inputs, each input costs roughly 2 log N field operations for the STARK verifier, translating to less than 400 gas even when N=1,000,000. A SNARK verifier costs an elliptic curve operation for each input, and that’s ~ 40,000 gas).

  2. As pointed out, proving time is the main bottleneck. Our (StarkWare’s) work-in-progress STARK prover is at least an order of magnitude faster than the fastest SNARK prover and we expect further improvements down the line. tl;dr 500 tx/sec should be doable with STARKs.
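The gas arithmetic in point 1 can be checked quickly. The gas-per-field-op figure below is an assumption chosen to match the “< 400 gas” claim; the ~40,000 gas per elliptic-curve operation is taken from the post:

```python
# Back-of-the-envelope check of the per-public-input gas claim.
import math

GAS_PER_FIELD_OP = 10     # assumed cost of one field op in the verifier
GAS_PER_EC_OP = 40_000    # elliptic-curve op per SNARK public input (from post)

def stark_gas_per_input(n_inputs: int) -> float:
    # ~2 * log2(N) field operations per public input
    return 2 * math.log2(n_inputs) * GAS_PER_FIELD_OP

print(stark_gas_per_input(1_000_000))                   # ~398.6 gas
print(GAS_PER_EC_OP / stark_gas_per_input(1_000_000))   # ~100x cheaper
```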

We’ll provide more details at the zk-STARK workshop in DevCon4, including exciting updates on proof size.


Here is an idea for how to fix it:

a) users submit transactions to the relayer
b) the relayer concatenates the transactions and sends back the concatenated transaction CT to all users
c) every user signs the CT and sends the signature back
d) the relayer concatenates the signatures into a Boldyreva BLS multisignature - note that Boldyreva multisignature verification takes constant time because the multisignature is on the same message
e) then you do a ZK-SNARK on the Boldyreva multisignature verification, which will take constant time (not linear in n). Hopefully this ZK-SNARK will be much faster since it will not be linear in the number of users.

Note that if one of the users withholds the signature on the concatenation, it is not a problem: the relayer will wait for a timeout T and then create a subgroup multisignature for the users that responded.
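Steps c)-e) can be sketched in the same toy scalar model (scalar arithmetic standing in for pairings, so not secure crypto): because everyone signs the same CT, the verifier aggregates the public keys once and does a single constant-time check, regardless of the number of signers.

```python
# Toy scalar model of a Boldyreva-style multisignature on a single
# concatenated transaction CT. NOT secure; illustrates the cost structure.
import hashlib

P, G = 2**127 - 1, 7          # toy parameters (assumption)
h = lambda m: int.from_bytes(hashlib.sha256(m).digest(), "big") % P

sks = [5, 9, 13]
pks = [(sk * G) % P for sk in sks]
CT = b"tx1|tx2|tx3"           # concatenated transactions

# c) each responding user signs CT; d) the relayer sums the shares.
responders = [0, 2]           # user 1 timed out -> subgroup multisignature
multisig = sum(sks[i] * h(CT) for i in responders) % P

# e) constant-time verification: aggregate pk once, then one check,
# independent of the number of signers.
agg_pk = sum(pks[i] for i in responders) % P
assert (multisig * G) % P == (h(CT) * agg_pk) % P
```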

Nice :-))) Can this be done in parallel? I think SNARKs can be done in parallel.

Is the zk-STARK workshop going to be inside or outside of devcon4 walls? We have some crypto developers from our company coming to Prague but they were not able to buy a ticket …

I don’t think it requires an internal blockchain with a fork choice rule, because the contract can prevent a split and incorrect blocks can be rolled back.

I was thinking about the following:
Relayers register with a deposit. A relay block consists of the transaction data, the hash of the transaction data, previous state Merkle root and the new state Merkle root. The contract checks the hash of the transaction data and if the previous Merkle root matches.

When the block is incorrect, anybody can submit a proof and rollback the incorrect block and all blocks following it, collecting the deposits of the relayers.

For withdrawals we will need a waiting period (a week?). If you are in a hurry, you can offer a small fee, so that somebody else can volunteer to do the waiting for you. Withdrawals in the queue will be rolled back as well when they are based on an incorrect block, so the volunteers need to make sure the chain is correct.

For everybody’s money to be safe, one honest user needs to verify the chain once a week.
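The submit/rollback logic above can be sketched like this. All names, the deposit amount, and the payout rule are illustrative; a real contract would verify an actual fraud proof rather than trust the caller:

```python
# Sketch of the deposit-and-rollback relay contract described above.
import hashlib

class RelayContract:
    DEPOSIT = 10                           # illustrative deposit size

    def __init__(self, genesis_root):
        self.genesis = genesis_root
        self.chain = []                    # list of (relayer, new_root)
        self.root = genesis_root
        self.balances = {}

    def submit(self, relayer, tx_data, tx_hash, prev_root, new_root):
        # the contract checks the hash of the transaction data and that
        # the previous Merkle root matches the current one
        assert hashlib.sha256(tx_data).hexdigest() == tx_hash
        assert prev_root == self.root
        self.balances[relayer] = self.balances.get(relayer, 0) - self.DEPOSIT
        self.chain.append((relayer, new_root))
        self.root = new_root

    def rollback(self, height, challenger):
        # fraud proof accepted: drop the bad block and every block after it;
        # the challenger collects the slashed deposits
        slashed = self.chain[height:]
        self.chain = self.chain[:height]
        self.root = self.chain[-1][1] if self.chain else self.genesis
        self.balances[challenger] = (
            self.balances.get(challenger, 0) + len(slashed) * self.DEPOSIT)

c = RelayContract("root0")
data = b"batch-1"
c.submit("alice", data, hashlib.sha256(data).hexdigest(), "root0", "root1")
c.rollback(0, "bob")       # block 0 proven invalid, chain rolled back
assert c.root == "root0" and c.balances["bob"] == 10
```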

What about proof sizes as bottleneck? In Vitalik’s proposal STARKs would have to be included in root chain transactions and they are orders of magnitude larger than SNARKs. Does StarkWare address this?

Which is a DoS vector: N malicious Sybil accounts can delay block processing by N*T, and you cannot prove that they acted maliciously. How to solve this?

I have been thinking about this some more and I think that if we introduce just a little crypto economics, we can get around the proving time problem and the relayers needing super computers.

A relayer can post a block containing the transaction data, old/new state Merkle roots and an aggregated signature of the transaction signatures. No ZK-SNARK proof is needed at that time, but the relayers will have to put up a deposit.

Now the relayer can calculate the ZK-SNARK proof on his regular computer and post it when ready to collect his deposit and transaction fees. If the relayer has not done this after a certain amount of time that should be enough to build the proof, anybody is allowed to submit a proof and collect the transaction fees together with a small bonus (to cover the gas fees of the proof). The deposit minus the bonus will be returned to the relayer. Note that not just the block, but also all blocks before it need to be proved before the deposit and the transaction fees are released.

During the calculation of the proof, everybody can verify the transactions, so the chain can continue. If a block is invalid, anybody can post a proof that this is indeed the case, rolling back the block and collecting the deposit of the relayer.

Withdrawals will have to wait in a queue until all blocks they depend on are proved.
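The payout rules of this delayed-proof scheme can be sketched as follows. The deadline, deposit, fee, and bonus values are all illustrative assumptions:

```python
# Sketch of the delayed-proof incentives: a block is accepted with a deposit
# immediately; the proof may arrive later, from the relayer or an outsider.

PROOF_DEADLINE = 100   # relayer's exclusive proving window (assumption)
BONUS = 1              # paid to an outside prover, taken from the deposit

class Block:
    def __init__(self, relayer, posted_at, deposit=10, fees=5):
        self.relayer, self.posted_at = relayer, posted_at
        self.deposit, self.fees = deposit, fees
        self.proved = False

def submit_proof(chain, height, prover, now):
    blk = chain[height]
    # a block only pays out once all blocks before it are proved as well
    assert all(b.proved for b in chain[:height])
    if prover == blk.relayer:
        payout = {blk.relayer: blk.deposit + blk.fees}
    else:
        # outsiders may step in only after the relayer's window has passed
        assert now > blk.posted_at + PROOF_DEADLINE
        payout = {prover: blk.fees + BONUS, blk.relayer: blk.deposit - BONUS}
    blk.proved = True
    return payout

chain = [Block("alice", posted_at=0)]
assert submit_proof(chain, 0, "alice", now=50) == {"alice": 15}
```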

This is not really true - you broadcast the concatenated message to everyone and wait for time T only once. People will reply with signature shares and you form a group signature of those who replied.

Don’t you need to resend them a new concatenated transaction for the newly formed group? As I understand it, since everybody signs the same message and you only verify it once, signatures must come from all owners for the concatenated transaction to be valid.

Good direction of thought :)

Not actually true - if some of the guys don’t sign, the concatenated message stays the same but is signed by fewer people. The transactions of the people that did not sign stay in the concatenation but get ignored by the smart contract.

This is an improvement (though at the cost of being interactive) but it doesn’t solve the other issue, which is how to prove correctness of Merkle tree updates. I think that requires a ZK-SNARK or similar unfortunately.

I don’t think it requires an internal blockchain with a fork choice rule, because the contract can prevent a split and incorrect blocks can be rolled back.

Ah, I think you’re right. The point of the fork choice rule was that if a given submission is invalid and others can see that it’s invalid, they can build an independent chain without waiting for the original chain to be challenged, and have common knowledge that everything on the independent chain is finalized. However, you could just require whoever is making the next submission to challenge the previous one first. This way the incentives would be better too, as there would be a set of parties with an existing clear interest in policing the contract for bad submissions and sweeping them away.

In Vitalik’s proposal STARKs would have to be included in root chain transactions and they are orders of magnitude larger than SNARKs. Does StarkWare address this?

It’s worth noting that with a blockchain there are special-purpose tricks you can do to reduce the size of a STARK by maybe ~2/3, making it viable to fit inside one block. Basically, you turn it into an interactive game: (i) prover publishes merkle root, (ii) prover waits for a block hash, (iii) prover publishes data. This removes an attacker’s ability to “grind” bad proofs until eventually one passes the verifier by random chance, and so reduces the number of checks that the STARK needs to make for the proof system to be secure. Also, there are versions of STARKs that use more advanced cryptography than hashes and eke out more efficiency gains.
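The commit-then-challenge structure might look like this. `hashlib` stands in for the proof system’s hash, and the query counts and domain size are arbitrary; the point is only that the challenge indices depend on a block hash the prover cannot know at commit time:

```python
# Sketch of the interactive trick: commit first, derive challenge
# randomness from a later, unpredictable block hash.
import hashlib

def commit(trace_root: bytes) -> bytes:
    # (i) the prover publishes a binding commitment
    return hashlib.sha256(trace_root).digest()

def challenge_indices(commitment: bytes, block_hash: bytes,
                      n_queries: int = 8, domain: int = 1 << 20):
    # (ii)-(iii) randomness comes from a block hash the prover cannot
    # predict when committing, so grinding bad proofs is ruled out and
    # fewer queries are needed for the same soundness
    seed = hashlib.sha256(commitment + block_hash).digest()
    out, counter = [], 0
    while len(out) < n_queries:
        d = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        out.append(int.from_bytes(d, "big") % domain)
        counter += 1
    return out

c = commit(b"merkle-root-of-execution-trace")
idx = challenge_indices(c, b"unpredictable-block-hash")
assert len(idx) == 8 and all(0 <= i < (1 << 20) for i in idx)
```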

How does the ZKSNARK handle incremental changes to the Merkle tree - what gets recomputed if the tree changes?

I wonder if using things like Merkle mountain ranges or some other more advanced crypto accumulators could improve the prover computation speed …

Also, how about letting miners compute the proofs and counting ZK-SNARKs towards the PoW? ;) It also removes the problem of reserving the slots. Miners make a lot of money per block; they can just buy lots of machines on AWS to calculate the proof in several seconds.


How does the ZKSNARK handle incremental changes to the Merkle tree - what gets recomputed if the tree changes?

Suppose the transaction modifies the balances of accounts A[1]…A[n]. Basically, the ZK-SNARK should prove that there exists a witness W that proves the pre-state prev_A[1] … prev_A[n] is part of the Merkle root in the current state, and that if you recalculate the new Merkle root using the data in W and the given modifications, the result is whatever is specified as the new merkle root. All of this needs to be done inside the SNARK.
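That statement, checked outside the SNARK for clarity: given a Merkle path W (the witness) and a leaf index, verify the old leaf against the old root, then recompute the root with the modified leaf. Only the touched path is rehashed, one hash per tree level:

```python
# Verify a Merkle leaf update: same path W authenticates both the old
# leaf under old_root and the new leaf under new_root.
import hashlib

H = lambda *xs: hashlib.sha256(b"".join(xs)).digest()

def root_from_path(leaf, index, path):
    node = H(leaf)
    for sibling in path:               # one hash per tree level
        node = H(node, sibling) if index % 2 == 0 else H(sibling, node)
        index //= 2
    return node

def check_update(old_leaf, new_leaf, index, path, old_root, new_root):
    return (root_from_path(old_leaf, index, path) == old_root and
            root_from_path(new_leaf, index, path) == new_root)

# tiny 2-leaf example: update the balance leaf prev_A -> new_A
other = H(b"other-account")
old_root = H(H(b"prev_A"), other)
new_root = H(H(b"new_A"), other)
assert check_update(b"prev_A", b"new_A", 0, [other], old_root, new_root)
```

Inside the SNARK, this whole check becomes the circuit, with the path W as private input and the two roots as public inputs.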

Nice, so the randomness needed to pick the points in the proof comes from the next block hash, a source the prover does not control or know in advance.

Why would you only publish the Merkle root in step (i)? If you published all the transactions in step (i), you would be able to include more transactions in a single block, because the proof is coming in a separate block.

This can easily be incorporated in this scheme above, because the proofs are already coming at a later time.

Ah, I see. We could pass a bitmap of participants and add up public keys from storage to obtain the compound key pretty efficiently in EVM, leaving us with just one BLS verification. Right? For 1000 transactions that would indeed spare a lot of gas even without precompile. Cool!
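The bitmap idea, again in the toy scalar model (not secure, names illustrative): the contract holds all registered public keys in storage, the relayer passes a bitmap of who signed, and the contract does cheap additions per set bit before a single signature check.

```python
# Bitmap-selected compound key: additions per signer bit, one verification.
import hashlib

P, G = 2**127 - 1, 7          # toy parameters (assumption)
h = lambda m: int.from_bytes(hashlib.sha256(m).digest(), "big") % P

registered_pks = [(sk * G) % P for sk in (5, 9, 13, 21)]   # "in storage"

def verify_with_bitmap(bitmap: int, msg: bytes, multisig: int) -> bool:
    compound = 0
    for i, pk in enumerate(registered_pks):
        if bitmap >> i & 1:            # cheap addition per participant bit
            compound = (compound + pk) % P
    return (multisig * G) % P == (h(msg) * compound) % P   # single check

CT = b"tx1|tx2|tx3"
multisig = ((5 + 13) * h(CT)) % P      # users 0 and 2 signed
assert verify_with_bitmap(0b0101, CT, multisig)
```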

Then the last problem: if you always submit the full CT, who is going to pay gas for unexecuted transactions (68 gas per byte of tx)? Plus a static overhead of BLS signature verification (200-300K gas).
