On-chain scaling to potentially ~500 tx/sec through mass tx validation

Here is an idea for how to fix it:

a) users submit transactions to the relayer
b) the relayer concatenates the transactions and sends back the concatenated transaction CT to all users
c) every user signs the CT and sends the signature back
d) the relayer aggregates the signatures into a Boldyreva BLS multisignature - note that Boldyreva multisignature verification takes constant time because the multisignature is on the same message
e) then you do a ZK-SNARK proof of the Boldyreva multisignature verification, which will be constant time (not linear in n). Hopefully this ZK-SNARK will be much faster, since it will not be linear in the number of users.
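A minimal sketch of the signature flow in steps b)-e), using py_ecc's BLS12-381 implementation (the proof-of-possession ciphersuite, so same-message aggregation is safe); the keys and transactions here are toy values, not a spec:

```python
# Sketch of the same-message BLS multisignature flow (steps b-e),
# using py_ecc's BLS12-381 proof-of-possession ciphersuite.
from py_ecc.bls import G2ProofOfPossession as bls

# each user has a key pair (proofs of possession assumed registered on-chain)
secret_keys = [bls.KeyGen(i.to_bytes(32, "big")) for i in range(1, 4)]
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# b) the relayer concatenates the transactions into one message CT
transactions = [b"tx1", b"tx2", b"tx3"]
CT = b"".join(transactions)

# c) every user signs the same message CT
signatures = [bls.Sign(sk, CT) for sk in secret_keys]

# d) the relayer aggregates the signatures into one multisignature
multisig = bls.Aggregate(signatures)

# e) verification is a single pairing check, constant in the number of
#    users -- this is the statement the ZK-SNARK would then prove
assert bls.FastAggregateVerify(public_keys, CT, multisig)
```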

Note that if one of the users withholds the signature on the concatenation, it is not a problem - the relayer will wait for a timeout T and then create a subgroup multisignature for the users that responded.

Nice :-))) Can this be done in parallel?? I think SNARKs can be done in parallel

Is the zk-STARK workshop going to be inside or outside of devcon4 walls? We have some crypto developers from our company coming to Prague but they were not able to buy a ticket …

I don’t think it requires an internal blockchain with a fork choice rule, because the contract can prevent a split and incorrect blocks can be rolled back.

I was thinking about the following:
Relayers register with a deposit. A relay block consists of the transaction data, the hash of the transaction data, the previous state Merkle root and the new state Merkle root. The contract checks the hash of the transaction data and that the previous Merkle root matches.

When a block is incorrect, anybody can submit a proof and roll back the incorrect block and all blocks following it, collecting the deposits of the relayers.
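A rough sketch of this contract-side logic, with plain Python standing in for the contract; the names (RelayBlock, submit_block, challenge, GENESIS_ROOT) are illustrative, not from the post:

```python
# Toy model of the contract: optimistic relay blocks backed by deposits,
# with rollback on a successful fraud proof.
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class RelayBlock:
    tx_data: bytes
    tx_data_hash: bytes
    prev_state_root: bytes
    new_state_root: bytes
    relayer: str
    deposit: int

chain: list[RelayBlock] = []
GENESIS_ROOT = b"\x00" * 32

def submit_block(block: RelayBlock) -> None:
    # The contract only checks what is cheap to check on-chain:
    # the data hash, and that the block chains onto the current root.
    current_root = chain[-1].new_state_root if chain else GENESIS_ROOT
    assert sha256(block.tx_data).digest() == block.tx_data_hash
    assert block.prev_state_root == current_root
    chain.append(block)

def challenge(bad_index: int, fraud_proof_ok: bool) -> int:
    # On a valid fraud proof, roll back the bad block and everything
    # after it; the challenger collects the relayers' deposits.
    if not fraud_proof_ok:
        return 0
    reward = sum(b.deposit for b in chain[bad_index:])
    del chain[bad_index:]
    return reward
```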

For withdrawals we will need a waiting period (a week?). If you are in a hurry, you can offer a small fee so that somebody else can volunteer to do the waiting for you. Withdrawals in the queue will be rolled back as well when they are based on an incorrect block, so the volunteers need to make sure the chain is correct.

For everybody’s money to be safe, one honest user needs to verify the chain once a week.

What about proof sizes as a bottleneck? In Vitalik’s proposal STARKs would have to be included in root chain transactions, and they are orders of magnitude larger than SNARKs. Does StarkWare address this?

Which is a DoS vector: N malicious Sybil accounts can delay block processing by N*T, and you cannot prove that they acted maliciously. How do we solve this?

I have been thinking about this some more, and I think that if we introduce just a little cryptoeconomics, we can get around the proving-time problem and the relayers needing supercomputers.

A relayer can post a block containing the transaction data, old/new state Merkle roots and an aggregated signature of the transaction signatures. No ZK-SNARK proof is needed at that time, but the relayer will have to put up a deposit.

Now the relayer can calculate the ZK-SNARK proof on his regular computer and post it when ready, to collect his deposit and transaction fees. If the relayer has not done this after a certain amount of time (which should be enough to build the proof), everybody is allowed to submit a proof and collect the transaction fees together with a small bonus (to cover the gas fees of the proof). The deposit minus the bonus will be returned to the relayer. Note that not just the block, but also all blocks before it, need to be proved before the deposit and the transaction fees are released.
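To make the timing rules concrete, here is a toy settlement function for one block; the deadline and bonus constants are placeholders I picked, not values from the proposal:

```python
# Toy settlement rules for one relay block under the deferred-proof
# scheme. Release also presumes all earlier blocks are already proved.
PROOF_DEADLINE = 100  # blocks the relayer gets to post the ZK-SNARK
BONUS = 5             # paid to an outside prover, covering proof gas

def settle(posted_at: int, proved_at: int, prover: str, relayer: str,
           deposit: int, fees: int) -> dict:
    if prover == relayer:
        # Relayer proved their own block: full deposit and fees back.
        return {relayer: deposit + fees}
    if proved_at > posted_at + PROOF_DEADLINE:
        # Deadline passed and somebody else proved it: they get the fees
        # plus a bonus, the relayer gets the deposit minus the bonus.
        return {prover: fees + BONUS, relayer: deposit - BONUS}
    raise ValueError("before the deadline only the relayer may prove")
```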

During the calculation of the proof everybody can verify the transactions, so the chain can continue. If a block is invalid, anybody can post a proof that this is indeed the case, rolling back the block and collecting the deposit of the relayer.

Withdrawals will have to wait in a queue until all blocks they depend on are proved.

This is not really true - you broadcast the concatenated message to everyone and wait for time T only once. People will reply with signature shares, and you form a group signature of those who replied.

Don’t you need to resend them a new concatenated transaction for the newly formed group? As I understand it, since everybody signs the same message and you only verify it once, signatures must come from all owners for the concatenated transaction to be valid.

Good direction of thought :slight_smile:

Not actually true - if some of the users don’t sign, the concatenated message stays the same but is signed by fewer people. The transactions of the people that did not sign stay in the concatenation but get ignored by the smart contract.

This is an improvement (though at the cost of being interactive), but it doesn’t solve the other issue, which is how to prove the correctness of Merkle tree updates. I think that requires a ZK-SNARK or similar, unfortunately.

I don’t think it requires an internal blockchain with a fork choice rule, because the contract can prevent a split and incorrect blocks can be rolled back.

Ah, I think you’re right. The point of the fork choice rule was that if a given submission is invalid and others can see that it’s invalid, they can build an independent chain without waiting for the original chain to be challenged, and have common knowledge that everything on the independent chain is finalized. However, you could just require whoever is making the next submission to challenge the previous one first. This way the incentives would be better too, as there would be a set of parties with a clear existing interest in policing the contract for bad submissions and sweeping them away.

In Vitalik’s proposal STARKs would have to be included in root chain transactions and they are orders of magnitude larger than SNARKs. Does StarkWare address this?

It’s worth noting that with a blockchain there are special-purpose tricks you can do to reduce the size of a STARK by maybe ~2/3, making it viable to fit inside one block. Basically, you turn it into an interactive game: (i) the prover publishes a Merkle root, (ii) the prover waits for a block hash, (iii) the prover publishes the data. This removes an attacker’s ability to “grind” bad proofs until one eventually passes the verifier by random chance, and so reduces the number of checks that the STARK needs to make for the proof system to be secure. Also, there are versions of STARKs that use more advanced cryptography than hashes and eke more efficiency gains out of that.
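A sketch of that (i)-(iii) interaction, with sha256 standing in for the real commitments and the prover itself stubbed out:

```python
# The interactive trick: commit first, then derive the proof's query
# randomness from a block hash produced afterwards, so the prover
# cannot grind bad proofs. All helpers here are stand-ins.
from hashlib import sha256

def commit(trace_data: bytes) -> bytes:
    # (i) publish a commitment (a Merkle root in the real protocol)
    return sha256(trace_data).digest()

def derive_randomness(later_block_hash: bytes, commitment: bytes) -> bytes:
    # (ii) the challenge comes from a block hash the prover could not
    # have known when it committed, so retrying ("grinding") is useless
    return sha256(later_block_hash + commitment).digest()

def open_proof(trace_data: bytes, randomness: bytes) -> bytes:
    # (iii) publish the openings at the sampled points; fewer checks are
    # needed than in the fully non-interactive (Fiat-Shamir) setting.
    # (Stub: a real STARK prover would go here.)
    return sha256(trace_data + randomness).digest()
```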

How does the ZKSNARK handle incremental changes to the Merkle tree - what gets recomputed if the tree changes?

I wonder if using things like Merkle mountain ranges or some other more advanced cryptographic accumulators could improve the prover computation speed …

Also, how about letting miners compute the proofs and counting ZK-SNARKs towards the PoW? :wink: It also removes the problem of reserving the slots. Miners make a lot of money per block; they can just buy lots of machines on AWS to calculate the proof in several seconds.


How does the ZKSNARK handle incremental changes to the Merkle tree - what gets recomputed if the tree changes?

Suppose the transaction modifies the balances of accounts A[1] … A[n]. Basically, the ZK-SNARK should prove that there exists a witness W showing that the pre-state prev_A[1] … prev_A[n] is part of the Merkle root in the current state, and that if you recalculate the new Merkle root using the data in W and the given modifications, the result is whatever is specified as the new Merkle root. All of this needs to be done inside the SNARK.
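For a single modified leaf, the statement the SNARK enforces amounts to recomputing the root twice along the same authentication path; a plain-Python version of that check (binary sha256 tree, helper names mine):

```python
# What the SNARK must enforce for one modified leaf: the same Merkle
# path (part of the witness W) authenticates the old leaf against the
# old root and the new leaf against the new root.
from hashlib import sha256

def root_from_path(leaf: bytes, index: int, path: list[bytes]) -> bytes:
    h = sha256(leaf).digest()
    for sibling in path:
        if index % 2 == 0:
            h = sha256(h + sibling).digest()
        else:
            h = sha256(sibling + h).digest()
        index //= 2
    return h

def check_update(old_leaf: bytes, new_leaf: bytes, index: int,
                 path: list[bytes], old_root: bytes, new_root: bytes) -> bool:
    # Only the path from the touched leaf to the root is recomputed;
    # the untouched siblings are passed in as part of the witness.
    return (root_from_path(old_leaf, index, path) == old_root and
            root_from_path(new_leaf, index, path) == new_root)
```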

Nice, so the randomness needed to pick the points in the proof comes from the next block hash, a source the prover does not control or know in advance.

Why would you only publish the Merkle root in step (i)? If you published all the transactions in step (i), you would be able to include more transactions in a single block, because the proof is coming in a separate block.

This can easily be incorporated in the scheme above, because the proofs are already coming at a later time.

Ah, I see. We could pass a bitmap of participants and add up public keys from storage to obtain the compound key pretty efficiently in the EVM, leaving us with just one BLS verification. Right? For 1000 transactions that would indeed spare a lot of gas, even without a precompile. Cool!
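A sketch of the bitmap idea (using py_ecc again; in the EVM the same select-and-aggregate would run over public keys already held in storage):

```python
# Bitmap-selected key aggregation: the contract stores all registered
# public keys; a block carries a bitmap saying who actually signed.
from py_ecc.bls import G2ProofOfPossession as bls

def verify_with_bitmap(all_pubkeys: list, bitmap: list[int],
                       message: bytes, multisig: bytes) -> bool:
    # Keep only the keys whose bitmap bit is set ...
    signers = [pk for pk, bit in zip(all_pubkeys, bitmap) if bit]
    # ... then do a single aggregate verification, regardless of count.
    return bls.FastAggregateVerify(signers, message, multisig)
```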

Then the last problem: if you always submit the full CT, who is going to pay gas for the unexecuted transactions (68 gas per byte of tx)? Plus the static overhead of BLS signature verification (200-300K gas).


Ah, sorry, I meant to say: publish all the transactions and the Merkle root in step 1, then the STARK in step 2.


Hi,

I am not familiar with zk-SNARKs and have some basic questions. In this case, the verifier is a smart contract on the Ethereum blockchain, right? As far as I understand, there is a random sampling seed lambda which should be generated by the verifier and should not be exposed to the prover (the relayer). If the verifier code is on the blockchain, how do we prevent relayers from accessing the lambda and generating a fake proof?

Thanks,
Jaehyung

This sounds much simpler and does not require zk-SNARKs, or am I missing something here?

Hey, guys. Has anyone already implemented this? We wanted to take it as a hackathon task for ETHSanFrancisco.


So https://github.com/barryWhiteHat/roll_up is pretty close to what you will need to build it. All you have to do is add the sha256 compression in the EVM and the SNARK. I am happy to help remotely; we can do a call about it. If you join our Gitter we can coordinate :slight_smile: We also have a bunch of people there who may want to join your effort.

We are also building our own proposal to add data availability guarantees outside the EVM, but the core of what we need to build will be the same.


I was asking my colleagues, and one of them pointed me to the following article:

So, a manual one-time setup process will solve the problem, or I guess the existing Zcash public parameters can be reused…

Edit: I figured out it’s not possible to reuse the Zcash parameters… I am eager to join if we do something similar!