Truebit protocol front-running vulnerability

Since people here have suggested using Truebit for several purposes, it is important that we understand the security properties of the Truebit whitepaper.

It seems that the Truebit protocol is vulnerable to front running in the following way:

Truebit pays bounties to verifiers that successfully challenge incorrect computations.

The problem with the bounties is that parasitic verifiers can verify nothing, and simply wait for good guys to report errors.

Since everything on the blockchain is public, I can wait until the good verifier announces a challenge, see his transaction submitted to the blockchain, and submit the same challenge myself. Then we split the bounty, but the good guy did all the work.

See page 24 of the whitepaper:

Verifiers who have posted minDeposit can challenge (the hash of) solution until timeOut. Prior to timeOut, the Verifier must broadcast the hash of an even integer to the blockchain in order to commit to a challenge. Hashing an odd number in case of no challenge is optional and may be used to camouflage real challenges from other Verifiers (see Section 5.3). After timeOut, the Verifier broadcasts to the blockchain this hashed number in the clear to reveal her action.

Note that the good guy can “camouflage” the real challenge, but this “camouflaging” unfortunately does not work, since the bad guy can simply broadcast the same hash as the good guy.

This is easily solved by adding the address of the challenger to the data being hashed; this way the hash will be different for everyone and you cannot just copy it.

3 Likes

Good point! :slight_smile: This fixes the issue described above.

How about another problem then: in order to securely “camouflage”, the good guy needs to spend lots of money. If the bounty is included in every 1000th problem, then to be truly secure, the good guy needs to issue 999 fake camouflage calls and one true bounty call. If each camouflage call costs $1 (which may well be true given today’s gas prices), then the good guy needs to spend $999 to camouflage one single bounty call, which seems to be economically infeasible.

If, say, the good guy camouflages 1:1 (issues one fake call for each true call), then front running is still way more economical for the bad guy than honest verification - in other words, the bad guy can make way more money verifying what the good guy points to than just randomly verifying all computations. So the good guy either needs to pay $999 per bounty for “camouflaging” or has to split the bounty with the bad guy, who is doing way less work.
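
A back-of-the-envelope sketch of the arithmetic above (the 1-in-1000 forced-error rate and $1 per-commit cost are just the illustrative numbers from this post, not protocol constants):

```python
def camouflage_cost_per_bounty(fake_commits_per_real, commit_cost):
    """Cost of the fake (camouflage) commitments a verifier pays
    for each real bounty challenge."""
    return fake_commits_per_real * commit_cost

# Full camouflage with a forced error in every 1000th task, at $1 per commitment:
print(camouflage_cost_per_bounty(999, 1.00))  # 999.0
# 1:1 camouflage is cheap, but barely hides anything from a front runner:
print(camouflage_cost_per_bounty(1, 1.00))    # 1.0
```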

On that I don’t have an answer with the forced error.
I’m in favor of a multiple-solvers/no-verifier version of Truebit, which is apparently similar to Zack’s proposal.
Basically, everyone could solve problems, and the reward each person gets would be r/(2^(n-1)), where r is the max reward and n the number of solvers.
If everyone’s solution is the same, great. Otherwise, solvers would have to play a multi-party verification game.
The remainder would then go back to a vault, quite similarly to the jackpot.
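
A tiny sketch of that split, assuming the undistributed remainder simply returns to the vault (the function name and numbers are mine, purely for illustration):

```python
def solver_payout(max_reward, num_solvers):
    """Per-solver payout under the proposed split r / 2^(n-1); whatever
    remains undistributed would go back to a vault, similar to the jackpot."""
    payout = max_reward / 2 ** (num_solvers - 1)
    remainder = max_reward - num_solvers * payout
    return payout, remainder

# e.g. max reward 100: one solver gets 100, two solvers get 50 each,
# three solvers get 25 each and the remaining 25 goes back to the vault.
print(solver_payout(100, 3))  # (25.0, 25.0)
```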

I think kladkogex’s concern about costs does not disappear in TrueBit Beta (a.k.a. Zack’s construction). Whenever the sum of the Ethereum gas cost plus the electricity cost exceeds the reward payout, rational Verifiers will not participate. For sufficiently large computations, however, this overhead is negligible.

The “Incorrect secondary solution” attack in appendix A.1 of the TrueBit whitepaper addresses kladkogex’s second concern. If every Verifier always challenges one of the Solver’s two solutions and also masks which task she is challenging, then front running becomes difficult.

1 Like

For the camouflage, it could be shared across multiple computations. In that case, external observers would see the total number of hashes committed on Truebit without being able to see which computation they refer to.
So we may not even need on-purpose camouflage. We’ll get camouflage as a by-product of having volume.

1 Like

I think a bad verifier could still follow a particular good verifier, since all submissions are public.

A strategy where the bad guy follows a good verifier and checks all submissions that the good guy makes still seems to be more profitable than just being a good guy.

Something that may work as a defense for the good guy is to submit things very late, jumping on a leaving train and leaving no time for the bad guy to front run. A negative side effect could be that the good guy ends up missing the train, so that incorrect computations are left unchallenged …

What you definitely don’t want is an avalanche of submissions at the very end of the timeout, coming from good guys trying to avoid bad guys front running them …

The question is how to cleanly fix this. It may require a modification of the existing protocol, since currently, my understanding is that when a verifier commits to a challenge it does specify the solution being challenged. Maybe one direction is to look at things like oblivious transfer. Oblivious transfer allows one party to get an item from another party without the sending party knowing which item was sent … A verifier needs to somehow commit to a challenge without other people finding out which solution the verifier committed to challenging …

Still learning TrueBit, so I may be off.

If there are a large enough number of tasks, and we change the algorithm so that verifier challenges must hash a “TaskID” as well as the number, we could construct a “challenge pool” where verifiers provide proofs of committing to a specific task.

That way, a front-runner will not know which task is being challenged, and so would have to submit challenges to all tasks to ensure success. If the cost of submitting to many tasks is larger than the cost of being a legitimate verifier, front-running will not be profitable. (However, due to the small price of a commitment and the large price of ongoing verification, this requires a significant number of tasks to work.)
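
A rough way to see the break-even point (purely illustrative numbers; this ignores deposits, reveal costs, and gas dynamics):

```python
def min_tasks_to_deter_front_running(verify_cost, commit_cost):
    """Smallest number of simultaneously active tasks at which blindly
    committing to every task costs more than honestly verifying one."""
    return verify_cost / commit_cost

# e.g. if verifying a task costs $50 of compute and one commitment costs $1,
# a blind front runner is only deterred once 50+ tasks are live at a time.
print(min_tasks_to_deter_front_running(50.0, 1.0))  # 50.0
```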

Another thought is to have off-chain commitment providers, which can be paid (through channels) to submit transactions for verifiers and which make more money off their reputation than they would off front-running, and so are disincentivized to front-run.

1 Like

You can require a deposit that is lost in case of an incorrect challenge, which would make challenging everything super costly.

There is no need for off-chain commitment providers. You can require people to include their own address in the commitment such that it can’t be front run.
I mean, if my address is 0x51ec49876fD0F9e4Bf72998fB24CD82e05802fBb and I want to commit to challenging task 55, I would have to submit
hash(0x51ec49876fD0F9e4Bf72998fB24CD82e05802fBb, 55, 9565655648789), with 9565655648789 being the salt.
When I reveal, I submit (55, 9565655648789), and the smart contract verifies that hash(0x51ec49876fD0F9e4Bf72998fB24CD82e05802fBb, 55, 9565655648789) == commitment and that 0x51ec49876fD0F9e4Bf72998fB24CD82e05802fBb == msg.sender.

So if someone copies me, he won’t be able to reveal, as the commitment he would have needed to make is different from mine.
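
A minimal Python sketch of this commit-reveal check (the real contract would use keccak256 over abi.encodePacked; sha256 over a simple string encoding, and the "bad" address below, are just stand-ins for illustration):

```python
import hashlib

def commit(sender_address: str, task_id: int, salt: int) -> str:
    """Commitment hash that binds the challenge to the challenger's address."""
    preimage = f"{sender_address.lower()}:{task_id}:{salt}".encode()
    return hashlib.sha256(preimage).hexdigest()

def reveal_ok(commitment: str, msg_sender: str, task_id: int, salt: int) -> bool:
    """On reveal, recompute the hash with msg.sender and compare it to the
    stored commitment, as described above."""
    return commit(msg_sender, task_id, salt) == commitment

good = "0x51ec49876fD0F9e4Bf72998fB24CD82e05802fBb"
bad = "0x000000000000000000000000000000000000dEaD"  # hypothetical front runner
c = commit(good, 55, 9565655648789)

print(reveal_ok(c, good, 55, 9565655648789))  # True  -- legitimate challenger
print(reveal_ok(c, bad, 55, 9565655648789))   # False -- copied hash, cannot reveal
```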

1 Like

Great idea! Penalizing incorrect challenges by itself wouldn’t be enough, though: since we’re hiding the TaskID, a front-runner can just decline to reveal. However, if we penalize both incorrect challenges and non-revealed challenges, we should get to a point where attackers lose more money than they gain.

This wouldn’t hurt camouflage, as verifiers can always commit an odd number and then reveal that (also a nice way to troll front-runners).
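
Putting the suggested rules together, a toy payoff sketch (this reflects the thread’s suggestions, not the whitepaper’s exact protocol; per the quote above, committing an even number means “challenge” and an odd number means “no challenge”):

```python
def resolve_commitment(revealed: bool, number_is_even: bool,
                       challenge_correct: bool, deposit: float, bounty: float) -> float:
    """Toy payoff for one commitment under the penalties suggested above."""
    if not revealed:
        return -deposit   # never revealed: deposit forfeited
    if not number_is_even:
        return 0.0        # odd number = camouflage / no challenge: deposit returned
    if challenge_correct:
        return bounty     # correct challenge: bounty earned
    return -deposit       # incorrect challenge: deposit forfeited
```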

Sounds like between this and putting addresses into commitments we protect against both the “challenge when good guys challenge” and “copy the hash” attacks.

1 Like

Unfortunately, the bad guy does not have to challenge everything. The bad guy follows the submissions of a good guy, verifies them, and only challenges if the bounty is found …

That is mitigated as described above (in multi-task environments) by having the TaskID hashed in the commitment, but not readable. That means an attacker doesn’t know which task is being challenged, requiring them to verify all active tasks. As long as there are enough tasks that a task is being challenged at any point in time, attackers need to verify all tasks and front running becomes equivalent to verifying.

In single-task environments, you’re totally right.

2 Likes

I agree that this helps! :smile: (although this would require reworking the Truebit whitepaper, since it seems it will break the current algorithm, which assumes that it is clear from the beginning who is challenging what) :slight_smile:

But unfortunately this would not provide 100% protection and would still leave lots of potential for front running, for the following reason: the good guy needs to download tasks before executing them, and the fact of a download is public.

The bad guy could trace what good guys are downloading and deduce which tasks need to be verified.

As an example, if you see three good guys downloading tasks:

Good Guy 1 - T1, T5, T7
Good Guy 2 - T3, T5, T18
Good Guy 3 - T2, T5, T21

And if the bad guy then sees that all three of them are challenging something, well, you kind of suspect Task 5.
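
The deduction above is literally just a set intersection; a toy illustration:

```python
# Task sets each good verifier was seen downloading (from the example above).
downloads = {
    "good_guy_1": {"T1", "T5", "T7"},
    "good_guy_2": {"T3", "T5", "T18"},
    "good_guy_3": {"T2", "T5", "T21"},
}

# Once all three are seen committing to a challenge, the common task is the
# obvious suspect.
suspects = set.intersection(*downloads.values())
print(suspects)  # {'T5'}
```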

Lots of potential for machine learning to make money on this thing!
:smiling_imp:

The download of a task and its challenging are not necessarily linked.
Of course you could try to find out the IP of the first broadcaster of a TX, but that seems hard. And you could broadcast the TX via a third party to prevent that.

1 Like

It all becomes a very convoluted problem for a variety of reasons.

One of the reasons is the following: the good guys will probably not want to download new code each time they execute things. Every good verifier node will probably want to concentrate on a particular task/app to repeatedly verify.

In other words, if there are 1000 apps to verify on the verification market, each particular verifier node will probably specialize in a particular fixed set of apps, download the code for each app once, and then repeatedly execute the same code. Some verifiers will repeatedly verify apples and some oranges, but each verifier will probably be strongly bound to a particular set of apps, since downloading code is expensive and introduces network latency.

So by monitoring a good guy, the bad guy may find out which apps the good guy is verifying, so when the good guy submits a request, the bad guy can suspect which task that request applies to.

Then they can submit some commitments not to challenge once in a while (as described in the paper). This way if the bad guy tries to copy them, he will have his deposit forfeited.

1 Like

Good point!

I think what is needed is Version 2.0 of the Truebit paper to address these things :slight_smile:

Commitments not to challenge are already in the paper, but not the inclusion of msg.sender in the data to hash, nor hiding the task being challenged.
@truja

1 Like

The TrueBit whitepaper discusses masking the Task ID in the appendix, at the end of the section called “Incorrect secondary solution.” Moreover, the protocol can mask which of the Solver’s two solutions is being challenged (see the same section). Masking these two items makes front running more difficult.

IMHO once you release the production network you will have lots of people trying to hack you guys. There are lots of smart people in this world, and especially if there is a chance to make money you will be attacked mercilessly. Think about hungry MIT and Stanford grad students :slight_smile:

It may make sense to address all security problems as early as possible even if they are not significant from your point of view …