Yep, this could be an interesting optimization. I’d be very curious to see what kind of participation checkpoints have! My guess was we’d actually see pretty limited participation.
I'd be interested in learning how you sync up those delay periods. We have developed offline mobile payment solutions based on Raiden, and like any development work, it's a continual process of improvement and problem solving.
I think there is a slight change in assumptions. With Plasma MVP, the user can delegate the task of watching the chain and initiating withdrawals to someone else, so the user does not actually have to be online regularly. The delegate could grief the user by initiating unnecessary withdrawals, or could fail to do their job, but could not steal the user’s coins.
In Plasma XT, it seems like a user could not delegate the checkpointing task to an untrusted entity while they are offline, because they would have to provide their private key to that entity for the purpose of signing off on checkpoints.
Do you see a solution that would enable checkpoints to be made while the user is offline, or would they just have to allow the coin history to grow and then make a new checkpoint when they do eventually come online?
I would agree that there is a change of assumptions w.r.t. Plasma MVP. I don't believe this changes any assumptions w.r.t. Plasma Cash, particularly because of the discussion here: Watchtowers may not work in Plasma (Cash).
Although I think it’s probably fine for a first implementation if users have to be online, it would definitely be very useful for users to be able to outsource this. I haven’t thought about it a lot - one solution off the top of my head would be for users to make an on-chain transaction that somehow specifies a third party that can sign off on the checkpoint instead. This would basically look like a 2of2 multisig with a special key (for the third party) that can sign off alone. The third party could then be sure that they haven’t signed off and challenge whenever they see a false positive.
Of course, this means that the third party could temporarily grief the user by refusing to sign. Maybe this isn't the worst thing if users can easily submit another transaction changing their third-party provider.
If the user is able to unilaterally change their checkpointer on the Plasma chain, I don’t think the checkpointer can safely challenge a checkpoint, since the user could have maliciously changed their checkpointer (assuming as we always do that the operator is working against them and making data unavailable). So you need to give the checkpointer the ability to veto (though you could still retain sole authority to withdraw the coin).
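To make the authority split concrete, here is a minimal Python sketch (not anyone's actual implementation; the names are illustrative) of the policy discussed above: the delegated checkpointer can sign off on checkpoints alone, and can veto by refusing to sign, but only the owner can withdraw the coin.

```python
# Hypothetical sketch of the sign-off policy: the checkpointer can approve
# checkpoints (and so can veto by refusing), but cannot withdraw the coin.

class Coin:
    def __init__(self, owner, checkpointer):
        self.owner = owner
        self.checkpointer = checkpointer

    def may_sign_checkpoint(self, key):
        # Either the owner or the delegated checkpointer may approve.
        return key in (self.owner, self.checkpointer)

    def may_withdraw(self, key):
        # Withdrawal authority stays with the owner alone.
        return key == self.owner

coin = Coin(owner="alice", checkpointer="watchtower")
assert coin.may_sign_checkpoint("watchtower")   # delegate can sign off
assert not coin.may_withdraw("watchtower")      # but cannot steal the coin
```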
Also, note that what you’re describing is a solution for outsourcing challenging of checkpoints, rather than outsourcing of checkpointing. I agree that the former is more important (because checkpointing is optional and if you have some time you could always do it before you plan to spend the coin, in order to reduce the proof you have to transfer). I’m not really sure the latter is possible, because for them to sign off on an unavailable checkpoint would be an unattributable fault.
You didn't mention how/when checkpoints are finalized. Is it when the operator collects and publishes (in the form of a bitfield) signatures from 50%+ of users? If I'm not mistaken, such a model would open up space for sybil attacks (e.g. the Plasma operator creates a lot of coins of small value -> reaches a majority -> easily finalizes faulty checkpoints). A way to mitigate this could be to finalize checkpoints only when 50%+ of the VALUE of coins has signed, but I haven't thought about how hard that would be to implement…
Exits are finalized after a specified period of time (e.g. two weeks).
So, the state of the whole Plasma chain will be finalized after that period of time, regardless of the number of signatures (1s) in a bitfield?
Isn't it trivial for the Plasma operator to finalize invalid checkpoints in this model? E.g. the operator "proposes" an invalid checkpoint and withholds it -> honest users don't sign the checkpoint (they cannot see it) -> the operator creates a number of sybil accounts/coins and signs the checkpoint with them -> the operator publishes the checkpoint with a bitfield (0s from honest users, 1s from sybil accounts) -> no one can challenge the checkpoint (the bitfield is consistent), so it gets finalized? This can be done in an even simpler way (without withholding or sybil accounts), but I believe my example is easy to understand and has a high chance of success.
The number of 1s in the Bitfield is irrelevant. A 1 at some position simply asserts that the owner of that coin has signed off on the checkpoint. If the owner has not actually signed off, then the owner can challenge the checkpoint and take some bond from the operator.
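To illustrate these per-coin semantics, here is a minimal Python sketch (illustrative only, not the actual contract): a 1 at index i merely *claims* that coin i's owner signed off; an owner who never signed can challenge that bit, and only unchallenged 1s survive the dispute period.

```python
# Sketch: per-coin checkpoint bitfield with owner challenges.
# A forged 1 (no real signature) can be voided by the coin's owner.

class Checkpoint:
    def __init__(self, bitfield, signatures):
        self.bitfield = bitfield        # 0/1 per coin index
        self.signatures = signatures    # coin index -> signer (stand-in for a sig)
        self.challenged = set()

    def challenge(self, coin, owner):
        """Owner disputes a 1 that was set without their signature."""
        if self.bitfield[coin] == 1 and self.signatures.get(coin) != owner:
            self.challenged.add(coin)   # operator's bond would be slashed here
            return True
        return False

    def finalized_coins(self):
        """After the dispute period, only unchallenged 1s are finalized."""
        return [i for i, b in enumerate(self.bitfield)
                if b == 1 and i not in self.challenged]

cp = Checkpoint([1, 1, 0], {0: "alice"})  # operator forged a 1 for coin 1
cp.challenge(1, "bob")                    # bob never signed -> succeeds
print(cp.finalized_coins())               # -> [0]
```

Note that a coin whose owner really did sign (coin 0 here) cannot be successfully challenged, which is what makes a false challenge unprofitable.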
Now I'm not sure I got your concept right. Can you tell me which of these two statements is true:
- After a checkpoint, the whole state of a Plasma chain (the state of every coin/account/user) is considered finalized.
- After a checkpoint, only the state of coins/account/users that have 1s in the bitfield is considered finalized.
I was assuming your model implies no. 1, and the attacks I mentioned are derived from that.
Yep! This is the model. Although it’s more like coins with a 1 in the bitfield that haven’t been challenged.
Ah, that makes sense, thanks!
And this part is critical, because without an option to challenge 1s, it would be trivial for the operator to finalize invalid balances for her own accounts.
You might want to consider adding these clarifications to your original post.
I think your model is a solid improvement on the current state of the art. Well done.
Unfortunately I believe the post is too old to be edited or deleted. Hopefully people will read through these comments.
Here is a suggestion for improving the bitmask. As noted in the post, even a simple bitmask can be large for the main chain. The suggested Checkpoint Zones alleviate the problem, but they do not allow for very heterogeneous checkpoint rates (some users may want to checkpoint every day, others every year).
My suggestion assumes that checkpoints are very sparse, meaning that in each plasma block only a small proportion of users want to checkpoint (although some users may do it frequently).
Assuming sparsity, we can of course compress the bitmask quite a lot. The first method that comes to mind is some sort of Huffman coding, but it is very bad for the random access needed to prove inclusion in case of disputes.
Another option already mentioned here was to use Bloom filters, but it was noted that it may contain false positives, resulting in the loss of the deposit by the operator.
Another alternative was to use an inverted Bloom filter. This is cool, because false negatives are not harmful (if the user got unlucky, she can try again later). The problem with this approach is that the negation of the mask (the set of users who do not want to checkpoint) is far from sparse. Therefore the Bloom filter will not scale.
My suggestion is to compose a Bloom filter with an inverted one. First use a simple Bloom filter on the addresses requesting a checkpoint, resulting in the vast majority of bits being correctly predicted. Then the few bits that did not match (the false positives) can be used to create another sparse Bloom filter used for elimination.
In the end, an address is included in our mask if it tests positive for the first Bloom filter but not for the second (both are sparse). Legitimate requests could unluckily pass the second filter and therefore be excluded, but those users can try again later.
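The two-stage construction can be sketched in a few lines of Python (filter sizes, hash count, and user names here are illustrative, not a proposal for concrete parameters):

```python
import hashlib

def positions(item, size, k=3):
    """k hash positions for `item` in a Bloom filter of `size` bits."""
    return [int(hashlib.sha256(f"{item}:{i}".encode()).hexdigest(), 16) % size
            for i in range(k)]

def build(items, size):
    bits = [0] * size
    for it in items:
        for p in positions(it, size):
            bits[p] = 1
    return bits

def contains(bits, item):
    return all(bits[p] for p in positions(item, len(bits)))

all_users = [f"user{i}" for i in range(1000)]
requesting = all_users[:20]                 # sparse set of checkpoint requests

first = build(requesting, 512)
# False positives of the first filter: users who match it but never requested.
fps = [u for u in all_users if contains(first, u) and u not in requesting]
second = build(fps, 256)                    # small "elimination" filter

# Final mask: pass the first filter AND fail the second.
included = [u for u in all_users
            if contains(first, u) and not contains(second, u)]

assert all(u not in included for u in fps)  # no false positives remain
# Some legit requests may unluckily hit the second filter; they retry later.
```

Because Bloom filters have no false negatives, every false positive of the first filter is guaranteed to be caught by the second, so the final mask never over-includes; the only cost is the occasional legitimate request that collides with the elimination filter.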
What do you think?
We're working on Plasma checkpoints for segments.
The definition of a segment in our Plasma is almost the same as in Plasma Cashflow.
A segment is an 8-byte tokenId, an 8-byte start, and an 8-byte end.
At first, I was implementing signature bitmaps for segments, but they were too large because the max segment size of our Plasma is 2^48. So I used a
challengeTooOldExit function instead of a state Merkle tree.
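As a rough back-of-the-envelope check (assuming one bit per unit of the 2^48 segment range, which is my reading of the post), such a bitmap would indeed be hopelessly large:

```python
# Illustrative size check: a bitmap with one bit per unit of a 2^48 range.
bits = 2 ** 48
bytes_needed = bits // 8          # 2^45 bytes
tib = bytes_needed / 2 ** 40      # tebibytes
print(bytes_needed, tib)          # -> 35184372088832 32.0
```

That is, about 32 TiB per full-range bitmap, which explains why a challenge-based approach is preferable to materializing the bitmap.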
- A checkpoint applies to a certain Plasma block number and a certain segment. I describe the process of determining a checkpoint later.
The structure of a checkpoint is:
# 8-byte tokenId, 8-byte start, 8-byte end
- We should define “the newest transaction before this checkpoint”. The RootChain contract defines “the newest transaction before this checkpoint” here.
- All exits from before “the newest transaction before this checkpoint” can be challenged by this transaction. Thus, we don't need the history of the segment from before that transaction.
Here is the checkpointing process:
- The operator requests a checkpoint with a Plasma block number and a segment.
- To avoid a “mass exit” scenario, any owner within the segment at the time of the checkpoint request can challenge it. To prove that they really own part of the segment, they must successfully exit it. Not all owners need to exit their segments; a single owner is enough to challenge the checkpoint. The Checkpoint contract uses the
exitId as the id of a challenge to a checkpoint.
- The operator can respond to a challenge by showing a signature from the segment owner. This signature commits to the checkpoint and is evidence that the segment owner approved it. Segment owners should check the history from the previous
checkpoint.blkNum before signing off on a new checkpoint, and the operator should collect all signatures within the segment of the requested checkpoint.
- The operator can finalize the checkpoint after a period. This period is 3 times the exit period.
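The request / challenge / respond / finalize flow above can be sketched as follows (a hypothetical Python model, not the actual contract interface; names, types, and the period length are illustrative):

```python
# Hypothetical sketch of the segment-checkpoint lifecycle described above.

EXIT_PERIOD = 7  # e.g. days; the finalization delay is 3x this (illustrative)

class SegmentCheckpoint:
    def __init__(self, block_num, token_id, start, end, requested_at):
        self.block_num = block_num
        self.segment = (token_id, start, end)  # 8-byte tokenId / start / end
        self.requested_at = requested_at
        self.challenges = {}                   # exitId -> "open" | "responded"

    def challenge(self, exit_id):
        """An owner inside the segment challenges by exiting; keyed by exitId."""
        self.challenges[exit_id] = "open"

    def respond(self, exit_id, signature_is_valid):
        """Operator answers with the owner's signature over this checkpoint."""
        if signature_is_valid and exit_id in self.challenges:
            self.challenges[exit_id] = "responded"

    def can_finalize(self, now):
        """Finalizable once 3x the exit period has passed with no open challenge."""
        no_open = all(s == "responded" for s in self.challenges.values())
        return no_open and now - self.requested_at >= 3 * EXIT_PERIOD

cp = SegmentCheckpoint(block_num=100, token_id=1, start=0, end=500, requested_at=0)
cp.challenge(exit_id=42)
assert not cp.can_finalize(now=30)   # an open challenge blocks finalization
cp.respond(exit_id=42, signature_is_valid=True)
assert cp.can_finalize(now=30)       # 30 >= 3 * EXIT_PERIOD and no open challenges
```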