Erasure coding (Reed-Solomon) for Plasma data availability

I’m writing this post for two reasons:

  1. I feel like erasure coding is not discussed enough as a data availability solution for Plasma implementations
  2. Yesterday, @vbuterin and @musalbas published a paper that proposes an interesting data availability scheme based on Reed-Solomon erasure coding. Their blockchain model uses SMTs (sparse Merkle trees; the state is represented as a key-value map), so it can support both UTXO and account-based chains.

I know erasure coding gives probabilistic data availability guarantees, but those can be really high, as shown in section 5.6 of the paper. Why aren’t Plasma folks considering this more often? Is it hard to implement? Something else?

Two main challenges of Plasma are ensuring correct transaction execution (preventing incorrect/malicious transactions) and ensuring data availability. It seems like SNARKs/STARKs should take care of the former in the future, and erasure coding might be an elegant solution for the latter (most implementations currently require users to download whole chains, which is quite a burden).
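To make the “really high guarantees” a bit more concrete, here’s the basic intuition with a simple one-dimensional rate-1/2 code (the paper uses a multidimensional variant, but the intuition carries over): any half of the 2n extended chunks suffices to reconstruct the block, so to make it unrecoverable an operator has to withhold more than half of them, and every uniformly random sample then fails with probability greater than 1/2. A back-of-envelope sketch in Python, with illustrative parameters:

```python
# Chance that a light client sees all of its random samples answered even
# though the block is unrecoverable. With a rate-1/2 extension, an
# unrecoverable block has more than half of its 2n chunks withheld, so each
# independent uniform sample is answered with probability less than 1/2.

def undetected_probability(samples: int) -> float:
    """Upper bound on P(every sample answered | block unrecoverable)."""
    return 0.5 ** samples

for k in (5, 10, 20, 30):
    print(f"{k:2d} samples -> fooled with probability <= {undetected_probability(k):.1e}")
```

So a few dozen samples per block already push the failure probability below one in a billion.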

Hope this can start a constructive discussion. :slight_smile:

2 Likes

So basically a Plasma MVP architecture where clients use erasure coding-based data availability sampling in place of downloading the full data?

Sounds very reasonable to me!
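For concreteness, a minimal sketch of what such a per-block check could look like on the client side; the names here (fetch_chunk, verify_merkle_proof) and the parameters are hypothetical placeholders, not any existing Plasma API:

```python
import random

CHUNKS_PER_BLOCK = 256    # 2n chunks after a rate-1/2 extension (illustrative)
SAMPLES_PER_BLOCK = 20    # ~2^-20 chance of missing an unavailable block

def sample_block(block_root, fetch_chunk, verify_merkle_proof) -> bool:
    """Return True iff every randomly sampled chunk is served and matches the root.

    fetch_chunk(root, index) -> (chunk, merkle_proof) or None   # hypothetical network call
    verify_merkle_proof(root, index, chunk, proof) -> bool
    """
    for index in random.sample(range(CHUNKS_PER_BLOCK), SAMPLES_PER_BLOCK):
        response = fetch_chunk(block_root, index)
        if response is None:
            return False          # chunk not served: treat the block as unavailable
        chunk, proof = response
        if not verify_merkle_proof(block_root, index, chunk, proof):
            return False          # served data doesn't match the on-chain commitment
    return True                   # all samples fine: block available w.h.p.
```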

1 Like

Exactly. :slight_smile: I guess it could also be used for newer, EVM-like Plasma proposals, like Plasma Snapp, Quark-gluon Plasma and Plasma EVM 2.0 (they even had to make changes to their original design because it couldn’t guarantee data availability). :thinking: Basically, in any implementation that has so far required users to download the whole chain…

SPV is always on the table in Plasma implementers’ discussions, and it’s a very important part of the UX, although the “data availability” problem should be separated and clarified:

  • Block withholding - in this case a Plasma operator (or a set of operators) commits to a new state or a new block header on-chain, but never publishes the block in full, so it’s never available on the network, even if every relaying node is honest
  • Dishonest full nodes - in this case the paper linked above helps propagate valid (and fully available somewhere) data to as many participants as possible and gives proper guarantees for SPV (a sketch after this list illustrates the difference from a client’s point of view)
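For the sketch mentioned above: from a sampling client’s point of view both cases start out the same (a fetch for a chunk fails), and the way to tell them apart is to ask several independent peers. A toy illustration, with fetch_from_peer as a hypothetical network call:

```python
def probe_chunk(block_root, index, peers, fetch_from_peer):
    """Ask several independent peers for the same chunk.

    If any peer serves a valid chunk, the data exists somewhere and earlier
    failures were just dishonest/lazy full nodes (second case); sampling and
    gossip as in the linked paper then help it propagate. If no peer can serve
    it, that is evidence for the first case: the operator may never have
    published the chunk at all.
    """
    for peer in peers:
        result = fetch_from_peer(peer, block_root, index)   # hypothetical network call
        if result is not None:
            return result           # data exists; route around the bad peers
    return None                     # unavailable everywhere we asked: suspect withholding
```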
3 Likes

Thanks for the comment, @shamatar. :slight_smile:

I’m aware SPV is mentioned quite often, but I haven’t heard erasure coding being discussed (I haven’t listened to all the implementers calls yet, though).

I think erasure coding could help in both cases you described. We could have two erasure coding-based sampling processes happening simultaneously: one that checks only the last few blocks, and another that checks the whole chain. Somebody can correct me if I’m wrong, but I believe this way light Plasma clients can have really high guarantees that all the data is constantly available, without ever having to download a single block! :slight_smile: Now we can imagine Raspberry Pis and IoT devices as light Plasma clients. :wink:
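A hypothetical scheduler for those two processes might look like the following; all window sizes and intervals are made-up knobs, and sample_block is the per-block check sketched earlier in the thread:

```python
import random
import time

RECENT_WINDOW = 16      # how many of the newest blocks to re-check every pass
RECENT_INTERVAL = 10    # seconds between passes over the recent window
FULL_INTERVAL = 3600    # seconds between random probes into the full history

def sampling_loop(chain_height, sample_block):
    """Interleave the two processes: frequent checks of the chain head,
    plus occasional random probes that keep the whole history honest."""
    last_full_probe = 0.0
    while True:
        head = chain_height()
        for height in range(max(0, head - RECENT_WINDOW), head + 1):
            sample_block(height)                        # recent blocks, every pass
        if time.time() - last_full_probe > FULL_INTERVAL:
            sample_block(random.randrange(head + 1))    # one random historical block
            last_full_probe = time.time()
        time.sleep(RECENT_INTERVAL)
```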

Say the whole blockchain is a gigabyte. You’d need enough clients querying the erasure code for the whole blockchain. If you updated the erasure code every day, the clients would collectively have to sample gigabytes’ worth of data every day. I’m not sure how feasible that is.
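Some rough numbers on that, under assumptions of my own (a rate-1/2 code, 256-byte chunks, 32-byte hashes, and the network collectively covering about half of the extended data per re-encoding):

```python
CHAIN_BYTES = 1 * 2**30    # the 1 GB chain from above (taking it as 1 GiB)
CODE_RATE = 0.5            # rate-1/2 extension -> 2 GiB of coded data
COVERAGE = 0.5             # fraction of the extension to be sampled collectively
CHUNK_BYTES = 256          # illustrative chunk size
PROOF_BYTES = 32 * 23      # Merkle path: log2(2 GiB / 256 B) = 23 hashes of 32 B

extended_bytes = CHAIN_BYTES / CODE_RATE
chunks_needed = int(extended_bytes * COVERAGE / CHUNK_BYTES)

for clients in (1_000, 10_000, 100_000):
    per_client = chunks_needed / clients * (CHUNK_BYTES + PROOF_BYTES)
    print(f"{clients:>7} clients -> ~{per_client / 2**20:6.2f} MiB per client per re-encoding")
```

With chunks this small the Merkle proofs dominate the bandwidth (992 bytes moved per 256 bytes of payload); larger chunks cut that overhead, and either way the per-client load falls linearly with the number of sampling clients.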

1 Like

Hi Mihailo,

I am not sure how erasure coding can help in the block withholding case. If the block was never made available on the network (intentionally), how could we sample its data?

Thanks,
Jaehyung

2 Likes

I didn’t really get this; could you please expand?

In the worst case, sampling of the whole chain can be done only once, as part of the first boot-up (sync) of a light client, and the frequency can be increased from there, depending on the resources each client is willing/able to invest. Speaking of which, I don’t really know how resource-intensive (bandwidth/CPU/memory) this sampling is. If you have any knowledge/experience/data on this, could you please share? Thanks! :slight_smile:

We cannot, and that’s exactly how we’ll see that the data is missing (or am I the one missing something? :slight_smile:).

Then is the following claim true? Erasure coding cannot resolve data availability issues caused by block withholding (but may resolve issues from dishonest full nodes).

1 Like

As I’ve said, my understanding is that it helps in both cases, and I’ve explained why (if you do the sampling and you can’t fetch some data, you know that data is unavailable). It would be great if someone experienced with erasure coding could weigh in on this.


The creator of the erasure code could respond to requests on the network for pieces of the erasure code, making it seem that more data is available than the sampling scheme would suggest. E.g. you sampled 3 pieces and the block creator returns those 3 pieces, but the rest of the block is unavailable.

There are two ways to circumvent this:

  1. Ensure that the operator cannot respond to requests for pieces of the erasure code, which is difficult if the operator has a lot of sybil identities on the network.
  2. Have enough light clients sampling that something like half the erasure code is sampled by them collectively. Even then you run into the issue that some light clients can be fooled into accepting a block as available when it is not, and you have to add some extra privacy features so the operator can’t tell which client is asking; the simulation below illustrates both effects.

It’s all detailed here: https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding
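To see both points concretely, here is a toy simulation of my own (not taken from the linked note): the adversary answers any sampled chunk as long as the total it has released stays below the reconstruction threshold, and refuses afterwards. With few clients everyone is fooled; with enough clients, collective demand exceeds the budget and refusals appear, though the earliest-served clients can still be fooled, which is why the privacy point matters:

```python
import random

TOTAL_CHUNKS = 2048              # chunks in the extended rate-1/2 block (illustrative)
BUDGET = TOTAL_CHUNKS // 2 - 1   # releasing half or more lets honest nodes reconstruct

def selective_release(num_clients: int, samples_per_client: int, trials: int = 200) -> float:
    """Average fraction of clients whose samples are all answered while the
    adversary keeps the block unrecoverable by capping the chunks it releases."""
    satisfied_total = 0.0
    for _ in range(trials):
        released = set()
        satisfied = 0
        for _ in range(num_clients):
            fooled = True
            for i in random.sample(range(TOTAL_CHUNKS), samples_per_client):
                if i in released:
                    continue             # already public, answering costs nothing
                if len(released) < BUDGET:
                    released.add(i)      # answer while still under the budget
                else:
                    fooled = False       # answering would make the block recoverable
            satisfied += fooled
        satisfied_total += satisfied / num_clients
    return satisfied_total / trials

for n in (1, 50, 500):
    print(f"{n:>3} clients -> {selective_release(n, 10):.0%} fooled on average")
```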

1 Like