Minimal Viable Plasma


You could also play a game where random participants are asked and incentivized to confirm block availability.

The participants for block n+1 are chosen at block n. Before block n+1 is accepted, it must either be signed by the participants selected at block n, or be submitted along with proofs, signed by those participants, that specific UTXOs are included in the block.
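The selection-and-acceptance rule above could be sketched roughly as follows. All names and parameters here are illustrative, not part of any spec, and a real scheme would need an unbiasable randomness source (the operator could otherwise grind the block hash to pick friendly notaries):

```python
import hashlib

def select_notaries(prev_block_hash: bytes, participants: list, k: int) -> list:
    """Pseudo-randomly pick k notaries for block n+1 using data from block n.

    Illustrative only: hashing the previous block hash repeatedly is NOT a
    secure randomness beacon, since the block producer can influence it.
    """
    chosen = []
    seed = prev_block_hash
    pool = list(participants)
    for _ in range(min(k, len(pool))):
        seed = hashlib.sha256(seed).digest()
        idx = int.from_bytes(seed, "big") % len(pool)
        chosen.append(pool.pop(idx))
    return chosen

def block_acceptable(availability_sigs: set, notaries: list, threshold: int) -> bool:
    """Block n+1 is accepted only with enough availability attestations
    from the notaries that were chosen at block n."""
    return len(availability_sigs & set(notaries)) >= threshold
```

The point of the sketch is only the shape of the game: selection is committed one block in advance, and acceptance is gated on signatures from that committed set.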


The problem with a design like this is that it breaks Plasma guarantees by making the protocol dependent on a secondary consensus mechanism. In this example, it’s possible for the operator to be (or collude with) a majority of the notaries and “claim” that the block is available when it isn’t.


But how is that different from Casper? :smiling_imp::smiling_imp: It looks like Casper makes exactly the same assumptions )


Well, yes, but I’d argue that Plasma is only Plasma exactly because it doesn’t make those assumptions.


Well )) What I am saying is: if reliance on validators is OK for Casper, why not consider it here too (one can name it differently for the sake of purity )

Imho it seems that a system with burn proofs has lots of advantages in preventing users from doing bad things. Users arguably will try doing bad things way more frequently than plasma operators.

With burn proofs there is a potential problem of the Plasma operator withholding blocks/burn proofs, but it seems that a mechanism where a user can complain and force the operator to publish the block to a set of validators is an interesting alternative to explore …


Will you elaborate on this please?


The family of protocols where you rely on a randomly sampled set of bonded validators to guarantee data availability and/or validity of a separate chain is generally called “sharding.” Such designs introduce many complications and assumptions that Plasma does not have, and provide many benefits that would make most of the mechanisms in Plasma irrelevant.

Most notably, Plasma should be possible even if there is only a single operator for that chain (i.e. an exchange like Coinbase, or an app developer like Cryptokitties).

It’s definitely been explored, but the current thinking is that any challenge-response protocol around data availability is subject to problems around speaker/listener fault equivalence. Try designing such a protocol that is immune to griefing attacks and you will see what we mean.


I don’t think this is correct. While I guess you are taking a subset of participants and asking them to do certain work, the work is done for the entire set of participants. Sharding is about separating information so that work can be done across each shard separately.

This seems to be geared towards block producers.

A random group asked to confirm block availability does not need to also be able to produce blocks. The exact responsibility of the group would be to download blocks and collectively prove that as many other members of the chain have seen the block as possible.

A griefing factor can be adjusted and weighted by users who have recently included transactions in the plasma chain. This means that the group of active users who are not censored have seen the data.

Still to be made explicit is the exact cost of data unavailability. The operator could progressively lose a bond, while users pay a sort of indirectly and partially refundable (by availability proof) fee for block inclusion. This means the operator’s uncensored users can collectively grief it. To me this seems totally acceptable for a Plasma chain: the operator controls this set of users, and the censored users can exit.
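The bond-and-refund accounting described above could be modeled with toy numbers like this. Everything here (function name, the linear slashing rule, the refund share) is a hypothetical parameterization chosen only to make the incentives concrete:

```python
def settle_round(operator_bond: float, fee: float, refund_share: float,
                 slash_per_missing: float, paying_users: list, proofs: set):
    """Toy settlement for one block of the availability game sketched above.

    Each paying user who later submits an availability proof gets a partial
    fee refund; the operator's bond shrinks for every paying user it failed
    to convince of availability. Withheld proofs are how uncensored users
    collectively grief the operator.
    """
    missing = [u for u in paying_users if u not in proofs]
    new_bond = operator_bond - slash_per_missing * len(missing)
    refunds = {u: fee * refund_share for u in paying_users if u in proofs}
    return new_bond, refunds
```

With a bond of 100, a fee of 2, a 50% refund share, and a slash of 5 per missing proof, three paying users of whom two submit proofs would leave the operator with a bond of 95 and refund 1.0 to each prover.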

This has little to do with the consensus mechanism of the Plasma chain (i.e. it can still use PoA), but more to do with a game that proves data availability, which will be extremely important in non-UTXO versions of Plasma.


Stuck at the sendTx function; I used the example as it is explained in the GitHub document:


The point that is unclear to me is what the end goal of Plasma development is.

Theoretically, one needs to produce a specification accepted by everyone. For this, one needs to decide on a committee, preferably of independent people. Otherwise, everyone can do whatever she perceives as correct. As an example, OmiseGo claims to develop a Plasma implementation. No one here can attest to the security of this implementation. Maybe the OmiseGo guys are supersmart and supersecure. But since there is no formal spec, and no process to produce a formal spec other than @vbuterin approving it as secure at some point (as we remember, the sharding spec and Casper were first approved and then disapproved), the entire discussion on this message board seems to have no purpose.

Taking one person’s opinion, however smart this person is, is a bad way to produce security protocols. There are zillions of examples of the problems this creates, from SSL v2.0 to WEP, WPA, etc. The Ethereum Foundation needs to grow up and mature, otherwise there will be a high-profile security breach at some point, which will lead to lots of embarrassment, or a fork where some people will create an Ethereum clone with a formal security process. Lightning Network is a good example of how not to do things. It is a centralized network designed in a proprietary way and totally stillborn. No one in the world knows how Lightning Network works. Plasma so far follows the path of Lightning pretty closely.

The right way to design Plasma would be to first specify a security model (there is a Common Criteria standard for this, btw), then discuss threats, then threat mitigation, then agree or disagree on the spec. Otherwise the security model is unknown, the threats are not specified or listed anywhere, and what is designed is totally unknown. The threat of a bad Plasma operator is mitigated by people altruistically doing things; this idea alone has never worked in real life. Maybe it works, maybe not, but there has not been much discussion of this part.

Then there are emotional discussions on Twitter with no logical arguments brought by any side as to what is secure and what is not. BTW, there is no absolute security of anything: the security or insecurity of something depends on the security model chosen and the threats mitigated.

As we remember, Solidity was designed with security problems, like integer overflow and reentrant behavior, that no one understood.
It is understandable, though, since at that time Ethereum was essentially a startup. Nowadays, since development has slowed down anyway, why not introduce a more formal spec process that everyone will understand? It seems this would benefit everyone, including the private companies around Ethereum.


When the implementation is done, its security can be evaluated by reading the smart contracts and client code.

Before the implementation is available, one can read informal descriptions about the contract design to evaluate the design. The extent to which this is sufficient is subjective, of course, but personally there is plenty of detail available for me to understand the design to the extent that I don’t expect to be surprised by anything I didn’t think about if/when it goes on mainnet.

False dichotomy. One can have useful discussion about ideas without requiring formal specification.

Also, I certainly didn’t trust Casper just because “Vitalik approved it”. I read the Casper paper, informally verified the proof, read the smart contract linked in the Casper EIP, and informally checked if it corresponded to the paper.

No one in the world knows how Lightning Network works.

The Lightning smart contracts are available for everyone to read, and they even include very helpful descriptions of what the smart contracts try to do. I’ve read them, and encourage you to do so if you’re interested in Layer 2 on Bitcoin.

I don’t see how this follows at all. You can figure out an implied threat model by understanding the design.

BTW, there is no absolute security of anything: the security or insecurity of something depends on the security model chosen and the threats mitigated.

I agree with this, but this seems to undermine your proposed development model. In practice protocol development (IMHO) occurs by people designing the protocol and the security/threat models together, which makes it hard to design one without taking into consideration the other. There’s still no cross-blockchain-community consensus on very basic choices to be made at the layer one security/threat model (see: selfish mining, fee-stealing attacks in a 0-inflation world, verifier’s dilemma, dPoS, weak subjectivity, post-quantum security, 0-conf (amazingly enough), griefing in Casper-FFG, the staking/slashing metagame in PoS). On layer 2, people will probably disagree on how to evaluate griefing and collective action problems (like in MVP).

I think there are absolutely some suggestions in this post about process that I agree with. I personally would like more precise (not necessarily formal) specifications and proofs, as well as more emphasis on the security/threat model. There also seems to be no consensus around the necessity of formal specification or formal verification (the FFG paper was formally verified to some extent, and the contract was slated to undergo verification, but most dapps today don’t formally verify things before launch, and some write rather imprecise specs). But I think most of this is just personal preference, and as long as we seek clarification, welcome/address good-faith criticism, read code and think for ourselves, run independent audits, and don’t rush mainnet launches too much, it shouldn’t be necessary to drastically change the development process.


Hi, may I ask a maybe stupid question?

I can’t see how Minimal Viable Plasma can help improve network scalability if we need to wait for a transaction to be confirmed in a block before sending the confirmation. Or is scalability not a goal in the MVP phase?

User Behavior
The process for sending a Plasma coin to someone else is as follows:

  • Ask them for their address.
  • Send a transaction that sends some of your UTXOs to their address.
  • Wait for it to get confirmed in a block.
  • Send them a confirm message, signed with the keys that you use for each of your UTXO inputs.
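The confirm step at the end of that list can be sketched as follows. This is only an illustration of the flow, not the MVP wire format: actual MVP uses ECDSA signatures over keccak256 digests, whereas the stand-in below uses HMAC-SHA256 so the example stays dependency-free, and the exact digest contents are an assumption:

```python
import hashlib
import hmac

def confirm_message(tx_hash: bytes, block_root: bytes, input_keys: list) -> list:
    """Sketch of the final step above: one confirmation per UTXO input,
    produced with the same keys that signed those inputs.

    HMAC-SHA256 is a placeholder for a real signature scheme; the digest
    binding the tx to the block it was included in is the important part.
    """
    digest = hashlib.sha256(tx_hash + block_root).digest()
    return [hmac.new(key, digest, hashlib.sha256).digest() for key in input_keys]
```

The recipient only treats the payment as final once they hold both the block inclusion proof and these confirmations, one per input.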


There are a lot of components to scalability. MVP primarily improves throughput (roughly speaking, the number of transactions that can be finalized every N seconds), rather than latency (the amount of time before a given transaction is confirmed).

It improves throughput because each transaction only gets included in a Plasma chain block, and only the root hash of that block needs to be published to the main Ethereum chain.
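That root hash is a Merkle root over the block’s transactions, which is why the on-chain cost per block is constant. A minimal sketch (plain SHA-256 pairing here; MVP itself uses keccak256 and fixed-depth trees, so treat this as illustrative):

```python
import hashlib

def merkle_root(transactions: list) -> bytes:
    """Hash every transaction, then pair-and-hash upward until one
    32-byte root remains. However many transactions the Plasma block
    holds, only this root is published to the root chain."""
    if not transactions:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(tx).digest() for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

A block with 10,000 transactions and a block with 10 both commit to Ethereum as a single 32-byte value; users later prove inclusion of their own transaction with a logarithmic-size Merkle branch against that root.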