Minimal Viable Plasma

You could also play a game where random participants are asked and incentivized to confirm block availability.

The participants for block n+1 are chosen at block n. Before block n+1 is accepted, it must either be signed by the participants selected at block n, or be submitted together with those participants' signed proofs that specific UTXOs are included in the block.
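For concreteness, here is a minimal sketch of that game in Python. The sample size, quorum, sampling rule, and all names are illustrative assumptions, not part of any spec:

```python
# Sketch of the proposed availability game. The sample size, quorum,
# sampling rule, and all names are illustrative assumptions.
from dataclasses import dataclass

NOTARY_COUNT = 5         # assumed number of sampled participants
SIGNATURE_THRESHOLD = 3  # assumed quorum for accepting a block

@dataclass
class PlasmaBlock:
    number: int
    root: bytes
    notary_signatures: dict  # notary address -> signature over `root`

def sample_notaries(prev_block_hash: bytes, participants: list) -> list:
    """Deterministically sample block n+1's notaries from block n's hash."""
    seed = int.from_bytes(prev_block_hash, "big")
    return [participants[(seed + i) % len(participants)]
            for i in range(NOTARY_COUNT)]

def accept_block(block: PlasmaBlock, notaries: list, verify_sig) -> bool:
    """Accept block n+1 only if a quorum of block n's notaries signed it."""
    valid = sum(1 for n in notaries
                if n in block.notary_signatures
                and verify_sig(n, block.root, block.notary_signatures[n]))
    return valid >= SIGNATURE_THRESHOLD
```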


The problem with a design like this is that it breaks Plasma guarantees by making the protocol dependent on a secondary consensus mechanism. In this example, it's possible for the operator to be (or collude with) a majority of the notaries and "claim" that the block is available when it isn't.

But how much is that different from Casper? :smiling_imp: It looks like Casper makes exactly the same assumptions.

Well, yes, but I’d argue that Plasma is only Plasma exactly because it doesn’t make those assumptions.


Well, what I am saying is: if reliance on validators is OK for Casper, why not consider it here too? (One can name it differently for the sake of purity.)

IMHO, a system with burn proofs has lots of advantages in preventing users from doing bad things. Users will arguably try to do bad things far more frequently than Plasma operators will.

With burn proofs there is a potential problem of the Plasma operator withholding blocks/burn proofs, but a mechanism where a user can complain and force the operator to publish the block to a set of validators seems like an interesting alternative to explore…

Will you elaborate on this please?

The family of protocols where you rely on a randomly sampled set of bonded validators to guarantee data availability and/or validity of a separate chain is generally called “sharding.” Such designs introduce many complications and assumptions that Plasma does not have, and provide many benefits that would make most of the mechanisms in Plasma irrelevant.

Most notably, Plasma should be possible even if there is only a single operator for that chain (i.e. an exchange like Coinbase, or an app developer like Cryptokitties).

It’s definitely been explored, but the current thinking is that any challenge-response protocol around data availability is subject to problems around speaker/listener fault equivalence (https://vitalik.ca/general/2017/07/16/triangle_of_harm.html). Try designing such a protocol that is immune to griefing attacks and you will see what we mean.


I don't think this is correct. While you are taking a subset of participants and asking them to do certain work, the work is done for the entire set of participants. Sharding is about separating information so that work can be done across each shard separately.

This seems to be geared towards block producers.

A random group asked to confirm block availability does not also need to be able to produce blocks. The group's exact responsibility would be to download blocks and collectively prove that as many other members of the chain as possible have seen the block.

The griefing factor can be adjusted and weighted by users who have recently included transactions in the Plasma chain. This means that the group of active users who are not censored has seen the data.

Still to be made explicit is the exact cost of data unavailability. The operator could progressively lose a bond while users pay a sort of indirectly and partially refundable (by availability proof) fee for block inclusion. This means their uncensored users can collectively grief them, which seems totally acceptable for a Plasma chain: the operator controls this set of users, and the censored users can exit.

This has little to do with the consensus mechanism of the Plasma chain (i.e. it can still use PoA), but more to do with a game that proves data availability, which will be extremely important in non-UTXO versions of Plasma.
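To make those incentives concrete, here is a toy model of the game described above. All numbers and names are illustrative assumptions, not a proposal:

```python
# Toy accounting for the availability game: the operator bleeds bond while
# availability is unproven, and users get part of their inclusion fee back
# once availability is proven. All numbers are illustrative assumptions.

FEE = 1.0                # assumed per-transaction inclusion fee
REFUNDABLE_SHARE = 0.5   # assumed share refunded on proof of availability
BOND_PENALTY = 2.0       # assumed bond loss per unproven block

def settle_block(bond: float, availability_proven: bool):
    """Return (new_bond, refund_per_fee_payer) after one Plasma block."""
    if availability_proven:
        return bond, FEE * REFUNDABLE_SHARE
    # Unproven availability: the operator loses bond; users get no refund
    # but retain the option to exit.
    return bond - BOND_PENALTY, 0.0

bond = 100.0
bond, refund = settle_block(bond, availability_proven=False)
print(bond, refund)  # 98.0 0.0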

The point that is unclear to me is: what is the end goal of Plasma development?

Theoretically, one needs to produce a specification accepted by everyone. For this, one needs to decide on a committee, preferably of independent people. Otherwise, everyone just does whatever she perceives as correct. As an example, OmiseGo claims to develop a Plasma implementation. No one here can attest to the security of this implementation. Maybe the OmiseGo guys are super smart and super secure. But since there is no formal spec, and no process to produce a formal spec other than @vbuterin approving it as secure at some point (as we remember, the sharding spec and Casper were first approved and then disapproved), the entire discussion on this message board seems to have no purpose.

Taking one person's opinion, however smart this person is, is a bad way to produce security protocols. There are zillions of examples of the problems this creates, from SSL v2.0 through WEP, WPA, etc. The Ethereum Foundation needs to grow up and mature, otherwise there will be a high-profile security breach at some point, which will lead to lots of embarrassment, or to a fork where some people create an Ethereum clone with a formal security process.

Lightning Network is a good example of how not to do things. It is a centralized network designed in a proprietary way and totally stillborn. No one in the world knows how Lightning Network works. Plasma so far follows the path of Lightning pretty closely.

The right way to design Plasma would be to first specify a security model (there is a Common Criteria standard for this, by the way), then discuss threats, then threat mitigation, and then agree or disagree on the spec. Otherwise the security model is unknown, the threats are not specified or listed anywhere, and what is being designed is totally unknown. The threat of a bad Plasma operator is mitigated by people altruistically doing things; this idea alone has never worked in real life. Maybe it works, maybe not, but there has not been much discussion of this part.

Then there are emotional discussions on Twitter with no logical arguments brought by any side as to what is secure and what is not. By the way, there is no absolute security of anything; the security or insecurity of something depends on the security model chosen and the threats mitigated.

As we remember, Solidity was designed with security problems, like integer overflow and reentrant behavior, that no one understood. That is understandable, though, since at that time Ethereum was essentially a startup. Nowadays, since development has slowed down anyway, why not introduce a more formal spec process that everyone will understand? It seems this would benefit everyone, including the private companies around Ethereum.


When the implementation is done, the security of it can be evaluated by reading the smart contracts and client code.

Before the implementation is available, one can read the informal descriptions of the contract design to evaluate it. The extent to which this is sufficient is subjective, of course, but personally there is enough detail available for me to understand the design well enough that I don't expect to be surprised by anything I didn't think about if/when it goes to mainnet.

False dichotomy. One can have useful discussion about ideas without requiring formal specification.

Also, I certainly didn’t trust Casper just because “Vitalik approved it”. I read the Casper paper, informally verified the proof, read the smart contract linked in the Casper EIP, and informally checked if it corresponded to the paper.

> No one in the world knows how Lightning Network works.

The Lightning smart contracts are available at https://github.com/lightningnetwork/lightning-rfc for everyone to read, and they even include very helpful descriptions of what the smart contracts try to do. I've read them, and encourage you to do so if you're interested in Layer 2 on Bitcoin.

I don’t see how this follows at all. You can figure out an implied threat model by understanding the design.

> BTW there is no absolute security of anything security or insecurity of something depends on the security model chosen and threats mitigated.

I agree with this, but it seems to undermine your proposed development model. In practice, protocol development (IMHO) occurs by people designing the protocol and the security/threat models together, which makes it hard to design one without taking the other into consideration. There's still no cross-blockchain-community consensus on very basic choices in the layer-one security/threat model (see: selfish mining, fee-stealing attacks in a 0-inflation world, the verifier's dilemma, dPoS, weak subjectivity, post-quantum security, 0-conf (amazingly enough), griefing in Casper FFG, the staking/slashing metagame in PoS). On layer 2, people will probably disagree on how to evaluate griefing and collective-action problems (like in MVP).

I think there are absolutely some suggestions about process in this post that I agree with. I personally would like more precise (not necessarily formal) specifications and proofs, as well as more emphasis on the security/threat model. There also seems to be no consensus around the necessity of formal specification or formal verification (the FFG paper was formally specified to some extent, and the contract was slated to undergo formal verification, but most dapps today don't formally verify things before launch, and some write rather imprecise specs). But I think most of this is just personal preference, and as long as we seek clarification, welcome and address good-faith criticism, read code and think for ourselves, run independent audits, and don't rush mainnet launches, it shouldn't be necessary to drastically change the development process.


Hi, may I ask a possibly stupid question?

I can't see how Minimal Viable Plasma improves network scalability if we need to wait for a transaction to be confirmed in a block before sending the confirmation message. Or is scalability not a goal in the MVP phase?

User Behavior
The process for sending a Plasma coin to someone else is as follows:

  • Ask them for their address.
  • Send a transaction that sends some of your UTXOs to their address.
  • Wait for it to get confirmed in a block.
  • Send them a confirm message, signed with the keys that you use for each of your UTXO inputs.

There are a lot of components to scalability. MVP primarily improves throughput (roughly speaking, the number of transactions that can be finalized every N seconds), rather than latency (the amount of time before a given transaction is confirmed).

It improves throughput because the transaction only gets included in a Plasma chain block, and only the root hash of that block needs to be published to the main Ethereum chain.
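A small sketch of that, under simplifying assumptions (an unpadded Merkle tree, and a submitBlock-style call on the root contract):

```python
# Why throughput improves: however many transactions a Plasma block holds,
# only one 32-byte Merkle root is published to the root chain (e.g. via a
# submitBlock(root) call on the Plasma contract).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(txs: list) -> bytes:
    """Simplified Merkle root over transaction hashes (odd levels duplicate
    the last node; real implementations pad to a fixed depth)."""
    level = [h(tx) for tx in txs]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [f"tx-{i}".encode() for i in range(10_000)]
root = merkle_root(txs)
print(len(root))  # 32 bytes on-chain for 10,000 Plasma transactions
```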


We’ve implemented Plasma MVP in Vyper with @nrryuya

And we've just published a blog post about the implementation.


Very late, but I just wanted to stress that (at least for now) Plasma achieves this by putting the burden on the end users instead. :frowning: I really like Plasma and I think it has potential, but this fact is simply not mentioned enough, although it should be… The main focus of researchers/designers of any IT/tech system should always be to relieve the burden on the end users (because by default they have fewer resources and are used to being "spoiled" by good UX), and then, if possible, on the business owners (the operators, in our case).

I'm writing all of this in the hope that the Plasma research community will eventually start thinking in this direction. :crossed_fingers:

I think it's worth specifying more precisely what a "user" is, specifically who is being burdened. Most people will run the default client software, and making sure that software keeps their money secure is absolutely part of a working Plasma implementation. That software will be pretty complicated (IMO more complicated than, e.g., LND), since the client rules for the Plasma specs aren't straightforward, and you have to deal with normal SWE issues like what happens if the user power-cycles or uninstalls their app, and make sure those don't interfere with the security of their funds.

I’m talking primarily about end users, e.g. traders on a trading platform that sits on a Plasma chain, or cat collectors/breeders on a CryptoKitties-like Plasma chain.

Completely agree. I believe that should be obvious by now, and we’ve barely scratched the surface with smart contracts on Plasma and other complex stuff.

Having the above in mind, we can draw the following conclusion: if anyone wants to own a kitty, hold some money on a trading platform, or hold any value on any Plasma chain in general, they need a dedicated machine that is constantly online, checking TWO blockchains (every block on the main chain, looking for exits, and every block on the Plasma chain, looking for invalid or withheld blocks). And if we imagine a future where I hold value on, e.g., 5 different Plasma chains (which I think is completely realistic), things get pretty ugly. :see_no_evil:
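For illustration, here is roughly what that always-online watcher looks like per Plasma chain. The client API used here (new_exits, next_block, is_valid, and so on) is hypothetical; only the two monitoring duties are the point:

```python
# Rough shape of the always-online watcher each user needs per Plasma
# chain. The client API (new_exits, next_block, is_valid, ...) is
# hypothetical; the two monitoring duties are what matters.
import time

POLL_INTERVAL = 5.0  # seconds; illustrative

def watch(root_chain, plasma_chain, my_utxos, challenge, exit_all):
    while True:
        # Duty 1: every root-chain block, look for exits that would
        # steal one of our UTXOs, and challenge invalid ones.
        for ex in root_chain.new_exits():
            if ex.utxo in my_utxos and not ex.is_valid():
                challenge(ex)
        # Duty 2: every Plasma block, check validity and availability;
        # a withheld or invalid block means it is time to exit.
        block = plasma_chain.next_block()
        if block is None or not block.is_valid():
            exit_all(my_utxos)
        time.sleep(POLL_INTERVAL)
```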

This is a huge step backwards in UX compared to both centralized services and “original” blockchains like Bitcoin and Ethereum.

IMHO, no wide adoption will happen if the community doesn’t accept this as a fact and work on it.


True. I think the same way about sharding: create multiple shards/Plasma chains (visualize them as a ring of nodes) and connect each one with multiple two-way pegs (which transfer tokens between those shards/chains). I know this is just an idea and we need to do some research, but it seems neat to me.

I'm trying to understand the meaning of this. Am I right in thinking that the priority is either blknum * 1000000000 + txindex * 10000 + oindex or blknumFromOldBlock * 1000000000 + txindex * 10000 + oindex?

Now, this could mean that if txindex and oindex are the same for this block and the old block, then the two exits would have the same priority? And if you save exits in a mapping keyed by priority, the later exit would overwrite the earlier one?
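To spell out the concern, assuming exits referencing sufficiently old blocks have their blknum replaced by a cutoff block's number (the cutoff rule and constants here are illustrative):

```python
# The collision spelled out, assuming exits from sufficiently old blocks
# have blknum replaced by a cutoff block's number. The cutoff rule and
# constants are illustrative.
BLKNUM_WEIGHT = 1_000_000_000
TXINDEX_WEIGHT = 10_000

def exit_priority(blknum, txindex, oindex, cutoff_blknum):
    effective_blknum = max(blknum, cutoff_blknum)  # old UTXOs get the cutoff
    return effective_blknum * BLKNUM_WEIGHT + txindex * TXINDEX_WEIGHT + oindex

cutoff = 5000  # e.g. the oldest block still "young" enough

# Two distinct old UTXOs with identical txindex and oindex...
p1 = exit_priority(blknum=100, txindex=3, oindex=0, cutoff_blknum=cutoff)
p2 = exit_priority(blknum=200, txindex=3, oindex=0, cutoff_blknum=cutoff)
assert p1 == p2  # ...collide, so a mapping keyed by priority alone would
                 # let the later exit overwrite the earlier one.
```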


If all clients are expected to monitor the validity of Plasma blocks at all times and report bad behavior, incentivized by exit deposits in cases where exits are successfully challenged, wouldn't that mean Plasma can't properly scale if clients can automatically detect fraud? In other words, if there are 10,000 people on a Plasma chain with 5,000 online, and one attempts an invalid exit, wouldn't 4,999 of them notice this and submit challenges at the same time, gut-punching the network with 4,999 simultaneous transactions? Worse yet, isn't there an obvious attack vector wherein someone can enter with 0.001 ETH, try to exit with 1 ETH, and thereby constantly grief not only the various Plasma chains in existence but also the main chain, by triggering Plasma clients around the world into noticing the obviously invalid exit?

Sorry if this was discussed in the topic, missed it if so.
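For what it's worth, since only one successful challenge is needed to cancel an exit, one purely illustrative client-side mitigation (not part of the MVP spec) is for watchers to wait a random delay and re-check before broadcasting their own challenge:

```python
# Purely illustrative client-side mitigation (not part of the MVP spec):
# since one successful challenge cancels the exit, watchers can wait a
# random delay and re-check before broadcasting their own challenge.
# `exit_challenged` and `challenge_exit` are hypothetical client calls.
import random
import time

def maybe_challenge(root_chain, exit_id, proof, max_delay=30.0):
    time.sleep(random.uniform(0.0, max_delay))  # desynchronize watchers
    if root_chain.exit_challenged(exit_id):
        return  # someone else already cancelled this exit
    root_chain.challenge_exit(exit_id, proof)
```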


There is a difference that makes the mass-exit threat in Plasma MVP worse (I think this may be clearer in retrospect): for a centralized channel hub, a malicious/hacked hub operator can force each user to exit onto the main chain, but this requires the hub to broadcast a transaction for each user (or, more specifically, for each channel). In Plasma MVP, by contrast, just one transaction from the operator is enough to force all users to exit.
