Ethereum 2.0 Data Model: Actors and Assets


Let me start with what we want to achieve for the end user in order to get mass adoption:

  1. being able to create an immutable account for free

  2. being able to receive, for free, assets linked to an account

  3. being able to send assets linked to an account, potentially paying some fees during the transfer

  4. being able, optionally, to recover ownership of an account even if the user lost the original private key, by trusting some friends, some KYC providers, or both (mimicking the “forgot password” feature available on 99.99% of websites)

  5. being able to manage the security of an account, for example switching from single-sig to multi-sig to authorize transfers of your assets.

In the current environment, you can achieve 1, 2, and 3 easily with Bitcoin or Ethereum. You generate a public/private key pair for free and can start receiving or sending assets.

If you want to achieve 4 and 5, you will need to delegate the security approval of an asset transfer to a smart actor contract.

In the current environment, the address of a smart contract is necessarily different from the original public address a user creates freely offline. This means that, at some point, for a user to gain additional properties such as 4 and 5, he will need to move his assets to a new address and tell all his friends that his receiving address has changed. Of course this is doable, but if you really care and think deeply about mass adoption, a normal user will never do that; it is far too complicated. You want to provide 1, 2, 3, 4, and 5 without requiring the user to change addresses.

Another solution is to use ENS for address redirection, but I disagree with this approach because it adds another layer of complexity and cost that is not necessary to achieve account immutability.

From what I understand, this limitation of the Ethereum protocol could easily be changed: we could allow deploying a smart contract to a specific, empty Ethereum address if the user can prove that he owns the private key of that address.

I believe this is the right way to reach our goal of account immutability with properties 1, 2, 3, 4, and 5.

I hope it helps

Kind regards,



In Constantinople, there’s the CREATE2 opcode, which allows deploying a contract to a deterministic address (which can be linked to the user’s address). Then the contract itself would verify that its address is derived from the user’s address, and authorise an action. Is this not enough?
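For context, the CREATE2 address is deterministic but keyed on the deployer contract, a salt, and the init code, not on a user’s key alone. Here is a rough Python sketch of the EIP-1014 formula. Caveat: Python’s standard `hashlib` only ships the standardized SHA3-256, not Ethereum’s Keccak-256 (which uses different padding), so this stand-in will not reproduce real mainnet addresses; it only illustrates the structure of the derivation.

```python
# Sketch of the CREATE2 address formula (EIP-1014):
#   address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
# NOTE: hashlib.sha3_256 is a stand-in; real Ethereum uses Keccak-256
# (pre-NIST padding), so these addresses will NOT match real ones.
import hashlib

def h(data: bytes) -> bytes:
    # stand-in for keccak256 -- see caveat above
    return hashlib.sha3_256(data).digest()

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> str:
    assert len(deployer) == 20 and len(salt) == 32
    digest = h(b"\xff" + deployer + salt + h(init_code))
    return "0x" + digest[12:].hex()  # keep the last 20 bytes

addr = create2_address(b"\x11" * 20, b"\x00" * 32, b"\x60\x00")
print(addr)  # deterministic: same deployer + salt + code -> same address
```

The point of contention in the thread follows directly from the formula: the user’s externally owned address never appears as an input, so a contract can only be *linked* to it, never deployed *at* it.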


Unfortunately, I don’t think so.

Let’s take a quick user story as an example. Antoine creates a public/private key pair. He gets a new address for free: 0xAAA123456. Yay!

He is very happy because now he can receive tokens! He gives this address to all his friends and starts to receive 12 ETH and 12 REP; he is getting rich now!

Now he is worried about the security of his account and would like to enable a private-key recovery mode. He would like that, if he lost his original private key, he could get a new one for his account if his friends Marta, Aleksey and Bob submit a Shamir secret share for him.

Of course, Antoine doesn’t want to move his assets or change his beautiful 0xAAA123456.

This user story is not possible in the current Ethereum protocol because Antoine cannot deploy a smart user contract at 0xAAA123456.

I hope it helps!
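As an aside, the recovery scheme Antoine wants can be sketched with textbook Shamir secret sharing: each friend holds one share, and any threshold of them can reconstruct the secret. This is a toy Python illustration over a small demo prime field, not wallet-grade code; a real wallet would use an audited library:

```python
# Toy Shamir secret sharing sketch -- illustration only, not for real keys.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for this demo

def make_shares(secret: int, threshold: int, n: int):
    # random polynomial of degree threshold-1 with f(0) = secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 recovers f(0) = secret
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = make_shares(secret, threshold=3, n=3)  # Marta, Aleksey, Bob
assert recover(shares) == secret
```

Fewer than `threshold` shares reveal nothing about the secret, which is what makes the “trusted friends” recovery story safe against any single friend.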


If 0xAAA123456 is a public-key-derived address, then Marta, Aleksey and Bob can submit the Shamir secret shares to help Antoine recover the private key, and he will be able to keep using his REP as normal.

I don’t see the issue.


So you mean I can receive and send ETH at 0xAAA123456, which starts out as an empty account address, and then later deploy a smart contract at 0xAAA123456? If we can do that, that’s all we need.


Yes, it is currently possible: when you deploy a contract at an address which holds a non-zero balance, this balance is added to the new contract’s endowment.


What we ended up doing at Skale for all of our Solidity contracts is splitting each contract into a stateful data contract and a stateless behavior contract, as people suggested here.

A data contract should only have state plus getters and setters. A behavior contract should be stateless … Ideally one would have it as a keyword in Solidity …
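As a language-agnostic sketch of this split (the real pattern is written in Solidity; the class and method names here are hypothetical), the data contract holds state behind getters and setters, while the behavior contract is stateless logic that can be swapped out without migrating state:

```python
# Hypothetical sketch of the data/behavior contract split described above.
class BalanceData:
    """Data 'contract': state plus getters and setters, no business logic."""
    def __init__(self):
        self._balances = {}

    def get_balance(self, account: str) -> int:
        return self._balances.get(account, 0)

    def set_balance(self, account: str, amount: int) -> None:
        self._balances[account] = amount

class TransferBehavior:
    """Behavior 'contract': stateless; upgradeable without state migration."""
    @staticmethod
    def transfer(data: BalanceData, sender: str, receiver: str, amount: int) -> None:
        if data.get_balance(sender) < amount:
            raise ValueError("insufficient balance")
        data.set_balance(sender, data.get_balance(sender) - amount)
        data.set_balance(receiver, data.get_balance(receiver) + amount)

data = BalanceData()
data.set_balance("alice", 10)
TransferBehavior.transfer(data, "alice", "bob", 4)
print(data.get_balance("alice"), data.get_balance("bob"))  # 6 4
```

The payoff is the one named above: upgrading behavior means deploying a new stateless contract and pointing it at the same data contract.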


The rest of this thread is interesting, and I support the terminology of “actor” instead of “contract”, but this proposal looks terribly complicated and rigid. Ethereum found success because it provided a totally open environment for people to program their own abstractions. There is little overhead of trying to fit your application into some preconceived notion of what someone thought you should build. This is what made Ethereum appeal to so many more people than older systems like colored coins and Master Protocol etc.

This proposal seems to be a big step backwards. Now I’ve got to understand your categories, and figure out how to make my app fit into them. I’ve got to learn a whole framework instead of writing some turing complete code to implement whatever logic and storage my app needs. If I want to have logic in a transfer, now I’ve got to do some ceremony with a predicate.

My guess is that if this proposal went forward, people would keep coming in with things that are difficult to implement using it. Exceptions and extensions would keep being added to it to support different use cases. This proposal reads like something that can only grow in complexity. To become fully general it would have to contain every possible use case within it, becoming infinitely complex.

To test my theory:

  1. How would you do this in this system?
  2. How would you do this:
  3. How would you do this:


I won’t disagree that my spec wasn’t very well written, but I don’t think it precludes the use cases you brought up:

1644: The issuer’s transfer rules allow reclamation by a designated 3rd party (i.e. the caller doesn’t have to be msg.sender)
AZTEC: I’m not deeply familiar with the protocol, but from what I know the smart contract basically acts as a mixer, so nothing really changes?
865: Ether is treated like an ERC20 (or potentially as a semi-fungible token), so I would imagine the same methodology would perhaps be even easier to leverage.

I think a better counter-argument to the “framework” would be a use case where there is no clear stateful asset being interacted with (including the contract itself as an “actor” that is owned), or one that can’t be accounted for with the 4 proposed types of assets.

I tried to make it general enough where it could encompass the vast amount of use cases I have come across, but specific enough where the programming model of how to utilize them is clear.


This is exactly why I personally support having something like A minimal state execution proposal as the base layer. This actor/asset model, along with other approaches, can fairly easily be built as layer 2 systems on top.


For the 1st layer: TxVM + IvyLang would be a very good starting point, I think.
For the 2nd layer: I would suggest a UTXO contract generative DSL in this deck


Agree with that. I think the test of the base layer should be whether @fubuloubu could build his system on top of it. I think that it’s a system that would appeal to many people, but I personally wouldn’t want to be forced into it.


Well, I wouldn’t say the point of my system would be to force anyone to do anything. It would work the same as it does today, but the additional features would give assets first-class citizenship in the platform. A stricter data model leads to better reasoning and optimizations in both the design of applications and the protocol layer.

People are still free to design arbitrarily complex contracts with state storage and access, but this would make it easier to adopt better standards of interaction and give us tools for the whole state rent storage situation.

I think if you identify very clear use cases of the platform, more specificity is actually a welcome and efficient experience from all perspectives. The alternative is a jumble of ERCs for token standards with different properties that can’t be abstracted.
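To illustrate the abstraction point, here is a hypothetical Python sketch (none of these names come from any actual ERC) of a single transfer interface that both fungible and non-fungible assets satisfy, so generic code does not have to special-case each token standard:

```python
# Hypothetical unified asset abstraction -- names are illustrative only.
from abc import ABC, abstractmethod

class Asset(ABC):
    @abstractmethod
    def transfer(self, sender: str, receiver: str, what) -> None: ...

class FungibleAsset(Asset):
    """ERC20-like: 'what' is an amount."""
    def __init__(self):
        self.balances = {}
    def transfer(self, sender, receiver, what):
        if self.balances.get(sender, 0) < what:
            raise ValueError("insufficient balance")
        self.balances[sender] = self.balances.get(sender, 0) - what
        self.balances[receiver] = self.balances.get(receiver, 0) + what

class NonFungibleAsset(Asset):
    """ERC721-like: 'what' is a token id."""
    def __init__(self):
        self.owners = {}
    def transfer(self, sender, receiver, what):
        if self.owners.get(what) != sender:
            raise ValueError("not the owner")
        self.owners[what] = receiver

def pay(asset: Asset, sender, receiver, what):
    # generic code path: no knowledge of which standard is underneath
    asset.transfer(sender, receiver, what)

rep = FungibleAsset()
rep.balances["antoine"] = 12
pay(rep, "antoine", "marta", 5)
print(rep.balances["antoine"])  # 7
```

The argument in the post above is that one such abstraction blessed by the protocol would do what no further ERC can: make the common transfer path uniform across asset types.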


I think the ERC process has led to some great work, and I’m really glad that our core primitives such as ERC20 weren’t all hacked together back when the EF was running out of money and struggling to launch the main chain on time.


Absolutely! It has led to so much innovation and exploration of the design space.

It has also led to an entrenchment of standards; ERC20 is the best example of this. Despite several passes at improving it, it will continue to be the most widely used because of network effects. I don’t see that ever changing with new ERC proposals. About the only thing that could overcome it is a better facility solidified in the core protocol. We should take what we’ve learned and make it easier and safer to use!


I think Vitalik might be right on this one. @fubuloubu, remember we briefly spoke about Dfinity’s Primea; do you think something similar (communication “middleware” which includes the actor model, optional synchrony, etc.) could also be the optimal solution here?


Sure, that is definitely one approach. Layer 2 systems could certainly use this model; I would argue many already do (as well as in layer 1).

The point of my original post was that it came from a place of general analysis of what has been built over 3 years, and what has seen widespread usage. The idea is to add new features and new data structures (which do not affect the current design of contracts at all, so it is entirely opt-in) that help contract designers more easily program with the primitives they already want to use.

What percentage of the network, dapp design, layer 2 architectures, etc. consists of asset transfers of some type? How much of what we design has to do with ownership in some way, shape or form?

If well adopted, this framework could allow the design of state rent control and sharded communication architectures in a more direct way, as the data model would be more specific to the types of state and communication being used, allowing better optimization for those use cases.

This could be migrated in a smoother and more gradual process, and does not require conducting new research or validating new algorithms to be successful. It just takes an eye toward optimizing for the 90% use case, which reduces the burden on the network. General-purpose computing is still possible; I just don’t think it makes sense to optimize for it, because that’s not how the system is being used. Optimize for what is!


Would also like to note this is optimization for inter-contract communication. Stateful asset passing is something all contracts already do.

The design inside a contract can still be whatever and however you would like. Solidity/Vyper/etc. still work.

It’s the same reason we added precompiles for common cryptographic operations instead of making every user/language implement their own. It just seemed to be what people used most. :man_shrugging:


Totally agree with this.


For posterity’s sake:

I dare someone to write an asynchronous sharding model PoC using that and websockets lol