This implies that data within a contract can be selectively removed and revived, rather than the entire contract being removed/revived. This can be decomposed into Cross-Contract storage, or the schemes with CREATE2 owner contracts that Vitalik mentioned above.
This is a great idea, thanks for this contribution - I have been thinking about it today, and it seems to me that this could be a suitable alternative to LCCS, provided that we solve eviction notification, which is not too difficult.
I am going to try three things next:
- Implement an example of ERC-20 contract and a corresponding token holder contract (which could be made to support multiple tokens, I think), and see if it works.
- Use heuristics and automation to gather more accurate data (I had to use manual inspection before) about the share of ERC-20 tokens in the current state
- If the first item works out, I will look into the comparative efficiency of this scheme and LCCS, and modify the State rent proposal accordingly, adding the modification to eviction notification.
You wouldn’t. The contract would quickly get evicted, and if you later wanted to spend it, you could recover it and spend the REP by proving that the contract previously existed.
This just made me realise that, at least for token contracts, eviction notifications might not be as important as I thought - since the evicted holders can be brought back, technically those tokens are still part of the supply - so the totalSupply does not need to be reduced when the token holders get evicted. This could make things even simpler. Thank you for that thought!
I would be very interested to see this implementation!
Thanks for starting this!
May I highlight that the .eth Registrar (for forward name->address resolution) is just one piece of the ENS system.
There is the Registry, which is currently much more susceptible to griefing, as “higher-level domains” are “free”. (Quotes here, since the Registry is actually oblivious of the concept of names/domains, operating on hashes only.)
The same applies, proportionally, to the Public Resolver, probably the most popular resolver (as it’s “free for use of public” and requires no personalised deployment); and to the Reverse Registrar/Resolver dual-purpose contract - perhaps to a lesser extent, as there’s only one entry possible per account address, instead of an entry for any conceivable name (as in the case of the “forward” resolver).
Then there are the Deed contracts, a copy for every auction entry; but their impact on storage bloat (and rent) is proportional to that of the .eth Registrar, which you’ve mentioned.
(Sorry for not linking all these mentioned contracts just yet - in a bit of a hurry. Ping to remind me to do this.)
See @nickjohnson’s reply with actual links.
I made an implementation of an ERC20 token that stores its state in separate contracts over the weekend. I’m not sure if it works - the tests aren’t using Constantinople. It’s at https://github.com/jvluso/openzeppelin-solidity/blob/c683a5bca151f1e10b6ff9dd247b575c9914415a/contracts/token/ERC20/ERC20.sol .
Thank you for that! I am trying to do the same, and I will definitely use some of the ideas from your code. I am going to be writing Constantinople unit tests too. Will post here in a couple of days when I have the first version.
So I have some data on ERC20 tokens. I used successful invocations of the “transfer” function to detect ERC20 contracts. By successful I mean either returning zero-length output (earlier versions of the standard), or non-zero output 32 bytes long.
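The detection rule above can be sketched as follows (my reconstruction for illustration, not the actual tooling used): a `transfer` call counts as ERC20-like if it succeeds and its return data is either empty (pre-standard tokens) or exactly one 32-byte word (the boolean in the final standard).

```python
# Heuristic sketch: classify a contract call result as "ERC20-like transfer".
# Pre-standard tokens return no data; standard tokens return a 32-byte bool.
def looks_like_erc20_transfer(success: bool, return_data: bytes) -> bool:
    if not success:
        return False
    return len(return_data) in (0, 32)

# An old token returning nothing and a standard token returning `true`
# as a 32-byte word both pass; an odd-sized blob does not.
assert looks_like_erc20_transfer(True, b"")
assert looks_like_erc20_transfer(True, (1).to_bytes(32, "big"))
assert not looks_like_erc20_transfer(True, b"\x01\x02\x03\x04")
assert not looks_like_erc20_transfer(False, b"")
```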
At block 6813760, which is the 2nd of December, there were 149’746’097 storage items in total, split across 7’014’024 contracts. Note that there are also zero-storage contracts (like the ones created by GasToken2); these are not counted in this number. In fact, 4’619’309 contracts have a single item in their storage.
The ERC20 heuristics identified 71139 ERC20 contracts. That includes CryptoKitties_Core, because it is both ERC20 and ERC721. And these 71139 contracts collectively occupy 80’504’952 storage items, which is about 53.7% of the total.
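A quick check that the quoted share follows from the two counts above:

```python
# Reproducing the ~53.7% figure from the counts quoted above.
total_items = 149_746_097   # all storage items at block 6813760
erc20_items = 80_504_952    # items held by the 71'139 detected ERC20 contracts

share = erc20_items / total_items
# share ≈ 0.5376, i.e. about 53.7% of the state, as stated
assert 0.537 < share < 0.538
```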
It would be easier to identify all ERC721 contracts, because the standard demands that a Transfer event is issued on any transfer. But the strategy for further data analysis is to remove all ERC20 contracts from the dataset and see which category appears largest next.
I have also started on my implementation of ERC20. Have not tested anything yet, but the general idea is to try to have a holder contract being able to hold an arbitrary number of tokens, instead of having one holder contract per (token, owner) pair. I might also do data analysis on what the current number of these pairs is.
Code is here, but tests will come later: https://github.com/ledgerwatch/eth_state/tree/master/erc20
FYI, the linked contract is the .eth registrar, which hands out new domains under .eth; ENS itself is here, and its source is here (deployed LLL version here). The same mitigation would apply, although it could be difficult to eliminate all possible storage of data on behalf of others when it comes to name issuance etc.
Another example of this issue is the DNSSEC Oracle, which allows anyone to submit a proof of existence of a DNS record.
I have written tests for token minting and transfer, and they pass. I did not test one holder having multiple tokens yet, but will add it in a couple of days. Working with CREATE2 without extra tooling is a bit painful, of course. Here is the working token contract: https://github.com/ledgerwatch/eth_state/tree/master/erc20
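For readers unfamiliar with why CREATE2 matters here: it makes the holder contract's address a pure function of the deployer, a salt, and the init code (per EIP-1014), so the token contract can locate or recover a holder deterministically. A sketch of the derivation (Python's stdlib has no keccak-256, so `sha3_256` stands in purely to show the structure; real addresses require keccak-256):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak256: stdlib sha3_256 has different padding,
    # so this illustrates the structure only, not real addresses.
    return hashlib.sha3_256(data).digest()

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    # EIP-1014: address = last 20 bytes of
    # keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))
    assert len(deployer) == 20 and len(salt) == 32
    return h(b"\xff" + deployer + salt + h(init_code))[12:]

# The same (deployer, salt, init_code) always yields the same address,
# which is what lets an evicted holder contract be redeployed in place.
deployer = bytes(20)
salt = bytes(31) + b"\x01"
addr = create2_address(deployer, salt, b"\x60\x00")
assert len(addr) == 20
assert addr == create2_address(deployer, salt, b"\x60\x00")
```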
@antoineherzog you wanted to see this
It looks very good
I can see how we could now improve the smart actor contract to delegate any transfer authorization from the ERC20 contract to the smart actor contract by calling an isTransferAuthorized method. Then we could add very cool features, such as improving the security of an account by switching from single-sig to multisig, or implementing a Shamir secret-sharing procedure for private key recovery. I am publishing an article about that tomorrow.
More data on ERC20 tokens (at block 6856437, which was 9th of December 2018).
From my previous data, containing 71k+ ERC20 token contracts, I filtered out non-contracts, self-destructed contracts, those which do not have a working balanceOf() method, those that do not store token balances in a straightforward way as a Solidity mapping, and those that do not have any holders. 59986 contracts were left after this filtering.
(token, holder) pairs: 55’494’243
Unique holders: 18’628’814
Top 40 tokens by number of holders:
Top 25 holders by number of tokens:
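A derived figure from the two counts above, relevant to the one-holder-contract-per-holder design mentioned earlier: collapsing (token, holder) pairs into one contract per unique holder would cut the number of holder contracts by roughly a factor of three.

```python
# Derived statistics from the counts quoted above (block 6856437).
pairs = 55_494_243     # (token, holder) pairs
holders = 18_628_814   # unique holders

avg_tokens_per_holder = pairs / holders
# ≈ 2.98 tokens per holder on average, so one holder contract per
# holder (instead of per pair) needs about 3x fewer contracts.
assert 2.97 < avg_tokens_per_holder < 2.99
```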
imho the root of the problem is that non-mining full nodes in ETH are not paid anything. That is why people are not able to buy enough storage and compute.
There are hundreds of millions of dollars per year paid to miners.
imho, one needs to figure out a way to pay a portion of this money to full nodes, and then each of them will be able to handle 1000 times more storage. The current ETH storage is tiny by enterprise standards.
If ETH 2.0 is going to have 100 chains, these chains cannot be stored altruistically. Therefore, the problem of paying nodes for storage needs to be solved anyway. If this problem is solved, storage rent will simply not be needed, in my view.
You are partially right. Partially, because it is not just storing the state that matters, but also downloading it in a “trustless” way when you join the network. This problem of downloading needs to be addressed regardless of whether we want to reduce the state growth or not, and I might have something to show on this front in a few months (with my new Morus database that I am developing in collaboration with Ethermint).
If someone figures out how to pay full nodes, that would be great! I do not see that anyone has done it yet, therefore it is not wise to just abandon the State Rent work. Although I am pretty neutral on whether it has to be introduced or not, I will continue working on it. If we keep giving things up as soon as someone brings up an alternative (but does not actually implement the alternative), nothing non-trivial will be done.
Ok - here is one interesting idea to pay to a full node by introducing the “PoW fuel” abstraction explained below:
Introduce fuel costs for EVM read calls. Basically, a node that responds to a JSON read call will require PoW from the client, proportional to the gas spent in the read call. Currently read calls are free; what is proposed is that you charge for them as for write calls.
Let the node post the gathered “fuel” from time to time to a bounty smart contract on the main net to get ETH printed in return.
That’s it. The nodes will respond to read calls, gather “fuel”, and from time to time go exchange this fuel for real ETH.
Clients would not need to do the PoW themselves; they could pre-pay third-party providers. In addition, to be on the greener side, one can consider using VDF functions(?) instead of PoW.
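To make the scheme concrete, here is a minimal hashcash-style sketch (my illustration; the difficulty rule and all parameters are arbitrary assumptions, not part of any proposal). The client grinds a nonce over (node_id, query, gas), so the work is bound to a specific node, and the difficulty grows with the gas spent by the read call:

```python
import hashlib

def difficulty_bits(gas_used: int) -> int:
    # Hypothetical rule: more gas spent on the read call -> more work.
    return min(20, max(8, gas_used // 3_000))

def fuel_hash(node_id: bytes, query: bytes, gas_used: int, nonce: int) -> int:
    # Binding the node's identity into the hash means other nodes
    # cannot claim this piece of "fuel" as their own.
    data = node_id + query + gas_used.to_bytes(8, "big") + nonce.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def mine_fuel(node_id: bytes, query: bytes, gas_used: int) -> int:
    target = 1 << (256 - difficulty_bits(gas_used))
    nonce = 0
    while fuel_hash(node_id, query, gas_used, nonce) >= target:
        nonce += 1
    return nonce

def verify_fuel(node_id: bytes, query: bytes, gas_used: int, nonce: int) -> bool:
    target = 1 << (256 - difficulty_bits(gas_used))
    return fuel_hash(node_id, query, gas_used, nonce) < target

# A client pays for a 50'000-gas read call with a 16-bit proof of work.
node_id, query = b"node-pubkey", b"eth_call:balanceOf(...)"
nonce = mine_fuel(node_id, query, gas_used=50_000)
assert verify_fuel(node_id, query, 50_000, nonce)
```

This does not solve double spending or the fair-exchange problem raised below; the node would still need to track seen (query, nonce) pairs, and batching proofs for the bounty contract is a separate question.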
In this case how would you prevent double spending of the PoW? Especially if you want to batch PoW submission, there needs to be an efficient way to validate whether the PoW had been submitted before.
There’s also a fair exchange problem. A user has a piece of PoW and the full node has the solution to their query, but they must be able to perform the exchange without one having the ability to cheat the other.
Finally, what’s to stop mining of PoW in the system to extract value without actually filling queries?
Harry - totally - these are the hard questions that need to be addressed … )
I think if someone mines the PoW just to extract value it is not a big problem, since it just becomes another cryptocurrency in a certain sense. One can map PoW to ETH and pay people in ETH, or one could have a “fuel” cryptocurrency (ERC-20 token), so people asking nodes for read calls would pay the nodes using this token. If someone else mines the token by directly exchanging PoW for the token, that would be totally fine.
I think the PoW should include the node ID / public key in some way so others can not claim it.
The “combinability” property is the hardest one. One should be able to somehow combine many “small” PoWs into one succinct PoW proof. Maybe one can use STARKs for that.
Another possibility may be to use a g^2 group-squaring algorithm similar to a VDF - people could keep squaring a group element, so each call would correspond to further squaring …
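A toy sketch of the squaring chain (my illustration; the modulus and parameters are arbitrary). The prover performs t sequential squarings of g modulo n; verification here simply recomputes the closed form, so it is not succinct - a real VDF construction (Wesolowski, Pietrzak) adds a short proof on top of exactly this kind of chain:

```python
def square_chain(g: int, t: int, n: int) -> int:
    # t sequential squarings: g -> g^2 -> g^4 -> ... -> g^(2^t) mod n
    x = g % n
    for _ in range(t):
        x = (x * x) % n
    return x

n = 2**61 - 1   # toy prime modulus; real use would pick e.g. an RSA modulus
g, t = 3, 1000
y = square_chain(g, t, n)
# Closed form of the same value: g^(2^t) mod n
assert y == pow(g, 2**t, n)
```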