[RFC] [DRAFT] Anoma as the universal intent machine for Ethereum

When a user specifies an arbitrary intent (taking your example of “I want to swap USDC for ETH, and I’m happy to receive ETH on either the main chain, Optimism, or zkSync”), how is the correctness of the intent guaranteed (i.e. that it is actually executed according to the user’s specification)? Is that based on a consensus layer among the solvers?

2 Likes

When a user specifies an arbitrary intent (taking your example of “I want to swap USDC for ETH, and I’m happy to receive ETH on either the main chain, Optimism, or zkSync”), how is the correctness of the intent guaranteed (i.e. that it is actually executed according to the user’s specification)? Is that based on a consensus layer among the solvers?

In Anoma, intents are basically split into two parts:

  • Constraints, e.g. “I want at least 5 ETH for my 10 USDC”, which can be enforced by the settlement location (so users do not need to trust the solvers, or any kind of solver consensus, for these). The example intent which you mention has only constraints, so those would all be enforceable by the settlement location.
  • Preferences, e.g. “I want as much ETH as possible”, which cannot be enforced directly by the settlement location, because the settlement location doesn’t know how much ETH was possible. Whichever solvers (or solver DAOs) the user sends their intents to have control over whether preferences are respected (and to what degree), but users (and user DAOs, or user groups) have control over where they send their intents, so they can switch over time to solvers and solver DAOs which best respect their preferences (see the sketch below).
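To make the split concrete, here is a minimal sketch (TypeScript, with hypothetical field names - this is not Anoma’s actual intent format):

```typescript
// Minimal sketch of the constraint/preference split; field names are
// illustrative assumptions, not Anoma's actual intent format.
interface SwapIntent {
  give: { asset: string; amount: bigint };
  // Constraints: enforced by the settlement location, so no solver trust is
  // needed for these - a violating solution simply cannot settle.
  constraints: {
    receiveAsset: string;
    minAmountOut: bigint;           // e.g. "at least 5 ETH for my 10 USDC"
    acceptableSettlement: string[]; // e.g. ["mainnet", "optimism", "zksync"]
  };
  // Preferences: interpreted by solvers; not enforceable at settlement,
  // because the settlement location cannot know the best achievable outcome.
  preferences: {
    maximize?: "amountOut"; // e.g. "I want as much ETH as possible"
  };
}
```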
2 Likes

I appreciate the effort and research that has gone into this, but I feel like the real work hasn’t even begun. For one, writing safe, efficient, and usable smart contracts to integrate and enable this vision will take a lot of effort and may ultimately be too expensive for users to even want to use. There’s also the huge open question of how to design a “commitment language”. These can all be solved over time, but intents are already live in production today (albeit in an early form), with user and developer habits and processes already starting to solidify.

This feels like a very top-down endgame approach which I fear is doomed to fail in a live decentralized environment where developers are already building and rolling out products. I would love to see a bottom-up plan for how this vision can actually be achieved and pivot (if necessary) as the intent landscape evolves. Show me how UniswapX and CoWSwap can gradually pivot over time to using a singular Anoma protocol without increasing any burden on the user.

2 Likes

Thanks! Appreciate the critique, and I’m personally still investigating what bottom-up deployment plan makes most sense. I’m not sure I would consider habits solidified yet - crypto is still early - but I agree with you that a gradual transition path is much more feasible than an all-at-once protocol switch. For example, one initial order of operations could be the following:

  1. Node sidecar for the P2P intent network, using existing EVM intent formats (perhaps protocols like what Essential is working on?) and the EVM. At this point, the P2P network acts as a kind of distributed “intent router” which can unify liquidity across various Ethereum DEXs. Frontend developers who want to use this distributed intent router would need to make changes, but no new smart contracts are required, and user-facing experiences would not change very much - still the same “token swapping” UI, for example, just with more complex distributed routing on the backend.
  2. Adventurous developers can build new applications on the resource machine - particularly those which benefit uniquely from information flow control and heterogeneous trust - while developers of applications who do not need these features do not need to make any changes. Both classes of applications would gain from the interoperability and potential extra liquidity. Users of new applications would have to learn them, but users of existing applications would not need to change anything.
  3. Collect feedback and see what to do next 🙂

Generally I’m in favor of “grow-the-pie” strategies which would help bring new and different applications into the Ethereum ecosystem, and we plan to focus our efforts here (as opposed to convincing existing applications to switch).

4 Likes

Thanks for taking the time to write this up, definitely helped me understand Anoma a bit better and how it’s supposed to fit into the existing crypto ecosystem.

A few questions:

  1. Regarding preference-based intents
    Say a user sends a “here’s 100 USDC, I want as much ETH as possible” intent, and they want it solved using a batch auction (à la CoW Swap). What does that look like with Anoma? Do they express that they trust specific batch auction contract addresses to provide a solution, and then solvers send solutions to those contracts? Or do they have to trust some solver DAO to run an off-chain batch auction?

  2. Regarding the protocol adapter proposal
    Do you envision making the intent entry point callback-based to enable composability with normal EVM state? Or is this exclusively for executing Anoma-specific state on-EVM, without external composability?

3 Likes

Thanks for the feedback @markus_0!

Very close to the former possibility you imagine: users would specify in their intent the batch-fairness property that they want - for example, that the intent is ordered, made available to solvers for a certain period of time, and then the best solution (or a solution satisfying some fairness criterion) is picked - and this property is enforced by the settlement location. Different users may pick different properties and make different choices around auction parameters, solver permissioning, etc. Which properties they pick affects which auctions are composable, of course, so there may be some reasons to agree on reasonable defaults.
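As a rough sketch of what specifying such a batch-fairness property could look like (TypeScript pseudocode with hypothetical field names, not a committed format):

```typescript
// Hypothetical settlement-policy fields attached to an intent. The settlement
// location enforces the policy; solvers compete within it.
interface BatchAuctionPolicy {
  auctionWindowSeconds: number;  // how long the intent stays open to solutions
  winnerCriterion: "bestPrice" | "uniformClearingPrice"; // fairness rule at settlement
  allowedSolvers?: string[];     // optional solver permissioning; omit for permissionless
}

// A CoW-Swap-style choice: batch for five minutes, clear at a uniform price.
const policy: BatchAuctionPolicy = {
  auctionWindowSeconds: 300,
  winnerCriterion: "uniformClearingPrice",
};
```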

We aim to support atomic composability with normal EVM state - this should be possible with Anoma applications just as it is with proto-intent protocols like Wyvern, Seaport, 0x, etc. - we have not yet implemented this, though, so we don’t know the concrete gas costs yet.

2 Likes

We aim to support atomic composability with normal EVM state - this should be possible with Anoma applications just as it is with proto-intent protocols like Wyvern, Seaport, 0x, etc. - we have not yet implemented this, though, so we don’t know the concrete gas costs yet.

I see, that’s great! In that case I’d imagine intents created with a protocol adapter would have to adhere exclusively to the EVM’s ordering and consensus rules then, right? So you’d lose some flexibility.

Somewhat related - is composability with the rest of the Anoma network (say someone’s running an Anoma adapter & sidecar in SVM on Solana) still feasible? What are the challenges there & advantages over incumbent general message-passing interoperability protocols?

1 Like

Sort of - intents settled on a protocol adapter would use the ordering and consensus rules of the underlying EVM chain (e.g. the Ethereum main chain), yes, and they would pay EVM gas fees, etc. - but the intent language and format are agnostic to where the intent is settled. So, for example, you could craft an intent which could be settled either on the EVM main chain (using the protocol adapter) or on another chain with a native implementation of the resource machine, depending on where the assets you’re looking for are available at the best price.
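Concretely, such a settlement-agnostic intent might just list several acceptable settlement locations (again a hypothetical sketch; the identifiers are illustrative):

```typescript
// One intent, several acceptable settlement locations; a solver settles it
// wherever it finds the best price. Identifiers are illustrative only.
const intent = {
  give: { asset: "USDC", amount: 10n },
  want: { asset: "ETH", minAmount: 5n },
  settlement: [
    "evm:ethereum-mainnet", // via the protocol adapter, under EVM ordering + gas
    "rm:native-chain",      // via a native resource machine implementation
  ],
};
```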

Protocol composability is feasible anywhere that uses the resource machine directly or where a protocol adapter is deployed (e.g. EVM/SVM, eventually). Atomic transaction composability (“synchronous composability”) requires using the same consensus - this is always a requirement of distributed consensus and has nothing to do with the application protocol. I would describe what Anoma does as more of a “distributed state synchronization” protocol than a “message-passing” protocol - it’s designed to provide a single virtualized distributed database, not a “send message to chain X” type abstraction - so these protocols would be complementary and interoperable, although which ones you need will depend on the specific application interaction. We have a paper with full details of this state synchronization system in the works, which I expect will be published in the next few months.

2 Likes

I might post more questions later, but for now just this:

The ARM can enforce some policies related to Anoma resources - e.g., we can guarantee that resources are not created/consumed if the resource logics are not satisfied. If we want to have these guarantees for assets from other chains, do we have some kind of correspondence between those assets and native Anoma resources?

E.g. Alice wants to get 1 ETH for 1 NAM through Anoma; there is a resource that corresponds to 1 ETH and a resource that corresponds to 1 NAM, and there is a connection between them that works in such a way that if the logic isn’t satisfied the resource representations are not exchanged, and if the resource representations are not exchanged then the original assets are also not exchanged. It doesn’t have to be this version where each token corresponds to a resource - it might be something more sophisticated - but I’m wondering whether there will be such a mechanism, and ideally would also like to know how it works.

2 Likes

Great question! There are a few different options for establishing this correspondence, and I’m not yet sure which one is best (there may also be others I haven’t thought of). For simplicity, I’ll just talk about the resource machine and the EVM here, but the patterns should generalize.

I also want to note that the questions of assets and VMs are distinct - one could have “1 ETH” just as a normal resource in the resource machine, secured by the Ethereum main chain (as terminal controller), where proofs are just published to the protocol adapter in order to consume that resource.

Option 1: EVM state as a single resource

This is the simplest option, and probably the most natural for integrating resource machine applications with current EVM chains. In this option, an entire EVM state is kept (committed to, but probably stored elsewhere) in a single resource, and every time that resource is consumed/recreated, the EVM state transition function is run to check validity - so many smart contracts live “within” a resource. An invariant like the one you mention (token swap) could be checked by reading e.g. ERC20.balanceOf in the consumed & created resources and comparing them.
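A rough sketch of Option 1’s shape (TypeScript pseudocode; the resource layout and helper names are assumptions, not the actual design):

```typescript
// Option 1 sketch: the entire EVM state sits behind a single resource, which
// stores only a commitment; the full state lives elsewhere.
interface EvmStateResource {
  stateRoot: string;   // commitment to the whole EVM state
  blockNumber: bigint;
}

// Resource logic for consume-and-recreate: validity means the created state
// is exactly what the EVM state transition function yields from the consumed
// state and the applied transactions.
function evmStateResourceLogic(
  consumed: EvmStateResource,
  created: EvmStateResource,
  txs: Uint8Array[],
  stf: (root: string, txs: Uint8Array[]) => string, // EVM state transition (or a proof of it)
): boolean {
  return stf(consumed.stateRoot, txs) === created.stateRoot;
}
```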

Option 2: One resource per smart contract

This is a fancier option which allows for parallelism, but will require some additional plumbing. In this option, the state of each smart contract is kept (committed to, but probably stored elsewhere) in a resource dedicated to that smart contract, and every time the state of that smart contract is changed, the EVM state transition function is run to check validity - but only for the state changes of that contract. An invariant like the one you mention could be checked in a similar way, where multiple resources may be involved. This option is more complex as the updates to the EVM chain must be mapped to updates to multiple resources (instead of a single one).
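And a corresponding sketch of Option 2’s per-contract layout (same caveats apply), showing where the extra plumbing lives:

```typescript
// Option 2 sketch: one resource per smart contract, so transactions touching
// disjoint contracts can update disjoint resources in parallel.
interface ContractResource {
  contractAddress: string; // the contract this resource stands for
  storageRoot: string;     // commitment to that contract's storage alone
}

// The "additional plumbing": a single EVM transaction touching contracts A and
// B must consume and recreate both of their resources atomically, and the
// per-contract checks must together agree with the global state transition.
type MapTxToResources = (tx: Uint8Array) => ContractResource[]; // hypothetical mapping
```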

2 Likes

Can you explain how the ordering machine fits into the picture? It sounds like it’s a blockchain. How does that square with this comment?

1 Like

Is the EVM state resource logic the EVM state transition function in the first option?

An invariant like the one you mention (token swap) could be checked by reading e.g. ERC20.balanceOf in the consumed & created resources and comparing them.

Do you mean the EVM state resource by “consumed and created resources”, or are there some other resources involved?

In the second option, would user accounts be included? So, one resource per smart contract, where a smart contract can be a user account or an “actual” smart contract? If I understand it correctly, account abstraction (AA) allows unifying these two things to some extent, and I just want to clarify whether you mean the “new” definition.

2 Likes

The ordering machine is an abstraction; specifically, a collection of engines which handle transaction collection (mempool), ordering (consensus), and execution tasks. Instances of the ordering machine can produce linked lists (or, for that matter, DAGs) of blocks, so in that sense one could say that an instance of the ordering machine produces a blockchain, yes. Anoma nodes can instantiate new ordering machine instances on demand in accordance with intents on the network. For example, suppose that I wanted “apriori consensus” to order and execute some transactions for me - I’d send those to you (your node), and you could decide whether or not you wanted to produce and sign some blocks including those transactions.
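In interface terms, one could picture an ordering machine instance roughly like this (an illustrative sketch, not the Anoma specification):

```typescript
// Illustrative sketch: an ordering machine instance as a bundle of engines.
interface Block {
  parents: string[];   // one parent gives a linked list; several give a DAG
  txs: Uint8Array[];
}

interface OrderingMachine {
  mempool: { collect(tx: Uint8Array): void };  // transaction collection
  consensus: {                                 // ordering
    propose(): Block;
    sign(block: Block): string;                // e.g. a node deciding to sign a block
  };
  executor: { execute(block: Block): void };   // execution
}

// Nodes can instantiate new ordering machines on demand, per incoming intents.
type SpinUp = (signers: string[]) => OrderingMachine; // hypothetical constructor
```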

1 Like

Yes, exactly.

Yes, I mean (in this case):

  • read tokenOfInterest.balanceOf(myAccount) in the consumed resource (old state)
  • read tokenOfInterest.balanceOf(myAccount) in the created resource (new state)
  • your balance change is the difference (see the sketch below)
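In code, that check could look something like the following sketch, where evmRead is a hypothetical helper that evaluates a read-only call against a committed EVM state:

```typescript
// Hypothetical helper: evaluate a view call against a committed EVM state.
declare function evmRead(stateRoot: string, contract: string, call: string): bigint;

function balanceChange(
  consumedRoot: string, // old state, from the consumed resource
  createdRoot: string,  // new state, from the created resource
  token: string,
  account: string,
): bigint {
  const before = evmRead(consumedRoot, token, `balanceOf(${account})`);
  const after = evmRead(createdRoot, token, `balanceOf(${account})`);
  return after - before; // an invariant can require this to meet the intent's terms
}
```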

One resource per smart contract, yes. I think Ethereum EOAs should be representable as resources as well (just a signature check and nonce) - certainly, if they can be emulated by a smart contract, it must be possible - but I haven’t worked through all the engineering details.

2 Likes

Do you mean new programming languages or just specific formats?

I think it is great to build permissionless infrastructure. But one question is: why is there a focus on complex intent-centric applications, rather than simple applications that have real use and value? Where is the demand for complexity coming from?

Ah, I mean specific formats, or perhaps domain-specific languages within a host language. For example, {"type":"storage","duration":"21 days","data":"0x123..."} (pseudocode) could be the form of a temporary storage (“data availability”) commitment.

1 Like

Interesting question - I’m not sure that I would necessarily characterize intent-centric applications as more or less complex than some other kind of applications (or simple applications). Which simple and complex applications specifically do you have in mind?

In this scenario, rollups/L2s would not be required to run the sidecar / adapter as outlined below, because they would (hypothetically) use the heterogeneous node architecture to run the EVM.

Is this a misconception?

What would the compilation pipeline look like for an app developer here?