In econ it’s well known that “ambiguous devices” have all kinds of useful and interesting mechanism design properties (e.g. see this 2024 Econometrica). It turns out that a blockchain + a pathological RNG called Machine II can create a very general purpose “ambiguous device”. You can see a demo running in a TEE here.
Here’s the idea for AMMs. Suppose we treat LPing as a classic Exclusion Game, in which the LP wants to make a profit but traders can “exclude” them from the profitability region (via LVR, say). The game models the LP’s dilemma as basically:
Charge high fees to get some arbitrage protection, but at the risk of scaring off retail.
Charge low fees to attract retail but get arbed.
It turns out that in such a game, using an ambiguous device (“veiling your fees”) can really improve the situation for the LP. The math can be a little tedious and unfamiliar, so I think it’s useful to show pictures of what veiled fees do to the game.
Below is the “problem picture”, showing that the LP has no viable strategy to be profitable. The unique Nash Equilibrium pays ≤ 0 and so they are excluded from the green region (T).
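To make the exclusion concrete, here is a toy 2x2 fee game. The payoffs are my own hypothetical numbers (not the paper’s), chosen so the game has no pure equilibrium and its unique mixed Nash pays the LP exactly 0, with the LP playing high fee 75% of the time:

```python
# Hypothetical 2x2 exclusion game: the LP picks HighFee/LowFee, the
# trader picks Arb/Retail. Payoffs are illustrative only -- high fees
# protect against arbitrage but lose to retail flight, low fees attract
# retail but get arbed.
LP = {("High", "Arb"): 1,  ("High", "Retail"): -1,
      ("Low",  "Arb"): -2, ("Low",  "Retail"): 2}
TR = {("High", "Arb"): -1, ("High", "Retail"): 1,
      ("Low",  "Arb"): 4,  ("Low",  "Retail"): -2}

# Trader's Arb probability q makes the LP indifferent between rows:
# q*1 + (1-q)*(-1) = q*(-2) + (1-q)*2  =>  q = 1/2
q = 0.5
# LP's HighFee probability p makes the trader indifferent between columns:
# p*(-1) + (1-p)*4 = p*1 + (1-p)*(-2)  =>  p = 3/4
p = 0.75

# LP's equilibrium payoff: the trader's mixing pins it to zero.
lp_payoff = q * LP[("High", "Arb")] + (1 - q) * LP[("High", "Retail")]
print(p, q, lp_payoff)  # 0.75 0.5 0.0
```

No pure-strategy cell is stable (some player always wants to deviate), so in this 2x2 game the mixed equilibrium above is unique, and the LP is stuck at zero.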
When you introduce veiled fees, the LP’s payoffs are not fixed but now lie in the full blue region. Clearly the LP is no longer excluded from the profitable region T (in fact, they can even reach the highest possible payoff, way off to the right). Further, the blue region is clearly truncated on the left, so the LP risks very little for this.
To establish actual payoff numbers, one needs to make an assumption about how exactly the LP treats their payoff uncertainty. Obviously the LP would have to be pretty averse to uncertainty to not see the blue region as an improvement over the Nash in the earlier figure.
If we assume neutrality for the LP, such that their best and worst payoffs are evaluated at the halfway point, the LP improves their payoffs from 0 (under the Mixed Nash) all the way to 1.4 (under the Veiled Nash).
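The “halfway point” rule is the Hurwicz criterion with α = 0.5. A minimal sketch, with a hypothetical payoff interval I chose only so the midpoint lands on the 1.4 quoted above (the real interval comes from the paper’s veiled region):

```python
# Hurwicz criterion: score an ambiguous payoff interval [worst, best]
# at alpha*best + (1-alpha)*worst. alpha = 0.5 is the "halfway" rule,
# i.e. neutrality toward the uncertainty.
def hurwicz(worst: float, best: float, alpha: float = 0.5) -> float:
    return alpha * best + (1 - alpha) * worst

# Hypothetical interval: a small left-truncation and a large upside,
# chosen so the midpoint matches the 1.4 quoted above.
print(hurwicz(-0.2, 3.0))  # 1.4
```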
You can read more in the draft paper here. I will say I’ve run this approach by a decent proportion of the high-end crypto brain trust at this point, and thus far no substantial objections have been raised. It’s a little shocking to some that you can Pareto dominate a unique mixed strategy Nash Equilibrium (it was to me when I first saw it), but here in 2025 this is mainstream, if cutting-edge, game theory.
Even though each individual Machine II sequence doesn’t converge to a single average, the expected value across all Machine II sequences at any position would have some definite value from the trader’s perspective. If the selection of the veiled sequence is truly random and unbiased, this expected value would be 0.75 when the range of possible outputs is [0.5, 1.0]. The trader always expects a 75% chance of highFee. Thus the informed trader is able to use a mixed Nash, which you demonstrated does not contain profit for the LP.
The issue is that the random selection of a Machine II sequence undoes the benefits of using an unpredictable sequence. For a visualization, imagine plotting the Y outputs of an infinite number of Machine II sequences that stay between 0.5 and 1. Every location on the x-axis would have equal density for every Y value, meaning that using a randomly selected, unbiased Machine II sequence on [0.5, 1.0] is the same thing as just picking highFee with 75% odds.
Lens: I read the entire paper, I am a game-theoretic mechanism designer with 8 years of experience, and I have run into this situation before, where randomness at a higher level overrides the lower level.
Thanks for giving it a read. Maybe the LVR draft isn’t clear enough about how Machine II is being used (the other paper, directly about Machine II, is more comprehensive). Committing to an encrypted Machine II draw to resolve the exchange is where the EV-denying magic is. Certainly revealing the RNG ahead of time would not work.
The non-convergent sequence is what denies the vanilla expected-value approach. And in most cases, if the trader just responds “as if” it were a mixed strategy, they will do worse. Also worth noting that there are veiled equilibria where both players do better: for instance, if the LP veils over (.450, 1), then the trader gets .50 more (and the LP gets .25 less).
NP. I initially responded from my phone half asleep. I will try to make my point clearer so we can find the misunderstanding or disagreement.
I understand it is used to resolve the exchange after committing to trade, and not revealed ahead of time.
I understand that this relies on a non-convergent sequence that would result in the trader being unable to pick a probability that is profitable even when averaging over a long time span. It could be lowFee for a year straight or highFee for 10 years straight and no timespan is long enough to find an average.
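That non-convergence is easy to reproduce. Below is a generic construction (alternating fee blocks whose lengths grow geometrically; not necessarily the paper’s actual Machine II) whose running average swings between its bounds forever instead of settling:

```python
# A bounded sequence whose running average never converges: alternating
# blocks of highFee (1.0) and lowFee (0.5) outputs, with each block 10x
# longer than the last, so the latest block always dominates the average.
def running_means_at_block_ends(num_blocks=7, low=0.5, high=1.0, growth=10):
    total, count, means = 0.0, 0, []
    for k in range(num_blocks):
        value = high if k % 2 == 0 else low
        block_len = growth ** k
        total += value * block_len
        count += block_len
        means.append(total / count)
    return means

means = running_means_at_block_ends()
# Entries just after a "highFee" block sit near 1.0; entries just after
# a "lowFee" block fall back toward 0.5. The swing never dies out, so
# no timespan is long enough to pin down "the" average.
```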
Consider the trader’s perspective on a single trade. The LP is choosing lowFee or highFee using a Machine II sequence bounded between [0.5, 1]. The LP has secretly selected from an infinite number of Machine II sequences, choosing Sequence S. I assume the set of Machine II sequences has no built-in bias towards any output. Consider the probability of the first output of this sequence being between 0.5 and 0.6: it is 20%. The same goes for 0.7 to 0.8: 20%. Every value in the range is equally likely. Thus the probability the first trader faces of getting highFee is 75%, the same as if the LP just chose a random number from 0.5 to 1. If you average together the *next* output of an infinite number of Machine II sequences bounded from 0.5 to 1, the average will be exactly 0.75.
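The single-trade claim above can be sketched as a quick Monte Carlo, under the commenter’s assumption that unbiased selection makes the first output uniform on [0.5, 1.0] (the outputs being the per-trade probability of highFee):

```python
import random

# Commenter's model: the secretly selected sequence's first output is
# uniform on [0.5, 1.0] ("no built-in bias"), and that output is the
# probability the trade resolves as highFee.
random.seed(0)
trials = 200_000
high_fee = sum(random.random() < random.uniform(0.5, 1.0)
               for _ in range(trials))
print(high_fee / trials)  # close to 0.75, same as a plain 75% highFee mix
```

Under that uniform-selection assumption, the trader’s single-trade odds are indistinguishable from a fixed 75% mixed strategy, which is exactly the point in dispute.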
I really don’t like using AI as it usually gets stuff like this wrong, but here is Claude’s attempt to explain the above more academically:
The paper assumes that using a non-convergent sequence (Machine II) bounded between [0.5, 1.0] creates an advantage over a standard mixed strategy with p* = 0.75. However, from the trader’s perspective, if the LP randomly selects from all possible Machine II sequences without bias, the expected probability of high fees on any individual trade remains 0.75.
The core issue is that randomizing over non-convergent sequences doesn’t eliminate the trader’s ability to form rational expectations about the next trade. While individual Machine II sequences don’t converge to a single average, the distribution of possible outcomes for the next trade does have a well-defined expected value (0.75).
The random selection of a Machine II sequence creates a meta-distribution with mean 0.75, functionally equivalent to the mixed Nash equilibrium from the trader’s decision-making perspective.
The trader doesn’t need to know which specific sequence was selected to optimize their strategy - they only need the expected probability for the upcoming trade.