Thanks to Ben Jones and Karl Floersch for clarifying discussions of the problem, and to Yan Zhang and Barnabé Monnot for helpful feedback on the presentation.
Dispute protocols are fundamentally about proving things: a proposition A about events off chain needs to be decided on chain, so an adversarial game between a player affirming A and a player denying A is set up on chain, with rules designed so that the correct player has a winning strategy.
Example. Alice and Bob want a system to transfer an asset back and forth arbitrarily many times without any transactions on chain except for an initial deposit and a final withdrawal. To do this, they agree that the asset will be held on chain by a contract C, and that transferring ownership of the asset off chain will be signified by sending the other party a signature of the current time. Ownership of the asset by (respectively) Alice and Bob is signified by the propositions
and for (say) Alice to withdraw the asset on chain, she must prove the proposition A by winning the following game, which is adjudicated by the contract C:
1. Alice is asked for a time t, and a signature by Bob of t. If Alice fails to provide these in reasonable time, then the game ends and Bob wins; otherwise:
2. Bob is asked for a time t', where t \leq t', and a signature by Alice of t'. If Bob fails to provide these in reasonable time, then the game ends and Alice wins; otherwise:
3. The game ends and Bob wins.
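The adjudication rules above can be sketched as a pure function. This is a hypothetical simplification (all names are ours): signatures are plain records rather than cryptographic signatures, and "reasonable time" collapses to whether a move was supplied at all.

```python
from typing import Optional, NamedTuple

class Signature(NamedTuple):
    signer: str   # "alice" or "bob"
    time: int     # the signed timestamp

def adjudicate(alice_move: Optional[Signature],
               bob_move: Optional[Signature]) -> str:
    """Decide the withdrawal game between Alice (affirming A) and Bob."""
    # Step 1: Alice must provide a time t signed by Bob.
    if alice_move is None or alice_move.signer != "bob":
        return "bob"                       # Alice failed; Bob wins
    t = alice_move.time
    # Step 2: Bob must counter with a time t' >= t signed by Alice.
    if bob_move is None or bob_move.signer != "alice" or bob_move.time < t:
        return "alice"                     # Bob failed; Alice wins
    # Step 3: Bob exhibited a later transfer back to him.
    return "bob"

# Alice holds Bob's signature of time 5; Bob has no later signature by Alice.
assert adjudicate(Signature("bob", 5), None) == "alice"
```

Note that the contract never sees the off-chain transfers themselves; it only evaluates the challenge-response exchange at withdrawal time.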
Given the resemblance between the game and the proposition it decides, a natural question one might ask is: Instead of inventing and implementing such games and strategies de novo for each new dispute protocol, can we derive them automatically from the propositions and proofs they are fundamentally about?
This question was first asked and investigated by Ben Jones and Karl Floersch in their work on “predicate contracts” and later “optimistic game semantics”. Inspired by dialogical logic, their work specifies a “universal adjudicator” that can interpret any proposition as a game.
This post proposes a different approach to the same question, based on type theory and ludics, which enables us to specify simultaneously a “universal adjudicator” and a corresponding “universal advocate” that can interpret any proof of a proposition as a winning strategy for the corresponding game.
The first two sections are mostly review of background material (with some liberties taken to keep the presentation simple). Readers already familiar with propositions-as-types can skip directly to the third and last section titled “Games and strategies”.
Dialogical logic
Dialogical logic interprets a first-order logic proposition A as a dialogue game between a proponent \pro and an opponent \op. The game starts with \pro asserting A, after which players take alternating turns either attacking a past assertion of their adversary or defending against a past attack by their adversary, both of which may involve making new assertions, subject to the constraint that an asserted atomic proposition must be true. The possible attacks and defenses are listed in the following table. A player loses the game on their turn if no “productive” moves are possible.
| Assertion | Attack | Defense | Comment |
|---|---|---|---|
| \lnot A | \attack{\lnot}{} assert A | no defense possible | |
| A \to B | \attack{\to}{} assert A | assert B | |
| A_1 \land A_2 | \attack{\land}{(i)} | assert A_i | attacker’s choice of i |
| A_1 \lor A_2 | \attack{\lor}{} | assert A_i | defender’s choice of i |
| \forall x . A | \attack{\forall}{(t)} | assert A[x \mapsto t] | attacker’s choice of t |
| \exists x . A | \attack{\exists}{} | assert A[x \mapsto t] | defender’s choice of t |
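As a sketch, the propositional part of this table can be encoded as data. The tagged-tuple encoding below is ours, and quantifiers are omitted since their moves mirror those of \land and \lor with terms t in place of indices i.

```python
def moves(prop):
    """Return (attack, defenses): the attack on `prop` and the assertions
    the asserter of `prop` may defend with."""
    tag = prop[0]
    if tag == "not":        # ¬A: the attack asserts A; no defense possible
        return (("assert", prop[1]), [])
    if tag == "imp":        # A → B: the attack asserts A; defense asserts B
        return (("assert", prop[1]), [prop[2]])
    if tag == "and":        # A1 ∧ A2: attacker chooses i; defense asserts A_i
        return (("choose_i",), [prop[1], prop[2]])
    if tag == "or":         # A1 ∨ A2: defender chooses which A_i to assert
        return (("choose_i",), [prop[1], prop[2]])
    raise ValueError("an atomic assertion is checked for truth, not attacked")

A = ("atom", "A")
attack, defenses = moves(("not", ("and", A, ("not", A))))
assert attack == ("assert", ("and", A, ("not", A))) and defenses == []
```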
Example. A play of the game corresponding to \lnot (A \land \lnot A) could go as follows (assuming the atomic proposition A is true; otherwise \op would lose the game on turn 4):
1. \pro starts, asserting \lnot (A \land \lnot A)
2. \op attacks move 1’s assertion with \attack{\lnot}{}, asserting A \land \lnot A
3. \pro attacks move 2’s assertion with \attack{\land}{(1)}
4. \op defends against move 3’s attack, asserting A
5. \pro attacks move 2’s assertion with \attack{\land}{(2)}
6. \op defends against move 5’s attack, asserting \lnot A
7. \pro attacks move 6’s assertion with \attack{\lnot}{}, asserting A
8. \op loses the game
These games have the necessary property that there is a winning \pro strategy for any true proposition and a winning \op strategy for any false proposition.
Type theory
If we want to write proofs and interpret them as winning strategies, we need a formal proof system. Arguably the best choice of formal proof system today is to be found in type theory. Most state-of-the-art software for theorem proving (e.g. Coq, Lean, Idris) is based on it, and even programmers with no background in formal logic but with some experience in functional languages will find it familiar.
The philosophy of type theory in essence is that proving a proposition means constructing a mathematical object that makes its truth evident. For example, proving that two things are isomorphic means constructing an isomorphism between them. In general, every proposition is understood as specifying a type of mathematical object to construct (which of course is sometimes possible and sometimes not).
| Proposition | Evidence | Set analogy |
|---|---|---|
| A \to B | procedure to transform evidence for A into evidence for B | B ^ A |
| A \land B | both evidence for A and evidence for B | A \times B |
| A \lor B | either evidence for A or evidence for B | A + B |
| \top | trivial | 1 |
| \bot | impossible | 0 |
(\lnot A is understood to be synonymous with A \to \bot.)
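The correspondence in the table can be mimicked (loosely, and without static checking) with Python's typing module. The atomic stand-ins A and B below are our choices, purely for illustration.

```python
from typing import Callable, Tuple, Union, NoReturn

A = int          # stand-in atomic proposition (our choice)
B = str          # another stand-in atomic proposition

Imp  = Callable[[A], B]   # A → B : a procedure from evidence to evidence
Conj = Tuple[A, B]        # A ∧ B : evidence for both, i.e. a pair
Disj = Union[A, B]        # A ∨ B : evidence for either, i.e. a tagged choice
Top  = type(None)         # ⊤     : trivial evidence (a one-element type)
Bot  = NoReturn           # ⊥     : no evidence (Python's empty type)

# A proof of A → A ∧ A: duplicate the evidence.
proof: Callable[[A], Tuple[A, A]] = lambda a: (a, a)
assert proof(7) == (7, 7)
```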
A type theory (with the indefinite article) is a formal language for expressing these constructions. There are many different type theories, and there is no all-encompassing definition of what exactly it means to be a type theory (as with the concept of “space” in mathematics), but generally speaking a type theory looks something like the following.
There is a grammar of types (here just enough for propositional logic):
and a grammar of expressions or terms representing constructions (treated as abstract binding trees):
The typing relation or typing judgment "e is of type A", written
states that the construction expressed by the term e satisfies the specification expressed by the type A. A construction can be parameterized by a set of variables x_1, x_2, \ldots respectively assumed to be of some types A_1, A_2, \ldots, so the typing judgment is generalized to "e is of type A in context \Gamma = x_1 : A_1, x_2 : A_2, \ldots", written
Typing rules define when a typing judgment (the conclusion, appearing below the line) can be derived from other typing judgments (the premises, appearing above the line).
Introduction rules govern how to produce things of a type:
Elimination rules govern how to consume things of a type:
Finally, structural rules govern general features like the use of variables:
Example. Here is a term proving \lnot(A \land \lnot A):
and here is its typing derivation (suppressing unused hypotheses in contexts to save space):
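Read as a program, the proof term for \lnot(A \land \lnot A) takes a pair (a, f) with a : A and f : A \to \bot and applies f to a. Here is a sketch in Python (the names are ours):

```python
def proof(pair):
    a, not_a = pair    # eliminate the conjunction: project both components
    return not_a(a)    # eliminate the implication: apply A → ⊥ to a : A

# The proof can never be run on genuine evidence (none exists), so we can
# only exercise it on a mock, simulating ⊥ by raising an exception.
def mock_not_a(a):
    raise AssertionError("A and ¬A cannot both hold")

caught = False
try:
    proof((42, mock_not_a))
except AssertionError:
    caught = True
assert caught
```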
Computation
The question of when two different terms express the same construction is notoriously subtle, but at minimum there is a congruence, historically named \beta-reduction, generated by “cancellations” of introductions and eliminations:
where \subi{e'}{x}{e} denotes the (capture-avoiding) substitution of e for x in e'.
Simplifying terms with \beta-reduction converts an implicit representation of a construction (like “1 + 7 + 49 + 343”) into an explicit one (like “400”), a process which can be thought of philosophically as simulating the mental act of realizing the construction.
A (closed) term that is fully reduced (ignoring subterms under binders) is a value:
The next step of reduction is not unique in general, and is disambiguated with a reduction relation or operational semantics:
All of these definitions together ensure that well-typed terms eventually reduce to a \beta-equivalent value of the same type.
Theorem. If e : A then e \rightsquigarrow \ldots \rightsquigarrow e' where e \sim e' and e' : A and e' \textsf{ value}.
This process can be automated, effectively making a type theory a (terminating!) programming language. This is the celebrated propositions-as-types/proofs-as-programs correspondence.
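The reduction process can be sketched as a small evaluator. Substitution \subi{e'}{x}{e} is modeled here by an environment of variable bindings, and the tagged-tuple term encoding is ours:

```python
def evaluate(term, env):
    """Reduce a term (a tagged tuple, our encoding) to a value."""
    tag = term[0]
    if tag == "lit":                  # literals are already values
        return term[1]
    if tag == "var":                  # look up a variable in the context
        return env[term[1]]
    if tag == "fun":                  # λx. e is a value: stop under binders
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    if tag == "app":                  # β-reduction for functions
        _, f, arg = term
        return evaluate(f, env)(evaluate(arg, env))
    if tag == "pair":                 # introduction for A ∧ B
        return (evaluate(term[1], env), evaluate(term[2], env))
    if tag == "fst":                  # β-reduction for pairs: first projection
        return evaluate(term[1], env)[0]
    if tag == "snd":                  # second projection
        return evaluate(term[1], env)[1]
    raise ValueError(f"unknown term: {tag}")

# (λx. fst x) ⟨1, 2⟩ β-reduces to 1
term = ("app", ("fun", "x", ("fst", ("var", "x"))),
               ("pair", ("lit", 1), ("lit", 2)))
assert evaluate(term, {}) == 1
```

This is a call-by-value evaluation order, one of the possible disambiguations of the reduction relation mentioned above.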
Games and strategies
With a formal proof system now in hand, we could attempt to transform proofs (i.e. terms) into strategies for the dialogue games described previously. This can indeed be done with more machinery; however, I want to propose instead a simpler approach inspired by ludics, which takes advantage of the intrinsic computational aspect of type theory to interpret types as games that are naturally suited to interpreting terms as strategies.
To recapitulate, the goal is to transform:
- a type A into a game between players \pro and \op
- a term e_\pro : A into a winning strategy for \pro
- a term e_\op : A \to \bot into a winning strategy for \op
A trivial non-interactive solution would be simply to require players to provide a term to win, but this runs into problems with atomic propositions about real events, for example “a hash preimage of \texttt{0x123} has been revealed”. Since evidence for this event would be the preimage in question, it naturally corresponds to a primitive type S (for “secret”) whose values are hash preimages of \texttt{0x123}. The problem with the trivial game for this type is that while \pro can win when it is true, \op can never win even when it is false because there is no term of type S \to \bot.
From a computational point of view, we can still ask whether the behavior specified by this type can be “safely” implemented by “unsafe” code (like Rust’s unsafe blocks). After all, as long as constructing a value of type S is really impossible, if there were a function \funI{x}{e} of type S \to \bot, then the body e would be unreachable code anyway, so even ill-formed code could not cause a runtime error.
To allow such implementations, we introduce an “unsafe primitive” called a hole (corresponding in ludics to the daimon \maltese):
A hole can occur at any type:
and its behavior is to abort the evaluation (like a fatal exception):
Now terms eventually either reduce to a value or get stuck on a hole.
Theorem. If e : A then e \rightsquigarrow \ldots \rightsquigarrow e' where e \sim e' and e' : A and either e' \textsf{ value} or e' \textsf{ stuck } \hole{h} for some \hole{h} contained in e.
Let us say that a term not containing holes is total, a term possibly containing holes is partial, and a partial term e is safe when the holes it contains are unreachable by evaluation, i.e. there is no partial term e' containing e that gets stuck on a hole contained in e. In particular, total terms are trivially safe. In the case of the type S \to \bot, there is no total term, but if (and only if) no value of type S has been revealed, then there is a safe partial term \funI{x}{\hole{h}}.
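To make the notion concrete, here is a sketch (our encoding, with holes modeled as exceptions and a toy identity "hash" purely for illustration) of a partial term of type S \to \bot whose hole becomes reachable precisely when a preimage has been revealed:

```python
class Hole(Exception):
    """A hole: an unsafe primitive that aborts evaluation when reached."""
    pass

def hash_(x):
    return x  # toy stand-in for a real hash function

# Partial term of type S → ⊥: safe if and only if no value of S can be
# produced, since then its hole is unreachable.
def no_secret_exists(s):
    raise Hole()

# If a preimage IS revealed, the hole becomes reachable, so the partial
# term is no longer safe.
revealed = 0x123
assert hash_(revealed) == 0x123   # `revealed` really is a value of type S

reached = False
try:
    no_secret_exists(revealed)
except Hole:
    reached = True
assert reached   # the hole was reached: no_secret_exists is not safe
```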
The intention is that safe partial terms should also yield winning strategies. Now the goal is to transform:
- a type A into a game between players \pro and \op
- a safe partial term e_\pro : A into a winning strategy for \pro
- a safe partial term e_\op : A \to \bot into a winning strategy for \op
This is accomplished by the following game for a type A:

1. \pro and \op respectively provide partial terms
\begin{alignat*}{1} e_\pro &: A \\ e_\op &: A \to \bot \end{alignat*}
2. The partial term \funE{e_\op}{e_\pro} : \bot is reduced until it gets stuck on a hole contained in either e_\pro or e_\op (which necessarily happens, since there is no value of type \bot to which it could reduce).
3. The player on whose hole the reduction is stuck loses the game.
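This universal adjudicator can be sketched as follows (all names and encodings are ours): holes are modeled as exceptions tagged with their owner, and reducing \funE{e_\op}{e_\pro} must get stuck on somebody's hole, since no value has type \bot.

```python
class Hole(Exception):
    """A hole, tagged with the player whose term contains it."""
    def __init__(self, owner: str):
        self.owner = owner

def adjudicate(e_pro_thunk, e_op) -> str:
    """Reduce e_op applied to e_pro; whoever's hole is reached loses."""
    try:
        e_op(e_pro_thunk())           # reduce the term of type ⊥
    except Hole as h:
        return "op" if h.owner == "pro" else "pro"
    raise RuntimeError("unreachable: no value has type ⊥")

# Type S: hash preimages of 0x123. If \pro can reveal a secret, the hole
# in e_op : S → ⊥ (believed unreachable by \op) is reached, and \op loses.
def e_op(secret):
    raise Hole("op")

# If \pro has no secret, \pro's term of type S is itself a hole, and the
# reduction gets stuck there instead: \pro loses.
def missing_secret():
    raise Hole("pro")
```

For example, `adjudicate(lambda: 0x123, e_op)` returns `"pro"`, while `adjudicate(missing_secret, e_op)` returns `"op"`.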