A generalization of Nash's theorem with higher-order functionals

The recent theory of sequential games and selection functions by Escardó & Oliva is extended to games in which players move simultaneously. The Nash existence theorem for mixed-strategy equilibria of finite games is generalized to games defined by selection functions. A normal form construction is given, which generalizes the game-theoretic normal form, and its soundness is proved. Minimax strategies also generalize to the new class of games, and are computed by the Berardi–Bezem–Coquand functional, studied in proof theory as an interpretation of the axiom of countable choice.


Introduction
The notion of optimization is common to many areas of applied mathematics, such as game theory and linear and nonlinear programming. Typically, we have a set X of choices and a function p mapping each x ∈ X to a real number p(x), which we might call the value or cost of x. From this we can define a natural notion of optimality: a point y ∈ R is optimal just if y ≥ p(x) for all x ∈ X, and y = p(x_0) for some x_0 ∈ X. We usually refer to y by a notation such as y = max_{x∈X} p(x).
The point x_0 is also interesting: it is a point at which p attains its optimal value, and we refer to it as x_0 = arg max_{x∈X} p(x). (Of course, while y is guaranteed to be unique when it exists, x_0 is not necessarily unique; we only require that arg max chooses some such point.) Let X be a finite set, so that max_{x∈X} p(x) is well defined for all functions p : X → R. We can now define a function ϕ by ϕ(p) = max_{x∈X} p(x).
ϕ has range R, and its domain is the function set X → R, that is, the set of all functions with domain X and range R. We, therefore, write ϕ : (X → R) → R.
We call ϕ a higher-order function (or functional), that is, a function whose domain is itself a set of functions. We can also define ε(p) = arg max x∈X p(x), obtaining a higher-order function ε : (X → R) → X satisfying ϕ(p) = p(ε(p)) for all p : X → R.
Using the concept of a higher-order function, we can make a large generalization of the properties of max and arg max. For any sets X and R, a function ϕ : (X → R) → R will be called a quantifier and a function ε : (X → R) → X will be called a selection function. Intuitively, a quantifier is a rule for converting a function p : X → R into an element of R considered, in an abstract sense, to be the 'most desirable', and a selection function produces instead a value in X at which p takes its most desirable value. We say that ε attains ϕ just if ϕ(p) = p(ε(p)) for all p : X → R.
max and arg max are the prototypical examples of a quantifier and a selection function attaining it. A very different example is a fixed point operator μ : (X → X) → X, which has the property that μ(p) is always a fixed point of p, that is, μ(p) = p(μ(p)). Although this cannot be understood as a set-theoretic function, it is well defined in models of partial computable functionals, and there it can be seen as both a quantifier and a selection function, attaining itself. Quantifiers where R is the set of truth-values appear naturally in logic, and include the classical quantifiers ∀ and ∃ (this is the reason for the name quantifier). A slightly more general notion of multi-valued quantifier is defined in the next section. These concepts were introduced and applied to the theory of sequential games by Escardó & Oliva in a series of papers summarized in [1].
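As a concrete illustration (not part of the original development; all Python names here are ours), max and arg max can be written as a quantifier and a selection function over a finite set, and the attainment property ϕ(p) = p(ε(p)) checked directly:

```python
# Illustration: max as a quantifier, arg max as a selection function
# attaining it. X is a finite list of moves; names are ours.

def max_quantifier(X):
    """phi : (X -> R) -> R, phi(p) = max over x in X of p(x)."""
    return lambda p: max(p(x) for x in X)

def argmax_selection(X):
    """eps : (X -> R) -> X, eps(p) = some x in X maximizing p(x)."""
    return lambda p: max(X, key=p)

X = [0, 1, 2, 3]
p = lambda x: -(x - 2) ** 2          # peaks at x = 2

phi, eps = max_quantifier(X), argmax_selection(X)
assert eps(p) == 2
assert phi(p) == p(eps(p)) == 0      # eps attains phi

# The existential quantifier over truth-values, attained by witness search:
exists = lambda p: any(p(x) for x in X)
witness = lambda p: next((x for x in X if p(x)), X[0])
odd = lambda x: x % 2 == 1
assert exists(odd) == odd(witness(odd)) == True
```

The same pattern (a higher-order function consuming p : X → R) covers min/arg min and the universal quantifier.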
What is a game? Typically, some players take turns choosing between sets of legal moves, which may be constrained by previous players' moves. The sequence of moves made by the players is called a play of the game. Usually, the rules of the game guarantee that every play terminates after a finite number of moves, and then uniquely determine which player has won the play.
In the theory of games as introduced by von Neumann & Morgenstern [2], the notion of a player winning a play is not used. Rather, for each player, the rules of the game define an outcome function mapping each play of the game to a real number called the utility of the play for that player. This generalization is important for applications of game theory to economics, where utility often represents profit. In the game played by two competing firms, for example, each firm is interested in maximizing its own profit, and does not care (in the short term, at least) how much profit its competitor makes. Of course, a firm's profits will be affected by the moves of its competitor, and vice versa. A central problem of game theory is to determine which moves each player should choose in order to maximize their utility. The theory of games as surveyed for example in [3] will be referred to as classical game theory. Suppose, during the course of a play, some player must choose between some set X of moves. Taking the usual assumption of common knowledge of rationality (that is, the players play optimally, and they know that each other will play optimally, and so on) the future of the play after making each choice of x ∈ X is sufficiently well determined that a utility p(x) ∈ R can be assigned to each x ∈ X. In classical game theory, a rational player will always choose arg max x∈X p(x). By replacing R with an arbitrary set R and arg max with an arbitrary selection function ε : (X → R) → X, a rich theory of generalized games results, with deep connections to proof theory and theoretical computer science [4,5].
The games that have been described so far are the so-called sequential games. In the more usual language of classical game theory, this can be read as non-branching extensive form games of perfect information. However, there are games that cannot be described as a sequence of moves. These are the so-called simultaneous games or games of imperfect information. A well-known example is rock-paper-scissors; a more important example is the simultaneous pricing of goods by supermarkets. von Neumann and Morgenstern proved that every game can be described as a simultaneous game, called its normal or strategic form. The central idea of this proof is that players simultaneously choose contingent strategies, higher-order functions that choose the next move given the partial play up to that point, and so play the game on behalf of the player. In this paper, we consider a notion of simultaneous games that encompasses Escardó and Oliva's generalized sequential games in a similar way. Given the number of applications of generalized sequential games (for example to the technology of program extraction, or the development of provably correct software), it is hoped that similar applications will appear for generalized simultaneous games, although these are not investigated in this paper.
In §3, generalized simultaneous games and their appropriate notion of equilibrium are defined. In §4 a class of games, the so-called multilinear games, is defined, and it is proved that games of this kind always have an equilibrium (theorem 4.13). This is used in §5 to prove the key result of this paper (theorem 5.7), a natural generalization of Nash's theorem for the existence of mixed-strategy equilibria to games defined by arbitrary quantifiers. In §6, a mapping from sequential to simultaneous games is defined analogous to the normal form construction in the classical theory, and its soundness is proved (theorem 6.4). In §7, we show an interesting connection to proof theory, namely that the binary Berardi–Bezem–Coquand functional computes minimax strategies of games, a result that suggests a deeper connection between proof theory and generalized games.

Preliminaries
If X and Y are sets, then X → Y denotes the set of all functions with domain X and range Y (this is often denoted Y^X, a notation we avoid in order to avoid writing exponential towers for higher-order functions). Cartesian products of sets are denoted ∏_{i∈I} X_i and bind tighter than →, so, for example, ∏_{i∈I} X_i → R means (∏_{i∈I} X_i) → R. The ith coordinate projection of a tuple π ∈ ∏_{i∈I} X_i is denoted π_i. The following piece of notation, for manipulating products, will be helpful. Let I be a set and let X_i be a set for each i ∈ I. If x ∈ X_i and π ∈ ∏_{j∈I} X_j, then we define π(i → x) ∈ ∏_{j∈I} X_j by π(i → x)_j = x if j = i, and π(i → x)_j = π_j otherwise. We make use of Church's λ-notation for describing functions anonymously. The function that might otherwise be written as x ↦ 1 + x will be denoted λx^N · 1 + x, where N is the domain of the anonymous function. For example, we have (λx^N · 1 + x)(42) = 43. A variable bound by a λ need not appear under the scope of the λ; for example, λx^X · 42 is the constant function with the property that (λx^X · 42)(x') = 42 for all x' ∈ X.
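The update operation π(i → x) is used constantly in what follows. As a minimal sketch (the helper name is ours), it is a one-line operation on tuples:

```python
# The update pi(i -> x): replace the ith coordinate of a strategy profile,
# leaving all other coordinates unchanged. Helper name is ours.

def update(pi, i, x):
    """Return pi(i -> x): the tuple pi with its ith component set to x."""
    return tuple(x if j == i else pi[j] for j in range(len(pi)))

pi = ("a", "b", "c")
assert update(pi, 1, "z") == ("a", "z", "c")
assert update(pi, 0, "z")[1:] == pi[1:]   # other coordinates unchanged
```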
A multi-valued quantifier is a function ϕ : (X → R) → P(R), where P(R) is the powerset of R. This generalizes the single-valued quantifiers discussed in the previous section, by allowing zero or more distinct 'optimal' values in R; we write S_R(X) for the set of multi-valued quantifiers, and from now on quantifier means multi-valued quantifier. For example, we can interpret a fixed-point operator set-theoretically as a multi-valued quantifier, by letting μ(p) be the (possibly empty) set of all fixed points of the function p ∈ X → X.
The domain of a quantifier ϕ is the set dom(ϕ) = {p ∈ X → R | ϕ(p) ≠ ∅}. A quantifier with dom(ϕ) = X → R will be called total.
A selection function ε : (X → R) → X attains ϕ iff p(ε(p)) ∈ ϕ(p) for all p ∈ dom(ϕ). This definition of attainment differs from Escardó and Oliva's, who require the condition to hold for all p ∈ X → R. For total quantifiers (which are considered in §5, and to which the main theorem applies), the two definitions coincide. For example, if R = R and X is a compact topological space, then the extreme value theorem (plus the axiom of choice) implies that the maximum quantifier is attained. (Note that we need the axiom of choice to collect all the values into a single function.) A quantifier such as this, whose values have cardinality at most 1, will be called single-valued.
Because the set notation ϕ(p) = {y} is clumsy for single-valued quantifiers, we simply write ϕ(p) = y in that case. We assume some point-set topology as covered, for example, in [7] and elementary properties of topological vector spaces [8]. All topological vector spaces are assumed to be T_1 throughout (this is no loss of generality because quotienting a topological vector space by the closure of {0} always yields a Hausdorff space). For reference, a subset S of a real vector space is called convex iff for all x, y ∈ S and t ∈ [0, 1], we have tx + (1 − t)y ∈ S.
In §4, we work with the class of locally convex spaces. The definition of a locally convex space is technical and not necessary for our purposes; beyond theorem 4.5 and lemma 4.6, we only need to know that every locally convex space is a topological vector space. Every normed vector space is locally convex; examples of locally convex spaces that are not normable include the spaces of smooth functions C ∞ (R) and C ∞ ([0, 1]), and the space of real-valued sequences R ω with convergence defined pointwise. Locally convex spaces are covered in detail in [8].

(a) A note on foundations
It is possible to define generalized sequential games over models other than classical set theory. Indeed, as explained in [1], it is sometimes necessary to work in non-standard models, for example when considering unbounded sequential games (which are not considered in this paper). In particular, unbounded games are known to be well-behaved in the models of continuous functionals [9] and majorizable functionals [10]. The operation J R (X) = (X → R) → X is a (strong) monad and can be defined over any Cartesian closed category. Moreover, the closely related K R (X) = (X → R) → R, which contains the total single-valued quantifiers, is already well known from programming language theory, where it is called the continuation monad, introduced in the classic paper [11]. The definitions of generalized simultaneous game and generalized Nash equilibrium could be formalized in a more general setting, but the proofs in §4 use classical set theory in an essential way, so we find it easier to avoid foundational issues altogether and work entirely in classical set theory.

Generalized simultaneous games
In this section, we define the objects studied in this paper, namely generalized simultaneous games and generalized Nash equilibria. The definition of a generalized simultaneous game comes from the classical definition of a normal-form game, but with the maximizing behaviour of players replaced with a specified quantifier. For the general definition, we do not require the number of players to be finite. The related notion of generalized sequential game will be defined in §6.

Definition 3.1 (generalized simultaneous game).
A generalized simultaneous game (with multiple outcome spaces), denoted simply game when not ambiguous, is a tuple G = (I, (X_i, R_i, q_i, ϕ_i)_{i∈I}) where I is a non-empty set of players, and for each i ∈ I,
- X_i is a non-empty set of strategies for player i;
- R_i is a set of outcomes for player i;
- q_i : ∏_{j∈I} X_j → R_i is an outcome function for player i; and
- ϕ_i ∈ S_{R_i}(X_i) is a quantifier for player i.
We say that G has a single outcome space if the R_i are equal and the q_i are equal. In this case, G is determined by a tuple G = (I, (X_i)_{i∈I}, R, q, (ϕ_i)_{i∈I}). An element x ∈ X_i is called a strategy for player i for G. The product S = ∏_{i∈I} X_i is called the strategy space of G, and a tuple π ∈ S is called a strategy profile or strategy for G. Throughout this paper the variables π, σ and τ will range over strategies of a game.
In general, we need games with multiple outcome spaces to study simultaneous games, and in particular, to recover the classical Nash theorem. However, normal forms of generalized sequential games will always have a single outcome space.
The appropriate notion of equilibrium of a generalized simultaneous game is called a generalized Nash equilibrium. Before making this definition, we first define some notation used throughout this paper. First, we define the family of unilateral maps U^i_q, which are used as a shorthand notation but, when considered as higher-order functions, are also natural and interesting in their own right.

Definition 3.2 (unilateral map).
Let I be a set, and for each i ∈ I, let X_i and R_i be sets. Let q = (q_i)_{i∈I} be a family of maps such that each q_i : ∏_{j∈I} X_j → R_i. The ith unilateral map U^i_q : ∏_{j∈I} X_j → (X_i → R_i) is defined by U^i_q(π)(x) = q_i(π(i → x)). Thus, the ith unilateral map computes the outcomes of unilateral changes of strategy by the ith player in a game. Second, we associate to every quantifier a multi-valued function called its diagonal.

Definition 3.3 (diagonal of a quantifier).
Let ϕ ∈ S_R(X) be a quantifier. The diagonal of ϕ is the multi-valued function Δϕ : (X → R) → P(X) defined by Δϕ(p) = {x ∈ X | p(x) ∈ ϕ(p)}. Now, the equilibria of a generalized simultaneous game can be defined in a very compact and (as will be seen) useful way.

Definition 3.4 (generalized Nash equilibrium). Let G be a game with strategy space S. We define the best response correspondence B ∈ S → P(S) of G by B(π) = ∩_{i∈I} B_i(π), where the B_i ∈ S → P(S) are defined by B_i(π) = {σ ∈ S | σ_i ∈ Δϕ_i(U^i_q(π))}. A generalized Nash equilibrium of G is a fixed point of B, that is, a strategy profile π such that π ∈ B(π).
Unpacking this definition, we see that π is a generalized Nash equilibrium of G iff for each i ∈ I, we have q_i(π) ∈ ϕ_i(U^i_q(π)). When each X_i is compact, each q_i is continuous and each ϕ_i is the maximum quantifier, this condition reads q_i(π) = max_{x∈X_i} q_i(π(i → x)), which is the usual definition of a Nash equilibrium.
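For a finite game with max quantifiers, this unpacked condition can be checked mechanically. The following sketch is ours (the names `update`, `unilateral` and `is_equilibrium` are illustrative, not from the paper) and tests it on the prisoner's dilemma:

```python
# Sketch (ours): in a finite game where every phi_i is max, a profile pi is
# a generalized Nash equilibrium iff q_i(pi) equals the maximum of the ith
# unilateral map U^i_q(pi), i.e. no unilateral deviation improves player i.

def update(pi, i, x):
    return tuple(x if j == i else pi[j] for j in range(len(pi)))

def unilateral(q_i, pi, i):
    """U^i_q(pi) : X_i -> R_i, outcome when player i deviates to x."""
    return lambda x: q_i(update(pi, i, x))

def is_equilibrium(X, q, pi):
    """Check q_i(pi) = max over x in X_i of U^i_q(pi)(x) for each player i."""
    return all(
        q[i](pi) == max(unilateral(q[i], pi, i)(x) for x in X[i])
        for i in range(len(X))
    )

# Prisoner's dilemma: (D, D) is the unique Nash equilibrium even though
# (C, C) gives both players a higher outcome.
payoff = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
X = [["C", "D"], ["C", "D"]]
q = [lambda pi, i=i: payoff[pi][i] for i in range(2)]

assert is_equilibrium(X, q, ("D", "D"))
assert not is_equilibrium(X, q, ("C", "C"))
```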

Multilinear games
Now we define a large family of games, called the multilinear games, that are guaranteed to have a generalized Nash equilibrium. The structure of the argument is the same as that in [12], but given in more generality to deal with more general quantifiers. This section can be seen as a series of lemmas that are eventually used to prove theorem 5.7 (the generalization of Nash's theorem) in the next section. In fact the machinery developed in this section is stronger than necessary to prove theorem 5.7: it should also be possible using multilinear games to prove an analogous generalization of Glicksberg's theorem [13], which in turn generalizes Nash's theorem from finite sets of strategies to compact spaces of strategies, with continuous outcome functions.

Definition 4.1 (closed graph property).
Let X and Y be topological spaces and F ∈ X → P(Y). We say that F has closed graph iff its graph Γ(F) = {(x, y) ∈ X × Y | y ∈ F(x)} is closed with respect to the product topology.
The closed graph property is a form of continuity for functions whose range is a set of subsets of a topological space.
In order to guarantee that a generalized simultaneous game will have an equilibrium, we need to impose closed graph properties on the quantifiers. However, the domain of a quantifier is a function set, which in general has no unique natural topology. The least we need is that the unilateral maps are continuous, and so for this reason, we define the unilateral topology.

Definition 4.2 (unilateral topology).
For each i ∈ I, let X i and R i be topological spaces with q i ∈ X i → R i continuous. The unilateral topology on X i → R i is the final topology with respect to the singleton family {U i q }, that is, it is the largest topology with respect to which U i q is continuous. A function which is continuous with respect to the unilateral topology will be called unilaterally continuous, and a function that has closed graph with respect to the unilateral topology has unilaterally closed graph. Another possible topology on X i → R i which will be useful is the topology of pointwise convergence. Most of this paper could be formulated using only pointwise convergence, except for an interesting example at the end of this section that needs a finer topology, namely uniform convergence.

Lemma 4.3. The unilateral topology is finer than the topology of pointwise convergence.
Proof. It must be proved that U^i_q is continuous with respect to the topology of pointwise convergence. Let π_j → π be a convergent sequence in ∏_{k∈I} X_k, and let x ∈ X_i. We have π_j(i → x) → π(i → x) in the product topology, so U^i_q(π_j)(x) = q_i(π_j(i → x)) → q_i(π(i → x)) = U^i_q(π)(x) by the continuity of q_i.
Now, we can give the definition of a multilinear game. This definition essentially contains the least assumptions needed for Nash's proof.

Definition 4.4 (multilinear game). A game G = (I, (X_i, R_i, q_i, ϕ_i)_{i∈I}) is called multilinear iff (i) each X_i is a compact and convex subset of a given locally convex space V_i over R; (ii) each R_i is a topological vector space over R; (iii) each q_i extends to a continuous multilinear map q_i : ∏_{j∈I} V_j → R_i (that is, q_i is linear with respect to each V_j separately); and (iv) each ϕ_i has unilaterally closed graph, ϕ_i(p) is closed and convex for all p ∈ X_i → R_i, and dom(ϕ_i) ⊇ im(U^i_q).
(Note that because q_i is continuous and multilinear, to satisfy the last condition it suffices that ϕ_i(p) ≠ ∅ whenever p is continuous and linear. Note also that if ϕ_i is single-valued, then each ϕ_i(p) is automatically closed and convex.) The idea of the existence proof is to reduce to the following fixed point theorem.

Theorem 4.5 (Kakutani-Fan-Glicksberg fixed point theorem [13,14]). Let S be a non-empty, compact and convex subset of a locally convex space over R. Let B ∈ S → P(S) have closed graph and let B(π) be non-empty, closed and convex for all π ∈ S. Then B has a fixed point, that is, there is a point π ∈ S such that π ∈ B(π).
We will need to use the fact that locally convex spaces are closed under arbitrary products.

Lemma 4.6. Let {V_i}_{i∈I} be a family of locally convex spaces over a field K. Then ∏_{i∈I} V_i has the structure of a locally convex space over K whose topology is the product topology.
Much of the usefulness of multilinear games comes down to the fact that their unilateral maps are well behaved.

Lemma 4.7. Let G be a multilinear game. Then each U^i_q is continuous as a map S × X_i → R_i (under the Curry bijection A → (B → C) ≅ A × B → C), and for each π ∈ S the map U^i_q(π) ∈ X_i → R_i is linear.

Proof. By the continuity and multilinearity of the q_i.

Lemma 4.8. Let G be a multilinear game. Then its strategy space S is a non-empty, compact and convex subset of a locally convex space.

Proof. The strategy space is S = ∏_{i∈I} X_i ⊆ ∏_{i∈I} V_i, where the larger space is locally convex by lemma 4.6. S is non-empty by the axiom of choice and compact by Tychonoff's theorem. Convexity is also inherited by the product, since for each i ∈ I we have (tπ + (1 − t)σ)_i = tπ_i + (1 − t)σ_i ∈ X_i.

Lemma 4.9. Let G be a multilinear game whose quantifiers are attained by selection functions ε_i, and let B be its best response correspondence. Then B(π) is non-empty for every π ∈ S.

Proof. Let S be the strategy space of G. Given π ∈ S, we define σ ∈ S to have ith component σ_i = ε_i(U^i_q(π)). Because ε_i attains ϕ_i and U^i_q(π) ∈ dom(ϕ_i) (definition 4.4, point (iv)), we have U^i_q(π)(σ_i) ∈ ϕ_i(U^i_q(π)). Therefore, σ_i ∈ Δϕ_i(U^i_q(π)) for each i ∈ I, so σ ∈ B(π), as required.

Lemma 4.10. Let G be a multilinear game with best response correspondence B. Then B(π) is closed for every π ∈ S.

Proof. It suffices to prove that each factor B_i(π) is closed. Let σ_j → σ be a convergent sequence in B_i(π). Then U^i_q(π)(σ_{j,i}) → U^i_q(π)(σ_i) by the continuity of U^i_q. We also have that each U^i_q(π)(σ_{j,i}) ∈ ϕ_i(U^i_q(π)), and the right-hand side is closed (definition 4.4, point (iv)), therefore U^i_q(π)(σ_i) ∈ ϕ_i(U^i_q(π)), that is, σ ∈ B_i(π).

Lemma 4.11. Let G be a multilinear game with best response correspondence B. Then B(π) is convex for every π ∈ S.

Proof. Suppose σ, τ ∈ B(π) and t ∈ [0, 1]. Let i ∈ I. By definition we have U^i_q(π)(σ_i), U^i_q(π)(τ_i) ∈ ϕ_i(U^i_q(π)). By the linearity of U^i_q(π), we have U^i_q(π)(tσ_i + (1 − t)τ_i) = tU^i_q(π)(σ_i) + (1 − t)U^i_q(π)(τ_i). Because the ϕ_i(p) are convex (definition 4.4, point (iv)), we have U^i_q(π)(tσ_i + (1 − t)τ_i) ∈ ϕ_i(U^i_q(π)). Therefore, tσ + (1 − t)τ ∈ B(π).

Lemma 4.12. Let G be a multilinear game with best response correspondence B. Then B has closed graph.

Proof. The graph of B is the intersection over i ∈ I of the graphs of the B_i, and so it suffices to prove these factors closed. Let (σ_j, π_j) → (σ, π) be a convergent sequence in the ith factor. By the continuity of U^i_q, U^i_q(π_j)(σ_{j,i}) → U^i_q(π)(σ_i). Because U^i_q is also unilaterally continuous as a map S → (X_i → R_i), we have U^i_q(π_j) → U^i_q(π) unilaterally. Therefore, (U^i_q(π_j), U^i_q(π_j)(σ_{j,i})) is a convergent sequence in the graph Γ(ϕ_i), which is closed (definition 4.4, point (iv)), so its limit (U^i_q(π), U^i_q(π)(σ_i)) also lies in Γ(ϕ_i), that is, (σ, π) lies in the ith factor.

Theorem 4.13 (existence theorem for multilinear games). Let G be a multilinear game such that each quantifier is attained by a selection function. Then G has a generalized Nash equilibrium.
Proof. Let B be the best response correspondence of G. By lemmas 4.8-4.12 and the Kakutani-Fan-Glicksberg fixed point theorem, B has a fixed point.
Examples of multilinear games as mixed extensions of finite games are given in the next section. Another interesting example is given by integration. Let X_i = [0, 1], V_i = R and R_i = R, and let L(X_i) be the set of all Lebesgue-integrable functions p ∈ [0, 1] → R with ∫_0^1 p(x) dx ∈ im(p). Define a single-valued quantifier ϕ_i ∈ S_R(X_i), with dom(ϕ_i) = L(X_i), by ϕ_i(p) = ∫_0^1 p(x) dx. Using the mean value theorem (and the axiom of choice), we can prove the existence of a selection function attaining ϕ_i: for all p ∈ L(X_i) there exists ε_i(p) ∈ X_i such that p(ε_i(p)) = ∫_0^1 p(x) dx. This highly non-constructive selection function was briefly introduced as an example in [6]. We let I be finite and let the other X_j be normed, so the strategy space is normed, and we can work with the ε−δ definitions of uniform convergence and continuity.

Lemma 4.14. If q_i is uniformly continuous and π_j → π uniformly, then U^i_q(π_j) → U^i_q(π) uniformly.

Proof. We have that q_i is uniformly continuous, that is,

∀ε > 0 ∃δ > 0 ∀π, σ (|π − σ| < δ ⟹ |q_i(π) − q_i(σ)| < ε).   (4.1)

We also have π_j → π, that is,

∀ε > 0 ∃N ∀j ≥ N |π_j − π| < ε.   (4.2)

We want to prove that U^i_q(π_j) → U^i_q(π) uniformly, that is,

∀ε > 0 ∃N ∀j ≥ N ∀x ∈ X_i |U^i_q(π_j)(x) − U^i_q(π)(x)| < ε.

Let ε > 0. By (4.1), we have δ > 0 with the given property. We take ε in (4.2) to be this δ, obtaining N. Let j ≥ N, therefore |π_j − π| < δ by (4.2). Let x ∈ X_i. The crucial observation is that π_j(i → x) behaves like π_j but is constant in its ith coordinate. That is, we have |π_j(i → x) − π(i → x)| ≤ |π_j − π| < δ. Now, we take π, σ in (4.1) to be π_j(i → x) and π(i → x). We have already proved the antecedent in (4.1), therefore |U^i_q(π_j)(x) − U^i_q(π)(x)| = |q_i(π_j(i → x)) − q_i(π(i → x))| < ε. We have proved that the unilateral topology is finer than the topology of uniform convergence.

Lemma 4.15. The integration quantifier ϕ_i is unilaterally continuous.

Proof. Suppose we have p_j → p uniformly in L(X_i). Because the convergence of the integrands is uniform, we can apply the uniform convergence theorem to get ϕ_i(p_j) = ∫_0^1 p_j(x) dx → ∫_0^1 p(x) dx = ϕ_i(p). Because the unilateral topology is finer than the topology of uniform convergence, we are done.
In the one-player game defined by the integration quantifier with outcome function q, the unique value of q(x) when x is an equilibrium strategy, which can be called the expected outcome of the game, is simply ∫_0^1 q(x) dx. In the two-player game with both quantifiers integrals, a generalized Nash equilibrium (a, b) satisfies q_1(a, b) = ∫_0^1 q_1(x, b) dx and q_2(a, b) = ∫_0^1 q_2(a, y) dy.
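The mean-value selection function for the integration quantifier can be approximated numerically. The following sketch is ours and makes an extra assumption not needed in the text, namely that p is continuous and monotonically increasing, so that simple bisection finds the point guaranteed by the mean value theorem:

```python
# Numerical sketch (ours): the integration quantifier phi(p) = integral of
# p over [0, 1]. By the mean value theorem a continuous p attains this
# value somewhere; for monotone increasing p, bisection locates the point.

def integral(p, n=100000):
    """Midpoint-rule approximation of the integral of p over [0, 1]."""
    h = 1.0 / n
    return sum(p((k + 0.5) * h) for k in range(n)) * h

def mean_value_selection(p, tol=1e-9):
    """Find x in [0, 1] with p(x) ~ integral(p); assumes p increasing."""
    target = integral(p)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = lambda x: x * x                  # integral over [0, 1] is 1/3
x0 = mean_value_selection(p)
assert abs(p(x0) - 1 / 3) < 1e-4     # p attains its integral at x0
assert abs(x0 - 3 ** -0.5) < 1e-4    # here x0 is 1/sqrt(3)
```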

Finite games
In this section, we apply the existence theorem for multilinear games to prove a suitable generalization of Nash's theorem. The classical version of Nash's theorem guarantees that every finite game (that is, a classical game in which each player has finitely many strategies) has a mixed strategy Nash equilibrium.
The notion of mixed strategies means that we consider probability distributions over ordinary strategies (referred to as pure strategies for clarity). The outcome functions also need to be replaced by expected outcome functions. However, the discussion of probability distributions can be avoided by treating them as geometric objects, namely simplices. This approach also makes it clearer how quantifiers and selection functions must be modified when passing to mixed strategies. A probabilistic interpretation of the resulting theorem is possible, but is avoided in this paper.

Definition 5.1 (finite game). A game G = (I, (X_i, R_i, q_i, ϕ_i)_{i∈I}) is called finite iff
- each X_i is finite;
- each R_i is a topological vector space over R; and
- each ϕ_i is total, has closed graph with respect to the topology of pointwise convergence (viewing the X_i as discrete topological spaces), and ϕ_i(p) is closed and convex for all p ∈ X_i → R_i.
Note that restricting to pointwise convergence is no loss of generality here because the X i are finite.
The set of probability distributions over a finite set can be seen as a geometric object called a standard simplex. In two and three dimensions these can be easily visualized as a line segment and an equilateral triangle; the next simplex, Δ_4, is a tetrahedron, seen as a subset of R^4.

Definition 5.2 (standard simplex). The nth standard simplex is the set Δ_n = {x ∈ R^n | Σ_{i=1}^n x_i = 1 and each x_i ≥ 0}.
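A quick computational check of this definition (helper names are ours; `delta` anticipates the canonical injection of pure strategies as vertices used below):

```python
# The nth standard simplex: coordinates are non-negative and sum to 1,
# i.e. probability distributions over n pure strategies. Names are ours.

def in_simplex(x, eps=1e-12):
    return all(c >= -eps for c in x) and abs(sum(x) - 1.0) < eps

def delta(j, n):
    """Pure strategy j as a vertex of the simplex: jth coordinate is 1."""
    return tuple(1.0 if k == j else 0.0 for k in range(n))

assert in_simplex((0.5, 0.25, 0.25))
assert not in_simplex((0.5, 0.6, -0.1))          # a negative coordinate
assert in_simplex(delta(1, 3)) and delta(1, 3) == (0.0, 1.0, 0.0)
```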

Definition 5.3 (mixed extension).
Let G = (I, (X_i, R_i, q_i, ϕ_i)_{i∈I}) be a finite game with strategy space S. We define a game G* = (I, (X*_i, R_i, q*_i, ϕ*_i)_{i∈I}), called the mixed extension of G, as follows. Player i has move set X*_i = Δ_{|X_i|} which, assuming a fixed enumeration of X_i, can be seen as a simplex with vertices labelled by player i's pure strategies (moreover, the coordinates of points in X*_i can be seen as being labelled by player i's pure strategies). Player i's outcome function is given by the formula

q*_i(π) = Σ_{σ∈S} (∏_{j∈I} (π_j)_{σ_j}) q_i(σ),

which computes the expected value of q_i. The mixed strategy π is a tuple whose ith component π_i is a point in the simplex X*_i, and so (π_j)_{σ_j} is the coordinate of π_j labelled by the pure strategy σ_j. The product ∏_{j∈I} (π_j)_{σ_j} represents ordinary multiplication of real numbers, so it is a real number. The sum Σ_{σ∈S} represents addition of vectors in R_i, and the multiplication of the terms in brackets is multiplication of the vector q_i(σ) by the real scalar ∏_{j∈I} (π_j)_{σ_j}, and so the entire formula is simply a linear combination in R_i. (Note that the finiteness of I is used implicitly here: for the strategy space S to be finite it is necessary that I be finite, except in trivial cases when all but finitely many X_i are singletons.) Finally, player i's quantifier is given by ϕ*_i(p) = ϕ_i(p ∘ δ_i), where δ_i is the canonical injection X_i → X*_i mapping each j to the vertex of the simplex at which the jth coordinate is 1.
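The expected-outcome formula for q*_i can be computed directly. The sketch below is ours (function names illustrative) and evaluates it for matching pennies:

```python
# Sketch (ours) of the mixed-extension outcome q*_i: the expected value of
# q_i under the product distribution given by the mixed strategy profile.
from itertools import product

def mixed_outcome(q_i, X, mixed):
    """Sum over pure profiles sigma of (prod_j (pi_j)_{sigma_j}) * q_i(sigma)."""
    total = 0.0
    for sigma in product(*X):
        weight = 1.0
        for i, s in enumerate(sigma):
            weight *= mixed[i][X[i].index(s)]
        total += weight * q_i(sigma)
    return total

# Matching pennies: player 0 wins (+1) on a match, loses (-1) otherwise.
X = [["H", "T"], ["H", "T"]]
q0 = lambda s: 1.0 if s[0] == s[1] else -1.0

uniform = [(0.5, 0.5), (0.5, 0.5)]
assert mixed_outcome(q0, X, uniform) == 0.0          # expected outcome
assert mixed_outcome(q0, X, [(1.0, 0.0), (1.0, 0.0)]) == 1.0  # pure H, H
```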

Definition 5.4 (mixed strategy generalized Nash equilibrium).
Let G be a finite game. A strategy profile for G * will be called a mixed strategy profile for G. A generalized Nash equilibrium of G * will be called a mixed strategy generalized Nash equilibrium of G.
The most important property of mixed extensions is that they are always multilinear. This will be used to prove the generalized Nash theorem.
Lemma 5.5. Let G be a finite game. Then G * is a multilinear game.
Proof. Each Δ_n for n > 0 is a non-empty, compact and convex subset of the locally convex space R^n. Continuity of the q*_i is clear. q*_i is multilinear because its defining formula is linear in the coordinates of each π_j separately. The ϕ*_i(p) are of the form ϕ_i(p') for some p', and so are closed and convex. We note that ϕ*_i is total because ϕ_i is. The graph of ϕ*_i is Γ(ϕ*_i) = {(p, y) | y ∈ ϕ_i(p ∘ δ_i)}. Suppose we have a convergent sequence (p_j, y_j) → (p, y) in Γ(ϕ*_i). Let x ∈ X_i; then p_j(δ_i(x)) → p(δ_i(x)), so p_j ∘ δ_i → p ∘ δ_i with respect to the topology of pointwise convergence, and hence y ∈ ϕ_i(p ∘ δ_i) because ϕ_i has pointwise closed graph. Because the unilateral topology is finer than the topology of pointwise convergence, we are done.
The final result we need is the ability to lift selection functions to mixed extensions.
Lemma 5.6. Let X be a non-empty finite set and let ϕ ∈ S_R(X) be a total quantifier attained by the selection function ε ∈ J_R(X). Then there exists a selection function ε* ∈ J_R(Δ_{|X|}) such that ϕ* is attained by ε*.

Proof. We define ε* ∈ J_R(Δ_{|X|}) by the equation ε*(p) = δ(ε(p ∘ δ)), where δ : X → Δ_{|X|} is the canonical injection. Then p(ε*(p)) = (p ∘ δ)(ε(p ∘ δ)) ∈ ϕ(p ∘ δ) = ϕ*(p) for all p ∈ Δ_{|X|} → R.

Theorem 5.7 (existence theorem for finite games). Let G be a finite game such that each ϕ_i is attained by a selection function. Then G has a mixed strategy generalized Nash equilibrium.
Proof. G* is a multilinear game whose quantifiers are attained by selection functions, by lemmas 5.5 and 5.6. Therefore, G* has a generalized Nash equilibrium by theorem 4.13.
In order to recover the classical Nash theorem, we simply consider finite games whose outcome spaces are R, define q_i(π) ∈ R to be the utility of π for player i, and take all quantifiers to be max. We could instead define a finite game with single outcome space R^n, let (q(π))_i be the utility of π for player i, and consider selection functions ε_i ∈ J_{R^n}(X_i) maximizing the ith coordinate, ε_i(p) = arg max_{x∈X_i} (p(x))_i. However, the quantifiers attained by these selection functions have closed graph only if n = 1. This game has the same equilibria as the equivalent game with multiple outcome spaces, but the Nash theorem cannot be proved in this way. It is for this reason that we need to consider games with multiple outcome spaces, in contrast to generalized sequential games (which do not require continuity).
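In the recovered classical case the equilibrium condition is easy to test: by multilinearity of q*_i, a mixed profile is an equilibrium iff no player gains by deviating to a pure strategy (a vertex of the simplex). The following sketch is ours (names illustrative) and checks the uniform mixed equilibrium of matching pennies, a game with no pure equilibrium:

```python
# Sketch (ours): a mixed profile is a Nash equilibrium iff no player can
# improve their expected utility by moving to a simplex vertex, i.e. a
# pure strategy. This suffices because q*_i is linear in each player's
# mixed strategy.
from itertools import product

def expected(q_i, X, mixed):
    total = 0.0
    for sigma in product(*X):
        w = 1.0
        for i, s in enumerate(sigma):
            w *= mixed[i][X[i].index(s)]
        total += w * q_i(sigma)
    return total

def is_mixed_equilibrium(X, q, mixed):
    for i in range(len(X)):
        base = expected(q[i], X, mixed)
        for k in range(len(X[i])):
            vertex = tuple(1.0 if j == k else 0.0 for j in range(len(X[i])))
            dev = list(mixed)
            dev[i] = vertex
            if expected(q[i], X, dev) > base + 1e-12:
                return False
    return True

# Matching pennies: zero-sum, no pure equilibrium, uniform mixing works.
X = [["H", "T"], ["H", "T"]]
q = [lambda s: 1.0 if s[0] == s[1] else -1.0,
     lambda s: -1.0 if s[0] == s[1] else 1.0]

assert is_mixed_equilibrium(X, q, [(0.5, 0.5), (0.5, 0.5)])
assert not is_mixed_equilibrium(X, q, [(1.0, 0.0), (1.0, 0.0)])  # pure H, H
```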
For a different example of a quantifier in a finite game, let R_i be normed and fix ε > 0 and x_0 ∈ X_i. Define ϕ(p) = {y ∈ R_i | ‖y − p(x_0)‖ ≤ ε}, that is, the closed ε-ball around p(x_0). This quantifier is attained by the constant selection function ε(p) = x_0. For a sequential game this would force the game to be trivial, but this is not the case here: for example, if ϕ_X is the quantifier defined here and ϕ_Y is the maximum quantifier with R_Y = R then a Nash equilibrium is a point (a, b) such that ‖q_X(a, b) − q_X(x_0, b)‖ ≤ ε and q_Y(a, b) = max_{y∈Y} q_Y(a, y). It is hard to find an intuition for a game using this quantifier, but it serves to show that arg max and arg min are not the only quantifiers satisfying definition 5.1, which in turn shows that the generalization from Nash's theorem to theorem 5.7 is not vacuous.

The normal form of a sequential game
In classical game theory, every game can be put into the form of a simultaneous game called its normal form. The major motivation for defining generalized simultaneous games was to generalize this operation to give a notion of normal form for generalized sequential games. This construction is given in this section and a form of soundness is proved, namely that the solutions of a generalized sequential game, the so-called optimal strategies, are mapped to generalized Nash equilibria.

Definition 6.1 (generalized sequential game).
A generalized sequential game with n rounds is determined by a set R of outcomes, a set X_i of moves and a quantifier ϕ_i ∈ S_R(X_i) for each 1 ≤ i ≤ n, and an outcome function q ∈ ∏_{i=1}^n X_i → R. A partial play of the game is any tuple a = (a_1, …, a_{i−1}) ∈ P_i = ∏_{j=1}^{i−1} X_j, where 1 ≤ i ≤ n. A strategy in a sequential game is a tuple π with π_i ∈ P_i → X_i for each 1 ≤ i ≤ n. This definition of strategy gives the dynamic structure of a sequential game: the component π_i of a strategy for the ith round of a sequential game is a choice of move for each possible partial play of the game up to round i. Given a strategy π and a partial play a = (a_1, …, a_{i−1}) ∈ P_i, we define the strategic extension of a by π as the sequence b^a_i, …, b^a_n given by b^a_j = π_j(a, b^a_i, …, b^a_{j−1}). The strategy π is called optimal iff for all partial plays a = (a_1, …, a_{i−1}) we have q(a, b^a_i, …, b^a_n) ∈ ϕ_i(λx^{X_i} · q(a, x, b^{a,x}_{i+1}, …, b^{a,x}_n)), where b^{a,x}_j abbreviates b^{(a,x)}_j. Given a strategy π in a game, its strategic play π† ∈ ∏_{i=1}^n X_i is given by the strategic extension by π of the empty partial play (that is, taking i = 1 in the definition of a partial play). The strategic play of an optimal strategy is called an optimal play.
To be precise, this notion of sequential game is called a finite game with multiple optimal outcomes in [1]. Infinite sequential games are avoided in this paper because they are not well-behaved in a classical set-theoretic foundation (see §2a).
Generalized sequential games were introduced in order to study a particular higher-order function called the product of selection functions. This is the function ⊗ : J_R(X) × J_R(Y) → J_R(X × Y) given by (ε ⊗ δ)(q) = (a, b_a), where a = ε(λx^X · q(x, b_x)) and b_x = δ(λy^Y · q(x, y)).
This can be finitely iterated by the recursion ⊗_{i=1}^n ε_i = ε_1 ⊗ (⊗_{i=2}^n ε_i). Moreover, in certain foundations (although not in classical set theory), this can be extended to products of countably many selection functions. This infinitely iterated product has many interesting and unintuitive properties: for example, it computes witnesses for the axiom of countable choice, and computes exhaustive searches of certain infinite types in finite time [4], both of which popular belief would deem impossible. Every use of the product of selection functions can be seen as the computation of an optimal play for a suitable generalized sequential game.

Theorem 6.2. Let G be a generalized sequential game whose quantifiers are total and attained by selection functions ε_i ∈ J_R(X_i). Then (⊗_{i=1}^n ε_i)(q) is an optimal play for G.
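The binary product and its finite iteration, as used in theorem 6.2, can be sketched directly in Python (names are ours; the example computes a minimax play of a two-round zero-sum game via arg max and arg min as selection functions):

```python
# Sketch (ours) of the product of selection functions:
# (eps (x) delta)(q) = (a, b_a), with b_x = delta(q(x, .)) and
# a = eps(x |-> q(x, b_x)). Iterating computes an optimal play by
# backward induction over the rounds of a sequential game.

def product(eps, delta):
    """Binary product; q takes whole plays, represented as tuples."""
    def eps_times_delta(q):
        def b(x):
            return delta(lambda ys: q((x,) + ys))
        a = eps(lambda x: q((x,) + b(x)))
        return (a,) + b(a)
    return eps_times_delta

def big_product(selections):
    """Iterated product over a finite list of selection functions."""
    if len(selections) == 1:
        return lambda q: (selections[0](lambda x: q((x,))),)
    return product(selections[0], big_product(selections[1:]))

# Two-round zero-sum game: round 1 maximizes q, round 2 minimizes it.
X = [1, 2, 3]
argmax = lambda p: max(X, key=p)     # selection function for max
argmin = lambda p: min(X, key=p)     # selection function for min
q = lambda play: play[0] - play[1]   # outcome of a complete play

optimal_play = big_product([argmax, argmin])(q)
assert optimal_play == (3, 3)        # minimax play
assert q(optimal_play) == 0          # minimax value
```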
This result follows from theorem 5.4 of [1], and is stated in the remark following lemma 5.1 of that paper. It should be read analogously to theorem 5.7: both state that every game of a certain kind has a solution. An important difference is that while theorem 5.7 is non-constructive (relying on the non-constructive Kakutani–Fan–Glicksberg theorem), theorem 6.2 gives a way to compute optimal strategies using the product of selection functions.

Now we give the normal form construction and prove that it maps optimal strategies to generalized Nash equilibria. The intuition for this construction is the same as in the classical case: instead of players taking turns choosing moves, they simultaneously choose contingent strategies for the entire game. The outcome functions of the game are altered so that they first 'play out' the chosen strategy profile to produce a play of the game, which is then used to produce an outcome.

Definition 6.3 (normal form). Let G be a generalized sequential game. We define a simultaneous game G†, with single outcome space R, called the normal form of G, as follows: the ith player's choice set is the set P_i = (∏_{j=1}^{i-1} X_j) → X_i of strategies for the ith round of G; the ith outcome function is q†_i(π) = q(π†); and the ith quantifier ϕ†_i ∈ S_R P_i is given by ϕ†_i(p) = ϕ_i(λx^{X_i} · p(λa · x)).

Theorem 6.4. Let G be a sequential game and let π be an optimal strategy for G. Then π is a generalized Nash equilibrium of G†.
Proof. Let 1 ≤ i ≤ n, where n is the number of rounds of G. It must be proved that q†_i(π) ∈ ϕ†_i(λσ^{P_i} · q†_i(π[i ↦ σ])). Let a = (a_1, . . ., a_{i-1}) be the partial play given by the first i − 1 components of π†. Because π is an optimal strategy for G, we have q(a, b^a_i, . . ., b^a_n) ∈ ϕ_i(λx^{X_i} · q(a, x, b^{a,x}_{i+1}, . . ., b^{a,x}_n)) by definition 6.1. By induction on j, we have π† = (a, b^a_i, . . ., b^a_n), and hence q†_i(π) = q(a, b^a_i, . . ., b^a_n). We also have ϕ†_i(λσ^{P_i} · q†_i(π[i ↦ σ])) = ϕ_i(λx^{X_i} · q(τ†)), where τ = π[i ↦ λa · x]. We certainly have that τ† coincides with (a, x, b^{a,x}_{i+1}, . . ., b^{a,x}_n) at indices 1 ≤ j ≤ i. Moreover, by induction on i < j ≤ n, we have τ†_j = π_j(a, x, b^{a,x}_{i+1}, . . ., b^{a,x}_{j-1}) = b^{a,x}_j, and so τ† = (a, x, b^{a,x}_{i+1}, . . ., b^{a,x}_n).
From the assumption that π is an optimal strategy of G we have, therefore, deduced that q†_i(π) ∈ ϕ†_i(λσ^{P_i} · q†_i(π[i ↦ σ])), that is, π is a generalized Nash equilibrium of G†. The converse is false, because optimal strategies of sequential games generalize the classical notion of subgame-perfect equilibrium, which is a stronger condition than classical Nash equilibrium (called an equilibrium refinement in classical game theory; [15]).
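To make the 'play out' step concrete: in a program, the normal-form outcome function simply runs the strategy profile from the empty play and feeds the resulting play to the original outcome function. This is an illustrative sketch under our own representation of strategies as tuples of functions.

```python
def play_out(strategy_profile):
    """Run a strategy profile from the empty play to obtain a play of the game."""
    play = []
    for component in strategy_profile:
        play.append(component(tuple(play)))
    return tuple(play)

def normal_form(q):
    """Lift an outcome function on plays to one on strategy profiles: q†(π) = q(π†)."""
    return lambda profile: q(play_out(profile))

# toy example: outcome weights the second move twice as heavily as the first
q = lambda xs: xs[0] + 2 * xs[1]
pi = (lambda prev: 1, lambda prev: 1 - prev[0])
```

Here the profile pi plays out to (1, 0), so its normal-form outcome is q(1, 0) = 1.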

Two-player games and minimax strategies
In this section, the abstract notion of a ψ-ϕ strategy is defined, and used to show an intriguing connection between generalized simultaneous games and proof theory. The reason for this terminology is that a ψ-ϕ strategy corresponds to a minimax strategy, that is, a strategy in which each player plays so as to minimize their maximum loss, in a two-player game with ϕ = max and ψ = min. Note, however, that when modelling a classical game as a generalized game, all the quantifiers are max, and so ψ-ϕ strategies are distinct in this sense from minimax strategies. Definition 7.1 (ψ-ϕ strategy). Let G be a two-player game with outcome functions q_X ∈ X × Y → R_X, q_Y ∈ X × Y → R_Y and quantifiers ϕ ∈ S_{R_X} X, ψ ∈ S_{R_Y} Y. A strategy a ∈ X is called a ψ-ϕ strategy for player 1 iff q_X(a, f(a)) ∈ ϕ(λx^X · q_X(x, f(x))) for all f ∈ X → Y with the property that, for all x ∈ X, q_Y(x, f(x)) ∈ ψ(λy^Y · q_Y(x, y)).
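For finite move sets and single-valued quantifiers, the condition in the definition can be checked by brute force: enumerate every response function f that is pointwise ψ-optimal for player 2, and test whether a is ϕ-optimal against each such f. The sketch below fixes ϕ = max and ψ = min; the representation and function names are ours.

```python
from itertools import product

def is_psi_phi_strategy(a, X, Y, qX, qY):
    """Check the ψ-ϕ strategy condition with ϕ = max, ψ = min over finite X, Y."""
    for values in product(Y, repeat=len(X)):
        f = dict(zip(X, values))
        # keep only responses f that are pointwise min-optimal for player 2
        if all(qY(x, f[x]) == min(qY(x, y) for y in Y) for x in X):
            # a must attain the max of x ↦ qX(x, f(x)) against each such f
            if qX(a, f[a]) != max(qX(x, f[x]) for x in X):
                return False
    return True

X, Y = [0, 1], [0, 1]
q = lambda x, y: [[3, 1], [2, 4]][x][y]   # zero-sum: qX = qY = q, player 2 minimizes
```

In this zero-sum example the unique min-optimal response is f(0) = 1, f(1) = 0, against which x = 1 attains the maximum, so a = 1 is the ψ-ϕ strategy.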
Notice that the type of the product ⊗̄ derived from the Berardi–Bezem–Coquand functional is the same as the type of ⊗. Moreover, when infinitely iterated (again, in certain non-classical foundations), both provide proof interpretations (in the modified realizability interpretation of Heyting arithmetic) of the axiom of countable choice [16], and a certain equivalence, namely interdefinability over system T, is shown in [17]. However, the relationship between ⊗ and ⊗̄ is not well understood, and only ⊗ has previously been linked to game theory.

Theorem 7.2. Let G be a two-player game with single outcome space R, outcome function q ∈ X × Y → R, and total single-valued quantifiers ϕ, ψ attained by selection functions ε ∈ J_R X, δ ∈ J_R Y. Then the strategy profile (ε ⊗̄ δ)(q) ∈ X × Y is a ψ-ϕ strategy profile.
The first component of the strategy profile is a = ε(λx^X · q(x, b_x)), where b_x = δ(λy^Y · q(x, y)).
In particular, two-player zero-sum classical games are of this form. In a zero-sum game, the outcome is a single real number given by an outcome function q ∈ X × Y → R, which the first player tries to maximize and the second tries to minimize. The strategy profile given by (arg max ⊗̄ arg min)(q) is a minimax strategy profile: each player's component is chosen to minimize their maximum loss.
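A symmetric product with the same type as ⊗, in which each component optimizes against the other player's pointwise best reply, can be sketched as follows. We stress that this is our own rendering of such a product, consistent with the first component given above, and not necessarily the exact functional studied in [16,17]. With ε = arg max and δ = arg min it computes precisely the maximin/minimax pair.

```python
def symmetric_product(eps, delta):
    """(eps ⊗̄ delta)(q) = (a, b): each component optimizes against the
    other player's pointwise best reply (a sketch of a BBC-style product)."""
    def product(q):
        best_y = lambda x: delta(lambda y: q(x, y))  # player 2's reply to x
        best_x = lambda y: eps(lambda x: q(x, y))    # player 1's reply to y
        a = eps(lambda x: q(x, best_y(x)))           # maximin component for player 1
        b = delta(lambda y: q(best_x(y), y))         # minimax component for player 2
        return (a, b)
    return product

X, Y = [0, 1], [0, 1]
argmax = lambda p: max(X, key=p)
argmin = lambda p: min(Y, key=p)
q = lambda x, y: [[3, 1], [2, 4]][x][y]
profile = symmetric_product(argmax, argmin)(q)
```

Here player 1's component maximizes min_y q(x, y) and player 2's minimizes max_x q(x, y): each minimizes their maximum loss independently of the other's actual choice.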