# Generalized Nash Equilibrium

## Introduction

This post is about the notion of generalized Nash equilibrium, a somewhat controversial extension of the usual equilibrium concept in game theory that nevertheless leads to interesting algorithmic questions. It was introduced, under the name "abstract economy", by Arrow and Debreu [1] in order to describe abstract market equilibrium problems. A limitation of standard Nash equilibrium is the assumption that the possible strategies of a given player do not depend on the strategies of the other players (although their utility does, of course). This may seem like a minor problem, since we can assign very low utilities to strategy choices that are "impossible". The trouble is that, by doing so, we may create artificial Nash equilibria, where players are stuck with these impossible choices and cannot improve. Generalized Nash equilibrium sidesteps this problem by allowing the set of possible responses of a player to depend on the strategies of the other players.

A strong argument against this kind of extension is that we give up one of the fundamental assumptions in game theory - that players may choose their strategies independently of each other. Indeed, if we consider a game to be a model of a real-world situation where choices are made simultaneously, without knowing the choices of the other players, then, clearly, the set of possible choices of a player should not depend on choices made by others. So how can we justify that generalized Nash equilibrium makes sense at all?

I think the best argument is that the ability of Nash equilibrium to model and predict behaviour in real-world games is questionable anyway. The reason is that equilibria can be quite unstable; this can be illustrated, for example, by a modified game of Matching Pennies due to Aumann and Maschler. In the standard matching pennies game, both players choose heads or tails; the first player gets both pennies if the choices match, otherwise the second player wins both of them. It is easy to see that the unique Nash equilibrium is when both players just flip a fair coin, i.e. choose heads or tails with equal probability. Now we modify the game in the following way: the money transfer between the players remains the same, but the first player wins a huge additional prize from the bank if both players choose heads. It is useful to pause for a moment and think about how you would play this game as the first or second player.

Interestingly, the Nash equilibrium of the modified game still requires the first player to choose heads and tails with equal probability. This can be seen by observing that if the first player chooses heads with probability more than ½, then the unique best response of the second player is to choose tails; but against pure tails the first player would prefer tails as well, contradicting the assumption (an analogous argument rules out probability less than ½). The second player, however, chooses tails with much higher probability than heads in the Nash equilibrium. A simple argument shows that this is not a stable outcome if the game is played repeatedly. Indeed, suppose that the two players play according to the Nash equilibrium in a repeated game. Since player 1 chooses uniformly, player 2 has expected utility 0 for every strategy. What happens if player 2 slightly increases the probability of heads? First, there is no risk in this adjustment, since player 1 is not motivated to respond by increasing the probability of tails. Second, player 1 would realize after a while that he can increase his utility by choosing heads more frequently. This would also be beneficial for player 2, who still chooses tails with high probability. Thus, a rational player 2 would deviate from the Nash equilibrium, hoping that this move would convince player 1 to deviate, too.
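The equilibrium probabilities follow from the usual indifference conditions; here is a small sketch (my own illustration, with the pennies worth ±1 and a hypothetical bonus of 10) that computes both players' heads-probabilities exactly:

```python
from fractions import Fraction

def modified_matching_pennies(bonus):
    """Mixed Nash equilibrium of matching pennies where player 1 wins an
    extra `bonus` from the bank on (heads, heads); the +-1 penny transfer
    between the players is unchanged.  Returns the heads-probabilities
    (p for player 1, q for player 2)."""
    # Player 2's payoff depends only on p: u2(H) = 1 - 2p, u2(T) = 2p - 1,
    # so indifference forces p = 1/2 regardless of the bonus.
    p = Fraction(1, 2)
    # Player 1 must be indifferent given q:
    #   u1(H) = q*(1 + bonus) - (1 - q)   and   u1(T) = (1 - q) - q.
    # Equating the two gives q = 2 / (bonus + 4).
    q = Fraction(2, bonus + 4)
    return p, q

p, q = modified_matching_pennies(10)
print(p, q)  # 1/2 1/7 -- player 2 heavily favours tails
```

The larger the bonus, the more the second player shifts toward tails, while the first player's equilibrium mix never moves.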

This example, along with many others, suggests that Nash equilibrium is not necessarily a good prediction of actual real-world behaviour. It is still a meaningful equilibrium concept, but only under the assumption that players do not make predictions about other players’ reactions to their strategy changes. In a sense, Nash equilibrium is not about stable outcomes in full-information games, but about the stability of outcomes if players are unsure of other players’ motivations. Furthermore, to investigate this kind of stability, there is no need to know anything about the process that leads to a particular outcome, since we are only interested in possible responses to a given outcome. Generalized Nash equilibrium takes precisely this point of view: there is no need to define a "game mechanism" leading to a feasible outcome, it is enough to define the utility of possible responses.

## Definition and examples

After the lengthy introduction, it is time to give a precise definition of generalized Nash equilibrium. Let us assume that the strategies of player $i$ are modeled by a $d_i$-dimensional vector over the reals, and let $d=\sum_i d_i$. Given a vector $v \in \mathbb{R}^{d}$, $v_i$ denotes the $d_i$-dimensional subvector corresponding to player $i$, while $v_{-i}$ denotes the $(d-d_i)$-dimensional subvector of the other players. For every $i$, there is a set-valued function $F_i: \mathbb{R}^{d-d_i} \to \mathcal{P}(\mathbb{R}^{d_i})$ and a utility function $u_i: \mathbb{R}^{d} \to \mathbb{R}$. The set $F_i(v_{-i})$ is the set of feasible responses of player $i$ to $v_{-i}$. A vector $v$ is feasible for player $i$ if $v_i \in F_i(v_{-i})$, and $v$ is globally feasible if it is feasible for all players. A globally feasible vector $v$ is a generalized Nash equilibrium if for every $i$, $v_i$ is a maximizer of $u_i(x,v_{-i})$ over $x \in F_i(v_{-i})$.
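To make the definition concrete, here is a small numerical sketch (my own toy example, not taken from the post): two players each claim an amount $x_i \in [0,1]$ of a shared resource subject to the joint constraint $x_1 + x_2 \le 1$, so $F_1(x_2) = [0, 1-x_2]$ and $F_2(x_1) = [0, 1-x_1]$, and each player's utility is simply the amount claimed. The function below brute-forces the equilibrium condition player by player:

```python
def is_gne(x1, x2, steps=1000, tol=1e-9):
    """Check the generalized Nash equilibrium condition by brute force:
    each player's strategy must (approximately) maximize their utility
    over their feasible-response set, the other strategy held fixed."""
    if x1 < 0 or x2 < 0 or x1 + x2 > 1 + tol:
        return False                       # not globally feasible
    # Player 1: is x1 a maximizer of u1(x) = x on [0, 1 - x2]?
    best1 = max(k * (1 - x2) / steps for k in range(steps + 1))
    # Player 2: the symmetric check on [0, 1 - x1].
    best2 = max(k * (1 - x1) / steps for k in range(steps + 1))
    return x1 >= best1 - tol and x2 >= best2 - tol

print(is_gne(0.3, 0.7), is_gne(0.3, 0.6))  # True False
```

Every point on the segment $x_1 + x_2 = 1$ passes the check, so this tiny game already has a continuum of generalized Nash equilibria, a phenomenon that is typical for this class of problems.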

There are several things to note about the definition. First, no distinction is made between pure and mixed strategies; instead, it is usually assumed that the sets $F_i(v_{-i})$ are closed and convex, and that $u_i$ is concave. Second, $F_i(v_{-i})$ can be empty, which implies that a generalized Nash equilibrium is not guaranteed to exist, and there is a lot of research focusing on suitably general settings where it does (for a survey, see [2]). Third, there is a very natural and important special case called a constrained game, studied e.g. by Rosen [3]. A constrained game features a compact and convex feasible set $F \subseteq \mathbb{R}^{d}$, and $F_i$ is defined by $F_i(v_{-i})=\{x \in \mathbb{R}^{d_i}: (x,v_{-i})\in F\}$. Rosen proved that if the utilities are continuous and concave in $v_i$ for fixed $v_{-i}$, then an equilibrium always exists. The proof is a simple application of Kakutani’s fixed point theorem. We define a set-valued function $\varphi$ from $F$ to subsets of $F$: for $v \in F$, let $\varphi(v)$ consist of the vectors $y \in F$ that maximize $\sum_i u_i(y_i,v_{-i})$ over $F$. It can be checked that $\varphi$ satisfies the conditions of Kakutani’s theorem, and any fixed point of $\varphi$ is an equilibrium of the constrained game.
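Rosen's proof is non-constructive, but for well-behaved instances an equilibrium of a constrained game can be found by simple iteration. The sketch below is my own example, not from the post: a Cournot-style duopoly with feasible set $F=\{x \ge 0 : x_1+x_2 \le 1\}$ and utilities $u_i(x) = x_i(1 - x_1 - x_2)$, which are concave in the player's own variable, so Rosen's theorem applies. For this particular game, simultaneous best responses converge to the equilibrium $(1/3, 1/3)$, though such iterations are not guaranteed to converge in general:

```python
def best_response(x_other):
    """Best response in the Cournot-style constrained game: maximize
    u_i(x) = x * (1 - x - x_other) over the feasible interval
    [0, 1 - x_other].  The unconstrained maximizer (1 - x_other) / 2
    always lies inside that interval here."""
    return max(0.0, (1.0 - x_other) / 2.0)

# Simultaneous best-response iteration from an arbitrary feasible start.
x1, x2 = 0.9, 0.0
for _ in range(60):
    x1, x2 = best_response(x2), best_response(x1)

print(round(x1, 6), round(x2, 6))  # 0.333333 0.333333
```

Note that the joint capacity constraint never binds at the equilibrium; examples where the coupling constraint is active are exactly where generalized Nash equilibria behave differently from standard ones.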

To illustrate the additional flexibility provided by generalized Nash equilibrium compared to standard games and constrained games, we consider a type of network service game. It is played on a network where every arc has a capacity and a price, and each player owns a given share of each arc. Each player has a separate multicommodity flow problem, and his income depends on the proportion of the demands that he can satisfy. When players use an arc to route their flows, they have to pay the owners of the arc proportionally to their share. For example, if two players own equal shares of an arc of capacity 2, and one of them routes a flow of size 1 on the arc, then he has to pay one fourth of the price of the arc to the other player (formally, he also has to pay one fourth of the price to himself).
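As I read the payment rule, routing a flow of size $f$ on an arc of capacity $c$ and price $p$ costs $p \cdot f / c$ in total, split among the arc's owners in proportion to their shares. A minimal sketch of this rule (the function name and the numbers are my own):

```python
def payments(flow, capacity, price, shares):
    """Split the cost of routing `flow` on an arc among its owners.
    The total cost is price * flow / capacity, and each owner receives
    a part proportional to their ownership share."""
    total = price * flow / capacity
    return [total * s for s in shares]

# Two players own equal shares of an arc of capacity 2 and price 1;
# one of them routes a flow of size 1.  Each owner receives a quarter
# of the price, matching the example in the text.
print(payments(1, 2, 1.0, [0.5, 0.5]))  # [0.25, 0.25]
```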

In a standard game, the strategies of a player are the possible multicommodity flows. This means that for some strategy choices, capacities are exceeded on certain arcs. The natural way to deal with this is to define the income based only on paths that do not contain overused arcs. However, we still have the problem that capacities may be exceeded in a Nash equilibrium.

A more sensible option is to consider the game as a constrained game, where the constraint is that players cannot buy more than the total capacity of each arc. In an equilibrium, individual players cannot increase their profit if they have to respect all arc capacities, given that the other players’ flows are fixed.

This is a reasonable notion of equilibrium, although rather weak; for example, if all arcs are saturated, then equilibrium simply means that players route optimally within the capacities they use. However, using generalized Nash equilibrium, we can define stronger notions of equilibrium. For example, we can say that a solution is feasible for player i if the capacity of an arc is not exceeded by the combined flow of players whose share is at least the share of player i in that arc. This models a situation where players with higher share have higher precedence. In a globally feasible solution, the capacity of each arc is respected because of the feasibility condition for the player who has the smallest share on that arc.
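Restricted to a single arc, this precedence rule is easy to state in code. The sketch below (my own illustration with hypothetical numbers, ignoring the flow structure of the full game) checks feasibility for each player; in this instance the solution is feasible for the two larger shareholders but not for the smallest one, so it is not globally feasible, and indeed the total flow exceeds the capacity:

```python
def feasible_for(i, flows, shares, capacity):
    """Share-based precedence rule on a single arc: player i's solution
    is feasible if the arc's capacity is not exceeded by the combined
    flow of the players whose share is at least player i's share."""
    load = sum(f for f, s in zip(flows, shares) if s >= shares[i])
    return load <= capacity

flows = [0.6, 0.5, 0.2]    # hypothetical flows of three players on one arc
shares = [0.5, 0.3, 0.2]   # their ownership shares of the arc
capacity = 1.2

print([feasible_for(i, flows, shares, capacity) for i in range(3)])
# [True, True, False]
```

As the text notes, feasibility for the smallest shareholder already forces the capacity constraint, so a globally feasible solution never overloads the arc.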

We can also model the situation where precedence on an arc is given to players who do not own the arc. This is a natural requirement in service-providing games, where a service provider cannot deny service to a customer. To model this, we can say that a solution is feasible for player $i$ if the capacities are respected on every arc owned by him.

In contrast to constrained games, these stronger equilibria are not guaranteed to exist, so the question of their existence leads to interesting decision and search problems. We can also ask how a particular game can be changed in order to guarantee stronger equilibria. In the above network problem, for example, it can be shown using LP techniques that arc prices can always be modified in such a way that every socially optimal routing satisfies all of the equilibrium concepts described above [4][5].

## References

1. K.J. Arrow, G. Debreu, Existence of an equilibrium for a competitive economy, Econometrica 22 (1954), 265-290, DOI link
2. F. Facchinei, C. Kanzow, Generalized Nash equilibrium problems, 4OR Volume 5, Issue 3 (2007), 173-210, DOI link, author link
3. J.B. Rosen, Existence and uniqueness of equilibrium points for concave n-person games, Econometrica 33 (1965), 520-534, DOI link, pdf link
4. R. Agarwal, Ö. Ergun, Mechanism design for a multicommodity flow game in service network alliances, Operations Research Letters 36 (2008), 520-524, DOI link, author link
5. T. Király, J. Pap, Complexity of equilibria in linear service-providing games, EGRES Technical Report no. 2012-18