Continuous game


A continuous game is a mathematical concept, used in game theory, that generalizes the notion of a discrete game, in which the players choose from a finite set of pure strategies. Continuous games allow more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. For this reason, the class of continuous games is usually defined and studied as the subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

Formal definition

Define the n-player continuous game $G = (P, C, U)$ where

- $P = \{1, 2, \ldots, n\}$ is the set of players,
- $C = C_1 \times C_2 \times \cdots \times C_n$, where each $C_i$ is the set of pure strategies of player $i$,
- $U = (u_1, u_2, \ldots, u_n)$, where $u_i : C \to \mathbb{R}$ is the utility function of player $i$.

We define $\Delta_i$ to be the set of Borel probability measures on $C_i$, giving us the mixed strategy space of player $i$. Define the strategy profile $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_n)$ where $\sigma_i \in \Delta_i$.

Let $\sigma_{-i}$ be a strategy profile of all players except for player $i$. As with discrete games, we can define a best response correspondence for player $i$, $b_i$. $b_i$ is a relation from the set of all probability distributions over opponent player profiles to a set of player $i$'s strategies, such that each element of

$b_i(\sigma_{-i})$

is a best response to $\sigma_{-i}$. Define

$$b(\sigma) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).$$

A strategy profile $\sigma$ is a Nash equilibrium if and only if $\sigma \in b(\sigma)$. The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proven using Irving Glicksberg's generalization of the Kakutani fixed point theorem. In general, there may not be a solution if we allow strategy spaces $C_i$ which are not compact, or if we allow discontinuous utility functions.
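The fixed-point characterization can be illustrated numerically. Below is a minimal sketch (the two utility functions and the grid discretization are invented for illustration, not taken from the text) that finds a pure-strategy fixed point of the joint best-response map by iteration:

```python
import numpy as np

# Invented example game on [0,1] x [0,1] with a pure-strategy equilibrium:
#   u_x(x, y) = -(x - y)**2    (X wants to match Y)
#   u_y(x, y) = -(y - 0.5)**2  (Y wants to play 0.5, regardless of x)
grid = np.linspace(0.0, 1.0, 101)  # discretized strategy sets

def best_response_x(y):
    return grid[np.argmax(-(grid - y) ** 2)]

def best_response_y(x):
    # Y's utility does not depend on x in this example.
    return grid[np.argmax(-(grid - 0.5) ** 2)]

# A profile is a Nash equilibrium iff it is a fixed point of the joint
# best-response map b; here simple best-response iteration converges to it.
x, y = 0.0, 0.0
for _ in range(10):
    x, y = best_response_x(y), best_response_y(x)

assert abs(x - 0.5) < 1e-9 and abs(y - 0.5) < 1e-9  # sigma in b(sigma)
```

Best-response iteration is not guaranteed to converge in general; it does here only because this invented game is particularly well behaved.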

Separable games

A separable game is a continuous game where, for any $i$, the utility function $u_i : C \to \mathbb{R}$ can be expressed in the sum-of-products form:

$$u_i(s) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} a_{i,k_1 \cdots k_n} f_{1,k_1}(s_1) \cdots f_{n,k_n}(s_n),$$

where $s \in C$, $s_i \in C_i$, $a_{i,k_1 \cdots k_n} \in \mathbb{R}$, and the functions $f_{i,k} : C_i \to \mathbb{R}$ are continuous.

A polynomial game is a separable game where each $C_i$ is a compact interval on $\mathbb{R}$ and each utility function can be written as a multivariate polynomial.
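For instance (the polynomial and its coefficients below are made up for illustration), a polynomial utility can be stored as a coefficient tensor over the monomial basis $f_{i,k}(s_i) = s_i^k$ and evaluated in the sum-of-products form:

```python
import numpy as np

# Hypothetical polynomial utility for one player on [0,1] x [0,1]:
#   u(x, y) = 3 - 2*x*y + x**2 * y
# In separable form, coefficient a[k1, k2] multiplies x**k1 * y**k2.
a = np.zeros((3, 2))
a[0, 0] = 3.0   # constant term
a[1, 1] = -2.0  # -2*x*y
a[2, 1] = 1.0   # x**2 * y

def u_separable(x, y):
    """Evaluate u via the sum-of-products (tensor) expansion."""
    fx = np.array([x**k for k in range(a.shape[0])])  # f_{1,k}(x) = x**k
    fy = np.array([y**k for k in range(a.shape[1])])  # f_{2,k}(y) = y**k
    return fx @ a @ fy

def u_direct(x, y):
    return 3 - 2 * x * y + x**2 * y

# The two evaluations agree at arbitrary strategy profiles.
for x, y in [(0.0, 0.0), (0.5, 0.25), (1.0, 1.0)]:
    assert abs(u_separable(x, y) - u_direct(x, y)) < 1e-12
```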

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

For any separable game there exists at least one Nash equilibrium where player $i$ mixes at most $m_i + 1$ pure strategies.

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.

A polynomial game

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$$H(x, y) = (x - y)^2.$$

The pure strategy best response relations are:

$$b_X(y) = \begin{cases} 1, & \text{if } y \in [0, 1/2) \\ 0 \text{ or } 1, & \text{if } y = 1/2 \\ 0, & \text{if } y \in (1/2, 1] \end{cases} \qquad b_Y(x) = x$$

$b_X(y)$ and $b_Y(x)$ do not intersect, so there is no pure strategy Nash equilibrium. However, there should be a mixed strategy equilibrium. To find it, express the expected value $v = \mathbb{E}[H(x, y)]$ as a linear combination of the first and second moments of the probability distributions of X and Y:

$$v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$$

(where $\mu_{XN} = \mathbb{E}[x^N]$, and similarly for Y).
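As a sanity check of the moment expansion (the two distributions below are arbitrary choices for illustration), a Monte Carlo estimate of $\mathbb{E}[(x-y)^2]$ should agree with the moment formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative mixed strategies on [0, 1]
x = rng.uniform(0.0, 1.0, 200_000)   # X ~ Uniform[0, 1]
y = rng.beta(2.0, 5.0, 200_000)      # Y ~ Beta(2, 5)

# Direct Monte Carlo estimate of v = E[(x - y)^2]
v_mc = np.mean((x - y) ** 2)

# Moment formula: v = mu_X2 - 2*mu_X1*mu_Y1 + mu_Y2
mu_x1, mu_x2 = x.mean(), (x**2).mean()
mu_y1, mu_y2 = y.mean(), (y**2).mean()
v_moments = mu_x2 - 2 * mu_x1 * mu_y1 + mu_y2

# They agree up to Monte Carlo sampling error.
assert abs(v_mc - v_moments) < 0.01
```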

The constraints on $\mu_{X1}$ and $\mu_{X2}$ (with similar constraints for Y) are given by Hausdorff as:

$$\mu_{X1} \geq \mu_{X2}, \quad \mu_{X1}^2 \leq \mu_{X2} \qquad \mu_{Y1} \geq \mu_{Y2}, \quad \mu_{Y1}^2 \leq \mu_{Y2}$$

Each pair of constraints defines a compact convex subset in the plane. Since $v$ is linear, any extremum with respect to a player's first two moments will lie on the boundary of this subset. Player $i$'s equilibrium strategy will lie on

$$\mu_{i1} = \mu_{i2} \quad \text{or} \quad \mu_{i1}^2 = \mu_{i2}$$

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if player $i$'s best response at some point lies on $\mu_{i1} = \mu_{i2}$, it lies on the whole line, so that both 0 and 1 are best responses. $b_Y(\mu_{X1}, \mu_{X2})$ simply gives the pure strategy $y = \mu_{X1}$, so $b_Y$ will never yield both 0 and 1. However, $b_X$ gives both 0 and 1 when $y = 1/2$. A Nash equilibrium exists when:

$$(\mu_{X1}, \mu_{X2}, \mu_{Y1}, \mu_{Y2}) = (1/2, 1/2, 1/2, 1/4)$$

This determines the unique equilibrium, in which player X plays 0 half of the time and 1 the other half of the time, while player Y plays the pure strategy 1/2. The value of the game is 1/4.
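This equilibrium can be verified directly. The sketch below checks the game value and that neither player has a profitable pure-strategy deviation:

```python
import numpy as np

H = lambda x, y: (x - y) ** 2  # payoff to the maximizer X

# Candidate equilibrium: X mixes 0 and 1 with probability 1/2 each;
# Y plays the pure strategy y = 1/2.
px = {0.0: 0.5, 1.0: 0.5}
y_star = 0.5

value = sum(p * H(x, y_star) for x, p in px.items())
assert abs(value - 0.25) < 1e-12  # the value of the game is 1/4

# No pure deviation helps Y: against X's mixture, Y's expected loss is
# 0.5*(y**2 + (1-y)**2), which is minimized at y = 1/2 with value 1/4.
ys = np.linspace(0.0, 1.0, 101)
losses = 0.5 * (ys**2 + (1 - ys) ** 2)
assert losses.min() >= 0.25 - 1e-12

# No pure deviation helps X: against y = 1/2, (x - 1/2)**2 is maximized
# at the endpoints x = 0 and x = 1, where it equals 1/4.
xs = np.linspace(0.0, 1.0, 101)
assert (H(xs, y_star) <= 0.25 + 1e-12).all()
```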

A rational pay-off function

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$$H(x, y) = \frac{(1 + x)(1 + y)(1 - xy)}{(1 + xy)^2}.$$

This game has no pure strategy Nash equilibrium. It can be shown that a unique mixed strategy Nash equilibrium exists with the following pair of probability density functions:

$$f(x) = \frac{2}{\pi \sqrt{x}\,(1 + x)} \qquad g(y) = \frac{2}{\pi \sqrt{y}\,(1 + y)}.$$

The value of the game is 4 / π .
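This value can be checked numerically. In the sketch below, the substitution $x = u^2$, $y = v^2$ (my own device, to remove the $1/\sqrt{x}$ endpoint singularity of the densities) makes a plain midpoint rule accurate:

```python
import numpy as np

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

# Equilibrium density f(x) = 2 / (pi * sqrt(x) * (1 + x)). Substituting
# x = u**2 turns f(x) dx into 4 / (pi * (1 + u**2)) du, which is smooth
# on [0, 1] and integrates to exactly 1 there.
n = 2000
u = (np.arange(n) + 0.5) / n                  # midpoint nodes on [0, 1]
w = 4.0 / (np.pi * (1.0 + u**2)) / n          # density * du

U, V = np.meshgrid(u, u, indexing="ij")
value = np.einsum("i,ij,j->", w, H(U**2, V**2), w)

assert abs(value - 4.0 / np.pi) < 1e-3        # value of the game is 4/pi
```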

Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$$H(x, y) = \sum_{n=0}^{\infty} \frac{1}{2^n} \left( 2x^n - \left( \left(1 - \frac{x}{3}\right)^n + \left(\frac{x}{3}\right)^n \right) \right) \left( 2y^n - \left( \left(1 - \frac{y}{3}\right)^n + \left(\frac{y}{3}\right)^n \right) \right).$$

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function.
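The equilibrium property rests on a moment identity that follows from the Cantor distribution's self-similarity: $2\,\mathbb{E}[X^n] = \mathbb{E}[(X/3)^n] + \mathbb{E}[(1 - X/3)^n]$, so each factor of $H$ has zero expectation under the Cantor distribution. The sketch below (the truncation scheme is mine) checks this by enumerating truncated ternary expansions:

```python
import itertools
import numpy as np

# Approximate the Cantor distribution by truncating the ternary expansion:
# X = sum_k d_k / 3**k with digits d_k drawn uniformly from {0, 2}.
K = 12
digits = np.array(list(itertools.product([0, 2], repeat=K)))  # 4096 equally likely points
powers = 3.0 ** -np.arange(1, K + 1)
xs = digits @ powers                                          # truncation error < 3**-K

# Check E[2*X**n - (1 - X/3)**n - (X/3)**n] = 0 for small n. This holds
# because X/3 and 1 - X/3 each arise from X with probability 1/2 under
# the Cantor distribution's self-similar construction.
for n in range(8):
    m = np.mean(2 * xs**n - (1 - xs / 3) ** n - (xs / 3) ** n)
    assert abs(m) < 1e-3
```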
