Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. The inequality is named after the Russian mathematician Andrey Kolmogorov.

Statement of the inequality

Let $X_1, \dots, X_n : \Omega \to \mathbf{R}$ be independent random variables defined on a common probability space $(\Omega, \mathcal{F}, \Pr)$, with expected value $\mathbb{E}[X_k] = 0$ and variance $\operatorname{Var}[X_k] < +\infty$ for $k = 1, \dots, n$. Then, for each $\lambda > 0$,

$$\Pr\!\left( \max_{1 \le k \le n} |S_k| \ge \lambda \right) \le \frac{1}{\lambda^2} \operatorname{Var}[S_n] = \frac{1}{\lambda^2} \sum_{k=1}^{n} \operatorname{Var}[X_k],$$

where $S_k = X_1 + \cdots + X_k$.

The usefulness of this result is that it bounds the worst-case deviation of the random walk at every intermediate time in terms of the variance of its value at the end of the time interval.
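
As a quick illustration, the following sketch (not part of the original statement) estimates both sides of the inequality by simulation; the step distribution, horizon $n$, level $\lambda$, and number of trials are arbitrary choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, trials = 20, 4.0, 100_000            # illustrative parameters

# X_k i.i.d. uniform on [-1, 1]: mean zero, Var[X_k] = 1/3
X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = np.cumsum(X, axis=1)                     # partial sums S_1, ..., S_n per trial

lhs = np.mean(np.abs(S).max(axis=1) >= lam)  # estimate of Pr(max_k |S_k| >= lam)
rhs = (n / 3.0) / lam**2                     # (1/lam^2) * sum_k Var[X_k]

print(f"estimated Pr(max_k |S_k| >= {lam}) = {lhs:.4f}")
print(f"Kolmogorov bound                   = {rhs:.4f}")
```

For these parameters the bound is roughly $0.42$, and the estimated probability should come out below it, as the inequality guarantees.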

Proof

The following argument is due to Kareem Amin and employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence $S_1, S_2, \dots, S_n$ is a martingale. Without loss of generality, we can assume that $S_0 = 0$ and $S_i \ge 0$ for all $i$. Define $(Z_i)_{i=0}^{n}$ as follows. Let $Z_0 = 0$, and

$$Z_{i+1} = \begin{cases} S_{i+1} & \text{if } \max\limits_{1 \le j \le i} S_j < \lambda \\ Z_i & \text{otherwise} \end{cases}$$

for all $i$. Then $(Z_i)_{i=0}^{n}$ is also a martingale.
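
To make the construction of $(Z_i)_{i=0}^{n}$ concrete, here is a small sketch that builds the stopped sequence from a single sample path and checks that the events $\{\max_{1 \le i \le n} S_i \ge \lambda\}$ and $\{Z_n \ge \lambda\}$ coincide; the helper name `stopped_sequence` and the chosen step distribution are assumptions made only for the illustration.

```python
import numpy as np

def stopped_sequence(S, lam):
    """Z_0 = 0; Z_{i+1} = S_{i+1} while max_{1<=j<=i} S_j < lam, frozen afterwards."""
    Z = np.zeros(len(S) + 1)
    running_max = -np.inf                    # maximum over an empty set of indices
    for i, s in enumerate(S):
        Z[i + 1] = s if running_max < lam else Z[i]
        running_max = max(running_max, s)
    return Z

rng = np.random.default_rng(1)
S = np.cumsum(rng.uniform(-1.0, 1.0, size=20))   # one sample path S_1, ..., S_n
lam = 1.5
Z = stopped_sequence(S, lam)

# On every path the level-crossing events coincide: {max_i S_i >= lam} = {Z_n >= lam}
assert (S.max() >= lam) == (Z[-1] >= lam)
```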

Since $S_i - S_{i-1} = X_i$ is independent of $S_{i-1}$ and has mean zero,

$$
\begin{aligned}
\sum_{i=1}^{n} \mathbb{E}\big[(S_i - S_{i-1})^2\big]
&= \sum_{i=1}^{n} \mathbb{E}\big[S_i^2 - 2 S_i S_{i-1} + S_{i-1}^2\big] \\
&= \sum_{i=1}^{n} \mathbb{E}\big[S_i^2 - 2\,(S_{i-1} + S_i - S_{i-1})\,S_{i-1} + S_{i-1}^2\big] \\
&= \sum_{i=1}^{n} \Big( \mathbb{E}\big[S_i^2 - S_{i-1}^2\big] - 2\,\mathbb{E}\big[S_{i-1}(S_i - S_{i-1})\big] \Big) \\
&= \mathbb{E}[S_n^2] - \mathbb{E}[S_0^2] = \mathbb{E}[S_n^2],
\end{aligned}
$$

because each cross term $\mathbb{E}[S_{i-1}(S_i - S_{i-1})]$ vanishes and the remaining sum telescopes.

The same is true for $(Z_i)_{i=0}^{n}$. Thus

$$
\begin{aligned}
\Pr\!\left( \max_{1 \le i \le n} S_i \ge \lambda \right)
&= \Pr\big[ Z_n \ge \lambda \big] \\
&\le \frac{1}{\lambda^2} \mathbb{E}\big[Z_n^2\big]
 = \frac{1}{\lambda^2} \sum_{i=1}^{n} \mathbb{E}\big[(Z_i - Z_{i-1})^2\big] \\
&\le \frac{1}{\lambda^2} \sum_{i=1}^{n} \mathbb{E}\big[(S_i - S_{i-1})^2\big]
 = \frac{1}{\lambda^2} \mathbb{E}\big[S_n^2\big]
 = \frac{1}{\lambda^2} \operatorname{Var}[S_n]
\end{aligned}
$$

by Chebyshev's inequality. The second inequality holds because each increment $Z_i - Z_{i-1}$ equals either $S_i - S_{i-1}$ or $0$, depending on whether the level $\lambda$ has already been reached.
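
The chain of equalities and inequalities above can also be checked numerically. The following sketch (with illustrative parameters, not taken from the article) estimates each quantity by simulation, so that $\Pr[Z_n \ge \lambda] \le \mathbb{E}[Z_n^2]/\lambda^2 \le \mathbb{E}[S_n^2]/\lambda^2$ can be observed directly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, trials = 20, 4.0, 50_000              # illustrative parameters

X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = np.cumsum(X, axis=1)

# Z_n per path: S at the first index where the level lam is reached, else S_n
hit = S >= lam
first = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
Zn = S[np.arange(trials), first]

print("Pr(max_i S_i >= lam) ~", np.mean(S.max(axis=1) >= lam))
print("Pr(Z_n >= lam)       ~", np.mean(Zn >= lam))             # same event
print("E[Z_n^2] / lam^2     ~", np.mean(Zn**2) / lam**2)        # Chebyshev bound on Z_n
print("E[S_n^2] / lam^2     ~", np.mean(S[:, -1]**2) / lam**2)  # = Var[S_n] / lam^2
```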

This inequality was generalized by Hájek and Rényi in 1955.
