Concentration inequality

In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value (typically, its expected value). The laws of large numbers of classical probability theory state that sums of independent random variables are, under mild conditions, close to their expectation with high probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables.

Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.

Markov's inequality

Markov's inequality requires only the following information on a random variable X:

  • X takes only non-negative values.
  • Its expected value E[X] is bounded.

Then, for every constant a > 0:

Pr(X ≥ a) ≤ E[X] / a.

Markov's inequality extends to a strictly increasing and non-negative function Φ:

Pr(X ≥ a) = Pr(Φ(X) ≥ Φ(a)) ≤ E[Φ(X)] / Φ(a).
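As a quick numerical sanity check, Markov's bound can be verified on simulated non-negative data. The sketch below is illustrative only: the exponential distribution (mean 2) and the threshold a = 6 are assumptions made for the example, not taken from the text.

```python
import random

random.seed(0)

# Draw non-negative samples: exponential distribution with mean 1/0.5 = 2
# (an illustrative choice; Markov only needs X >= 0 and a finite mean).
samples = [random.expovariate(0.5) for _ in range(100_000)]
mean = sum(samples) / len(samples)

a = 6.0
# Empirical tail probability Pr(X >= a).
tail = sum(1 for x in samples if x >= a) / len(samples)

# Markov's bound: Pr(X >= a) <= E[X] / a.
assert tail <= mean / a
print(f"Pr(X >= {a}) ~ {tail:.4f} <= E[X]/a = {mean / a:.4f}")
```

Note how loose the bound is here: the true tail is about e^{-3} ≈ 0.05, while the Markov bound is roughly 1/3, which is typical when only the mean is used.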

Chebyshev's inequality

Chebyshev's inequality requires the following information on a random variable X:

  • The expected value E[X] is bounded.
  • The variance Var[X] = E[(X − E[X])²] is bounded.

Then, for every constant a > 0:

Pr(|X − E[X]| ≥ a) ≤ Var[X] / a²,

or equivalently:

Pr(|X − E[X]| ≥ a·Std[X]) ≤ 1 / a².

Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable |X − E[X]| with Φ(x) = x².
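The same kind of simulation checks Chebyshev's bound. This is a hypothetical sketch: the standard normal distribution and the deviation a = 2 are assumptions chosen for illustration.

```python
import random
import statistics

random.seed(1)

# Simulated data: standard normal draws (an illustrative choice).
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mu = statistics.fmean(samples)
var = statistics.pvariance(samples, mu)

a = 2.0
# Empirical probability of deviating from the mean by at least a.
dev = sum(1 for x in samples if abs(x - mu) >= a) / len(samples)

# Chebyshev's bound: Pr(|X - E[X]| >= a) <= Var[X] / a^2.
assert dev <= var / a**2
print(f"Pr(|X - mu| >= {a}) ~ {dev:.4f} <= Var/a^2 = {var / a**2:.4f}")
```

For a normal variable the true two-sided tail at two standard deviations is about 0.046, while Chebyshev gives 0.25: knowing the variance tightens Markov, but the bound is still distribution-free and therefore loose.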

Chernoff bounds

The generic Chernoff bound requires only the moment generating function of X, defined as M_X(t) := E[e^{tX}]. Based on Markov's inequality, for every t > 0:

Pr(X ≥ a) ≤ E[e^{tX}] / e^{ta},

and for every t < 0:

Pr(X ≤ a) ≤ E[e^{tX}] / e^{ta}.

There are various Chernoff bounds for different distributions and different values of the parameter t.
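Since the generic bound holds for every t > 0, it can be optimized over t. A hypothetical sketch: for a standard normal the MGF is E[e^{tX}] = e^{t²/2} (a known fact), so the bound e^{t²/2 − ta} is minimized at t = a; the grid of t values below is an assumption made for the example.

```python
import math
import random

random.seed(2)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

a = 2.0
# Empirical tail probability Pr(X >= a).
tail = sum(1 for x in samples if x >= a) / len(samples)

# Generic Chernoff bound Pr(X >= a) <= E[e^{tX}] / e^{ta}, valid for every
# t > 0; for a standard normal the MGF is E[e^{tX}] = e^{t^2/2}.
bounds = {t: math.exp(t * t / 2 - t * a) for t in (0.5, 1.0, 1.5, 2.0, 2.5)}
best_t = min(bounds, key=bounds.get)

assert tail <= bounds[best_t]
print(f"best t = {best_t}, bound = {bounds[best_t]:.4f}, tail ~ {tail:.4f}")
```

The minimum over the grid lands at t = a = 2, giving e^{−a²/2} ≈ 0.135, against a true tail of about 0.023; optimizing over t is what makes Chernoff bounds exponentially small.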

Bounds on sums of independent variables

Let X_1, …, X_n be independent random variables such that, for all i:

a_i ≤ X_i ≤ b_i almost surely,
c_i := b_i − a_i,
and for all i: c_i ≤ C.

Let S_n be their sum, E_n its expected value and V_n its variance:

S_n := ∑_{i=1}^n X_i,
E_n := E[S_n] = ∑_{i=1}^n E[X_i],
V_n := Var[S_n] = ∑_{i=1}^n Var[X_i].

It is often interesting to bound the difference between the sum and its expected value. Several inequalities can be used.

1. Hoeffding's inequality says that:

Pr(|S_n − E_n| > t) ≤ 2 exp(−2t² / ∑_{i=1}^n c_i²).

2. The random variable S_n − E_n is a special case of a martingale, with S_0 − E_0 = 0. Hence, Azuma's inequality can also be used, and it yields a similar bound:

Pr(|S_n − E_n| > t) ≤ 2 exp(−t² / (2 ∑_{i=1}^n c_i²)).

This is a generalization of Hoeffding's, since it can handle other types of martingales, as well as supermartingales and submartingales.

3. The sum function, S_n = f(X_1, …, X_n), is a special case of a function of n variables. This function changes in a bounded way: if variable i is changed, the value of f changes by at most b_i − a_i = c_i ≤ C. Hence, McDiarmid's inequality can also be used, and it yields a similar bound:

Pr(|S_n − E_n| > t) ≤ 2 exp(−2t² / ∑_{i=1}^n c_i²).

This is a different generalization of Hoeffding's, since it can handle other functions besides the sum function, as long as they change in a bounded way.

4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds C. It says that:

Pr(S_n − E_n > t) ≤ exp(−(V_n / C²) · h(Ct / V_n)), where h(u) = (1 + u) log(1 + u) − u.

5. The first of Bernstein's inequalities says that:

Pr(|S_n − E_n| > t) ≤ 2 exp(−t² / (2V_n + (2/3)·Ct)).

This is a generalization of Hoeffding's, since it can handle not only independent variables but also weakly-dependent variables.

6. Chernoff bounds have a particularly simple form in the case of a sum of independent variables, since E[e^{tS_n}] = ∏_{i=1}^n E[e^{tX_i}].

For example, suppose the variables X_i satisfy X_i ≥ E[X_i] − a_i − M for 1 ≤ i ≤ n. Then we have the lower tail inequality:

Pr(S_n − E_n < −λ) ≤ exp(−λ² / (2(V_n + ∑_{i=1}^n a_i² + Mλ/3))).

If X_i satisfies X_i ≤ E[X_i] + a_i + M, we have the upper tail inequality:

Pr(S_n − E_n > λ) ≤ exp(−λ² / (2(V_n + ∑_{i=1}^n a_i² + Mλ/3))).

If the X_i are i.i.d., |X_i| ≤ 1 and σ² is the variance of X_i, a typical version of the Chernoff inequality is:

Pr(|S_n − E_n| ≥ kσ) ≤ 2 e^{−k²/4n}, for 0 ≤ k ≤ 2σn.

7. Similar bounds can be found in the article on the Rademacher distribution (bounds on sums).
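Hoeffding's bound for sums is easy to check by simulation. A hypothetical sketch: the Bernoulli(1/2) summands (so each c_i = 1) and the parameters n and t below are illustrative assumptions.

```python
import math
import random

random.seed(3)

# n independent Bernoulli(1/2) summands, each bounded in [0, 1], so c_i = 1.
n = 2_500
t = 75.0
trials = 1_000

exceed = 0
for _ in range(trials):
    s = sum(random.random() < 0.5 for _ in range(n))  # one sum S_n
    if abs(s - n / 2) > t:  # E_n = n/2 for fair coins
        exceed += 1

empirical = exceed / trials
# Hoeffding: Pr(|S_n - E_n| > t) <= 2 exp(-2 t^2 / sum c_i^2).
hoeffding = 2 * math.exp(-2 * t * t / n)
assert empirical <= hoeffding
print(f"empirical {empirical:.4f} <= Hoeffding bound {hoeffding:.4f}")
```

Here t = 75 is three standard deviations of S_n, so the true exceedance probability is near 0.003, comfortably under the Hoeffding bound of about 0.022.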

Efron–Stein inequality

The Efron–Stein inequality (or influence inequality, or MG bound on variance) bounds the variance of a general function.

Suppose that X_1, …, X_n and X_1′, …, X_n′ are independent, with X_i′ and X_i having the same distribution for all i.

Let X = (X_1, …, X_n) and X^(i) = (X_1, …, X_{i−1}, X_i′, X_{i+1}, …, X_n). Then

Var(f(X)) ≤ (1/2) ∑_{i=1}^n E[(f(X) − f(X^(i)))²].
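Both sides of the inequality can be estimated by Monte Carlo. A hypothetical sketch: the choice f = max of n uniform coordinates is an assumption made for illustration, picked because the maximum is a standard example of a function with small influences.

```python
import random
import statistics

random.seed(4)
n = 8
trials = 20_000

def f(xs):
    # The function whose variance we bound: the maximum coordinate.
    return max(xs)

values = []    # samples of f(X)
es_terms = []  # samples of (1/2) * sum_i (f(X) - f(X^(i)))^2
for _ in range(trials):
    x = [random.random() for _ in range(n)]
    x_prime = [random.random() for _ in range(n)]  # independent copies X_i'
    fx = f(x)
    values.append(fx)
    total = 0.0
    for i in range(n):
        xi = x.copy()
        xi[i] = x_prime[i]  # X^(i): coordinate i replaced by X_i'
        total += (fx - f(xi)) ** 2
    es_terms.append(total / 2)

var_f = statistics.pvariance(values)
efron_stein = statistics.fmean(es_terms)
assert var_f <= efron_stein
print(f"Var(f(X)) ~ {var_f:.5f} <= Efron-Stein bound ~ {efron_stein:.5f}")
```

For the maximum of 8 uniforms the true variance is n/((n+1)²(n+2)) ≈ 0.0099, and the Efron–Stein bound comes out less than a factor of two larger, showing the inequality can be fairly tight.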

Dvoretzky–Kiefer–Wolfowitz inequality

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the true and the empirical cumulative distribution function.

Given a natural number n, let X_1, X_2, …, X_n be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let F_n denote the associated empirical distribution function defined by

F_n(x) = (1/n) ∑_{i=1}^n 1{X_i ≤ x}, x ∈ ℝ.

So F(x) is the probability that a single random variable X is smaller than x, and F_n(x) is the fraction of the sample that is smaller than x.

Then, for every ε > 0:

Pr(sup_{x∈ℝ} |F_n(x) − F(x)| > ε) ≤ 2 e^{−2nε²}.
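A numerical illustration (hypothetical sketch; the uniform sample and the confidence level are assumptions for the example): the inequality implies that with probability at least 1 − α, the empirical CDF stays within ε = sqrt(ln(2/α)/(2n)) of the true CDF.

```python
import math
import random

random.seed(5)

# One sample of n i.i.d. Uniform(0, 1) variables; the true CDF is F(x) = x.
n = 1_000
xs = sorted(random.random() for _ in range(n))

# sup_x |F_n(x) - F(x)|: for a step function it is attained at the jumps,
# comparing F(x) = x against the empirical levels i/n and (i+1)/n.
ks = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# DKW: Pr(sup_x |F_n(x) - F(x)| > eps) <= 2 exp(-2 n eps^2); solving
# 2 exp(-2 n eps^2) = alpha gives the confidence band below.
alpha = 0.001
eps = math.sqrt(math.log(2 / alpha) / (2 * n))
print(f"observed sup-distance {ks:.4f}; DKW band at alpha={alpha}: {eps:.4f}")
```

The observed sup-distance is typically of order 1/sqrt(n) ≈ 0.03 here, sitting inside the roughly 0.062-wide DKW band; this is exactly how DKW is used to build distribution-free confidence bands for CDFs.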
