Hoeffding's inequality

In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proved by Wassily Hoeffding in 1963.

Hoeffding's inequality is a special case of the Azuma–Hoeffding inequality and of McDiarmid's inequality, and it is more general than the Bernstein inequality, proved by Sergei Bernstein in 1923.

Special case of Bernoulli random variables

Hoeffding's inequality can be applied to the important special case of identically distributed Bernoulli random variables, and this is how the inequality is often used in combinatorics and computer science. We consider a coin that shows heads with probability p and tails with probability 1 − p. We toss the coin n times. The expected number of times the coin comes up heads is pn. Furthermore, the probability that the coin comes up heads at most k times can be exactly quantified by the following expression:

P(H(n) \le k) = \sum_{i=0}^{k} \binom{n}{i} p^{i} (1-p)^{n-i},

where H(n) is the number of heads in n coin tosses.
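
For concreteness, this exact expression is easy to evaluate directly. The following Python sketch (illustrative only, not part of the original presentation; the helper name exact_tail is ours) computes the binomial sum using just the standard library:

import math

def exact_tail(n: int, p: float, k: int) -> float:
    """Exact P(H(n) <= k): sum of Binomial(n, p) probabilities for i = 0..k."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

# Example: probability of at most 40 heads in 100 tosses of a fair coin.
print(exact_tail(100, 0.5, 40))  # ~0.028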

When k = (p − ε)n for some ε > 0, Hoeffding's inequality bounds this probability by a term that is exponentially small in ε²n:

P(H(n) \le (p - \varepsilon) n) \le \exp(-2 \varepsilon^{2} n).

Similarly, when k = (p + ε)n for some ε > 0, Hoeffding's inequality bounds the probability that we see at least εn more heads than we would expect:

P(H(n) \ge (p + \varepsilon) n) \le \exp(-2 \varepsilon^{2} n).

Hence Hoeffding's inequality implies that the number of heads we see is concentrated around its mean, with an exponentially small tail probability:

P((p - \varepsilon) n \le H(n) \le (p + \varepsilon) n) \ge 1 - 2 \exp(-2 \varepsilon^{2} n).

For example, taking ε = √(ln n / n) gives:

P(|H(n) - pn| \le \sqrt{n \ln n}) \ge 1 - 2 \exp(-2 \ln n) = 1 - 2/n^{2}.
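
As a sanity check on this concentration statement, the following Monte Carlo sketch (our illustration; the trial count and the choice n = 200 are arbitrary) estimates the deviation probability empirically and compares it with the 2/n² bound:

import math, random

def deviation_freq(n: int, p: float, trials: int = 20000) -> float:
    """Empirical frequency of |H(n) - p n| >= sqrt(n ln n) over many repetitions."""
    threshold = math.sqrt(n * math.log(n))
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < p for _ in range(n))
        if abs(heads - p * n) >= threshold:
            hits += 1
    return hits / trials

n, p = 200, 0.5
print(deviation_freq(n, p), 2 / n ** 2)  # observed frequency vs. bound 2/n^2 = 5e-05

The observed frequency is typically far below the bound, reflecting the fact that Hoeffding's inequality is conservative.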

General case

Let X1, ..., Xn be independent random variables bounded by the interval [0, 1]: 0 ≤ Xi ≤ 1. We define the empirical mean of these variables by

\bar{X} = \frac{1}{n}(X_1 + \cdots + X_n).

One of the inequalities in Theorem 1 of Hoeffding (1963) states:

P(\bar{X} - E[\bar{X}] \ge t) \le e^{-2nt^{2}}

Theorem 2 of Hoeffding (1963) generalizes this inequality to the case where the Xi are known to be strictly bounded by the intervals [ai, bi]:

\begin{aligned}
P(\bar{X} - E[\bar{X}] \ge t) &\le \exp\left(-\frac{2n^{2}t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}}\right) \\
P(|\bar{X} - E[\bar{X}]| \ge t) &\le 2\exp\left(-\frac{2n^{2}t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}}\right)
\end{aligned}

which are valid for all positive values of t. Here E[X̄] is the expected value of X̄. The inequalities can also be stated in terms of the sum

S_n = X_1 + \cdots + X_n

of the random variables:

\begin{aligned}
P(S_n - E[S_n] \ge t) &\le \exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}}\right), \\
P(|S_n - E[S_n]| \ge t) &\le 2\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}}\right).
\end{aligned}

Note that the inequalities also hold when the Xi have been obtained by sampling without replacement; in this case the random variables are no longer independent. A proof of this statement can be found in Hoeffding's paper. For slightly better bounds in the case of sampling without replacement, see for instance the paper by Serfling (1974).
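
As a numerical illustration (a minimal sketch under our own naming, not code from Hoeffding's paper), the bounds of Theorem 2 can be packaged as a small helper taking the per-variable intervals [ai, bi]:

import math
from typing import Sequence, Tuple

def hoeffding_bound(t: float, intervals: Sequence[Tuple[float, float]],
                    two_sided: bool = False) -> float:
    """Hoeffding upper bound on P(S_n - E[S_n] >= t), or on
    P(|S_n - E[S_n]| >= t) if two_sided, for independent a_i <= X_i <= b_i."""
    denom = sum((b - a) ** 2 for a, b in intervals)
    bound = (2.0 if two_sided else 1.0) * math.exp(-2.0 * t ** 2 / denom)
    return min(1.0, bound)

# Example: 100 independent variables in [0, 1], deviation of the sum by t = 10.
print(hoeffding_bound(10.0, [(0.0, 1.0)] * 100))  # exp(-2) ~ 0.135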

Proof

In this section, we give a proof of Hoeffding's inequality. The proof uses Hoeffding's lemma: if X is a real random variable with P(X ∈ [a, b]) = 1, then for every s ∈ ℝ,

E\left[e^{s(X - E[X])}\right] \le \exp\left(\frac{s^{2}(b - a)^{2}}{8}\right).

Using this lemma, we can prove Hoeffding's inequality. Suppose X1, ..., Xn are n independent random variables such that

P(X_i \in [a_i, b_i]) = 1, \qquad 1 \le i \le n.

Let

S_n = X_1 + \cdots + X_n.

Then, for s, t > 0, Markov's inequality and the independence of the Xi imply:

\begin{aligned}
P(S_n - E[S_n] \ge t) &= P\left(e^{s(S_n - E[S_n])} \ge e^{st}\right) \\
&\le e^{-st}\, E\left[e^{s(S_n - E[S_n])}\right] \\
&= e^{-st} \prod_{i=1}^{n} E\left[e^{s(X_i - E[X_i])}\right] \\
&\le e^{-st} \prod_{i=1}^{n} e^{\frac{s^{2}(b_i - a_i)^{2}}{8}} \\
&= \exp\left(-st + \frac{1}{8} s^{2} \sum_{i=1}^{n} (b_i - a_i)^{2}\right)
\end{aligned}

To get the best possible upper bound, we find the minimum of the right-hand side of the last inequality as a function of s. Define

g : \mathbb{R}_{+} \to \mathbb{R}, \qquad g(s) = -st + \frac{s^{2}}{8} \sum_{i=1}^{n} (b_i - a_i)^{2}

Note that g is a quadratic function and achieves its minimum at

s = \frac{4t}{\sum_{i=1}^{n} (b_i - a_i)^{2}}.
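
Substituting this value of s back into g confirms the exponent (a short verification we spell out for completeness):

g\left(\frac{4t}{\sum_{i=1}^{n}(b_i - a_i)^{2}}\right) = -\frac{4t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}} + \frac{2t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}} = -\frac{2t^{2}}{\sum_{i=1}^{n}(b_i - a_i)^{2}}.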

Thus we get

P(S_n - E[S_n] \ge t) \le \exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n} (b_i - a_i)^{2}}\right).

Confidence intervals

Hoeffding's inequality can be used to analyse the number of samples needed to obtain a confidence interval, by solving the inequality in Theorem 1:

P(\bar{X} - E[\bar{X}] \ge t) \le e^{-2nt^{2}}

The inequality states that the probability that the estimated and true values differ by more than t is bounded by exp(−2nt²). Symmetrically, the inequality is also valid for the other side of the difference:

P(E[\bar{X}] - \bar{X} \ge t) \le e^{-2nt^{2}}

Adding both bounds together, we obtain a two-sided variant of this inequality:

P(|\bar{X} - E[\bar{X}]| \ge t) \le 2e^{-2nt^{2}}

This probability can be interpreted as the significance level α (the probability of making an error) for a confidence interval around E[X̄] of size 2t:

\alpha = P\left(\bar{X} \notin \left[E[\bar{X}] - t,\, E[\bar{X}] + t\right]\right) \le 2e^{-2nt^{2}}

Solving the above for n gives us the following:

n \ge \frac{\log(2/\alpha)}{2t^{2}}

Therefore, we require at least log(2/α) / (2t²) samples to obtain the (1 − α)-confidence interval E[X̄] ± t.
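
A minimal sketch of this sample-size calculation (the helper name samples_needed is ours):

import math

def samples_needed(alpha: float, t: float) -> int:
    """Smallest integer n with n >= log(2/alpha) / (2 t^2)."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * t ** 2))

# 95% confidence (alpha = 0.05) with interval half-width t = 0.01:
print(samples_needed(0.05, 0.01))  # 18445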

Hence, the cost of acquiring a confidence interval is logarithmic in the inverse error probability 1/α, but quadratic in the inverse precision 1/t.

Note that this inequality is the most conservative of the three in Theorem 1, and there are more efficient methods of estimating a confidence interval.

References

Hoeffding, W. (1963). "Probability inequalities for sums of bounded random variables". Journal of the American Statistical Association, 58 (301): 13–30.
Serfling, R. J. (1974). "Probability inequalities for the sum in sampling without replacement". The Annals of Statistics, 2 (1): 39–48.