In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.
Markov's inequality (and other similar inequalities) relates probabilities to expectations, and provides (frequently loose but still useful) bounds for the cumulative distribution function of a random variable.
An example of an application of Markov's inequality is the fact that (assuming incomes are non-negative) no more than 1/5 of the population can have more than 5 times the average income.
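Indeed, if $X$ is the income of a randomly chosen individual, with average income $\operatorname{E}(X) = m > 0$, then Markov's inequality gives

$$\operatorname{P}(X \geq 5m) \leq \frac{\operatorname{E}(X)}{5m} = \frac{m}{5m} = \frac{1}{5}.$$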
Statement
If $X$ is a nonnegative random variable and $a > 0$, then the probability that $X$ is at least $a$ is at most the expectation of $X$ divided by $a$:

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$
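For example, if $X$ is the outcome of a roll of a fair six-sided die, then $\operatorname{E}(X) = 3.5$ and Markov's inequality gives

$$\operatorname{P}(X \geq 6) \leq \frac{3.5}{6} \approx 0.583,$$

while the exact probability is $1/6 \approx 0.167$, which illustrates how loose the bound can be.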
In the language of measure theory, Markov's inequality states that if $(X, \Sigma, \mu)$ is a measure space, $f$ is a measurable extended real-valued function, and $\varepsilon > 0$, then

$$\mu(\{x \in X : |f(x)| \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X |f| \, d\mu.$$
This measure theoretic definition is sometimes referred to as Chebyshev's inequality.
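As a numerical illustration, the following minimal Python sketch (not part of the original text; it assumes NumPy is available) estimates the tail probability of a nonnegative distribution by Monte Carlo and compares it with the Markov bound $\operatorname{E}(X)/a$. The distribution, seed, sample size, and thresholds are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# One million samples from a nonnegative distribution (exponential, mean 1).
x = rng.exponential(scale=1.0, size=1_000_000)
mean = x.mean()

for a in (1.0, 2.0, 5.0, 10.0):
    empirical = (x >= a).mean()  # Monte Carlo estimate of P(X >= a)
    bound = mean / a             # Markov bound E(X)/a
    print(f"a = {a:4.1f}   P(X >= a) ~ {empirical:.4f}   E(X)/a = {bound:.4f}")
```

For the exponential distribution the exact tail is $e^{-a}$, so the printed estimates stay below the Markov bound at every threshold, with the gap widening as $a$ grows.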
Extended version for monotonically increasing functions
If $\varphi$ is a monotonically increasing nonnegative function on the nonnegative reals, $X$ is a random variable, $a \geq 0$, and $\varphi(a) > 0$, then

$$\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(\varphi(|X|))}{\varphi(a)}.$$
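For example, applying the same idea with the increasing function $\varphi(x) = e^{tx}$ for a fixed $t > 0$ gives, for any real-valued random variable $X$ and any $a$,

$$\operatorname{P}(X \geq a) = \operatorname{P}\!\left(e^{tX} \geq e^{ta}\right) \leq \frac{\operatorname{E}\!\left(e^{tX}\right)}{e^{ta}};$$

optimizing this bound over $t$ is the idea behind Chernoff bounds.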
Proofs
We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.
In the language of probability theory
For any event $E$, let $I_E$ be the indicator random variable of $E$, that is, $I_E = 1$ if $E$ occurs and $I_E = 0$ otherwise.

Using this notation, we have $I_{(X \geq a)} = 1$ if the event $X \geq a$ occurs, and $I_{(X \geq a)} = 0$ if $X < a$. Then, given $a > 0$,

$$a I_{(X \geq a)} \leq X,$$

which is clear if we consider the two possible values of the indicator: if $X < a$, then $I_{(X \geq a)} = 0$, and so $a I_{(X \geq a)} = 0 \leq X$; otherwise, $X \geq a$, for which $I_{(X \geq a)} = 1$ and so $a I_{(X \geq a)} = a \leq X$.

Since expectation is monotone, taking the expectation of both sides of an inequality cannot reverse it. Therefore,

$$\operatorname{E}\!\left(a I_{(X \geq a)}\right) \leq \operatorname{E}(X).$$

Now, using linearity of expectation, the left side of this inequality is the same as

$$a \operatorname{E}\!\left(I_{(X \geq a)}\right) = a \left(1 \cdot \operatorname{P}(X \geq a) + 0 \cdot \operatorname{P}(X < a)\right) = a \operatorname{P}(X \geq a).$$

Thus we have

$$a \operatorname{P}(X \geq a) \leq \operatorname{E}(X),$$

and since $a > 0$, we can divide both sides by $a$ to obtain $\operatorname{P}(X \geq a) \leq \operatorname{E}(X)/a$.
In the language of measure theory
We may assume that the function $f$ is nonnegative, since only its absolute value enters the inequality. Consider the measurable function $s$ on $X$ given by

$$s(x) = \begin{cases} \varepsilon, & \text{if } f(x) \geq \varepsilon, \\ 0, & \text{if } f(x) < \varepsilon, \end{cases}$$

so that $0 \leq s(x) \leq f(x)$ for all $x$. Then

$$\int_X f \, d\mu \geq \int_X s \, d\mu = \varepsilon \, \mu(\{x \in X : f(x) \geq \varepsilon\}),$$

and since $\varepsilon > 0$, both sides can be divided by $\varepsilon$, obtaining

$$\mu(\{x \in X : f(x) \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X f \, d\mu.$$
Chebyshev's inequality
Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically,

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^2}$$

for any $a > 0$. Here $\operatorname{Var}(X)$ is the variance of $X$, defined as

$$\operatorname{Var}(X) = \operatorname{E}\!\left[(X - \operatorname{E}(X))^2\right].$$
Chebyshev's inequality follows from Markov's inequality by considering the random variable

$$(X - \operatorname{E}(X))^2$$

and the constant $a^2$, for which Markov's inequality reads

$$\operatorname{P}\!\left((X - \operatorname{E}(X))^2 \geq a^2\right) \leq \frac{\operatorname{E}\!\left[(X - \operatorname{E}(X))^2\right]}{a^2}.$$
This argument can be summarized (where "MI" indicates use of Markov's inequality):

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) = \operatorname{P}\!\left((X - \operatorname{E}(X))^2 \geq a^2\right) \ \overset{\mathrm{MI}}{\leq} \ \frac{\operatorname{E}\!\left[(X - \operatorname{E}(X))^2\right]}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.$$
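Writing $a = k\sigma$ for $k > 0$, where $\sigma = \sqrt{\operatorname{Var}(X)}$ is the standard deviation (assumed positive), gives the familiar form

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq k\sigma) \leq \frac{1}{k^2},$$

so, for example, a random variable deviates from its mean by at least two standard deviations with probability at most $1/4$.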
Other corollaries
- The "monotonic" result can be demonstrated by:
- The result that, for a nonnegative random variable $X$, the quantile function of $X$ satisfies

  $$Q_X(1 - p) \leq \frac{\operatorname{E}(X)}{p},$$

  the proof using

  $$p \leq \operatorname{P}\!\left(X \geq Q_X(1 - p)\right) \ \overset{\mathrm{MI}}{\leq} \ \frac{\operatorname{E}(X)}{Q_X(1 - p)}$$

  (a consequence for the median is noted after this list).
- Let $M \succeq 0$ be a self-adjoint $n \times n$ matrix-valued random variable and let $a > 0$. Then

  $$\operatorname{P}(M \npreceq a \cdot I) \leq \frac{\operatorname{tr}(\operatorname{E}(M))}{n a}$$

  can be shown in a similar manner.
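For instance, taking $p = \tfrac{1}{2}$ in the quantile bound above shows that the median of a nonnegative random variable is at most twice its mean:

$$Q_X\!\left(\tfrac{1}{2}\right) \leq \frac{\operatorname{E}(X)}{1/2} = 2\operatorname{E}(X).$$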