
Le Cam's theorem


In probability theory, Le Cam's theorem, named after Lucien Le Cam (1924 – 2000), states the following.

Suppose:

  • X1, ..., Xn are independent random variables, each with a Bernoulli distribution (i.e., equal to either 0 or 1), not necessarily identically distributed.
  • Pr(Xi = 1) = pi for i = 1, 2, ..., n.
  • λn = p1 + ... + pn.
  • Sn = X1 + ... + Xn (i.e., Sn follows a Poisson binomial distribution).

    Then

    \sum_{k=0}^{\infty} \left| \Pr(S_n = k) - \frac{\lambda_n^{k} e^{-\lambda_n}}{k!} \right| < 2 \sum_{i=1}^{n} p_i^2.

    In other words, the sum Sn is approximately Poisson distributed with mean λn, and the inequality bounds the approximation error: the left-hand side is twice the total variation distance between the distribution of Sn and the Poisson(λn) distribution.
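The bound can be checked numerically. The sketch below (the success probabilities are arbitrary illustrative values, not from the theorem) computes the exact Poisson binomial pmf by convolving the Bernoulli factors one at a time, then compares it against the Poisson(λn) pmf:

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S_n = X_1 + ... + X_n by convolving the Bernoulli(p_i) factors."""
    pmf = [1.0]
    for p in ps:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)      # case X_i = 0
            nxt[k + 1] += mass * p          # case X_i = 1
        pmf = nxt
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Arbitrary, non-identical success probabilities (for illustration only)
ps = [0.02, 0.05, 0.01, 0.03, 0.04, 0.02, 0.06, 0.01]
lam = sum(ps)                     # lambda_n
pmf = poisson_binomial_pmf(ps)

# Left-hand side: sum over k of |Pr(S_n = k) - Poisson(lambda_n) pmf|.
# For k > n we have Pr(S_n = k) = 0, so the Poisson tail mass contributes directly.
lhs = sum(abs(pmf[k] - poisson_pmf(lam, k)) for k in range(len(pmf)))
lhs += sum(poisson_pmf(lam, k) for k in range(len(pmf), 60))

bound = 2 * sum(p * p for p in ps)
print(f"LHS = {lhs:.6g}, Le Cam bound = {bound:.6g}")
```

Here the bound evaluates to 2 Σ pi² = 0.0192, and the computed distance comes out well below it, as the theorem guarantees.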

    By setting pi = λn/n for every i, the right-hand side becomes 2λn²/n, which vanishes as n → ∞ for fixed λn; the theorem therefore generalizes the usual Poisson limit theorem.

    When λn is large a better bound is possible:

    \sum_{k=0}^{\infty} \left| \Pr(S_n = k) - \frac{\lambda_n^{k} e^{-\lambda_n}}{k!} \right| < 2 \left( 1 \wedge \frac{1}{\lambda_n} \right) \sum_{i=1}^{n} p_i^2,

    where a ∧ b denotes min(a, b).
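The difference between the two bounds is easy to see numerically. In the sketch below (identical pi are used purely for convenience, so Sn is Binomial(100, 0.1) and λn = 10; the values are illustrative assumptions), the basic bound 2 Σ pi² = 2 is vacuous, since the left-hand side can never exceed 2, while the refined bound is informative:

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of S_n by convolving the Bernoulli(p_i) factors."""
    pmf = [1.0]
    for p in ps:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)
            nxt[k + 1] += mass * p
        pmf = nxt
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Identical p_i for simplicity: S_n ~ Binomial(100, 0.1), lambda_n = 10
ps = [0.1] * 100
lam = sum(ps)
pmf = poisson_binomial_pmf(ps)

lhs = sum(abs(pmf[k] - poisson_pmf(lam, k)) for k in range(len(pmf)))
lhs += sum(poisson_pmf(lam, k) for k in range(len(pmf), 150))   # negligible tail

basic = 2 * sum(p * p for p in ps)                        # = 2.0: vacuous, LHS <= 2 always
sharp = 2 * min(1.0, 1.0 / lam) * sum(p * p for p in ps)  # = 0.2
print(f"LHS = {lhs:.4f}, basic bound = {basic:.4f}, sharp bound = {sharp:.4f}")
```

The computed distance lies below the refined bound of 0.2, an order of magnitude tighter than the basic bound in this regime.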

    It is also possible to weaken the independence requirement.
