
Poisson binomial distribution

Parameters: \mathbf{p} \in [0,1]^{n}, the success probabilities for each of the n trials
Support: k \in \{0, 1, \dots, n\}
PMF: \Pr(K = k) = \sum_{A \in F_{k}} \prod_{i \in A} p_{i} \prod_{j \in A^{c}} (1 - p_{j})
CDF: \Pr(K \le k) = \sum_{l=0}^{k} \sum_{A \in F_{l}} \prod_{i \in A} p_{i} \prod_{j \in A^{c}} (1 - p_{j})
Mean: \mu = \sum_{i=1}^{n} p_{i}
Variance: \sigma^{2} = \sum_{i=1}^{n} (1 - p_{i}) p_{i}

In probability theory and statistics, the Poisson binomial distribution is the discrete probability distribution of a sum of independent Bernoulli trials that are not necessarily identically distributed. The concept is named after Siméon Denis Poisson.


In other words, it is the probability distribution of the number of successes in a sequence of n independent yes/no experiments with success probabilities p_1, p_2, …, p_n. The ordinary binomial distribution is a special case of the Poisson binomial distribution, when all success probabilities are the same, that is p_1 = p_2 = ⋯ = p_n.

Mean and variance

Since a Poisson binomial distributed variable is a sum of n independent Bernoulli distributed variables, its mean and variance are simply the sums of the means and variances of the n Bernoulli distributions:

\mu = \sum_{i=1}^{n} p_{i}

\sigma^{2} = \sum_{i=1}^{n} (1 - p_{i}) p_{i}
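
As a quick numerical check, here is a minimal NumPy sketch (the function name is our own, not a standard API) that computes both moments from the per-trial probabilities:

import numpy as np

def poisson_binomial_mean_var(p):
    # mu = sum_i p_i ; sigma^2 = sum_i p_i * (1 - p_i)
    p = np.asarray(p, dtype=float)
    return p.sum(), (p * (1.0 - p)).sum()

mu, var = poisson_binomial_mean_var([0.1, 0.5, 0.9])
print(mu, var)  # 1.5 0.43 (up to float rounding)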

For fixed values of the mean (μ) and size (n), the variance is maximal when all success probabilities are equal, in which case the distribution is binomial. When the mean is fixed, the variance is bounded from above by the variance of the Poisson distribution with the same mean, which is attained asymptotically as n tends to infinity.
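
These bounds are easy to probe numerically. The following sketch (with an arbitrarily chosen p) checks the ordering of the Poisson binomial, matching binomial, and matching Poisson variances:

import numpy as np

p = np.array([0.1, 0.5, 0.9])
n, mu = len(p), p.sum()
var_pb = (p * (1 - p)).sum()             # Poisson binomial variance: 0.43
var_binom = n * (mu / n) * (1 - mu / n)  # binomial with same n and mean: 0.75
var_poisson = mu                         # Poisson with same mean: 1.5
print(var_pb <= var_binom <= var_poisson)  # True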

Probability mass function

The probability of having k successful trials out of a total of n can be written as the sum

\Pr(K = k) = \sum_{A \in F_{k}} \prod_{i \in A} p_{i} \prod_{j \in A^{c}} (1 - p_{j})

where F_k is the set of all subsets of k integers that can be selected from {1, 2, 3, …, n}. For example, if n = 3, then F_2 = {{1,2}, {1,3}, {2,3}}. A^c is the complement of A, i.e. A^c = {1, 2, 3, …, n} \ A.

F_k contains n!/((n − k)! k!) elements, so the sum is infeasible to compute in practice unless the number of trials n is small (e.g. if n = 30, F_15 contains over 10^8 elements). However, there are other, more efficient ways to calculate Pr(K = k).
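
For small n the defining sum can be evaluated by brute force. The sketch below (function name ours) enumerates F_k with itertools.combinations:

from itertools import combinations
from math import prod

def pmf_brute_force(k, p):
    # Pr(K = k) = sum over all size-k subsets A of {0, ..., n-1} of
    # prod_{i in A} p_i * prod_{j not in A} (1 - p_j)
    n = len(p)
    total = 0.0
    for a in combinations(range(n), k):
        inside = set(a)
        total += prod(p[i] for i in inside) * prod(1 - p[j] for j in range(n) if j not in inside)
    return total

p = [0.1, 0.5, 0.9]
print([pmf_brute_force(k, p) for k in range(len(p) + 1)])
# ≈ [0.045, 0.455, 0.455, 0.045]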

As long as none of the success probabilities are equal to one, one can calculate the probability of k successes using the recursive formula

\Pr(K = k) = \begin{cases} \prod_{i=1}^{n} (1 - p_{i}) & k = 0 \\ \frac{1}{k} \sum_{i=1}^{k} (-1)^{i-1} \Pr(K = k - i) \, T(i) & k > 0 \end{cases}

where

T(i) = \sum_{j=1}^{n} \left( \frac{p_{j}}{1 - p_{j}} \right)^{i} .
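
A direct transcription of this recursion into Python might look as follows (a sketch; it assumes every p_j < 1 and inherits the numerical instability noted below; the function name is ours):

from math import prod

def pmf_recursive(p):
    # Returns [Pr(K = 0), ..., Pr(K = n)] via the recursive formula.
    n = len(p)
    # T(i) = sum_j (p_j / (1 - p_j))^i, for i = 1..n (index 0 unused)
    t = [None] + [sum((pj / (1 - pj)) ** i for pj in p) for i in range(1, n + 1)]
    pr = [prod(1 - pj for pj in p)]  # Pr(K = 0)
    for k in range(1, n + 1):
        pr.append(sum((-1) ** (i - 1) * pr[k - i] * t[i] for i in range(1, k + 1)) / k)
    return pr

print(pmf_recursive([0.1, 0.5, 0.9]))
# ≈ [0.045, 0.455, 0.455, 0.045]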

The recursive formula is not numerically stable, and should be avoided if n is greater than approximately 20. Another possibility is using the discrete Fourier transform:

\Pr(K = k) = \frac{1}{n+1} \sum_{l=0}^{n} C^{-lk} \prod_{m=1}^{n} \left( 1 + (C^{l} - 1) p_{m} \right)

where C = \exp\left( \frac{2i\pi}{n+1} \right) and i = \sqrt{-1}.
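
A literal NumPy implementation of this formula might look like the following (a sketch; the function name is ours, and the sum is evaluated directly rather than through a packaged FFT):

import numpy as np

def pmf_dft(p):
    # Returns [Pr(K = 0), ..., Pr(K = n)] via the DFT formula.
    p = np.asarray(p, dtype=float)
    n = len(p)
    c = np.exp(2j * np.pi / (n + 1))
    l = np.arange(n + 1)
    # x_l = prod_m (1 + (C^l - 1) p_m): characteristic function on the grid
    x = np.prod(1 + (c ** l[:, None] - 1) * p[None, :], axis=1)
    k = np.arange(n + 1)
    # Pr(K = k) = (1/(n+1)) * sum_l C^{-lk} x_l
    return (c ** (-np.outer(l, k)) * x[:, None]).sum(axis=0).real / (n + 1)

print(pmf_dft([0.1, 0.5, 0.9]))
# ≈ [0.045 0.455 0.455 0.045]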

Still other methods are described in the literature.

Entropy

There is no simple formula for the entropy of a Poisson binomial distribution, but the entropy is bounded above by the entropy of a binomial distribution with the same number of trials and the same mean. Therefore, the entropy is also bounded above by the entropy of a Poisson distribution with the same mean.
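
The binomial bound can be checked numerically. Below is a self-contained sketch (helper names ours; the PMF is computed by brute force, so only small n is feasible) comparing entropies in nats:

from itertools import combinations
from math import comb, log, prod

def entropy(pmf):
    # Shannon entropy (in nats) of a probability vector.
    return -sum(q * log(q) for q in pmf if q > 0)

def poisson_binomial_pmf(p):
    # Brute-force PMF via subset enumeration.
    n = len(p)
    return [sum(prod(p[i] for i in a) * prod(1 - p[j] for j in range(n) if j not in a)
                for a in combinations(range(n), k))
            for k in range(n + 1)]

p = [0.1, 0.5, 0.9]
n, p_bar = len(p), sum(p) / len(p)  # binomial with the same n and mean
binom_pmf = [comb(n, k) * p_bar**k * (1 - p_bar)**(n - k) for k in range(n + 1)]
print(entropy(poisson_binomial_pmf(p)) <= entropy(binom_pmf))  # True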

The Shepp–Olkin conjecture, due to Lawrence Shepp and Ingram Olkin in 1981, states that the entropy of a Poisson binomial distribution is a concave function of the success probabilities p_1, p_2, …, p_n. This conjecture was proved by Erwan Hillion and Oliver Johnson in 2015.
