
Entropy power inequality


In information theory, the entropy power inequality is a result concerning the so-called "entropy power" of random variables. It shows that the entropy power is superadditive for sums of suitably well-behaved independent random variables. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.

Statement of the inequality

For a random variable X : Ω → R^n with probability density function f : R^n → R, the differential entropy of X, denoted h(X), is defined to be

h(X) = -\int_{\mathbb{R}^n} f(x) \log f(x) \, dx

and the entropy power of X, denoted N(X), is defined to be

N(X) = \frac{1}{2\pi e} e^{\frac{2}{n} h(X)}.
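
For example (an illustration not in the original text), if X is uniformly distributed on the interval [0, a], then h(X) = \log a and

N(X) = \frac{1}{2\pi e} e^{2 \log a} = \frac{a^2}{2\pi e},

which is smaller than the variance a^2/12 of X; among one-dimensional distributions with a given variance, the Gaussian maximizes the entropy power.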

In particular, N(X) = |K|^{1/n} when X is normally distributed with covariance matrix K.
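
To see this (a standard computation, not part of the original article), recall that a normally distributed X with covariance matrix K has differential entropy

h(X) = \frac{1}{2} \log\left( (2\pi e)^n |K| \right),

so that

N(X) = \frac{1}{2\pi e} e^{\frac{2}{n} \cdot \frac{1}{2} \log\left( (2\pi e)^n |K| \right)} = \frac{1}{2\pi e} \left( (2\pi e)^n |K| \right)^{1/n} = |K|^{1/n}.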

Let X and Y be independent random variables with probability density functions in the L^p space L^p(R^n) for some p > 1. Then

N(X + Y) \geq N(X) + N(Y).

Moreover, equality holds if and only if X and Y are multivariate normal random variables with proportional covariance matrices.
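
As an illustrative sketch (using NumPy; not part of the original article), the Gaussian case can be checked numerically: if X and Y are independent Gaussians with covariance matrices K1 and K2, then X + Y is Gaussian with covariance K1 + K2, and the entropy power inequality reduces to the Minkowski determinant inequality |K1 + K2|^{1/n} >= |K1|^{1/n} + |K2|^{1/n}.

import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension of the random vectors

def random_covariance(dim):
    # Draw a random symmetric positive-definite matrix (A A^T + I).
    a = rng.standard_normal((dim, dim))
    return a @ a.T + np.eye(dim)

def gaussian_entropy_power(k):
    # Closed form for a Gaussian with covariance matrix K: N(X) = |K|^(1/n).
    return np.linalg.det(k) ** (1.0 / k.shape[0])

k1 = random_covariance(n)
k2 = random_covariance(n)

# Superadditivity: N(X + Y) >= N(X) + N(Y).
lhs = gaussian_entropy_power(k1 + k2)
rhs = gaussian_entropy_power(k1) + gaussian_entropy_power(k2)
print(lhs, rhs, lhs >= rhs)

# Equality case: proportional covariance matrices, e.g. K2 = 2 K1.
print(np.isclose(gaussian_entropy_power(3 * k1),
                 gaussian_entropy_power(k1) + gaussian_entropy_power(2 * k1)))

The helper gaussian_entropy_power is a hypothetical name relying on the Gaussian closed form; for non-Gaussian densities the inequality still holds, but the differential entropies would have to be estimated numerically.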
