The proposition in probability theory known as the law of total expectation, the law of iterated expectations, the tower rule, the smoothing theorem, and Adam's law, among other names, states that if X is an integrable random variable (i.e., a random variable satisfying E( |X| ) < ∞) and Y is any random variable, not necessarily integrable, on the same probability space, then

E( E( X | Y ) ) = E( X ),
i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X.
The conditional expected value E( X | Y ) is a random variable in its own right, whose value depends on the value of Y. Notice that the conditional expected value of X given the event Y = y is a function of y. If we write E( X | Y = y) = g(y) then the random variable E( X | Y ) is just g(Y).
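As a concrete illustration (not part of the original article), the identity can be checked numerically for a small discrete joint distribution; the joint pmf below is made up for the sketch:

```python
# Hypothetical joint pmf p(x, y) for illustration; the probabilities sum to 1.
pmf = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.15,
    (2, 0): 0.05, (2, 1): 0.20,
}

def p_y(y):
    """Marginal probability P(Y = y)."""
    return sum(p for (_, yy), p in pmf.items() if yy == y)

def g(y):
    """g(y) = E(X | Y = y) = sum_x x * P(X = x | Y = y)."""
    return sum(x * p for (x, yy), p in pmf.items() if yy == y) / p_y(y)

# E( E(X | Y) ) = sum_y g(y) * P(Y = y)
lhs = sum(g(y) * p_y(y) for y in {y for _, y in pmf})
# E(X) computed directly from the joint pmf
rhs = sum(x * p for (x, _), p in pmf.items())

print(lhs, rhs)  # the two values agree
```

Here g(y) is first computed for each value of y, and E( g(Y) ) reproduces E( X ) exactly, as the law asserts.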
One special case states that if { A1, A2, … } is a finite or countable partition of the sample space, then

E( X ) = Σi E( X | Ai ) P( Ai ).
Example
Suppose that two factories supply light bulbs to the market. Factory X's bulbs work for an average of 5000 hours, whereas factory Y's bulbs work for an average of 4000 hours. It is known that factory X supplies 60% of the total bulbs available. What is the expected length of time that a purchased bulb will work?
Applying the law of total expectation, we have

E( L ) = E( L | X ) P( X ) + E( L | Y ) P( Y ) = 5000 (0.6) + 4000 (0.4) = 4600,

where
- E( L ) is the expected lifetime of the purchased bulb;
- P( X ) = 6/10 is the probability that the bulb was manufactured by factory X;
- P( Y ) = 4/10 is the probability that the bulb was manufactured by factory Y;
- E( L | X ) = 5000 is the expected lifetime of a bulb manufactured by X;
- E( L | Y ) = 4000 is the expected lifetime of a bulb manufactured by Y.
Thus each purchased light bulb has an expected lifetime of 4600 hours.
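The arithmetic above can be sketched in a few lines (a minimal check, using only the numbers from the example):

```python
# Conditional expected lifetimes (hours) and supply shares from the example.
lifetimes = {"X": 5000, "Y": 4000}
shares = {"X": 0.6, "Y": 0.4}

# E(L) = E(L | X) P(X) + E(L | Y) P(Y)
expected_life = sum(lifetimes[f] * shares[f] for f in lifetimes)
print(expected_life)  # 4600.0
```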
Proof in the discrete case
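When X and Y take values in countable sets, both expectations can be written as sums, and the result follows by interchanging the order of summation (justified by the integrability of X, which makes the double sum absolutely convergent). A standard sketch of the argument:

```latex
\begin{align}
\operatorname{E}\bigl(\operatorname{E}(X \mid Y)\bigr)
  &= \sum_y \operatorname{E}(X \mid Y = y)\,\Pr(Y = y) \\
  &= \sum_y \Bigl(\sum_x x \,\Pr(X = x \mid Y = y)\Bigr) \Pr(Y = y) \\
  &= \sum_y \sum_x x \,\Pr(X = x,\, Y = y) \\
  &= \sum_x x \sum_y \Pr(X = x,\, Y = y) \\
  &= \sum_x x \,\Pr(X = x) \\
  &= \operatorname{E}(X).
\end{align}
```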
Proof in the general case
The general statement of the result makes reference to a probability space (Ω, F, P) on which two sub-σ-algebras G1 ⊆ G2 ⊆ F are defined. For an integrable random variable X on that space, the smoothing law states that

E_{G1}( E_{G2}( X ) ) = E_{G1}( X )   (almost surely),

where E_{G}( X ) denotes the conditional expectation of X given the σ-algebra G.
Since a conditional expectation is a Radon–Nikodym derivative, verifying the following two properties establishes the smoothing law:

- E_{G1}( E_{G2}( X ) ) is G1-measurable;
- ∫_A E_{G1}( E_{G2}( X ) ) dP = ∫_A X dP, for all A ∈ G1.
The first of these properties holds by the definition of the conditional expectation, and the second holds since A ∈ G1 ⊆ G2 implies

∫_A E_{G1}( E_{G2}( X ) ) dP = ∫_A E_{G2}( X ) dP = ∫_A X dP.
In the special case that G1 = { ∅, Ω } and G2 = σ( Y ), the smoothing law reduces to

E( E( X | Y ) ) = E( X ).
Notation without indices
When using the expectation operator E without indices, i.e., writing E( X | G ) for the conditional expectation in place of E_{G}( X ), the smoothing law reads

E( E( X | G2 ) | G1 ) = E( X | G1 ).
Iterated expectations with nested conditioning sets
The following formulation of the law of iterated expectations plays an important role in many economic and finance models:

E( E( X | I2 ) | I1 ) = E( X | I1 ),
where the information set I2 contains at least the information in I1 (I1 ⊆ I2). To build intuition, imagine an investor who forecasts a random stock price X based on the limited information set I1. The law of iterated expectations says that the investor can never gain a more precise forecast of X by conditioning on more specific information (I2), if the more specific forecast must itself be forecast with the original information (I1).
This formulation is often applied in a time-series context, where E_t denotes the expectation conditional on only the information observed up to and including time period t. In typical models the information set at time t + 1 contains all information available through time t, plus additional information revealed at time t + 1. One can then write:

E_t( E_{t+1}( X ) ) = E_t( X ).
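A small exact computation (the shock process and names below are made up for illustration) shows the forecasting interpretation: averaging the possible time-2 forecasts over the information available at time 1 reproduces the time-1 forecast.

```python
from itertools import product

# Toy price model: X = z1 + z2 + z3, where shock z_k is revealed at
# time k and each shock is +1 or -1 with equal probability.
shocks = [+1, -1]

def e2(z1, z2):
    """E_2(X): forecast at time 2, averaging over the unrevealed z3."""
    return sum(z1 + z2 + z3 for z3 in shocks) / len(shocks)

def e1(z1):
    """E_1(X): forecast at time 1, averaging over z2 and z3."""
    return sum(z1 + z2 + z3 for z2, z3 in product(shocks, shocks)) / 4

# Law of iterated expectations: E_1( E_2(X) ) equals E_1(X)
# for every outcome of the information revealed at time 1.
for z1 in shocks:
    e1_of_e2 = sum(e2(z1, z2) for z2 in shocks) / len(shocks)
    assert e1_of_e2 == e1(z1)
print("E_1( E_2(X) ) == E_1(X) for every outcome of z1")
```

Intuitively, the time-2 forecast incorporates z2, but when z2 itself must be averaged out with the time-1 information, that extra detail washes out and only the time-1 forecast remains.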