Slutsky's theorem


In probability theory, Slutsky’s theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.

The theorem is named after Eugen Slutsky, and is also attributed to Harald Cramér.

Statement

Let {Xn}, {Yn} be sequences of scalar/vector/matrix random elements.

If Xn converges in distribution to a random element X, and Yn converges in probability to a constant c, then

  • Xn + Yn →d X + c;
  • Xn Yn →d c X;
  • Xn / Yn →d X / c, provided that c is invertible,

where →d denotes convergence in distribution.

Notes:

1. The requirement that Yn converges to a constant is important: if it were to converge to a non-degenerate random variable, the theorem would no longer be valid. For example, if Xn is uniform on (0, 1) for every n and Yn = −Xn, then Yn converges in distribution to a non-degenerate limit, yet Xn + Yn = 0 for all n.
2. The theorem remains valid if we replace all convergences in distribution with convergences in probability.
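
As a standard illustration (not part of the text above), Slutsky's theorem justifies the normal approximation for the Studentized sample mean; the derivation below assumes only the central limit theorem, the law of large numbers, and the continuous mapping theorem. For i.i.d. observations with mean μ and finite variance σ² > 0,

    \sqrt{n}\,(\bar{X}_n - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2)
    \quad\text{and}\quad
    S_n \xrightarrow{p} \sigma,

where X̄n is the sample mean and Sn the sample standard deviation. Applying the third statement above with Yn = Sn and c = σ gives

    \frac{\sqrt{n}\,(\bar{X}_n - \mu)}{S_n} \xrightarrow{d} \frac{1}{\sigma}\,\mathcal{N}(0, \sigma^2) = \mathcal{N}(0, 1).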

Proof

This theorem follows from the fact that if Xn converges in distribution to X and Yn converges in probability to a constant c, then the joint vector (Xn, Yn) converges in distribution to (X, c).

Next we apply the continuous mapping theorem, recognizing the functions g(x, y) = x + y, g(x, y) = xy, and g(x, y) = x y⁻¹ as continuous (for the last function to be continuous, y has to be invertible).
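
The limiting behaviour can also be checked numerically. Below is a minimal Monte Carlo sketch in Python with NumPy; the choice of uniform samples, the sample size n, the number of replications, and the seed are illustrative assumptions rather than anything prescribed by the text above. Here Xn = √n (Ūn − 1/2) converges in distribution to N(0, 1/12) by the CLT, Yn = Ūn converges in probability to the constant 1/2 by the LLN, and the first statement of the theorem predicts that Xn + Yn is approximately N(1/2, 1/12) for large n.

    import numpy as np

    # Illustrative Monte Carlo check of Slutsky's theorem (assumed example, not from the text).
    rng = np.random.default_rng(0)
    n, reps = 10_000, 5_000

    # Each replication: the mean of n i.i.d. Uniform(0, 1) draws.
    means = np.array([rng.uniform(size=n).mean() for _ in range(reps)])

    x_n = np.sqrt(n) * (means - 0.5)   # CLT: approximately N(0, 1/12) for large n
    y_n = means                        # LLN: close to the constant 1/2 with high probability

    # Slutsky's theorem does not require X_n and Y_n to be independent;
    # here they are built from the same sample.
    z = x_n + y_n
    print(f"empirical mean of X_n + Y_n: {z.mean():.3f}  (limit: 0.5)")
    print(f"empirical variance:          {z.var():.4f} (limit: {1/12:.4f})")

With these settings the printed mean should be close to 0.5 and the variance roughly 1/12, up to Monte Carlo error.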
