Autocovariance

In probability theory and statistics, given a stochastic process X = (X_t), the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. With the usual notation E for the expectation operator, if the process has the mean function μ_t = E[X_t], then the autocovariance is given by

C_XX(t, s) = cov(X_t, X_s) = E[(X_t − μ_t)(X_s − μ_s)] = E[X_t X_s] − μ_t μ_s,

where t and s are two time periods or moments in time.
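As a sketch of this definition, the hypothetical process below shares a common random term across time, so X_t and X_s are correlated; averaging over many simulated realizations estimates both forms of the autocovariance. The process and all names here are illustrative assumptions, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example process X_t = Z_t + Z_0: every time point shares
# the common term Z_0, so cov(X_t, X_s) = Var(Z_0) = 1 for t != s.
n_real, n_time = 100_000, 5
z = rng.standard_normal((n_real, n_time))
x = z + z[:, [0]]                     # rows: realizations, columns: time

t, s = 1, 3
mu_t, mu_s = x[:, t].mean(), x[:, s].mean()

# The two forms of the definition: E[(X_t - mu_t)(X_s - mu_s)]
# and E[X_t X_s] - mu_t * mu_s.
c_centered = np.mean((x[:, t] - mu_t) * (x[:, s] - mu_s))
c_moment = np.mean(x[:, t] * x[:, s]) - mu_t * mu_s
```

The two estimates are algebraically identical, and for this particular process both are close to the theoretical value 1.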

Autocovariance is closely related to the more commonly used autocorrelation of the process in question.

In the case of a multivariate random vector X = (X_1, X_2, ..., X_n), the autocovariance becomes a square n × n matrix, C_XX, with entry (i, j) given by C_{X_i X_j}(t, s) = cov(X_{i,t}, X_{j,s}), commonly referred to as the autocovariance matrix associated with the vectors X_t and X_s.
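A minimal sketch of estimating such a matrix from simulated data, assuming a hypothetical 3-dimensional vector observed at two times over many realizations (the construction of x_s is an arbitrary choice that makes the true matrix 0.5 times the identity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: n-dimensional vectors at times t and s over many
# realizations; each component of X_s correlates only with the matching
# component of X_t, so the true autocovariance matrix is 0.5 * I.
n_real, n = 50_000, 3
x_t = rng.standard_normal((n_real, n))
x_s = 0.5 * x_t + rng.standard_normal((n_real, n))

xt_c = x_t - x_t.mean(axis=0)         # center each component
xs_c = x_s - x_s.mean(axis=0)
c = xt_c.T @ xs_c / n_real            # entry (i, j) = cov(X_{i,t}, X_{j,s})
```

The diagonal entries of c come out close to 0.5 and the off-diagonal entries close to 0, matching the construction.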

Weak stationarity

If X_t is a weakly stationary process, then the following are true:

μ_t = μ_s = μ for all t, s

and

C_XX(t, s) = C_XX(s − t) = C_XX(τ)

where τ = |s − t| is the lag time, or the amount of time by which the signal has been shifted.
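This lag-only dependence can be illustrated numerically. The MA(1) process below is a hypothetical example of a weakly stationary process; its theoretical autocovariance is C_XX(0) = 1.25, C_XX(±1) = 0.5, and 0 for larger lags, regardless of the absolute time:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weakly stationary example: the MA(1) process
#   X_t = Z_t + 0.5 * Z_{t-1},  Z_t iid N(0, 1),
# whose autocovariance depends only on the lag tau.
n_real, n_time, theta = 200_000, 6, 0.5
z = rng.standard_normal((n_real, n_time + 1))
x = z[:, 1:] + theta * z[:, :-1]

def cov_at(t, s):
    """Estimate C_XX(t, s) by averaging over realizations."""
    return np.mean((x[:, t] - x[:, t].mean()) * (x[:, s] - x[:, s].mean()))

# The same lag tau = 1 at different absolute times gives
# (nearly) the same covariance, close to the theoretical 0.5.
lag1_early, lag1_late = cov_at(0, 1), cov_at(3, 4)
```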

Normalization

When normalizing the autocovariance C of a weakly stationary process by its variance, C_XX(0) = σ², one obtains the autocorrelation coefficient ρ:

ρ_XX(τ) = C_XX(τ) / σ²

with −1 ≤ ρ_XX(τ) ≤ 1.
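A sketch of this normalization on one long realization of a hypothetical MA(1) process, X_t = Z_t + 0.5 Z_{t−1}, for which C_XX(0) = 1.25 and C_XX(1) = 0.5, giving a lag-1 coefficient of ρ = 0.5 / 1.25 = 0.4:

```python
import numpy as np

rng = np.random.default_rng(3)

# One long realization of the hypothetical MA(1) process above.
n, theta = 1_000_000, 0.5
z = rng.standard_normal(n + 1)
x = z[1:] + theta * z[:-1]

def autocov(x, tau):
    """Sample autocovariance at lag tau for a stationary series."""
    xc = x - x.mean()
    return np.mean(xc[: len(xc) - tau] * xc[tau:])

# Normalize by the variance C_XX(0) to get the autocorrelation
# coefficient; for this process rho_XX(1) is close to 0.4.
rho1 = autocov(x, 1) / autocov(x, 0)
```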

Properties

The autocovariance of a linearly filtered process Y t

Y_t = Σ_{k=−∞}^{∞} a_k X_{t+k}

is

C_YY(τ) = Σ_{k,l=−∞}^{∞} a_k a_l C_XX(τ + k − l).
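The double sum can be checked directly for a finite filter. If X is unit-variance white noise, then C_XX(m) is 1 at m = 0 and 0 otherwise, so the sum collapses to Σ_k a_k a_{k+τ}. The filter coefficients below are an arbitrary illustrative choice:

```python
# Hypothetical finite filter a_k applied to unit-variance white noise,
# for which C_XX(m) = 1 if m == 0 else 0.
a = {-1: 0.25, 0: 1.0, 1: -0.5}

def c_xx(m):
    return 1.0 if m == 0 else 0.0

def c_yy(tau):
    # C_YY(tau) = sum over k, l of a_k * a_l * C_XX(tau + k - l)
    return sum(a[k] * a[l] * c_xx(tau + k - l) for k in a for l in a)

# With white-noise input the double sum collapses to sum_k a_k * a_{k+tau}.
for tau in range(-2, 3):
    collapsed = sum(a[k] * a.get(k + tau, 0.0) for k in a)
    assert abs(c_yy(tau) - collapsed) < 1e-12

print([c_yy(tau) for tau in (0, 1, 2)])   # -> [1.3125, -0.25, -0.125]
```

Note that the result is symmetric in the lag, C_YY(−τ) = C_YY(τ), as the general formula requires.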
