Samiksha Jaiswal (Editor)

Bayesian vector autoregression


In statistics, Bayesian vector autoregression (BVAR) uses Bayesian methods to estimate a vector autoregression (VAR). In that respect, the difference with standard VAR models lies in the fact that the model parameters are treated as random variables, and prior probabilities are assigned to them.
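As a sketch of the setup in standard VAR(p) notation (not tied to any one presentation), the model and the Bayesian treatment of its parameters can be written as:

```latex
% VAR(p) model for an n-dimensional vector y_t:
y_t = c + A_1 y_{t-1} + \dots + A_p y_{t-p} + u_t,
\qquad u_t \sim \mathcal{N}(0, \Sigma).
% In a BVAR the parameters (c, A_1, \dots, A_p, \Sigma) are treated as
% random variables with a prior p(c, A_1, \dots, A_p, \Sigma); estimation
% combines this prior with the likelihood to form the posterior.
```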

Vector autoregressions are flexible statistical models that typically include many free parameters. Given the limited length of standard macroeconomic datasets, Bayesian methods have become an increasingly popular way of dealing with this problem of over-parameterization. The general idea is to use informative priors to shrink the unrestricted model towards a parsimonious naïve benchmark, thereby reducing parameter uncertainty and improving forecast accuracy. A typical example is the shrinkage prior proposed by Robert Litterman and subsequently developed by other researchers at the University of Minnesota, which is known in the BVAR literature as the "Minnesota prior". The informativeness of the prior can be set by treating it as an additional parameter, based on a hierarchical interpretation of the model.
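A minimal sketch of how Minnesota-style prior moments can be constructed, assuming the common textbook parameterization (hyperparameter names `lam`, `theta`, `decay` and the helper `minnesota_prior` are illustrative, not from any particular library): the prior mean centers each variable on a random walk (own first lag equal to one), while the prior variance shrinks cross-variable coefficients and higher-order lags more tightly toward zero, scaled by residual standard deviations to account for differing units.

```python
import numpy as np

def minnesota_prior(n_vars, n_lags, sigma, lam=0.2, theta=0.5, decay=1.0):
    """Illustrative Minnesota-prior moments for VAR coefficients.

    sigma : residual standard deviations (e.g. from univariate AR fits),
            length n_vars, used to scale for differing units.
    lam   : overall tightness; theta : extra cross-variable shrinkage;
    decay : rate at which higher lags are shrunk toward zero.
    Returns (mean, var), each of shape (n_vars, n_vars * n_lags),
    with columns ordered lag 1 block, lag 2 block, and so on.
    """
    # Prior mean: 1 on each variable's own first lag, 0 elsewhere,
    # i.e. shrink toward independent random walks.
    mean = np.zeros((n_vars, n_vars * n_lags))
    mean[:, :n_vars] = np.eye(n_vars)

    # Prior variances: tighter for cross-variable and higher-lag terms.
    var = np.zeros((n_vars, n_vars * n_lags))
    for lag in range(1, n_lags + 1):
        for i in range(n_vars):          # equation index
            for j in range(n_vars):      # regressor variable index
                scale = lam / lag**decay
                if i != j:
                    scale *= theta * sigma[i] / sigma[j]
                var[i, (lag - 1) * n_vars + j] = scale**2
    return mean, var
```

Under this sketch, the posterior is obtained by combining these moments with the VAR likelihood (conjugate normal-inverse-Wishart setups admit closed forms); treating `lam` itself as a parameter with its own prior gives the hierarchical interpretation mentioned above.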

Recent research has shown that Bayesian vector autoregression is an appropriate tool for modelling large data sets.
