In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the distribution function of the data may not be known, and therefore maximum likelihood estimation is not applicable.
The method requires that a certain number of moment conditions be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the true values of the parameters. The GMM method then minimizes a certain norm of the sample averages of the moment conditions.
The GMM estimators are known to be consistent, asymptotically normal, and efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions.
GMM was developed by Lars Peter Hansen in 1982 as a generalization of the method of moments, which was introduced by Karl Pearson in 1894. Hansen shared the 2013 Nobel Prize in Economics in part for this work.
Description
Suppose the available data consists of T observations {Yt } t = 1,...,T, where each observation Yt is an n-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter θ ∈ Θ. The goal of the estimation problem is to find the “true” value of this parameter, θ0, or at least a reasonably close estimate.
A general assumption of GMM is that the data Yt are generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables Yt is a special case of this condition.)
In order to apply GMM, we need to have "moment conditions", i.e. we need to know a vector-valued function g(Y,θ) such that

m(θ0) ≡ E[ g(Yt, θ0) ] = 0,

where E denotes expectation, and Yt is a generic observation. Moreover, the function m(θ) must differ from zero for θ ≠ θ0, otherwise the parameter θ will not be point-identified.
The basic idea behind GMM is to replace the theoretical expected value E[⋅] with its empirical analog, the sample average:

m̂(θ) = (1/T) Σ_{t=1}^{T} g(Yt, θ),

and then to minimize the norm of this expression with respect to θ. The minimizing value of θ is our estimate for θ0.
By the law of large numbers, m̂(θ) ≈ E[ g(Yt, θ) ] = m(θ) for large values of T, so the GMM estimator can be written as

θ̂ = argmin_{θ∈Θ} ( m̂(θ) )′ Ŵ ( m̂(θ) ),

where Ŵ is a positive-definite weighting matrix (possibly estimated from the data), and ′ denotes transposition. Under suitable conditions this estimator is consistent, asymptotically normal, and, with the right choice of weighting matrix Ŵ, also asymptotically efficient.
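The estimator above can be illustrated with a minimal sketch. The moment conditions below (the first two moments of a normal sample, so that θ = (μ, σ²)) and all variable names are illustrative choices, not part of the original text; the weighting matrix is simply the identity.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_objective(theta, y, W):
    """Quadratic form m_hat(theta)' W m_hat(theta) of the sample moment averages."""
    mu, sigma2 = theta
    # moment conditions: E[Y - mu] = 0 and E[(Y - mu)^2 - sigma2] = 0
    g = np.column_stack([y - mu, (y - mu) ** 2 - sigma2])
    m_hat = g.mean(axis=0)          # sample average of g(Y_t, theta)
    return m_hat @ W @ m_hat

# simulated data with (hypothetical) true parameters mu = 2, sigma^2 = 2.25
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=5000)

W = np.eye(2)                       # identity weighting matrix
res = minimize(gmm_objective, x0=[0.0, 1.0], args=(y, W), method="Nelder-Mead")
mu_hat, sigma2_hat = res.x
```

Since the model here is exactly identified (two moments, two parameters), any positive-definite W yields the same solution, namely the sample mean and sample variance.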
Consistency
Consistency is a statistical property of an estimator stating that, given a sufficient number of observations, the estimator converges in probability to the true value of the parameter:

θ̂ →p θ0 as T → ∞.
Sufficient conditions for a GMM estimator to be consistent are as follows:

1. Ŵ_T →p W, where W is a positive semi-definite matrix;
2. W E[ g(Yt, θ) ] = 0 only for θ = θ0;
3. the set of possible parameters Θ ⊂ ℝᵏ is compact;
4. g(Y, θ) is continuous at each θ with probability one;
5. E[ sup_{θ∈Θ} ‖g(Y, θ)‖ ] < ∞.
The second condition here (the so-called global identification condition) is often particularly hard to verify. There exist simpler necessary but not sufficient conditions, which may be used to detect a non-identification problem:

- Order condition: the dimension of the moment function m(θ) must be at least as large as the dimension of the parameter vector θ.
- Local identification: if g(Y, θ) is continuously differentiable in a neighborhood of θ0, then the matrix G = E[ ∇θ g(Yt, θ0) ] must have full column rank.
In practice applied econometricians often simply assume that global identification holds, without actually proving it.
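Consistency can be seen in a toy simulation. Taking the single moment condition g(Y, θ) = Y − θ (a hypothetical choice for illustration), the GMM estimate is the sample mean, and its error shrinks as T grows:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0 = 3.0    # hypothetical true parameter

errors = []
for T in [100, 10_000, 1_000_000]:
    y = rng.normal(loc=theta0, scale=2.0, size=T)
    # With g(Y, theta) = Y - theta, minimizing |m_hat(theta)| gives the sample mean.
    theta_hat = y.mean()
    errors.append(abs(theta_hat - theta0))
```

As T increases from 100 to 1,000,000 the estimation error falls roughly in proportion to 1/√T, as the law of large numbers predicts.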
Asymptotic normality
Asymptotic normality is a useful property, as it allows us to construct confidence bands for the estimator and conduct various tests. Before we can make a statement about the asymptotic distribution of the GMM estimator, we need to define two auxiliary matrices:

G = E[ ∇θ g(Yt, θ0) ],   Ω = E[ g(Yt, θ0) g(Yt, θ0)′ ].

Then under conditions 1–6 listed below, the GMM estimator will be asymptotically normal with limiting distribution

√T ( θ̂ − θ0 ) →d N( 0, (G′WG)⁻¹ G′WΩWG (G′WG)⁻¹ ).
Conditions:

1. θ̂ is consistent (see the previous section);
2. the set of possible parameters Θ ⊂ ℝᵏ is compact;
3. g(Y, θ) is continuously differentiable in some neighborhood N of θ0 with probability one;
4. E[ ‖g(Yt, θ)‖² ] < ∞;
5. E[ sup_{θ∈N} ‖∇θ g(Yt, θ)‖ ] < ∞;
6. the matrix G′WG is nonsingular.
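The sandwich variance above suggests a plug-in estimator of the standard errors: replace G and Ω with their sample analogs evaluated at θ̂. A minimal sketch, reusing the illustrative mean/variance moment conditions (all names and the choice W = I are assumptions, not from the original text):

```python
import numpy as np

def g(theta, y):
    """Moment conditions for estimating (mu, sigma^2) of a univariate sample."""
    mu, sigma2 = theta
    return np.column_stack([y - mu, (y - mu) ** 2 - sigma2])

rng = np.random.default_rng(2)
y = rng.normal(2.0, 1.5, size=20_000)
T = len(y)
theta_hat = np.array([y.mean(), y.var()])   # exactly identified GMM solution

# Sample analog of G = E[grad_theta g]; the Jacobian is analytic here.
mu_hat = theta_hat[0]
G = np.array([[-1.0,                       0.0],
              [-2.0 * np.mean(y - mu_hat), -1.0]])

gi = g(theta_hat, y)
Omega = gi.T @ gi / T                        # sample analog of E[g g']
W = np.eye(2)                                # a (non-efficient) weighting matrix

bread = np.linalg.inv(G.T @ W @ G)
V = bread @ (G.T @ W @ Omega @ W @ G) @ bread   # sandwich variance estimate
se = np.sqrt(np.diag(V) / T)                     # asymptotic standard errors
```

Here se[0] approximates the familiar standard error of the mean, σ/√T.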
Efficiency
So far we have said nothing about the choice of the matrix W, except that it must be positive semi-definite. In fact any such matrix will produce a consistent and asymptotically normal GMM estimator; the only difference will be in the asymptotic variance of that estimator. It can be shown that taking

W ∝ Ω⁻¹
will result in the most efficient estimator in the class of all asymptotically normal estimators. Efficiency in this case means that such an estimator will have the smallest possible variance (we say that matrix A is smaller than matrix B if B–A is positive semi-definite).
In this case the formula for the asymptotic distribution of the GMM estimator simplifies to

√T ( θ̂ − θ0 ) →d N( 0, (G′Ω⁻¹G)⁻¹ ).
The proof that such a choice of weighting matrix is indeed optimal is often adopted with slight modifications when establishing efficiency of other estimators. As a rule of thumb, a weighting matrix is optimal whenever it makes the “sandwich formula” for variance collapse into a simpler expression.
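The collapse of the sandwich can be verified by direct substitution of W = Ω⁻¹ into the general variance formula:

```latex
% General sandwich variance of the GMM estimator:
%   (G' W G)^{-1} \, G' W \Omega W G \, (G' W G)^{-1}
% Substituting W = \Omega^{-1}:
(G'\Omega^{-1}G)^{-1}\, G'\Omega^{-1}\,\Omega\,\Omega^{-1}G\,(G'\Omega^{-1}G)^{-1}
  = (G'\Omega^{-1}G)^{-1}\,(G'\Omega^{-1}G)\,(G'\Omega^{-1}G)^{-1}
  = (G'\Omega^{-1}G)^{-1}.
```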
Implementation
One difficulty with implementing the outlined method is that we cannot take W = Ω⁻¹, because by the definition of the matrix Ω we would need to know the value of θ0 in order to compute it, and θ0 is precisely the quantity we do not know and are trying to estimate in the first place. In the case of iid Yt we can estimate W as

Ŵ_T(θ̂) = ( (1/T) Σ_{t=1}^{T} g(Yt, θ̂) g(Yt, θ̂)′ )⁻¹.
Several approaches exist to deal with this issue, the first one being the most popular:

- Two-step feasible GMM: in the first step take Ŵ = I (or any other fixed positive-definite matrix) and compute a preliminary estimate θ̂(1); in the second step use θ̂(1) to construct Ŵ = Ω̂⁻¹ and re-estimate θ.
- Iterated GMM: repeat the second step, alternating between updating Ŵ and θ̂, until convergence.
- Continuously updating GMM (CUE): let the weighting matrix depend on θ inside the objective, minimizing m̂(θ)′ Ŵ_T(θ) m̂(θ) over θ directly.
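The two-step procedure can be sketched as follows, again with the illustrative mean/variance moment conditions (the data-generating values and function names are assumptions for the example):

```python
import numpy as np
from scipy.optimize import minimize

def gmm_step(y, W, x0):
    """Minimize m_hat(theta)' W m_hat(theta) for the mean/variance moments."""
    def objective(theta):
        mu, sigma2 = theta
        g = np.column_stack([y - mu, (y - mu) ** 2 - sigma2])
        m = g.mean(axis=0)
        return m @ W @ m
    return minimize(objective, x0, method="Nelder-Mead").x

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.5, size=5000)

# Step 1: consistent but possibly inefficient estimate with W = I
theta1 = gmm_step(y, np.eye(2), x0=np.array([0.0, 1.0]))

# Step 2: estimate the efficient weighting matrix at theta1, then re-estimate
mu1, s1 = theta1
g1 = np.column_stack([y - mu1, (y - mu1) ** 2 - s1])
W2 = np.linalg.inv(g1.T @ g1 / len(y))
theta2 = gmm_step(y, W2, x0=theta1)
```

In an exactly identified model both steps return the same answer; the second step matters when there are more moment conditions than parameters.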
Another important issue in implementing the minimization procedure is that the algorithm must search through a (possibly high-dimensional) parameter space Θ to find the value of θ that minimizes the objective function. No generic recommendation for such a procedure exists; it is the subject of its own field, numerical optimization.
Sargan–Hansen J-test
When the number of moment conditions is greater than the dimension of the parameter vector θ, the model is said to be over-identified. Over-identification allows us to check whether the model's moment conditions match the data well or not.
Conceptually we can check whether m̂(θ̂) is sufficiently close to zero to be consistent with the model's moment conditions holding in the population.

Formally we consider two hypotheses:

H0: m(θ0) = 0 (the model's moment conditions hold),
H1: m(θ) ≠ 0 for all θ ∈ Θ (the moment conditions fail).

Under hypothesis H0, the following "J-statistic" is asymptotically chi-squared distributed:

J ≡ T · ( m̂(θ̂) )′ Ŵ_T ( m̂(θ̂) ) →d χ²_{k−l},

where k is the number of moment conditions, l is the dimension of the parameter vector θ, and Ŵ_T is a consistent estimate of the efficient weighting matrix Ω⁻¹ (the chi-squared limit requires this efficient choice).

Under the alternative hypothesis H1, the J-statistic diverges: J →p ∞.

To conduct the test we compute the value of J from the data. It is a nonnegative number. We compare it with (for example) the 0.95 quantile of the χ²_{k−l} distribution: H0 is rejected at the 5% significance level if J exceeds this quantile, and is not rejected otherwise.
Scope
Many other popular estimation techniques can be cast in terms of GMM optimization:

- Ordinary least squares (OLS), with moment conditions E[ xt (yt − xt′β) ] = 0;
- Weighted least squares;
- Instrumental variables regression (IV), with moment conditions E[ zt (yt − xt′β) ] = 0;
- Non-linear least squares;
- Maximum likelihood estimation, with moment conditions E[ ∇θ ln f(Yt, θ) ] = 0.
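For instance, setting the sample average of the OLS moment conditions to zero and solving gives exactly the familiar least-squares formula. A small sketch (the regression design and coefficient values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2000
x = np.column_stack([np.ones(T), rng.normal(size=T)])   # intercept + one regressor
beta0 = np.array([1.0, -0.5])                            # hypothetical true coefficients
y = x @ beta0 + rng.normal(scale=0.3, size=T)

# OLS moment conditions: E[x_t (y_t - x_t' beta)] = 0.
# The sample analog (1/T) X'(y - X beta) = 0 solves to beta = (X'X)^{-1} X'y,
# i.e. exactly the OLS estimator.
beta_gmm = np.linalg.solve(x.T @ x, x.T @ y)
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]
```

The two computations agree to numerical precision, illustrating that OLS is the GMM estimator for these moment conditions.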