Recently, I have been reading about weak instruments and how to deal with them, mainly because of my involvement in the preparation for the Econometric Game.

Since I have read a few papers on the topic, I thought I could give a brief summary here. Someone might find it useful. If you would like to have it in pdf, just click here.

Let's start!

One of the main goals in Econometrics is to identify and make inference about the causal effect of one variable on another. If the covariates are exogenous, this analysis can be carried out using ordinary least squares (OLS) estimation. However, if we expect that unobserved factors affect both some covariates and the dependent variable, OLS estimators are inconsistent, due to the endogeneity problem. Hence, in order to perform causal effect analysis, we should use other methods.

Since endogeneity is a potential problem, we can use other methods to identify the causal effect, such as GMM, IV, Limited Information Maximum Likelihood (LIML), and the Fuller-k estimator. A necessary condition for such methods is that we have an instrument that, once we control for the other covariates, is uncorrelated with the error term of the structural equation, i.e. the instrument must be exogenous.

Without this condition we cannot identify the causal effect when we have an endogeneity problem. Nonetheless, instrument exogeneity is not sufficient to identify the causal effect: the instrument must also be relevant, i.e. correlated with the endogenous variable. Instruments that fulfill both conditions are called valid. In some cases, though, the instrument is only weakly correlated with the endogenous variable, raising the problem of weak instruments. Although we can still identify the causal effect, in the presence of weak instruments inference can be misleading. This is because the sampling distributions of IV and GMM statistics are in general non-normal when instruments are weak, so asymptotic theory provides a poor guide to their behavior, and standard GMM and IV estimates and standard errors become unreliable.
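To fix ideas, here is a minimal simulation sketch (the data-generating process and all coefficient values are my own assumptions, not taken from any of the papers discussed here): OLS is pulled away from the true effect by endogeneity, while the just-identified IV estimator recovers it when the instrument is strong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Assumed DGP: z is a valid instrument for x, and the first-stage
# error v is correlated with the structural error u (endogeneity).
beta = 1.0
z = rng.normal(size=n)              # instrument
u = rng.normal(size=n)              # structural error
v = 0.8 * u + rng.normal(size=n)    # first-stage error, correlated with u
x = z + v                           # endogenous regressor, strong first stage
y = beta * x + u

# All variables have mean zero, so no-intercept regressions suffice here.
beta_ols = float(x @ y / (x @ x))   # OLS slope: inconsistent under endogeneity
beta_iv = float(z @ y / (z @ x))    # just-identified IV: cov(z,y)/cov(z,x)
print(beta_ols, beta_iv)
```

Shrinking the first-stage coefficient toward zero makes the IV estimate erratic across seeds, which is exactly the weak-instrument phenomenon.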

A practical question that arises is how small the correlation can be before the instrument counts as weak. Following Stock and Yogo (2005), whether a set of instruments is weak or strong depends on the inferential objective of the researcher. They offer two alternative definitions of weak instruments. The first is that the bias of the estimator, relative to the bias of OLS, should not exceed a given threshold, say 10%.

The second is that instruments are weak if the alpha-level Wald test has an actual size that can exceed a given threshold, say 15%. In the just-identified case, with the GMM, IV, or LIML method, we cannot use the weak-instrument definition based on relative bias, since the estimators do not have a finite first moment. This is not the case for the Fuller-k estimator, though.

Given the definitions of weak instruments, we would like to test the null hypothesis that the instruments are weak against the alternative that they are strong.

Stock and Yogo (2005), under the assumption of *iid* data, propose to use the Cragg and Donald (1993) statistic, which, in the case of a single endogenous regressor, is the first-stage F-statistic for the hypothesis that the instrument coefficients are zero.

Kleibergen and Paap (2006) propose a heteroskedasticity-robust statistic. In any case, the distributions of these statistics are not pivotal, and we have to rely on simulations to tabulate them.

Stock and Yogo (2005) simulated this distribution in the *iid* context and report the critical values. The case of heteroskedastic residuals has not been tabulated, since different kinds of heteroskedasticity would lead to different distributions. In this case, it is standard practice to compare the Kleibergen and Paap (2006) statistic with the critical values computed by Stock and Yogo (2005).
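With a single endogenous regressor, the statistic is just the first-stage F, which is easy to compute by hand. A sketch on simulated data (the DGP and the function name are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 3   # observations and excluded instruments (assumed setup)

Z = rng.normal(size=(n, k))
v = rng.normal(size=n)
x_strong = Z @ np.array([0.6, 0.4, 0.3]) + v      # strong first stage
x_weak = Z @ np.array([0.02, 0.02, 0.02]) + v     # nearly irrelevant instruments

def first_stage_F(Z, x):
    """F-statistic for H0: all excluded-instrument coefficients are zero
    in the first-stage regression of x on a constant and Z."""
    n, k = Z.shape
    Zc = np.column_stack([np.ones(n), Z])
    coef, *_ = np.linalg.lstsq(Zc, x, rcond=None)
    rss = ((x - Zc @ coef) ** 2).sum()    # unrestricted residual sum of squares
    tss = ((x - x.mean()) ** 2).sum()     # restricted model: constant only
    return ((tss - rss) / k) / (rss / (n - k - 1))

F_strong = first_stage_F(Z, x_strong)
F_weak = first_stage_F(Z, x_weak)
print(F_strong, F_weak)
```

A common rule of thumb associated with these tabulations is to start worrying when this F falls below about 10.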

Since different estimators have different properties when instruments are weak, and these properties also depend on the number of instruments used, whether or not an instrument is weak will depend on the estimator used.

We know that if we reject the null hypothesis of weak instruments, and the data are homoskedastic, Two-Stage Least Squares (2SLS) is the most efficient estimator among the consistent ones. With heteroskedastic data but strong instruments, the two-step GMM estimator is efficient. However, when instruments are weak, the sampling distributions of the 2SLS and GMM estimators are badly approximated by the normal distribution. In this case, LIML and Fuller-k estimators are more robust - see Stock et al. (2002). In fact, the Fuller-k estimator is in the class of LIML estimators, and it is the best unbiased estimator up to second order among a broad class of LIML estimators (Rothenberg (1984)). Hence, when instruments are weak, Fuller-k estimates would be more reliable than 2SLS ones.
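To make the relationship concrete: 2SLS, LIML, and Fuller-k are all members of the k-class, differing only in the scalar kappa they plug into the same formula. The sketch below (simulated data; all parameter values are my own assumptions) computes the three estimates side by side:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 500, 5

# Assumed DGP: five instruments with modest first-stage coefficients,
# endogeneity via correlation between u and the first-stage error.
beta = 1.0
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
x = Z @ np.full(k, 0.2) + 0.7 * u + rng.normal(size=n)
y = beta * x + u

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instrument space
M = np.eye(n) - P                        # residual-maker matrix

def k_class(y, x, kappa):
    """k-class estimator: kappa = 0 gives OLS, kappa = 1 gives 2SLS."""
    A = np.eye(n) - kappa * M
    return float((x @ A @ y) / (x @ A @ x))

# LIML kappa: smallest eigenvalue of (W'MW)^{-1}(W'W) with W = [y, x];
# it is always >= 1.
W = np.column_stack([y, x])
kappa_liml = float(np.linalg.eigvals(np.linalg.solve(W.T @ M @ W, W.T @ W)).real.min())
# Fuller(a = 1) shrinks kappa slightly toward OLS, restoring finite moments.
kappa_fuller = kappa_liml - 1.0 / (n - k)

beta_2sls = k_class(y, x, 1.0)
beta_liml = k_class(y, x, kappa_liml)
beta_fuller = k_class(y, x, kappa_fuller)
print(beta_2sls, beta_liml, beta_fuller)
```

The shared formula is why results under weak instruments hinge entirely on the choice of kappa: 2SLS pins it at 1, while LIML and Fuller-k let the data choose it.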

Though Fuller-k estimators are more robust than 2SLS estimators, they are still not fully robust. If one's interest is to make inference about the causal effect of the endogenous variable, some fully robust methods are available, such as the Anderson and Rubin Wald test (Anderson and Rubin (1949)), the Stock-Wright LM statistic (Kleibergen (2002) and Moreira (2001)), and the conditional likelihood ratio (CLR) test developed by Moreira (2003). However, these tests are developed only for the case of one endogenous variable.
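The Anderson-Rubin test is simple to compute by hand: under the null beta = beta0, the structural residual y - x*beta0 should be unrelated to the instruments, and the statistic is just the F-test from regressing that residual on them. A sketch under an assumed DGP of my own, with homoskedastic errors:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k = 400, 4

# Assumed DGP. The test's size is correct however weak the instruments are;
# a non-trivial first stage is used here only so the power is visible.
beta = 1.0
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
x = Z @ np.full(k, 0.5) + 0.7 * u + rng.normal(size=n)
y = beta * x + u

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments

def anderson_rubin(y, x, Z, beta0):
    """AR test of H0: beta = beta0. Under homoskedastic errors the statistic
    is F(k, n-k) under H0, regardless of instrument strength."""
    n, k = Z.shape
    e = y - x * beta0          # structural residual under the null
    ssr_fit = e @ P @ e        # part of e explained by the instruments
    F = (ssr_fit / k) / ((e @ e - ssr_fit) / (n - k))
    return F, stats.f.sf(F, k, n - k)

F_true, p_true = anderson_rubin(y, x, Z, beta0=1.0)    # true value of beta
F_false, p_false = anderson_rubin(y, x, Z, beta0=0.0)  # false value of beta
print(p_true, p_false)
```

Because the null distribution does not depend on first-stage strength, the test keeps its size even when instruments are arbitrarily weak; a confidence set for beta is obtained by inverting the test over a grid of beta0 values.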

Andrews et al. (2008) showed that the CLR test is approximately optimal and dominates the other tests in terms of power. With this procedure, we are able to perform fully robust inference even when instruments are weak.

Summarizing: if you detect that you have weak instruments, IV and GMM are usually bad options. Fuller-k and LIML are more robust, though still not fully robust. So, if you want to test whether the endogenous variable affects the dependent variable, you should use the Anderson and Rubin Wald test, the Stock-Wright LM statistic, or the conditional likelihood ratio test.