July 8, 2012

Tuning parameters sucks

Larry Wasserman explains, in a very didactic way, the problems of choosing a tuning parameter.

However, there are cases where the need for a tuning parameter disappears once you impose some shape constraints! For example, this is the case for monotone density functions, and also the case of log-concave densities!
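To make this concrete, here is a minimal Python sketch of my own of the Grenander estimator for the classical monotone (nonincreasing) density case: the estimate is the left derivative of the least concave majorant of the empirical CDF, and no bandwidth or tuning parameter appears anywhere.

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a nonincreasing density on [0, inf):
    the left derivative of the least concave majorant (LCM) of the
    empirical CDF. Assumes distinct sample points. Returns the sorted
    data and the estimated density at each point."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # ECDF knots: (0, 0), (x_(1), 1/n), ..., (x_(n), 1)
    xs = np.concatenate(([0.0], x))
    ys = np.arange(n + 1) / n
    # Build the upper (concave) hull of the knots with a stack scan
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies on or below the chord from a to i
            if (ys[b] - ys[a]) * (xs[i] - xs[a]) <= (ys[i] - ys[a]) * (xs[b] - xs[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    # The density estimate is the slope of the LCM segment covering each point
    dens = np.empty(n)
    for a, b in zip(hull[:-1], hull[1:]):
        dens[a:b] = (ys[b] - ys[a]) / (xs[b] - xs[a])
    return x, dens
```

The resulting step function is nonincreasing by construction and integrates to one, with no smoothing parameter to pick.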

I quite like these ideas of imposing shape constraints, especially when it comes to economics applications. Economic theory often indirectly imposes some constraint on the curve we want to estimate: individuals being risk averse, for example, is a common restriction on the utility function, which translates directly into the requirement that the nonparametric utility function to be estimated be concave (or quasi-concave). Several other shape constraints appear in economics, so this "extra information" makes our life "tuning parameter free", which is awesome!!!

Problems of how to choose tuning parameters also appear once one is interested in testing procedures. A classical example is how to choose the autocorrelation order, q, to be used in the Box-Pierce test.
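For reference, the Box-Pierce statistic itself is easy to compute once q is fixed; here is a minimal Python sketch of my own (the choice of q is exactly the tuning decision at issue):

```python
import numpy as np
from scipy import stats

def box_pierce(y, q):
    """Box-Pierce portmanteau statistic Q = n * sum_{j=1}^{q} rho_j^2,
    where rho_j is the lag-j sample autocorrelation. Under the null of
    no autocorrelation, Q is asymptotically chi-squared with q degrees
    of freedom."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    rho = np.array([np.sum(yc[j:] * yc[:-j]) / denom for j in range(1, q + 1)])
    Q = n * np.sum(rho ** 2)
    pval = stats.chi2.sf(Q, df=q)
    return Q, pval
```

Different values of q can lead to different conclusions, which is what makes a data-driven choice so attractive.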

For the case of raw data, Escanciano and Lobato (2009) propose a very nice way of making the choice of q data-driven. This idea was later extended to choosing the number of lags for testing for lack of autocorrelation in a VAR in the work of Escanciano, Lobato and Zhu (2012). I have also applied the same techniques to develop an automatic test for lack of autocorrelation in a time series of counts.

And the Box-Pierce test is not the only case where we have to choose a tuning parameter: when you want to check the fit of a regression function, one procedure for doing so is to compare the (semi-)parametric regression function with a nonparametric one. However, one needs to choose a bandwidth in order to estimate the nonparametric regression...

In order to avoid choosing these tuning parameters, Stute (1997) proposed using the integrated regression. This is indeed a nice and useful alternative which avoids bandwidth selection.
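A minimal sketch (my own Python, not Stute's code) of a Cramér-von Mises statistic built on the integrated regression, i.e. the marked empirical process of the residuals; note that no bandwidth appears, and in practice critical values are obtained by a wild bootstrap (not shown here):

```python
import numpy as np

def stute_cvm(x, resid):
    """Cramer-von Mises statistic based on the integrated regression
    (marked empirical) process R_n(t) = n^{-1/2} * sum_i resid_i * 1{x_i <= t},
    integrated against the empirical distribution of x. Large values
    suggest the parametric regression is misspecified."""
    resid = np.asarray(resid, dtype=float)
    order = np.argsort(x)
    Rn = np.cumsum(resid[order]) / np.sqrt(len(resid))
    return np.mean(Rn ** 2)
```

The residuals come from whatever parametric fit you want to check; under correct specification the process behaves like a centered bridge, while misspecification adds a drift that blows the statistic up.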

Similar ideas are pursued by Bierens and Wang (2012), for example.

So, needless to say, I very much share Prof. Wasserman's concerns about the pain of choosing tuning parameters.

More work is needed in this area, I would say. I hope to be able to make some contribution soon!

April 1, 2012

R.I.P Halbert White

He is well known in the econometrics field, especially for heteroskedasticity-robust standard errors (though the first paper to deal with this problem goes back to Eicker (1967)) and the tests to detect heteroskedasticity (see White (1980)).

It is really worth taking a look at his work.

In 2010, White published a paper on duration analysis, an area in which I have a particular interest.

Without any doubt, the world lost a great econometrician. Rest in Peace.

February 5, 2012

Shleifer's lessons about transition from communism

In summary, he provides a top-7 list of lessons from the transition in Eastern Europe and Russia:

  1. Reformers should not count on an immediate return to growth; economic transformation takes time.
  2. The decline after the reform is not permanent, so do not worry: capitalism really works.
  3. The decline in GDP has not led to populism, but has sometimes led to a new political elite. The lesson is that the reformer should fear the capture of politics by the new elite, not by populism.
  4. Economists and reformers do not really know how to sequence a reform, or why one choice is better than another. The lesson is not to over-plan the move to the market, and not to delay it in the hope of having a better reform.
  5. Economists overemphasize the importance of incentives, without taking into account that the people in power might change. The lesson is "you cannot teach an old dog new tricks, even with incentives".
  6. Do not overestimate the long-run consequences of a crisis: they do not last that long.
  7. Countries might move to democracy, but not in the same direct way as they move toward capitalism. Political evolution is harder to predict than economic evolution.
I strongly recommend the full article.

February 3, 2012

Weak Instruments - A Brief Summary

Recently, I have been reading about weak instruments and how to deal with them. This is mainly because of my involvement with the preparation for the Econometric Game.

Since I have read a few papers on the topic, I think I can give a brief summary here. Someone might find it useful. If you would like to have it in pdf, just click here.

Let's start!

One of the main goals in econometrics is to identify and make inference about the causal effect of one variable on another. If the covariates are exogenous, this analysis can be carried out using ordinary least squares (OLS) estimation. However, if we expect that unobserved factors affect both some covariates and the dependent variable, OLS estimators become inconsistent due to the endogeneity problem. Hence, in order to perform causal effect analysis, we should use other methods.

Since endogeneity is a potential problem, we can use other methods to identify the causal effect, such as GMM, IV, Limited Information Maximum Likelihood (LIML), and the Fuller-k estimator. A necessary condition to use such methods is that we have an instrument such that, once we control for the other covariates, it is uncorrelated with the error term of the structural equation, i.e. the instrument must be exogenous.

Without such a condition we cannot identify the causal effect when we have an endogeneity problem. Nonetheless, instrument exogeneity is not sufficient to identify the causal effect: the instrument must also be relevant, i.e. correlated with the endogenous variable. Instruments that fulfill both conditions are called valid. In some cases, though, the instrument is only weakly correlated with the endogenous variable, raising the problem of weak instruments. Although we can still identify the causal effects, in the presence of weak instruments inference can be misleading. This is because the sampling distributions of IV and GMM statistics are, in the presence of weak instruments, in general non-normal; asymptotic theory then provides a poor guide to their behavior, and standard GMM and IV estimates and standard errors become unreliable.
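To see the problem concretely, here is a small Monte Carlo sketch of my own (an illustrative design, not taken from any of the papers cited): one endogenous regressor x and one instrument z, whose first-stage coefficient pi controls instrument strength.

```python
import numpy as np

def iv_mc(pi, n=200, reps=2000, beta=1.0, seed=0):
    """Monte Carlo for a single-instrument IV design. The first stage is
    x = pi*z + v, with corr(u, v) != 0 so that x is endogenous; pi sets
    the instrument strength."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for r in range(reps):
        z = rng.normal(size=n)
        u = rng.normal(size=n)                     # structural error
        v = 0.8 * u + 0.6 * rng.normal(size=n)     # first-stage error
        x = pi * z + v                             # endogenous regressor
        y = beta * x + u
        est[r] = np.sum(z * y) / np.sum(z * x)     # simple IV estimator
    return est

strong = iv_mc(pi=1.0)    # strong instrument: estimates centered near beta = 1
weak = iv_mc(pi=0.05)     # weak instrument: estimates pulled toward the OLS bias
```

With the weak instrument the sampling distribution is badly centered and heavy-tailed, which is exactly why normal-approximation inference misleads.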

A practical question that arises is how small the correlation should be for the instrument to be considered weak. Following Stock and Yogo (2005), whether a set of instruments is weak or strong depends on the inferential objective of the researcher. They offer two alternative definitions of weak instruments. The first one is that the bias of the estimator, relative to the bias of OLS, should not exceed a given threshold, say 10%.

The second is that instruments are weak if the alpha-level Wald test has an actual size that could exceed a given threshold, say 15%. In the just-identified case, with the GMM, IV or LIML method, we cannot use the weak-instrument definition based on relative bias, since the estimators do not have a finite first moment. This is not the case for the Fuller-k estimator, though.

Given the definitions of weak instruments, we would like to test the null hypothesis that the instruments are weak against the alternative that they are strong. Stock and Yogo (2005), under the assumption of iid data, propose to use the Cragg and Donald (1993) statistic, which, in the case of a single endogenous regressor, is the first-stage F-statistic for the hypothesis that the instrument coefficients are zero. Kleibergen and Paap (2006) propose a heteroskedasticity-robust statistic. In any case, the distributions of such statistics are not pivotal, and we have to rely on simulations to tabulate them.

Stock and Yogo (2005) simulated such distributions in the iid context and report the critical values. The case of heteroskedastic residuals has not been tabulated, since different kinds of heteroskedasticity would lead to different distributions. In this case, it is standard practice to compare the Kleibergen and Paap (2006) statistic with the critical values computed by Stock and Yogo (2005).
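In the single-endogenous-regressor, homoskedastic case, the statistic in question is just the first-stage F; a minimal Python sketch of my own:

```python
import numpy as np

def first_stage_F(x, Z):
    """First-stage F statistic for H0: all instrument coefficients are zero
    in the regression of the single endogenous regressor x on a constant
    and the instruments Z. With one endogenous regressor and homoskedastic
    errors this coincides with the Cragg-Donald statistic."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    Z1 = np.column_stack([np.ones(n), Z])      # unrestricted: constant + instruments
    k = Z1.shape[1] - 1                        # number of instruments
    b, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    ssr_u = np.sum((x - Z1 @ b) ** 2)
    ssr_r = np.sum((x - x.mean()) ** 2)        # restricted: constant only
    return ((ssr_r - ssr_u) / k) / (ssr_u / (n - k - 1))
```

The computed value is then compared with the Stock-Yogo critical values (the often-quoted rule of thumb being roughly F > 10 for a single instrument).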

Since different estimators have different properties when instruments are weak, and depending on the number of instruments used, whether or not an instrument is weak will depend on the estimator used.

We know that if we reject the null hypothesis of weak instruments, and given that the data are homoskedastic, Two-Stage Least Squares (2SLS) is the most efficient estimator among the consistent ones. In the case of heteroskedastic data, but strong instruments, the two-step GMM estimator is efficient. However, when instruments are weak, the sampling distributions of the 2SLS and GMM estimators are badly approximated by the normal distribution. In this case, the LIML and Fuller-k estimators are more robust - see Stock et al. (2002). In fact, the Fuller-k estimator is in the class of LIML estimators, but it is the best unbiased estimator up to second order among a broad class of LIML estimators (Rothenberg (1984)). Hence, when instruments are weak, Fuller-k estimators are more reliable than 2SLS ones.

Though Fuller-k estimators are more robust than 2SLS estimators, they are still not fully robust. If one's interest is to make inference about the causal effect of the endogenous variable, some fully robust methods are available, such as the Anderson-Rubin Wald test (Anderson and Rubin (1949)), the Stock-Wright LM statistic (Kleibergen (2002) and Moreira (2001)), and the conditional likelihood ratio test developed by Moreira (2003). However, these tests are developed only for the case of one endogenous variable.
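Of these, the Anderson-Rubin test is the simplest to sketch: to test H0: beta = beta0, regress y - beta0*x on the instruments; under the null the instruments should have no explanatory power, whatever their strength. A minimal Python sketch of my own (one endogenous regressor, homoskedastic F version):

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = beta*x + u, with one
    endogenous regressor x and instruments Z. Regress e = y - beta0*x on a
    constant and Z and test the joint significance of Z; the test remains
    valid even when the instruments are weak."""
    e = np.asarray(y, dtype=float) - beta0 * np.asarray(x, dtype=float)
    n = len(e)
    Z1 = np.column_stack([np.ones(n), Z])
    k = Z1.shape[1] - 1
    b, *_ = np.linalg.lstsq(Z1, e, rcond=None)
    ssr_u = np.sum((e - Z1 @ b) ** 2)
    ssr_r = np.sum((e - e.mean()) ** 2)
    F = ((ssr_r - ssr_u) / k) / (ssr_u / (n - k - 1))
    return F, stats.f.sf(F, k, n - k - 1)
```

Inverting the test (collecting the beta0 values that are not rejected) gives a weak-instrument-robust confidence set for the causal effect.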

Andrews et al. (2008) showed that the CLR test is approximately optimal and dominates the other tests in terms of power. With this procedure, we are able to perform fully robust inference, even when instruments are weak.

Summarizing: if you detect that you have a weak instrument, IV and GMM are usually bad options. Fuller-k and LIML are more robust, though still not fully robust. If you want to test whether the endogenous variable affects the dependent variable, you should use the Anderson-Rubin Wald test, the Stock-Wright LM statistic, or the conditional likelihood ratio test.

January 10, 2012

Football statistical analysis

I think that this kind of analysis helps teams a lot to build up a better squad, and to check whether the wages they pay are somehow worth it. A kind of "Moneyball" analysis. Quite cool!!

I'd like to see this kind of analysis for Brazilian teams and also for Spanish and Italian ones.

If you are interested in more analyses like this one, it is more than worth taking a deeper look at their blog, Soccer by the Numbers.

December 2, 2011

Shall I use AIC or BIC?

A common question we usually face in applied econometrics is which model is the "best". Several criteria have been suggested, such as Akaike's information criterion (AIC) and the Bayesian information criterion (BIC). The smaller the value of these criteria, the better the model. Nonetheless, quite often you select different models using BIC or AIC. So a question that stands out is: which criterion is better?

In the model selection literature, it is known that BIC is consistent in selection when the "true" model is finite dimensional, and that AIC is asymptotically efficient when the "true" model is infinite dimensional. The choice would then be easy if we knew the nature of the "true" model.
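For linear Gaussian models, both criteria reduce to a penalized log-likelihood; here is a minimal Python sketch of my own (up to additive constants that cancel when comparing models), applied to a toy polynomial-order selection problem:

```python
import numpy as np

def aic_bic(y, X):
    """AIC and BIC for the Gaussian linear model y = X b + e, up to
    additive constants that do not affect model comparison."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = np.sum((y - X @ b) ** 2)
    aic = n * np.log(ssr / n) + 2 * k
    bic = n * np.log(ssr / n) + k * np.log(n)
    return aic, bic

# Toy example: quadratic truth, candidate polynomial orders 1..5
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 300)
y = 1 + 2 * x - 3 * x ** 2 + rng.normal(scale=0.5, size=300)
scores = {p: aic_bic(y, np.vander(x, p + 1, increasing=True))
          for p in range(1, 6)}
```

Note that for n >= 8 the BIC penalty of log(n) per parameter exceeds AIC's 2, which is why BIC tends to pick more parsimonious models.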

However, in practice, we have no idea what kind of animal we are dealing with, finite or infinite dimensional, and we are still unable to decide which criterion to use.

And here is where a new paper by Liu and Yang, "Parametric or Nonparametric? A Parametricness Index for Model Selection", published in The Annals of Statistics in 2011, comes to help us.

Liu and Yang (2011) develop a measure, called the parametricness index, to check whether the model selected by a consistent procedure can be treated as the "true" model.

In order to understand better what this parametricness index in fact is, let's cite Liu and Yang (2011):

"While there are many different performance measures that we can use to assess if one model stands out, following our results on distinguishing between parametric and nonparametric scenarios, we focus on an estimation accuracy measure. We call it parametricness index (PI), which is relative to the list of candidate models and the sample size. Our theoretical results show that this index converges to infinity for a parametric scenario and converges to 1 for a typical nonparametric scenario. Our suggestion is that when the index is significantly larger than 1, we can treat the selected model as a stably standing out model from the estimation perspective. Otherwise, the selected model is just among a few or more equally well-performing candidates. We call the former case practically parametric and the latter practically nonparametric".

So, when the PI is close to 1, several models share the same properties and it is hard to pick one of them; hence we cannot treat the selected model as the "true" one. Liu and Yang (2011) call this situation "practically nonparametric". When the PI is significantly larger than 1, the selected model is expected to perform better than the others (at the given sample size), and we can treat it as the "true" model; this situation is called "practically parametric". Their numerical exercises suggest that whether the true model is parametric or nonparametric does not matter (in finite samples); what matters is whether we are in a "practically parametric" or "practically nonparametric" framework!

Using this idea, Liu and Yang (2011) propose that whenever the PI is significantly bigger than 1 (they use 1.2 as a suggestion), one should use BIC. Otherwise, one should use AIC.

Ahá!! That's what we were looking for! A data-driven way to choose between AIC and BIC!

Now the work that has to be done is to code the suggested procedure, so we can use it often! If anyone knows whether the code is available somewhere, please let me know!

Obs: The criterion is developed for linear Gaussian models only, but the results seem promising.

November 9, 2011

Train - Discrete Choice Methods with Simulation

I've just found out that Kenneth Train has put his book Discrete Choice Methods with Simulation on his webpage.

If you are interested, just go for it here.

Actually, Professor Train has made all his books available for download!

It is worth noticing these things! I hope someone finds them as useful as I did.

October 24, 2011

Joke Papers - Calibration

It has been a long time since I last wrote something here. I guess this is mainly because I have been quite busy with the thesis, and also because I had not found anything to post that is better than what I've been reading. Sorry for that!

Now, let's get to what truly matters.

Last week a friend of mine sent me a hilarious joke paper: Calibrating the world and the world of calibration.

In the paper, they make fun of calibration as an "evolution" of econometrics, and joke about the ability to explain everything with it. This is a must read!! Let me copy a paragraph just to convince you to read it:

Indeed, today it is difficult to imagine what economists did in the dark ages before the calibration method gained widespread acceptance. A theory put forward by White and Noise (1999) is that some economists were estimating certain economic relationships using matrix algebra and other voodoo rituals instead of simply making them up. Econometricians, as White and Noise call this ancient tribe, also practiced a bizarre virility ritual of exposing their theories to the risk of “falsification”. According to their belief system, econometricians could only avoid falsification by attaching a number of “stars” indicating “significance” to their dearest numbers, and by using 10 point fonts in overhead presentation slides; the latter technique was thought to effectively shield the numbers against the “evil eye” of fellow tribesmen.

This joke paper is along the same lines as the Political Econometrics Manifest (in Portuguese), which I quite like!

I know, this is the kind of grad student joke that nobody else finds funny, but I guess you all know me...

August 30, 2011

Real life Numb3rs

Those who have followed this blog for a while know that I am a big fan of the TV series Numb3rs, in which a mathematician solves crimes using mathematical models. Unfortunately, the show ended in 2010.

What caught my attention today was a post on the Bayesian Heresy blog. Here I link the original:

In July the Santa Cruz Police Department began experimenting with an interesting bit of software developed by scientists at Santa Clara University. The researchers behind the software are like an intellectual “Oceans Eleven” team of specialists: two mathematicians, an anthropologist and a criminologist. They’ve combined their cerebral forces to come up with a mathematical model that takes crime data from the past to forecast crimes in the future. The basic math is similar to that used by seismologists to predict aftershocks following an earthquake (also a handy bit of software in southern California).

This is just amazing!! Totally Numb3rs! What is even better is that the project seems to be working:

Even more impressive, compared to July 2010 burglaries, the number of July 2011 burglaries are down 27 percent. Whether or not that trend holds remains to be seen, but so far it appears that being in the wrong place at the right time works.

Really cool, no?

For those who are interested in the math behind the Numb3rs episodes, I recommend this book and also this webpage!

I am truly looking forward to seeing more "real life Numb3rs" cases! I hope that in the future I can contribute to some.

August 10, 2011

Do you think all Rating Agencies are equal?

Apparently, they are not!

An S&P rating seeks to measure only the probability of default. Nothing else matters — not the time that the issuer is likely to remain in default, not the expected way in which the default will be resolved. Most importantly, S&P simply doesn’t care what the recovery value is — the amount of money that investors end up with after the issuer has defaulted.

Moody’s, by contrast, is interested not in default probability per se, but rather expected losses. Default probability is part of the total expected loss — but then you have to also take into account what’s likely to happen if and when a default occurs.

This is something interesting to notice!

Also, from the same report, the author claims:

(...)country which has been downgraded to AA is a worse bet than a country that has been upgraded to AA: the former is much more likely to get another downgrade than it is an upgrade, while the latter is on an upgrade path and is more likely to get another upgrade than a downgrade.

Since I am a bit skeptical about this kind of claim, I searched for a paper which calculates the transition matrix for sovereign credit ratings. I found Hu et al. (2002), "The estimation of transition matrices for sovereign credit ratings", Journal of Banking & Finance.

Even though the time period covered in the paper is 1981-1998, I tried to calculate the transition probabilities.

From Table 3, using the S&P transition matrix mentioned in the paper (these estimates are based on relative frequencies), I calculated that the probability that a country which has been downgraded to AA suffers another downgrade, to A, in the next period is 1.1%. The probability that this country goes back to AAA, instead of going to A, is 0.6%.

If you switch to the ordered probit model estimates, the probabilities become 2.6% and 7.7%, inverting the order mentioned in the news above! The conclusion does not change using the other tables: whenever you use the S&P methodology, the probability that a country which has been downgraded to AA suffers another downgrade is higher than the probability that it goes back to AAA; if you use any of the authors' estimators, the direction changes.

One question that naturally arises is whether the results hold with more recent data. Does anyone have any suggestions about it?

Another question is whether the model is correctly specified (no autocorrelation of the residuals and no heteroskedasticity: misspecification would lead to INCONSISTENCY!). A robust analysis would be interesting to see.

The third important question is whether the frequency of changes is a good estimator. It is a robust estimator, I know, but I think you can do better using the extra information in the covariates.
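To make the relative-frequency estimator concrete, here is a toy Python sketch of my own (the rating histories below are entirely hypothetical, not the paper's data): count observed one-step rating moves and divide each row by its total.

```python
import numpy as np

# Relative-frequency estimator of a rating transition matrix:
# P_hat[i, j] = n_ij / n_i, where n_ij counts observed moves from
# rating i to rating j and n_i = sum_j n_ij.
ratings = ["AAA", "AA", "A"]
idx = {r: i for i, r in enumerate(ratings)}

def transition_matrix(histories):
    """histories: list of per-country rating sequences (toy data).
    Assumes every rating has at least one observed transition."""
    K = len(ratings)
    counts = np.zeros((K, K))
    for h in histories:
        for a, b in zip(h[:-1], h[1:]):
            counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

toy = [["AAA", "AAA", "AA", "A"], ["AA", "AA", "AAA"], ["A", "AA", "AA"]]
P = transition_matrix(toy)
# each row of P sums to one; P[idx["AA"], idx["A"]] is the one-step
# AA -> A probability
```

A model-based estimator (like the paper's ordered probit) can instead condition on covariates, which is exactly the extra information the frequency estimator throws away.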

Hence, the claim mentioned in the news is not that obvious, and there is still room for research on it (what about a semiparametric estimator, or even a nonparametric one, taking the covariates into account?).

Back

After a long time without posting, I am back!

I wanted to say that I should start posting more frequently now. However, many of my posts will be in English, since I think that way I can reach a larger audience, besides practicing my writing!

I guess this will not be a problem for my (2 or 3) readers.

Best wishes to all!