To Combine is Divine

Even as researchers, we are human. Not even machine learning and artificial intelligence can change that. After all, those things are products of our imagination. Well, at least until the machines rise up and we're left waiting for John Connor to save us.

In the meantime, as humans, we make mistakes. We know this. We appreciate this. We do everything we can to minimize mistakes in our research. But, mistakes happen.



And by mistakes, I am not referring to coding errors, mistakes while merging data sets, spilling coffee on your keyboard or notes, or the countless other ways one can introduce errors into research. Let's not even get started on what happens if research is performed in Excel.


No, the mistakes I am thinking about today -- thanks to my lecture this morning -- are more fundamental: model mis-specification. As researchers, we all have had it ingrained into our psyche that the true model is never known. Even nonparametric models only get us so far. Yes, with nonparametric models the functional form is left unspecified. But, we still need to choose the covariates and make assumptions about their exogeneity, for example.

In the face of this uncertainty, we must and do proceed. We enter the darkness, in search of the light.



Nonetheless, we know we make mistakes. We know we estimate mis-specified models. And, so we must forgive.

While forgiveness is divine, there is more we could be doing as researchers to address mis-specification than is currently done in applied work. And, therefore, perhaps we should not forgive so quickly when researchers let useful tools fall by the wayside.

At this point, you might be thinking ... "But, but, but I conduct all the specification tests I can in my research?!"
That's true; you are good. Damn good. You're reading this blog after all! But, could you do even better?

Let's think about how that specification testing typically gets implemented. Let's consider two very common and very similar instances where specification tests are frequently used in empirical (micro) research.

1.  Exogeneity vs. Endogeneity
2.  Random Effects (RE) vs. Fixed Effects (FE)

We had a little back and forth on the first one on Twitter the other day, thanks to Khoa Vu.

In this first setting, leaving aside the issue of whether proposed instruments are valid, we specify a model, estimate it by Ordinary Least Squares (OLS) and Instrumental Variables (IV), perform a Hausman test, and then on this basis choose either OLS or IV as our preferred estimator. Similarly, in the second setting, we specify a panel model, estimate it using a RE or FE estimator, perform a Hausman test, and then on this basis choose either RE or FE as our preferred estimator.


There are two potential problems with these approaches. First, they introduce the possibility of pre-test bias. For reasons unknown to me, concerns over pre-test bias are taken seriously in time series, but are virtually non-existent in empirical micro. The notion of pre-test bias dates to Bancroft (1944).

The idea is simple. When we use a specification test to decide between two estimators, we are really inventing a third estimator. This third estimator can be expressed as

θ-hat(PTE) = I(Fail to Reject) * θ-hat(1) + I(Reject) * θ-hat(2)

where θ-hat(1) and θ-hat(2) are the two competing estimators (say, OLS and IV) and I() is an indicator function. I(Reject) equals one if our specification test rejects the null hypothesis that estimator 1 is preferred; I(Fail to Reject) equals one if I(Reject) equals zero.

Bancroft points out that θ-hat(PTE) is biased. This occurs because the estimator is a weighted average of our two estimators -- where the weights will be 0 or 1 -- but the weights are themselves random variables. Recall, if we have two random variables, X and Y, then E[XY] does not generally equal E[X]E[Y]. So, even if OLS is unbiased under exogeneity, the estimator that chooses OLS after a specification test is not unbiased.
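
If you want to see the bias with your own eyes, here is a minimal Monte Carlo sketch in Python. Everything in it -- the DGP, the degree of endogeneity, the homoskedastic Hausman-type statistic -- is an illustrative choice of mine, not something taken from Bancroft or the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 200, 5000, 1.0
rho = 0.5            # degree of endogeneity; OLS is biased when rho != 0
crit = 3.84          # 5% critical value of a chi-square(1)

b_pretest = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal(n)                    # instrument
    v = rng.standard_normal(n)                    # common shock driving endogeneity
    x = 0.5 * z + v + rng.standard_normal(n)
    y = beta * x + rho * v + rng.standard_normal(n)

    b_ols = (x @ y) / (x @ x)                     # OLS slope (no intercept)
    b_iv = (z @ y) / (z @ x)                      # simple IV slope

    # Hausman-type statistic: (b_ols - b_iv)^2 / (Var(b_iv) - Var(b_ols))
    s2 = np.mean((y - b_iv * x) ** 2)
    v_ols = s2 / (x @ x)
    v_iv = s2 * (z @ z) / (z @ x) ** 2
    H = (b_ols - b_iv) ** 2 / max(v_iv - v_ols, 1e-12)

    # The "third" estimator: keep OLS if we fail to reject, switch to IV if we reject
    b_pretest[r] = b_ols if H < crit else b_iv

print("bias of the pre-test estimator:", b_pretest.mean() - beta)
```

In this design OLS is biased and the Hausman test has only moderate power, so the pre-test estimator inherits a chunk of the OLS bias even though IV itself is consistent.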



The second potential problem is not a problem per se. It's more of an ... opportunity. An opportunity to do better.


There is an opportunity to devise a new estimator that avoids pre-test bias and outperforms the approach of pre-testing and then choosing one of the two estimators. The idea builds on early work by James & Stein (1961). Current applications of this idea are referred to as Stein-like (SL) estimators.

Stein-like estimators can be written as

θ-hat(SL) = w*θ-hat(1) + (1-w)*θ-hat(2)

where w∈[0,1] is the weight given to estimator 1. The SL estimator differs from the pre-test estimator in two significant ways. First, the weights are not restricted to 0 or 1, but (typically) lie strictly between 0 and 1. Second, the weight, w, is (often) chosen a priori and thus is fixed, not random.

The beauty of the SL estimator, and the reason for its recent emergence in some corners of the econometric landscape, is that -- if we choose the weight well -- the SL estimator can have a lower mean-squared error (MSE) than the estimators discussed above.
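
To make that concrete, here is a minimal sketch in the spirit of Hansen (2017), with the weight set to w = min(1, tau/H), where H is the Hausman statistic and tau is a fixed threshold. To be clear: the DGP, the value of tau, and this exact weight formula are my own illustrative assumptions; Hansen's actual estimator (and his R code) differ in the details.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta, rho, tau = 200, 5000, 1.0, 0.5, 1.0

draws_iv = np.empty(reps)
draws_sl = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal(n)
    v = rng.standard_normal(n)
    x = 0.5 * z + v + rng.standard_normal(n)
    y = beta * x + rho * v + rng.standard_normal(n)

    b_ols = (x @ y) / (x @ x)
    b_iv = (z @ y) / (z @ x)

    # Same homoskedastic Hausman-type statistic as in the pre-test sketch
    s2 = np.mean((y - b_iv * x) ** 2)
    H = (b_ols - b_iv) ** 2 / max(s2 * (z @ z) / (z @ x) ** 2 - s2 / (x @ x), 1e-12)

    w = tau / H if H > tau else 1.0    # weight in [0,1]: leans on OLS when H is small
    draws_iv[r] = b_iv
    draws_sl[r] = w * b_ols + (1 - w) * b_iv

print("MSE of IV alone:  ", np.mean((draws_iv - beta) ** 2))
print("MSE of Stein-like:", np.mean((draws_sl - beta) ** 2))
```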


Surely I can't be serious? Oh, but I am!

Judge & Mittelhammer (2004) discuss this general problem. It's still a pretty theoretical paper, but there are simulations and an application. Judge & Mittelhammer (2012) and Hansen (2017) discuss the idea specifically in the context of a weighted average of OLS and IV estimators. Whereas Judge & Mittelhammer (2012) consider weights chosen a priori and thus fixed, Hansen (2017) uses weights that are a function of the Hausman test statistic. Hansen provides R code on his website.


Wang et al. (2016) and Huang (2018) discuss the idea specifically in the context of a weighted average of RE and FE. Wang et al. (2016) allow the weights to depend on the Hausman test statistic. Huang (2018) considers several methods of choosing the weights, including the version proposed in Wang et al. (2016) as well as an optimal choice of weights chosen a priori and thus fixed.
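
For flavor, here is a minimal sketch of the panel version: a within (FE) estimator, a quasi-demeaned (RE) estimator with variance components backed out from the within and between regressions, and a Hausman-statistic-based weight. Again, the single-regressor/no-intercept DGP and the fixed threshold in the weight are my illustrative assumptions, not the estimators actually studied by Wang et al. (2016) or Huang (2018).

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, beta = 200, 5, 1.0

u = rng.standard_normal(N)                           # individual effects
x = 0.5 * u[:, None] + rng.standard_normal((N, T))   # x loads on u => RE inconsistent
y = beta * x + u[:, None] + rng.standard_normal((N, T))

# Fixed effects (within) estimator
xw = x - x.mean(axis=1, keepdims=True)
yw = y - y.mean(axis=1, keepdims=True)
b_fe = (xw * yw).sum() / (xw ** 2).sum()
s2_e = ((yw - b_fe * xw) ** 2).sum() / (N * (T - 1) - 1)  # idiosyncratic variance
v_fe = s2_e / (xw ** 2).sum()

# Random effects: quasi-demeaning with crude variance components
xb, yb = x.mean(axis=1), y.mean(axis=1)
b_b = (xb @ yb) / (xb @ xb)                          # between regression
s2_1 = T * ((yb - b_b * xb) ** 2).sum() / (N - 1)    # estimates sigma_e^2 + T*sigma_u^2
theta = 1.0 - np.sqrt(s2_e / max(s2_1, s2_e))
xr = x - theta * x.mean(axis=1, keepdims=True)
yr = y - theta * y.mean(axis=1, keepdims=True)
b_re = (xr * yr).sum() / (xr ** 2).sum()
v_re = s2_e / (xr ** 2).sum()                        # GLS variance under the RE null

# Hausman statistic and a Stein-like weight on the efficient (RE) estimator
H = (b_fe - b_re) ** 2 / max(v_fe - v_re, 1e-12)
w = 1.0 / H if H > 1.0 else 1.0                      # fixed threshold tau = 1
b_sl = w * b_re + (1 - w) * b_fe

print(f"FE = {b_fe:.3f}   RE = {b_re:.3f}   combined = {b_sl:.3f}")
```

Because x here loads on the individual effects, RE is badly biased, H tends to be large, and the combined estimator leans heavily on FE -- exactly the behavior you want from the weight.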

In an even more recent paper, Huang et al. (2019) consider the case of an SL estimator where the two individual estimators are FE and Pesaran's Common Correlated Effects Pooled estimator.


Yes, we never know what the true model is. Yes, most of us do many things to convince ourselves and our readers that we have something close enough. But, no, we don't do everything we could be doing. Applied researchers should heed this new and growing literature imploring us to unite our estimators, instead of pitting them against one another!

While to err is human, to combine is divine!


UPDATE (4.8.2020):

Kim & White (2001) provide another application, considering combinations of OLS and the Least Absolute Deviations (LAD) estimator. In an application to stock returns, they show the combined estimator's potential to deliver superior forecasts under both MSE and mean absolute error (MAE) criteria.
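
For a taste of that setting, here is a minimal sketch combining OLS and LAD under heavy-tailed errors, using a fixed 50/50 weight purely for illustration -- Kim & White derive James-Stein-type weights, which is the whole point of their paper. One convenient fact the sketch exploits: for a single regressor with no intercept, the LAD slope is a weighted median of y_i/x_i with weights |x_i|.

```python
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    return values[np.searchsorted(cum, 0.5 * cum[-1])]

rng = np.random.default_rng(3)
n, reps, beta = 100, 5000, 1.0

b_ols = np.empty(reps)
b_lad = np.empty(reps)
for r in range(reps):
    x = rng.standard_normal(n)
    y = beta * x + rng.standard_t(df=3, size=n)      # heavy-tailed errors
    b_ols[r] = (x @ y) / (x @ x)
    b_lad[r] = weighted_median(y / x, np.abs(x))     # LAD slope, no intercept

b_comb = 0.5 * b_ols + 0.5 * b_lad                   # fixed equal weights
for name, b in [("OLS", b_ols), ("LAD", b_lad), ("combined", b_comb)]:
    print(f"MSE of {name}: {np.mean((b - beta) ** 2):.4f}")
```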

References

Bancroft, T.A. (1944), "On Biases in Estimation Due to the Use of Preliminary Tests of Significance," Annals of Mathematical Statistics, 15(2), 190-204

Hansen, B.E. (2017), "Stein-like 2SLS Estimator," Econometric Reviews, 36(6-9), 840-852

Huang, B. (2018), "Combined Fixed and Random Effects Estimators," Communications in Statistics - Simulation and Computation

Huang, B., T.-H. Lee, and A. Ullah (2019), "Stein-like Shrinkage Estimation of Panel Data Models with Common Correlated Effects," Advances in Econometrics, 40

James, W. and C. Stein (1961), "Estimation with Quadratic Loss," Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. I, 361-379

Judge, G.G. and R.C. Mittelhammer (2004), "A Semiparametric Basis for Combining Estimation Problems under Quadratic Loss," Journal of the American Statistical Association, 99(466), 479-487

Judge, G.G. and R.C. Mittelhammer (2012), "A Risk Superior Semiparametric Estimator for Overidentified Linear Models," Advances in Econometrics, 30

Kim, T.-H. and H. White (2001), "James-Stein-Type Estimators in Large Samples With Application to the Least Absolute Deviations Estimator," Journal of the American Statistical Association, 96(454), 697-705

Wang, Y., Y. Zhang, and Q. Zhou (2016), "A Stein-like Estimator for Linear Panel Data Models," Economics Letters, 141, 156-161




