One of the three main criteria policy analysts use to evaluate policies is effectiveness. To assess effectiveness, we use estimates from academic research to determine the likely effects of a policy change.
When we are looking for estimates to use in our analyses, we often give priority to research that uses sound empirical methods. For instance, we prefer research based on randomized controlled trials or difference-in-differences designs over purely observational studies. These methods let us better determine whether a policy had an actual effect, rather than attributing to the policy a change in the population that some other variable caused.
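To make the difference-in-differences logic concrete, here is a minimal sketch on synthetic data. Everything in it is invented for illustration (the group sizes, the common time trend, the +1.5 policy effect); real applications require far more care about trends and selection.

```python
import numpy as np

# Synthetic example (all numbers invented): outcomes for a treated
# group and a control group, measured before and after a policy change.
rng = np.random.default_rng(0)
n = 500

# Both groups share a common time trend (+2.0). The treated group
# starts from a different baseline and, after the change, also
# receives a true policy effect of +1.5.
control_pre  = rng.normal(10.0, 2.0, n)
control_post = rng.normal(10.0 + 2.0, 2.0, n)            # trend only
treated_pre  = rng.normal(11.0, 2.0, n)
treated_post = rng.normal(11.0 + 2.0 + 1.5, 2.0, n)      # trend + effect

# Difference-in-differences: subtract the control group's change from
# the treated group's change, netting out the shared time trend.
did = (treated_post.mean() - treated_pre.mean()) \
    - (control_post.mean() - control_pre.mean())
print(f"DiD estimate of the policy effect: {did:.2f}")   # roughly 1.5
```

The identifying assumption, true here by construction, is that the two groups would have followed parallel trends had the policy never happened.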
These methods broadly represent what is known in economics as the “credibility revolution.”
In short, the credibility revolution refers to the rapid expansion of empirical econometric methods over recent decades. Gone are the days of relying solely on theoretical models to estimate policy impacts: we now have the tools to analyze real data and apply the results to problems in economics and public policy.
However, as beneficial as the credibility revolution has been, some argue that we have gone too far and begun to overemphasize the results of studies that are never replicated. Kevin Lang, an economist at Boston University, argues this point in a new working paper.
The key result Lang reports is that, by his estimate, 41% of rejected null hypotheses in the economics literature are false rejections. In other words, roughly two in five statistically significant findings in economics could be incorrect.
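This is not how Lang arrives at his estimate, but a standard back-of-envelope calculation shows how a false-rejection share in that range can emerge from ordinary significance testing alone. Apart from the conventional 5% significance threshold, every input below is an assumption chosen for illustration, not a figure from the paper.

```python
# Back-of-envelope share of false rejections among significant results.
# Illustrative assumptions only; this is not Lang's estimation method.
alpha = 0.05     # conventional significance threshold
power = 0.50     # assumed average power against real effects
p_real = 0.125   # assumed share of tested hypotheses with real effects

false_rejections = alpha * (1 - p_real)  # true nulls wrongly rejected
true_rejections  = power * p_real        # real effects correctly detected

share_false = false_rejections / (false_rejections + true_rejections)
print(f"Share of rejections that are false: {share_false:.0%}")  # 41%
```

Under these assumed inputs, the share works out to about 41%. The broader point is that modest statistical power combined with a low base rate of true effects can produce a large false-rejection share without any misconduct at all.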
This is an extremely surprising result. We base our policy analyses on results from journals like those Lang studied. If economists are truly coming to that many incorrect conclusions, our final policy analysis estimates could be way off.
One of the main drivers of this conclusion is that there is very little replication of studies in economics. This is partly for practical reasons, such as the difficulty of finding comparable natural experiments to test against, and partly because economists have little incentive to check each other's work: replications rarely get published.
So, what can we do about this problem?
This is a moment where the differences between academics and policy analysts are quite clear. For academics, accurately measuring and reporting results is the most important job, so it makes sense for academia to adopt more stringent guidelines for reporting findings. That will slow the process down, but the purpose of academia is truth-finding.
Policy analysts and policymakers operate on a much shorter timeline. Because policymaking is subject to political pressures, decisions always involve considerations beyond a policy's expected impact.
As a result, our goal as policy analysts is not always to find the best answer, but rather to improve the decision-making process. This is not to say that policy analysts have no obligation to the truth; far from it. Our job involves making a prediction and then being honest about the strength of that prediction.
Even if 41% of rejected null hypotheses in the economics literature are false rejections, that is no reason to exclude published results from a perfectly reasonable policy analysis. It does mean we should be more skeptical of the estimates we build on, and effective in communicating that skepticism.
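One simple way to communicate that skepticism is to report a discounted expectation alongside the headline estimate. The sketch below is purely illustrative: the effect size is invented, and applying Lang's aggregate 41% figure to any single study is a crude assumption, not something his paper prescribes.

```python
# Folding replication risk into a policy estimate (illustrative only;
# the effect size is invented, and using the aggregate 41% figure for
# one specific study is a deliberately crude assumption).
study_effect = 1.5         # effect size reported by the underlying study
p_false_rejection = 0.41   # chance the study's rejection was false

# If the rejection was false, assume the true effect is zero; otherwise
# take the study at face value. Report both numbers, not just one.
expected_effect = (1 - p_false_rejection) * study_effect
print(f"Headline estimate:               {study_effect:.2f}")
print(f"Discounted for replication risk: {expected_effect:.2f}")
```

Presenting both numbers tells the policymaker not only what the research found, but how much weight the finding can bear.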
The credibility revolution has been a remarkable change in the field of economics, and the overall quality of the research being done today remains extremely high. Lang's paper is not an indictment of the field; it is a reminder. There is always uncertainty in the work we do, and we need to be aware of and transparent about that uncertainty when communicating the results of our analyses.