Last week, Rob and I helped lead a workshop on uncertainty analysis for the Society for Benefit-Cost Analysis. During this workshop, we discussed different methods that can be used to measure uncertainty, ranging from qualitative discussion of key assumptions to full-on probabilistic analyses involving Monte Carlo simulations.
One question that the participants of this workshop kept returning to was “if all of these are ways to do uncertainty analysis, how do we know which one is best for our study?”
This question is important not just in uncertainty analysis, but in all aspects of policy analysis. We can always choose among varying levels of complexity, but how are we supposed to decide what is best for a given problem?
At the core of this question is a tradeoff. We can trade additional resources, usually time and some prerequisite data, for more detailed information in our analysis. The question that we as analysts need to answer is whether the additional information is worth the additional cost.
One thing I’ve noticed about this question now that I’ve been a policy analyst for a few years is that the biggest unknown is often the cost of additional information. By the time I’m working on a project, I usually have a fairly good understanding of what information additional work will yield. For example, if I build a model and generally understand the variance on some of its inputs, then I have a good intuitive idea of what additional information a Monte Carlo simulation will provide.
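To make that concrete, here is a minimal sketch of the kind of Monte Carlo simulation being described. The model and the input distributions (a toy net-benefit calculation with assumed means and standard deviations) are hypothetical illustrations, not an actual Scioto Analysis model:

```python
import random

# Hypothetical toy model for illustration: net benefit is just benefit minus cost.
def net_benefit(benefit, cost):
    return benefit - cost

def monte_carlo(n_draws=10_000, seed=42):
    """Repeatedly draw uncertain inputs and collect the model's outputs."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_draws):
        # Assumed input distributions; the means and spreads here are made up.
        benefit = rng.gauss(mu=100.0, sigma=20.0)
        cost = rng.gauss(mu=80.0, sigma=10.0)
        results.append(net_benefit(benefit, cost))
    return results

draws = monte_carlo()
mean = sum(draws) / len(draws)
share_positive = sum(d > 0 for d in draws) / len(draws)
print(f"mean net benefit: {mean:.1f}")
print(f"share of draws with positive net benefit: {share_positive:.2f}")
```

The payoff over a point estimate is the distribution itself: instead of a single net-benefit number, you can report, say, the probability that net benefits are positive.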
The cost of additional information is much more variable and context specific. The two most common barriers I’ve encountered in my work so far are time and data availability.
At Scioto Analysis, we usually have strict deadlines to budget our time within. We try to build in some extra time for flexibility at the start of every project, but because time is such a limited resource for us, we place a lot of value on it.
In our situation, we need to decide whether a more complex type of analysis is worth more than having additional time to write and edit our final report.
The other common barrier is the cost of finding data to use. The majority of our work is based on publicly available data: the Census, the American Community Survey, and tax data are involved in almost every project we do.
When we need data that isn’t publicly available, or that is difficult to access for whatever reason, the costs of our projects go up substantially. We often don’t have the capacity to try to obtain non-public data unless it is a clearly defined part of our analysis from the beginning.
There are no hard-and-fast rules for whether additional complexity is worth its cost on a given project. But over time, we build a better understanding of when more research is warranted and when the costs are too high.