I still can’t believe I was in Ohio for years before I came across the Ohio Performance Evaluators’ Group.
When I attended my first OPEG event in May 2019, I realized I was among my people. At that conference at Otterbein University, I met dozens of evaluators from across the state, people like me who believed policy and programs should be guided by evidence, not knee-jerk assumptions or ideology. People who believed that the best of social science can help improve lives by giving us insight into what works in the government and social sectors.
There was one curious difference between the work I was doing and what OPEG members did, though. While I called my work analysis, many of my new companions called theirs evaluation.
Analysis and evaluation are close cousins, but they are not synonymous. Below are some of the biggest distinctions between the two approaches.
Forward vs Backward
While both analysis and evaluation are ultimately trying to help policymakers make better decisions, analysis is focused primarily on a pending decision while policy evaluation focuses on a policy or program that is already in place.
Policy analysis is often conducted on a policy that has not been implemented yet, for instance, whether the state of Ohio should legalize sports betting. An evaluation of such a policy would have to be conducted while the policy is being implemented, or retrospectively, using data collected during its implementation.
Because of this difference, analysis is often focused on "projection," what a layman might call "predicting the future." Policy evaluation, on the other hand, is focused on whether a current policy is working or whether a past policy worked.
Microeconomics vs Econometrics
Because of this distinction, analysis and evaluation use two different toolkits. A rigorous form of policy analysis such as cost-benefit analysis is heavily rooted in microeconomic analytical techniques, such as models for supply and demand or the theory of the firm.
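To make the analyst's toolkit concrete, here is a minimal sketch of the arithmetic at the heart of cost-benefit analysis: discounting a policy's projected future costs and benefits back to present value. The policy and the dollar figures are entirely made up for illustration.

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly (benefit - cost) flows, year 0 first."""
    return sum(flow / (1 + discount_rate) ** year
               for year, flow in enumerate(cash_flows))

# Hypothetical policy: $10M upfront cost, then $3M/year in net benefits
# for five years, discounted at 5% per year.
flows = [-10_000_000] + [3_000_000] * 5
result = npv(flows, 0.05)
print(f"NPV: ${result:,.0f}")  # positive NPV -> projected benefits outweigh costs
```

The key point is that this is all projection: every number in the cash-flow list is a forecast about a decision not yet made, which is exactly the forward-looking posture described above.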
Evaluators, on the other hand, use data available from the implementation of a policy to estimate its impact on a population. This makes evaluators focused heavily on the effectiveness criterion: how well was a policy or program able to bring about the results it was intended to bring about? Evaluators attend closely to randomization, quasi-experimental methodology, and pre/post data in a way that analysts engage with only secondarily.
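One common quasi-experimental use of pre/post data is a difference-in-differences comparison: the change in outcomes for the group the program served is measured against the change for a comparison group over the same period, netting out trends that affected both. A minimal sketch, with entirely hypothetical numbers for an imagined job-training program:

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Estimated program effect: change in the treated group
    minus change in the comparison group over the same period."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical employment rates before and after a job-training program.
effect = diff_in_diff(treated_pre=0.60, treated_post=0.72,
                      control_pre=0.61, control_post=0.66)
print(f"Estimated effect: {effect:.2f}")  # 0.12 - 0.05 = 0.07
```

Everything here depends on data collected during implementation, which is why this calculation is available to the evaluator looking backward but not to the analyst looking forward.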
Internal vs External Validity
Evaluators are, at their core, focused on the evaluation of a program. Thus, the internal validity of their work is paramount: can they show that they approached the program with an objective eye and designed an evaluation whose measured effects are attributable to the program itself, rather than to a design that presupposed its own results?
Analysts, on the other hand, are much more interested in external validity. They ask how results from analogous policies in other places can be used to project what the impact of a given policy would be in a particular place.
Analysis and evaluation are cousins, but understanding the difference between them helps anyone interested in evidence-based policymaking and programming see how the two fit together to make better policy and programs. At the same time, good analysis draws from good evaluations, and good evaluations ask questions raised by past analyses. Analysts and evaluators both have an important part to play in making evidence-based policymaking a reality, which will ultimately mean a stronger economy, lower poverty and inequality, and better lives for the general population.