Is it time for a $15 minimum wage?

When I was living in Nebraska in 2014, the state passed a citizen-initiated ballot measure raising its minimum wage from $7.25 to $9 an hour.

At the time, Nebraska’s minimum wage was the highest in the country after adjusting for local cost of living. Nebraska was on the front end of a series of citizen ballot initiatives to raise minimum wages in states across the country, many of which passed by wide margins.

When I moved back to Ohio in 2017, I was surprised to find no active movement to increase the state minimum wage. Ohio has a stronger labor history and presence than Nebraska, so I expected to find one.

Here we are six years later, and ballot language has finally been approved for a vote on a new minimum wage for Ohio. The proposal would raise the state minimum wage to $12.75 in 2024 and $15 in 2025, then index it to inflation after that.

Since the current minimum wage is also indexed to inflation, the 2025 minimum wage under current law will probably end up in the $11 an hour range. This means the proposal would set the hourly minimum wage roughly four dollars higher than it would otherwise be.

Minimum wages have had an interesting history among economists. They are a classic example of a price floor: a rule that the price of labor may not fall below a certain value. Neoclassical economic theory suggests this should reduce employment, since firms only willing to pay less than the minimum wage can no longer hire workers who would have accepted less.
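To see the textbook logic in miniature, here is a small sketch with invented linear labor supply and demand curves. The numbers are purely illustrative, not estimates for any real labor market:

```python
# Illustrative price-floor arithmetic with made-up linear curves.
def labor_demanded(wage):
    return 1000 - 40 * wage  # hypothetical demand: fewer hires at higher wages

def labor_supplied(wage):
    return 100 + 60 * wage   # hypothetical supply: more job-seekers at higher wages

# The market clears where supply meets demand: 1000 - 40w = 100 + 60w -> w = $9.
# With a binding floor above that, employment is set by the demand side,
# and the supply-demand gap is the predicted job shortfall.
minimum_wage = 15.0
employed = labor_demanded(minimum_wage)    # 400 workers hired
would_work = labor_supplied(minimum_wage)  # 1000 workers want jobs
print(f"Predicted shortfall: {would_work - employed:.0f} workers")
```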

Over the past couple of decades, though, many economists have been questioning whether minimum wage increases will necessarily lead to employment decreases.

One situation where a minimum wage increase will not lead to unemployment is a competitive labor market where wages are high. If workers can get jobs essentially wherever they want, and this is driving nearly all wages above the minimum wage rate, then there are very few workers willing to work for less than the minimum wage.

This could be the case in a place like the Columbus Metropolitan Area, where unemployment is at 3.4% and wages are relatively high.

A problem with this situation is that it also means the minimum wage will not have much of an impact. If few people earn below the new minimum, few are likely to lose their jobs, but few are eligible for higher wages because of the increase, either.

Another situation is in places where labor markets are not competitive, particularly monopsonistic labor markets. Monopsony is the mirror image of monopoly: instead of a single seller of a good, there is a single buyer, in this case a single buyer of labor.

If employers (consumers of labor) have too much market power, they can keep wages artificially low, leading to an inefficient labor market. A minimum wage in this scenario can push wages nearer the level they would be in a competitive labor market.

If Wal-Mart is the only employer in town, it can keep wages lower than they would be in a competitive labor market. It could also set wages just over the minimum wage threshold in order to corner the labor market. These sorts of dynamics could be at work in some of Ohio’s more rural and small-town communities.
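A small numerical sketch shows why a minimum wage can raise both wages and employment under monopsony. All of the parameters here are invented for illustration:

```python
# Illustrative monopsony model with made-up numbers.
# Inverse labor supply: the wage needed to attract L workers.
a, b = 5.0, 0.01   # hypothetical supply curve parameters
mrp = 15.0         # assumed constant marginal revenue product ($/hour/worker)

# Competitive benchmark: firms hire until the wage equals MRP.
L_comp = (mrp - a) / b         # 1000 workers employed at $15/hour

# A monopsonist's marginal cost of labor (a + 2bL) exceeds the wage,
# so it hires where MRP equals marginal labor cost, not the wage.
L_mono = (mrp - a) / (2 * b)   # 500 workers
w_mono = a + b * L_mono        # paid only $10/hour

# A minimum wage between $10 and $15 flattens the marginal cost curve:
# the firm now hires everyone willing to work at that wage.
w_min = 12.0
L_min = (w_min - a) / b        # 700 workers
print(L_mono, w_mono, "->", L_min, w_min)  # employment AND wages rise
```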

While a $15 minimum wage would have been unthinkable in Ohio 20 years ago, it seems pretty pedestrian from a policy standpoint now. Yes, wages will go up in some places, but we’ve seen this policy implemented elsewhere without mass localized unemployment, suggesting other forces may be at play here.

This commentary originally appeared in the Ohio Capital Journal.

Economists say flat tax proposal will deepen inequality

In a survey released this morning by Scioto Analysis, 18 of 22 economists agreed the flat state income tax of 2.75% proposed by lawmakers would deepen income inequality across the state. 

Curtis Reynolds of Kent State wrote, “cutting taxes will certainly not improve inequality, since much of the benefits will be felt by higher income individuals. On top of that, required cuts to services to balance the budget may disproportionately hurt lower income households.”

David Brasington of the University of Cincinnati, who was uncertain about the inequality impacts of the flat tax, commented, “it depends on local government response, how they change income and property taxes in response.”

Additionally, the majority of economists (12 of 22) think that a flat income tax would not help grow the state economy. Eight more were uncertain about the impacts this would have on the overall economy, and only two believed this would help grow the economy.

“Public services and goods are an important part of the necessary infrastructure to grow an economy. Cutting state income taxes will reduce the public infrastructure. Our current tax rate is very competitive with other states and doesn't need to be reduced,” says Rachel Wilson of Wittenberg University.

More quotes and full survey responses can be found here.

The Ohio Economic Experts Panel is a panel of over 40 Ohio economists from over 30 Ohio institutions of higher education, conducted by Scioto Analysis. The goal of the Ohio Economic Experts Panel is to promote better policy outcomes by providing policymakers, policy influencers, and the public with the informed opinions of Ohio’s leading economists.

Unpacking the biggest change ever in cost-benefit analysis

Earlier this month, the Office of Information and Regulatory Affairs (OIRA) released its first-ever proposed revisions to Circular A-4, the document that outlines exactly how cost-benefit analysis is supposed to be conducted at the federal level. Because this defines how cost-benefit analysis must be done by federal agencies, any changes to this document will have massive policy implications going forward.

This document is still open to public comment, so none of these changes are official yet. Academics, professionals, and other stakeholders can still give their thoughts and change some of this official guidance. For now, though, let’s take a look at the proposal: what changed, what stayed the same, and what the policy implications might be.

Analytic Baseline

When we make projections as part of CBA, we often compare the potential future under a particular policy alternative to the current-day status quo. This requires the assumption that if we do not go down this policy path, the world around us will stay exactly the same.

If you think this sounds like an unreasonable assumption, then congratulations because OIRA agrees with you. 

Going forward, the proposed guidance will be to establish an analytic baseline. In other words, if we are forecasting what will happen with a policy proposal, we need to compare it to a status quo forecast.

This might strengthen the case for preventative policies such as carbon taxes or green energy subsidies where we will expect the status quo situation to get worse over time. Another side effect of this change is that CBA is going to become more analytically intensive going forward. 
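A toy example with invented numbers shows how much the baseline can matter:

```python
# Toy comparison of one policy forecast against two baselines.
# All numbers are invented for illustration.
policy_damages   = [100, 102, 104, 106, 108]  # projected damages under the policy
static_baseline  = [100, 100, 100, 100, 100]  # "the world stays the same"
dynamic_baseline = [100, 110, 121, 133, 146]  # forecast: damages worsen ~10%/year

# Net benefit is avoided damages: baseline minus policy, summed over years.
static_net  = sum(b - p for b, p in zip(static_baseline, policy_damages))
dynamic_net = sum(b - p for b, p in zip(dynamic_baseline, policy_damages))
print(static_net, dynamic_net)  # -20 vs. 90: a cost under one baseline, a win under the other
```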

The researchers doing these analyses are going to be asked to make more assumptions about the world around them and justify those assumptions empirically. With policies like this, the question is whether the added complexity of the model adds enough useful information to be worth the additional time and uncertainty introduced by the complexity. 

Distributional Analysis 

In CBA, distributional analysis is the process of exploring how the benefits and costs of a policy are distributed across a society. The question distributional analysis is trying to answer is as follows: who is actually going to be paying the costs and who is actually going to be receiving the benefits of the policy in question?

In the current proposed revisions, OIRA has decided not to require agencies to include distributional analysis as part of their work, instead giving agencies the discretion to include it should they expect significant distributional differences.

The primary reason behind this decision is that Circular A-4 applies to a wide range of government agencies that all have different goals. Specific guidelines on how to perform distributional analysis may not be appropriate for the range of agencies performing CBA. 

The most important implication of this is that federal CBA is going to continue to largely carry the assumption that costs and benefits are uniformly distributed across the country. For some policies, this might be an appropriate assumption and performing distributional analysis would be a waste of resources. However, policies that specifically target distributional outcomes such as anti-poverty policies should certainly include distributional analysis. 

The onus is on individual agencies to determine whether or not they need to perform distributional analysis. Hopefully they are able to identify when it is appropriate and implement it. 

Discounting

As we’ve talked about before, the question of which discount rate to use still inspires a lot of debate within academic cost-benefit analysis circles. As such, the proposed revisions ask for a lot of comments about the best path moving forward, but for the most part avoid suggesting one specific path is best.

One concrete thing OIRA did say is that discount rates are likely to continue to be calculated from financial market data going forward. This is in contrast to just choosing a discount rate out of thin air, which they claim is ethically problematic.

One interesting change is how OIRA proposes to handle discounting for future generations. Under a normal discounting framework, we would expect benefits that accrue to future generations to have essentially no net present value, because they get discounted by so much over time.

This raises a lot of ethical concerns about our society’s responsibility to future generations, who are inherently unable to participate in the current decision making process. As a result, OIRA is proposing to release a table listing the proper discount rates over a 150-year time horizon, taking into account the fact that we care about the benefits accruing to future generations.
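To see why long horizons raise these concerns, here is a minimal sketch of standard exponential discounting. The rates are illustrative, not OIRA’s proposed values:

```python
# Present value of $1,000 of benefits arriving `years` from now under
# constant exponential discounting. Rates here are illustrative only.
def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for rate in (0.02, 0.03, 0.07):
    pv = present_value(1000, rate, 150)
    print(f"rate={rate:.0%}: $1,000 in 150 years is worth ${pv:.2f} today")
# At 7%, benefits 150 years out are worth about four cents today,
# which is why discounting across generations raises ethical questions.
```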

None of these changes are final yet. The proposed changes will be open for public comment until the first week of May, and some of them may look very different in the final version depending on the input OIRA gets.

Still, this is the most significant change to federal CBA ever and it will dramatically change the way policy analysis is done. Hopefully these changes improve the quality of policy analysis and in turn, lead to better policy making decisions.

Reading curriculum changes need evaluation

Earlier this month, Education Week reported on a policy trend that Ohio Gov. DeWine has made a central focus of his 2024-2025 budget: reform of reading curriculum standards.

This reform centers on a long-running debate about how to teach reading in schools. In particular, a popular but controversial program called “Reading Recovery” is in the governor’s crosshairs.

Reading Recovery is a program that focuses on one-on-one instruction where a teacher keeps a running list of words the student read incorrectly. The teacher takes notes about what may have tripped the student up on these particular words.

Reading Recovery had a lot of promise out of the gate. A randomized controlled trial of the program in 2010 showed 1st grade participants in Reading Recovery far outpacing their peers in reading skills after five months of instruction.

Subsequent evaluations of the program, however, have cast doubt on its effectiveness. A follow-up evaluation of participants in the program done by the same center that conducted the original evaluation found Reading Recovery participants falling a half grade level below their peers in 3rd and 4th grade reading proficiency tests.

This evaluation as well as others in the field have led researchers to worry that individualized focus helps students in early stages of learning but passes over “foundational” learning. This means that students can learn how to read words that are important for a 1st grader, but these skills do not help students get to the level of 3rd grade reading, and can even be detrimental to that goal.

Some who advocate on behalf of teachers, however, have argued that approaches similar to Reading Recovery, like “three-cueing,” an approach to learning that emphasizes context over phonics, should be preserved as an option for teachers.

Education researchers are critical of this sort of approach. Chanda Rhodes Coblentz, an assistant professor of education at the University of Mount Union, called three-cueing “a fancy way of saying we’re allowing kids to guess at words.”

Part of what may appeal to educators about approaches like Reading Recovery is the combination of one-on-one instruction and quick results. In this way, Reading Recovery may be like a keto diet: you get results, you get them fast, but you’re not building the fundamentals needed to make sustainable, long-term progress.

On the other hand, the value of leaving curricular decisions up to teachers is that they can tailor educational experiences to their classroom. Theoretically, Reading Recovery could be a bad program for the average classroom but still a useful program for a subset of classrooms, and teachers could be well-suited for identifying whether it is the right curriculum for their classroom.

If there is an argument for these alternative approaches, we need evidence of their effectiveness. Governor DeWine is seeking $162 million for reading reform efforts, hoping to discourage programs like Reading Recovery and approaches like three-cueing in favor of more evidence-supported curricula.

If defenders of three-cueing are right and these approaches are useful for a subset of students, then let’s test it. The state of Ohio should set aside a small portion of these funds for evaluation of pilots of alternative teaching techniques to see if they work. And these pilots should be evaluated out to the third-grade level if possible to determine if impacts are long-lasting.

Ultimately, we can’t rule out of hand that Reading Recovery or three-cueing might be useful for some students. But if we want to keep these around as options in the face of mounting evidence that they hurt child reading outcomes, we need better evidence of their effectiveness.

This commentary first appeared in the Ohio Capital Journal.

How can we do more equitable policy analysis?

Earlier this week, I attended a webinar on data equity. For an hour, statistician Heather Krause talked about some of her work experiences where her internal biases and assumptions meaningfully changed the results of her analyses and gave some tips for spotting these in future work. 

At Scioto Analysis, we believe that equity should be considered in every policy analysis. The truth is that while equity is always a part of policy adoption, the only thing that changes from an analytic standpoint is whether or not we choose to acknowledge it. 

Consider this example: we have three classrooms, one with three students, one with six students, and one with nine students. What is the average number of students in each class? This is an easy enough calculation: (3+6+9)/3 = 6. As simple as this seems, it actually relies on an important assumption about equity. In this case, the variable we are measuring is classroom size, and each classroom counts once.

Instead, let’s consider things from the students’ perspectives. What is the average class size that a student experiences? In this case, the variable of interest is classroom size for each student, so our calculation has many more terms. If you add up the experiences of all 18 students within these three classrooms, you get (3+3+3+6+6+6+6+6+6+9+9+9+9+9+9+9+9+9)/18 = 7.
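The same calculation in code makes the difference explicit: the student-level average is just a size-weighted average of the class sizes:

```python
# Two averages from the same three classrooms.
class_sizes = [3, 6, 9]

# Average over classrooms: each class counts once.
per_class = sum(class_sizes) / len(class_sizes)                   # 6.0

# Average over students: each class counts once per student in it,
# which is a size-weighted average of the class sizes.
per_student = sum(s * s for s in class_sizes) / sum(class_sizes)  # 7.0

print(per_class, per_student)
```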

Now we have two different conclusions from the same data. Although in this case the results are quite close, we still need to ask ourselves which of these results is more accurate.

This depends entirely on the question we are trying to answer. If our research is about how smaller classroom sizes affect teachers, then saying the average class size is six best reflects how teachers are experiencing classroom size. 

If instead we are trying to measure the effect of class size on students, then the second number better reflects how students are experiencing classroom sizes. 

This example is meant to show that all of our assumptions have equity implications, whether we notice them or not. When I first saw the classroom example, I immediately thought that six was the only correct answer. It did not cross my mind to reframe what variable we were trying to take the average of and how that could possibly influence the equity of the results. 

In this webinar, we also talked about how equity can fit into every part of the analysis process. Is the data being collected in an equitable way? Is the final report being written to discuss the equity implications of your research? Depending on the situation, as analysts we might not be in charge of some of these steps. However, we need to understand how these assumptions influence our results.

The good news is that being careful about including equity in an analysis is almost exactly the same as simply being a good analyst. Identifying assumptions, understanding their implications, and honestly acknowledging them is the core of good analysis. 

In this sense, more equitable analysis is the same as more scientifically rigorous analysis. The difference is that we need to ask more questions about our own internal biases and assumptions as researchers and make sure they are not getting in the way of giving policymakers the answers they need.

Scioto Analysis releases new cost-benefit analysis of 100% tax proposal

This morning, Scioto Analysis released a new analysis of a bill in the Ohio legislature to tax 100% of income of all Ohio residents.

“All in all, we find this bill to have benefits that far exceed the costs,” said Scioto Analysis Principal Rob Moore. “While we know there are sensitive political considerations to passing a bill like this, we hope policymakers will consider the evidence behind the proposal when making the decision to pass this bill.”

Scioto Analysis analyzed the 100% income tax on the dimensions of economic growth, poverty and inequality impact, and impact on health, education, and subjective well-being.

“Yes, our projections suggest that the 100% income tax would reduce the number of dollars in the economy,” said Moore, “but this would free up a lot of time for other pursuits such as sunbathing, catching butterflies, and improvisational comedy. These are all activities that we know have massive benefits for the public from a long line of economic research.”

The latest news is that the proposal is being wrapped into the current budget bill. Members of the Ohio General Assembly are hoping to pass the bill in full before it gets bogged down in public discussion, shooting for a deadline of April 1st.

How to Moneyball state government

In 2017, I read the book Moneyball for the first time and was awestruck. My brother had gotten it for me as a Christmas present, and I could not believe how closely the book dovetailed with the work I was doing as a graduate public policy student at the time.

If you’re not familiar with this book, Moneyball is the story of how the Oakland A’s used data analytics to turn one of the least-resourced baseball programs in MLB into one of the most competitive on the field. Rather than scouting players based on how tall or fast they were, the A’s used insights from statistics to identify athletes who were good at drawing walks and getting on base: the fundamentals of advancing runners and winning the game of baseball.

Basically, they found a way to get the best win percentage bang for their salary buck.

As I read this book, I pondered why people hadn’t applied these insights to public policy problems. I knew there was low-hanging policy fruit–policies that are cheap but not sexy that can grow our economy, reduce poverty and inequality, and help people live better lives. Why aren’t they getting attention?

I was happy to find that I wasn’t alone. A group called Results for America publishes its own version of Moneyball under the straightforward title Moneyball for Government. The book is a series of essays by officials from both the Bush and Obama administrations about how to make government and its programs more evidence-based.

I was especially drawn to the Afterword, co-written by Obama Administration Office of Management and Budget Director Robert Gordon and former Senior Advisor to President Bush for Welfare Policy Ron Haskins. The chapter is called “A Bipartisan Moneyball Agenda” and includes concrete steps toward making the federal government more evidence-based.

We can take some of the suggestions they make and use them to create an agenda for “moneyballing” state government. Below are some suggestions I have for state governments that want to do this.

1. Appoint a Chief Evaluation Officer

If evaluation is going to be a big part of state government, someone needs to be in charge of it, and that person should be close to the governor or, at worst, to the governor’s chief budget officer. A chief evaluation officer can provide expert advice to senior executives on how to integrate research into decision making. This can spur the appointment of evaluation officers in major agencies as well. Elevating evaluation to the senior level of leadership will establish it as an important aspect of how state government policymaking is conducted.

2. Set aside at least 1 percent of each agency’s discretionary funding for evaluation

Agencies should have authority to direct a minimum of 1 percent of their total funds to program evaluation. This authority will help agencies ensure that they do not miss important learning opportunities when they arise. It will also allow agencies to pilot programs, see if they work, and adjust them or eliminate them to free up funding for more promising programs as they arise.

3. Create a comprehensive, easy-to-use database of state program evaluation results available to the public 

Putting all evaluations of state programs online can promote transparency and accountability, inform better decision making, and signal to researchers the importance of using rigorous research and evaluation designs.

4. Institute comprehensive cost-benefit analysis and equity analysis in regulatory and legislative research analyses

Regulatory agency review and legislative research offices are the most trusted sources of information for regulators and legislators respectively in crafting policy for the state. Encouraging regulatory and legislative research analysts to quantify and monetize benefits as well as costs of regulation and legislation will give policymakers more information and help them craft policy that is more effective, efficient, and equitable.

Those are just four examples, but if instituted in state governments across the country, they could have a big impact on adoption of policy that works and provides a good return on public investment. As policy chair for the Ohio Program Evaluators’ Group, I am currently working to promote these sorts of initiatives in Ohio’s state government. I hope more people will push for similar reforms in state governments across the country.

What is meta-analysis?

One of the most important limitations of any single research study is that it only truly represents the data that the researcher used. When it comes to extrapolating any results to new data or in a new context, some studies are better than others.

Studies that include techniques like randomized controlled trials or causal inference methods are better than straightforward observational studies in this regard, but no one study is perfect.

As evidence of this fact, different researchers often ask the same question but end up finding different answers. This doesn’t mean that everyone is wrong, just that we live in an uncertain world and small changes in the inputs to a research project can have big effects on the outcomes.

But, as anyone who understands statistics knows, taking the average of repeated samples is one of the most effective ways to find the true average value of something. If we consider each individual study of the same question as a sample, then it follows that by averaging the results of all the studies we can more accurately approximate the truth. 

We call this process “meta-analysis.” Meta-analysis is the systematic approach of analyzing the variable results of many studies of the same question. In many ways, meta-analysis is very similar to policy analysis: the goal is to synthesize as much information as you can on one topic to find a single answer.
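The simplest version of this averaging is the fixed-effect model, which weights each study’s estimate by the inverse of its variance so that more precise studies count for more. Here is a minimal sketch with invented numbers:

```python
# Minimal fixed-effect meta-analysis: an inverse-variance weighted
# average of several studies' effect estimates. Data are invented.
effects  = [0.30, 0.10, 0.25, 0.05]  # each study's estimated effect
std_errs = [0.10, 0.05, 0.15, 0.08]  # each study's standard error

weights   = [1 / se**2 for se in std_errs]  # precise studies weigh more
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")
```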

However, because it is a scientific tool, for something to truly be considered a meta-analysis it needs to meet certain standards beyond simply comparing the results of similar studies.

First, meta-analysis requires a complete literature review of the topic of interest. Often researchers performing a meta-analysis will define their search criteria in advance. For example, they might limit themselves to every paper written about the value of recreational fishing in the last 30 years.

Exactly how to define a good literature search is a topic of open discussion–there is no “industry standard” for what constitutes a good literature review. Some researchers advocate including every possible study to avoid selection bias, while others selectively exclude studies whose methods seem questionable. Ultimately, the literature review should focus on gathering as much information as possible on the research question at hand.

Once you have a body of research to analyze, the next step is to record some of the key characteristics of each study. Most modern meta-analyses use meta-regression techniques to control for key differences between studies. Some examples of variables that get recorded are the year the data was collected, the type of statistical model used, or the study’s nation of origin.

It is often best practice for multiple people to perform this last step independently. This way, they can make sure that their results are free from any one individual’s bias. If two researchers read the same paper and come to different conclusions about what characteristics to record, then they know to go back and take a closer look.

Another important consideration for researchers is publication bias. Publication bias stems from the idea that academics and journal editors have very little incentive to publish papers that don’t find any new interesting results. This is a problem because it is still important for the broader understanding of a subject to test a hypothesis and find out that we were wrong. It just doesn’t make for good reading. 

In the context of meta-analysis, publication bias can make our pooled estimates larger and less variable than the truth. There are statistical and graphical checks researchers can perform to look for publication bias, but no single method offers certainty.
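One common check is Egger’s regression test, which looks for a systematic relationship between a study’s precision and its standardized effect size. Here is a minimal sketch with invented data, assuming the statsmodels library is available:

```python
# Rough Egger-style asymmetry check: regress each study's standardized
# effect (effect / SE) on its precision (1 / SE). An intercept far from
# zero suggests small, imprecise studies report systematically larger
# effects -- a hint of publication bias. Data are invented.
import statsmodels.api as sm

effects  = [0.45, 0.38, 0.20, 0.15, 0.12]
std_errs = [0.20, 0.15, 0.08, 0.05, 0.03]

z         = [e / se for e, se in zip(effects, std_errs)]
precision = [1 / se for se in std_errs]

model = sm.OLS(z, sm.add_constant(precision)).fit()
print(model.params)   # first value is the intercept, the bias indicator
print(model.pvalues)
```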

When done correctly, a meta-analysis can synthesize an entire field of research into a much more digestible and applicable format. There is a lot of work that goes into it and there are many pitfalls along the way, but the reward is certainly worth it: we get one step closer to the truth.

Where the policy analyst ends and the policymaker begins

At the Society for Benefit-Cost Analysis annual conference earlier this month, the lunch keynote on the first day focused on the difficulty of doing good policy analysis in the face of conflicting data. One topic that Sherry Glied, Dean of NYU’s Wagner School of Public Service, spoke on was the role of the policy analyst as opposed to the role of the policymaker. This is a topic we talk about a lot at Scioto Analysis, and I thought it would be valuable to share a little bit about how we think about this important relationship.

Policymaking is entirely about understanding tradeoffs. As an analyst, my goal is to use the best available data to make those tradeoffs clear to policymakers in a way that is honest and accurate. 

It then becomes the role of the policymakers to decide which tradeoffs they want to make. In a well-functioning democracy, the policymakers will reflect the views of the people they represent as best they can and allocate whatever scarce resources they have.

This is where policy analysis and policymaking can sometimes conflict. A policymaker might choose to make a decision that does not maximize efficiency, equity, or effectiveness for some reason. Maybe a policy is only efficient once long-term costs are counted, or perhaps it addresses equity concerns but is otherwise inefficient.

It can be hard as a policy analyst to accept that always maximizing efficiency is an impossible goal. On one hand, it is impossible because being certain we were maximizing efficiency would require researching every possible alternative, which would lead to a never-ending cycle of research.

More importantly though, efficiency isn’t always the goal of policymakers. Policymakers have to care about things like elections, political feasibility, and legislative rules.

Often, this separation gets painted as a negative consequence of our political system. “If only the policymakers just did what the research suggests, then we’d have a much better society.” But I don’t necessarily think this is the case. 

For one thing, this separation allows policy analysis to (in theory) operate outside of the political discourse. Of course, there are critical assumptions the analyst has to make that can shape the results quite significantly, but good policy analysis is clear about these assumptions and often tests what happens if they do not hold. 

Once assumptions have been established, applying the best available data and methods should lead analysts to the same conclusions. Keeping an arm’s length from debates about politics allows policy analysis to maintain its status as a scientific discipline that deals with the truth.

The other reason this separation is important is because policy analysts are often in the business of predicting the future, which is an inherently difficult proposition. Even the most rigorous analyses can sometimes guess incorrectly about what the future will hold. That doesn’t mean the analysis was bad, just that something unexpected happened. 

Statistics give us the tools to measure uncertainty and incorporate it into our estimates, but at the end of the day most policies only get one chance. It might be a more efficient world if the entire political system was centered around maximizing the expected value of our policies, but it might not. 

Separating policy analysts and policymakers protects the integrity of the policy analysis process. It keeps the focus of the analysis on the process instead of the results. For the most part, as long as the analysis was done correctly, the analyst will continue to be trusted. Conversely, policymaking is a much more outcome-focused business. If an expensive program doesn’t work as expected, usually the elected official is going to be on the hook.

All of this is certainly a glass-half-full take of the dynamics between policy analysts and policymakers. The entire reason Scioto Analysis exists is because we don’t think there is enough good policy analysis happening at the state and local levels of government. 

Good policy analysis should be a much larger part of the way our society functions; we’d all be better off for it. However, it can’t be the entirety of our decision making process. Policymaking and politics still play an important role, and they likely always will. And in a democracy, they probably should.

What is the marginal utility of a dollar?

When I was in high school, I took part in a state government simulation called Buckeye Boys State. High school students come from across the state and take part in a mock state government–passing bills, running cities, operating the state bureaucracy.

I ran for the House of Representatives (after a spectacular failure in the state Senate race) and served in this mock legislative body’s minority party.

While I was serving, one of the majority party members put forth a proposition to institute a “flat tax” for Ohio: a tax rate that was exactly the same for everyone. High schoolers throughout the room nodded. After all, why shouldn’t the tax rate be the same for everybody? You still have to pay a higher amount if you make more money: why should the percentage go up, too?

Being a high school debater, I was always taught to try to debate both sides of an issue. After all, if this issue was so cut and dried, why do we have a graduated tax structure (a tax system where people with more income pay a higher percentage on that income) in the first place?

What I came across was an interesting concept: the marginal value of money. The idea is this: as your income increases, the value of extra income decreases. If a family at the poverty level loses 10% of their income, that will have a much more negative impact on their well-being than it would on a family making five times the federal poverty level.

This plays out in a number of the lenses we use to analyze public policy. From a classical economic standpoint, the idea of the marginal utility of income goes back to 19th century economists. From a poverty or inequality analysis perspective, dollars accrued to low income households are more effective at reducing poverty than dollars accrued to upper-income households.

From a capabilities perspective, basic goods like education and health tend to have diminishing marginal utility, too. For instance, the difference between getting no regular health screening and any regular health screening generally improves health more than the difference between getting any regular health screening and the most expensive health screenings. And the difference between no college and going to a low-cost state school matters far more for future outcomes than the difference between going to a low-cost state school and an Ivy League college.

This even plays out in research around subjective well-being. Happiness economist Matthew Killingsworth finds that increases in income correlate with increases in self-reported happiness–but that the increases diminish as income grows. So money does make you more happy, but typically you need a lot more money to get just a little more happiness as you move up the income scale.

So how does this matter to an analyst? My graduate school benefit-cost analysis professor Dan Acland presented a paper at the Society for Benefit-Cost Analysis’s annual conference this month on how to factor this insight into benefit-cost analysis.

His general proposal is to separate the impacts of a policy into impacts for lower-income and upper-income people and then adjust dollar values into “income-adjusted values.” In Acland’s model, these can be understood as the value of an impact adjusted as if the beneficiaries were at median income.

In practice, this will make policies that accrue monetary benefits for low income individuals yield higher net present value than they would before adjustment. 
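As a rough illustration of the mechanics, and not Acland’s actual model, here is a sketch assuming log utility of income, under which a marginal dollar is worth median_income/income median-income dollars:

```python
# Sketch of income-adjusting dollar values, assuming log utility of
# income. This illustrates the general idea only; it is not Acland's
# actual formula. Under log utility, marginal utility is 1/income, so a
# dollar to a household is worth (median_income / income) median-income dollars.
MEDIAN_INCOME = 60_000  # hypothetical median household income

def income_adjusted(dollars, beneficiary_income):
    return dollars * (MEDIAN_INCOME / beneficiary_income)

# $1,000 of benefits to a low-income vs. a high-income household:
print(income_adjusted(1000, 30_000))   # 2000.0 income-adjusted dollars
print(income_adjusted(1000, 120_000))  # 500.0 income-adjusted dollars
```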

Acland’s proposal is radical, but it seems like a useful way to apply a rigorous equity lens to policies that are likely to have disparate impacts to households across the income spectrum. And we could certainly use better tools to understand the equity impacts of public policy in the United States today.