What is the difference between stated preference and revealed preference?

Last month, I attended the Society for Benefit-Cost Analysis' annual research conference. It is the yearly gathering of the top minds in cost-benefit research from across the world, many of whom were instrumental in the development of the field over the past half century. 

This year’s conference opened with a presentation by Amitabh Chandra of the Harvard Kennedy School of Government and Harvard Business School about Medicare recipients, how they respond to gaps in their coverage, and how we can try to elicit better estimates of people’s willingness to pay for additional years of life. The points Chandra brought up made me want to explain a bit more about how we get the estimates we use throughout cost-benefit analysis.

Stated preference vs. revealed preference

The two main ways policy analysts estimate willingness to pay for goods are stated preference studies and revealed preference studies. As their names imply, in stated preference studies participants are directly asked what they’d be willing to pay for something, while revealed preference studies try to find out how people actually react to changing prices to determine their willingness to pay for goods. 

Of the two, many researchers prefer results from revealed preference studies, because stated preference studies can be subject to a number of biases that are hard to control. For example, people may be influenced by social pressures when asked about their willingness to pay for goods with stigmas attached: they might overstate their willingness to pay for cancer research or understate their willingness to pay for illegal drugs.

Revealed preference studies are more difficult to set up, but if done well they can circumvent many of the biases that stated preference studies need to control for. Someone may say they’d only buy junk food if it were cheaper than a healthy option, but if we observe them buying it at a higher price, we can better understand their true willingness to pay.

Challenges with estimation of willingness to pay

Neither stated preference nor revealed preference studies work when people don’t really understand the value of the thing we are interested in. One area where people tend to do a bad job of assessing value is healthcare.

Healthcare is full of decisions about low-probability, high-cost events, and humans are notoriously bad at thinking probabilistically. It is really hard to measure how much someone is willing to pay for something like a new drug that reduces the risk of a very rare but serious disease.

We tend to solve this problem by relying on estimates of the value of statistical life. People reveal their willingness to trade off earnings for changes in the riskiness of their jobs. We might discover, for example, that mortality rates are 1% higher for welders working on active construction sites than for welders working in shops. If welders working on active sites get paid more, we can take that premium as an estimate of how much a welder is willing to trade risk of death for income, from which we can derive the value of statistical life.
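The welder example boils down to a one-line calculation: divide the wage premium by the change in mortality risk. Here is a minimal sketch using made-up numbers (the $10,000 premium and the 1-in-1,000 risk difference are illustrative assumptions, not estimates from any study):

```python
# Hypothetical value-of-statistical-life (VSL) calculation from a
# compensating wage differential. All numbers are illustrative.
def value_of_statistical_life(wage_premium, risk_difference):
    """Annual wage premium divided by the annual change in mortality risk."""
    return wage_premium / risk_difference

# Suppose welders on active sites face a 1-in-1,000 higher annual risk of
# death and earn $10,000 more per year than welders working in shops.
vsl = value_of_statistical_life(wage_premium=10_000, risk_difference=0.001)
print(f"Implied VSL: ${vsl:,.0f}")  # Implied VSL: $10,000,000
```

Real VSL studies control for many other worker and job characteristics; this sketch only captures the core ratio.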

People don’t do their own cost-benefit analysis

The main point Chandra made was that his research found Medicare recipients acting in ways inconsistent with our understanding of how much people value reductions in mortality risk when faced with budget constraints caused by gaps in their coverage. In particular, people’s noncompliance with drug regimens implied they value mortality risk reduction much less than we see in, for instance, labor market wage risk premiums.

This may lead us to believe that our estimates of the value of statistical life are not always accurate or useful. Indeed, there are many different estimates of the value of statistical life. One argument against our current values is that because they are based on the revealed preferences of people deciding where to work and for what salary, they might not apply to people outside the labor force (say, retired Medicare recipients).

The vignette approach to estimating willingness to pay for mortality risk reduction

To get around this problem of revealed preference estimates not lining up with the behaviors we observe, our keynote speaker went back to the drawing board and piloted a new way to calculate how much people value reductions in their risk of death.

To do this, he used a dichotomous choice stated preference approach, which is a wordy way of saying survey respondents were given two options and had to choose the one they preferred. So instead of being asked “how much would you pay for a hot dog?” people were asked “would you prefer option A that costs $5.00 or option B that costs $9.00?” If you ask enough people to make these choices and randomly vary the prices they see, you can accurately estimate how much people are willing to pay for certain goods.
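A small simulation illustrates why randomizing prices works. Everything here is hypothetical: each respondent is assumed to have a latent willingness to pay drawn from a normal distribution, and the price points are arbitrary:

```python
import random

random.seed(0)

# Hypothetical dichotomous-choice survey: each respondent has an unobserved
# willingness to pay (WTP) and sees one randomly assigned price. They say
# "yes" if the price is at or below their latent WTP.
N = 10_000
latent_wtp = [random.gauss(7.0, 2.0) for _ in range(N)]
prices = [random.choice([3, 5, 7, 9, 11]) for _ in range(N)]
answers = [p <= w for p, w in zip(prices, latent_wtp)]

# The acceptance rate at each price traces out a demand curve; the price
# where acceptance crosses 50% approximates the median WTP (here, $7).
for price in sorted(set(prices)):
    accepted = [a for p, a in zip(prices, answers) if p == price]
    print(f"price ${price}: {sum(accepted) / len(accepted):.0%} say yes")
```

Even though no individual ever states a dollar figure, the pattern of yes/no answers across randomized prices recovers the distribution of willingness to pay.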

The big innovation our speaker made was that instead of asking very specific questions, he gave his survey respondents longer vignettes about people’s lives to choose between. Each vignette told the story of a person’s life: where they lived, whether they married and had children, and, importantly, what their income was and how old they were when they died. His goal was to use those last two facts to determine how much people were willing to trade off money for extra years of life.

This approach has two main advantages. First, it allows the researcher to control for a large set of preferences across people. With a large enough sample size, you can extract the importance of income and years of life separately from the other characteristics in the vignettes. Second, it makes the question easier to understand. There isn’t some esoteric question about how much you’d pay for an extra year of life; you just see annual incomes and the age at which someone dies. Those are much easier for people to wrap their heads around.

In the end, the keynote highlighted how difficult and how important it is to understand how people value changes in their health and longevity. Traditional revealed preference methods remain useful, but they can fall short when people face complex risks they do not fully grasp. The vignette approach offers a promising alternative by grounding abstract tradeoffs in clear and relatable life stories. As the field continues refining how we estimate willingness to pay for added years of life, innovations like this show how cost-benefit analysis evolves as we learn more about real human decision making.

What are the steps of cost-benefit analysis?

“Cost-benefit analysis” is a phrase that is used in a lot of different contexts with a variety of meanings. Some people trace cost-benefit analysis in the United States as far back as Benjamin Franklin, who is said to have generated pro-con lists over a number of days to evaluate decision-making.

Cost-benefit analysis is a formalized process in the economic world. Since the New Deal era of large-scale public works, the Army Corps of Engineers has been conducting formal cost-benefit analysis to inform project selection. All major federal regulations have been subject to cost-benefit analysis for nearly half a century. How cost-benefit analysis is conducted at the federal level is the subject of Supreme Court cases. The federal government issues guidance to agencies on how to conduct cost-benefit analysis and the international Society for Benefit-Cost Analysis hosts conferences and workshops and publishes a journal on cost-benefit analysis.

To support this work, Scioto Analysis publishes the State Handbook of Cost-Benefit Analysis, a free resource for state analysts and policymakers interested in interpreting and conducting cost-benefit analysis at the state level.

But what are the steps of formal cost-benefit analysis? While different agencies have different standards for cost-benefit analysis and different contexts call for different specific approaches, the following steps separate a formal cost-benefit analysis from an informal one.

Establishing a baseline for your cost-benefit analysis

At its heart, cost-benefit analysis is a specific form of policy analysis. Policymakers have to make decisions about which policies to adopt and how to implement them. Cost-benefit analysis allows policymakers to understand how a policy works, who is impacted by the policy, and the relative share of costs and benefits of the policy borne by different members of society when it is implemented.

Because of this, having a baseline assessment of conditions is crucial for a cost-benefit analysis. In December, Scioto Analysis released a cost-benefit analysis we conducted on cigarette taxation in Ohio. The research relied on estimates of how much people reduce their cigarette consumption as prices increase, so the baseline number of cigarette sales was a crucial input to our model. And because cigarette consumption is on the decline in Ohio, as it is throughout the country, the impacts of the policy would be smaller than if cigarette consumption were steady or on the rise.

Determining policy options for cost-benefit analysis

Next, a policy analyst needs to decide which policy options to analyze. If you are working for a policymaker, they will often tell you which options to analyze. But going deeper can be an important undertaking for a policy analyst. In a cost analysis we conducted a few years ago on climate policy in Ohio, we analyzed cap-and-trade, carbon tax, and renewable portfolio standard options for abating climate change in Ohio. In this analysis, we used cap-and-trade policies espoused in other states, renewable portfolio standards adopted by comparable states, and carbon tax levels introduced in Congress as potential policy options for Ohio.

Deciding whose costs and benefits to count

When conducting a cost-benefit analysis, an analyst needs to determine standing early, or whose costs and benefits to count. Should we count people all across the world or just in the jurisdiction the policy applies to? What about residents who are not citizens? What about people who commute into the area? These are all questions that need to be answered by a policy analyst as they conduct a cost-benefit analysis because they can have significant impacts on the outcome of the policy.

Identifying impacts in a cost-benefit analysis

Next, an analyst needs to determine which impacts to analyze in the cost-benefit analysis. This usually involves reviewing the literature to understand what economists and researchers have established about the effects of similar policies in this and comparable jurisdictions. This is also a step where analysts can get creative and expand their scope, cataloging the potential impacts of a policy and the research behind each. In a recent cost-benefit analysis we conducted on wildlife crossings, we analyzed impacts ranging from loss of life from wildlife collisions to the benefits of connecting ecosystems to the cost of pouring concrete.

Quantification and monetization in cost-benefit analysis

This is what many would consider the “heart” of cost-benefit analysis: taking impacts, putting numbers on them, and then converting those numbers into dollar amounts. The goal is to use the best available evidence to quantify the impacts of policies on key social outcomes of interest. The analyst then puts a dollar figure on each impact based on the social costs and benefits associated with it.

Discounting costs and benefits

Discounting is a key element of cost-benefit analysis. Dollars spent on programs today cannot be spent tomorrow, so there is a future social cost to investing dollars today that must be reconciled with benefits accrued later. This also can work the other way: benefits gained now can lead to costs down the road. This is what we found in our cost-benefit analysis of school closings for COVID-19.

The specific rate at which benefits and costs should be discounted is a subject of debate among scholars of cost-benefit analysis. Analysts usually consult sources of guidance such as the federal government’s Circular A-4 or the textbook by Anthony Boardman et al. for answers.
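Whatever rate an analyst settles on, the mechanics are simple: a net benefit of B dollars received t years from now is worth B/(1+r)^t today. Here is a minimal sketch with illustrative numbers (the 3% rate and the benefit stream are assumptions for the example, not guidance):

```python
# Present value of a stream of annual net benefits at discount rate `rate`.
def present_value(net_benefits, rate):
    """net_benefits[t] is the net benefit in year t (year 0 = today)."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

# A program costing $100 today that returns $40 a year for three years:
npv = present_value([-100, 40, 40, 40], rate=0.03)
print(f"Net present value: ${npv:,.2f}")  # Net present value: $13.14
```

The choice of rate matters: at 8% instead of 3%, the same stream’s net present value falls to about $3, which is why the discount rate debate is so consequential.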

Sensitivity analysis of your results

All analysis includes assumptions. Good analysis tests those assumptions and sees how the results change. Using techniques like partial sensitivity analysis, best-case/worst-case analysis, break-even analysis, and Monte Carlo simulation helps analysts understand how much their results rely on assumptions and which assumptions would move the results most if they are wrong. Sensitivity analysis can also be a good tool for understanding how likely results are to be directionally correct, which is how the Washington State Institute for Public Policy presents its benefit-cost results.
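As an illustration of the Monte Carlo approach, the sketch below draws uncertain inputs from assumed distributions, recomputes net benefits each time, and reports how often the result stays positive. The model and all of its distributions are hypothetical:

```python
import random

random.seed(1)

# Toy net-benefit model: benefit per participant times participants, minus cost.
def net_benefit(benefit_per_person, participants, cost):
    return benefit_per_person * participants - cost

# Draw each uncertain input from an assumed distribution and recompute.
draws = []
for _ in range(10_000):
    b = random.gauss(50, 10)         # benefit per participant (uncertain)
    n = random.gauss(1_000, 200)     # number of participants (uncertain)
    c = random.gauss(40_000, 5_000)  # program cost (uncertain)
    draws.append(net_benefit(b, n, c))

# Share of simulations where net benefits stay positive -- a rough gauge
# of whether the headline result is directionally robust.
share_positive = sum(d > 0 for d in draws) / len(draws)
print(f"Positive net benefits in {share_positive:.0%} of simulations")
```

Reporting the share of simulations with positive net benefits, rather than a single point estimate, tells a policymaker how much confidence to place in the sign of the result.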

Telling your story

Last, the analyst has to share her results! This could be a report, a press release, a presentation, or anything else you can think of. Good communication is key in cost-benefit analysis. Even preliminary results or a list of identified impacts can be valuable to a policymaker trying to craft better policy. After all, the process of cost-benefit analysis is often more important than the results.

Grants or tax breaks: which fight poverty more efficiently?

At Scioto Analysis, we frequently analyze poverty and the strategies designed to reduce it. One of the most common tools to combat poverty in the United States is welfare spending, or spending directed at needy families to alleviate poverty. While the efficiency and effectiveness of these programs are long-standing points of debate, research from the United States Census Bureau’s 2024 Poverty in the United States report shows that year over year, welfare programs continue to reduce the number of people in poverty. The figure below shows the seven federal programs that kept the most people out of poverty in 2024.

Welfare programs in the United States generally fall into two categories: cash assistance (direct payments) and in-kind benefits (direct goods or services like food, healthcare, or housing). We can also distinguish between how welfare programs are administered. Some programs are administered by state and local governments like Medicaid, Supplemental Nutrition Assistance Program, and Temporary Assistance for Needy Families. Others are tax-administered welfare programs, such as the Child Tax Credit and Earned Income Tax Credit, managed by the Internal Revenue Service.

Many policymakers favor in-kind welfare spending because it ostensibly allows them to exert more control over how needy families use the funds. They hope this means household spending goes toward goods deemed necessities by policymakers rather than entertainment or other goods. A drawback of in-kind spending is that governments do not know the range of needs felt at the household level, and managing household budgets at the level of government leads to inefficiencies throughout the economy.

We can see this play out in how households spend cash when they receive it. In the chart below, U.S. Census Bureau data from 2021 shows that most recipients of the Child Tax Credit, a tax-administered cash assistance program, spend the money on a range of goods. If Congress converted the cash-transfer Child Tax Credit into ten in-kind spending programs, it is unlikely it could predict the amounts these families need in the correct quantities to achieve an efficient allocation of resources.

So, if direct cash payments are more efficient tools than in-kind benefits to alleviate poverty, why is a program like Temporary Assistance for Needy Families, which provides direct cash assistance to needy families in the United States, plagued with issues? The main answer is access. Over the past several years, Temporary Assistance for Needy Families has seen more stringent eligibility criteria, work requirements, and time limits. The goal of these policy changes is to improve participation in the labor force and reduce reliance on welfare programs, but the actual result is a worsening of deep poverty rates.

A compelling underlying reason I see behind the ineffectiveness of TANF is the administration of the program. More focus is placed on regulating TANF than ensuring access. In addition, discrepancies between state and local government administration of TANF, along with many other welfare programs administered by state and local governments, means that poverty alleviation is starkly different state-to-state.

Another important consideration is scale. Since Temporary Assistance for Needy Families has been block granted and capped, the program is much smaller than other programs, so it does not help as many families as the Supplemental Nutrition Assistance Program, the Earned Income Tax Credit, or the Child Tax Credit.

Using data from the Administration for Children and Families (2024), the U.S. Department of Agriculture (2023), the Supplemental Security Income annual report (2025), the Medicaid and CHIP Payment and Access Commission (2024), and the Congressional Research Service (2018), we can compare administrative costs as a proportion of total spending across different welfare programs. The chart below shows these administrative burdens for five selected welfare programs.

The first key takeaway from these numbers is that welfare programs are generally pretty efficient, with every program spending 10% or less of its budget on administration. The more interesting takeaway I see is how much more efficient the Earned Income Tax Credit is than other forms of welfare. To be specific, the only two direct cash assistance programs shown in the chart are Temporary Assistance for Needy Families and the Earned Income Tax Credit, though Temporary Assistance for Needy Families also includes funds for other programs like job training and childcare. The proportion of funding for the Earned Income Tax Credit spent on administration is one-tenth the size of TANF’s. It could be the case that cash-based welfare programs administered through the tax code, like the Earned Income Tax Credit, avoid the bloat of state- and locally-administered programs while also having stronger effects on poverty reduction.

That said, the administrative advantages of welfare programs run through the tax code are not absolute. For instance, if we take into account the money that households spend on filing taxes to receive these kinds of benefits, the private administrative burden of receiving welfare increases. One study found that assuming a tax filing fee of 17.5% instead of 0% raises the administrative burden of receiving Earned Income Tax Credit benefits to 11%. However, many households who receive the Earned Income Tax Credit are eligible for free tax filing services, meaning the real administrative burden of receiving these benefits is likely somewhere in the middle.

In the 2024 tax season, the Internal Revenue Service introduced the IRS Direct File program, which provided free tax filing services for filers with an adjusted gross income of $89,000 or less. The program has since been repealed, but it may be a worthwhile policy for keeping administrative costs for programs like the Earned Income Tax Credit low while increasing accessibility for low- to moderate-income earners across the country.

One significant burden that tax-based welfare programs like the Earned Income Tax Credit can relieve is the benefits cliff. According to researchers at the Urban Institute, many earners who are near welfare cutoffs face higher marginal tax rates than some of the highest earners in the country. When accounting for taxes and the income lost as welfare benefits phase out, marginal tax rates can easily reach 60% as households move toward full-time work or gain a second earner. Tax-based welfare programs that reward people for working can help reduce these disparities.

Despite these advantages, there is still a significant gap in the efficiency of the welfare system in the United States. While tax-based cash assistance is a great tool for rewarding labor, the most disadvantaged households often fall through the cracks, either by not earning enough to require a tax filing or by not meeting work requirements set by welfare programs administered at the state and local levels.

What is a regulatory budget?

A few weeks ago, I was at the Society for Benefit-Cost Analysis’ annual research conference. One of the sessions I attended was a panel discussion about the role of regulatory budgeting and how we should think about the tradeoffs involved when we try to limit the cumulative burden of federal rules.

What is regulatory budgeting?

The main argument proponents of regulatory budgeting make is that regulations carry economic costs, and we should be cognizant of those costs rather than overwhelming people with them. Even when individual regulations have benefits that outweigh their costs, regulating everything at once can still pile heavy compliance burdens on some people.

This mirrors the logic of fiscal budgeting. Even though there are countless ways for the public sector to spend money that create benefits exceeding costs, we place some limit on how much money agencies spend.* Financing debt comes at a cost states don’t want to incur too much of. From a regulatory perspective, policymakers may want to impose a similar limit so that firms can operate with some independence and markets can optimize themselves.

Another way to think about these budgets is that increasing the public sector’s financial budget requires raising revenue (presumably via taxes), and we know the process of raising public revenue creates some drag on the economy. The budget forces policymakers to optimize that limited spending potential. Legal scholar Cass Sunstein popularized the term “sludge” to describe the similar burden created by the paperwork required to comply with each additional regulation. A regulatory budget can cap the sludge the same way the financial budget caps the burden of taxation.

How policymakers implement regulatory budgeting

The United States currently has one form of regulatory budgeting in place thanks to a Trump administration executive order: any executive branch agency that wants to implement a new regulation must first identify 10 existing regulations to repeal in its place. During the first Trump presidency, there was a similar 2-for-1 rule in place. While economists tend to agree these approaches are a hammer for a problem that needs a scalpel, they do attempt to accomplish the goal of putting some cap on the economic burdens of our regulations.

Rules like the 10-for-1 or 2-for-1 are ineffective approaches to regulatory budgeting because not all regulations have the same cost. If OSHA wants to require a new design for hardhats, companies would have to purchase some new safety equipment. That does not impose the same cost on the economy as something like the EPA banning the use of gasoline as a transportation fuel. Why, then, should both agencies have to pick some arbitrary selection of existing regulations to repeal before imposing their rules?

A better way to implement a regulatory budget would be to require agencies to estimate the cost of each regulation and give those agencies an actual budget to work with.

Regulatory budgeting and cost-benefit analysis

It may seem at first glance that regulatory budgeting is the same as just doing half of a cost-benefit analysis, but this isn’t necessarily true. There is a semantic issue that arises in cost-benefit analysis sometimes when dealing with negative expected values for certain parameters. 

Say you are comparing two policies against a baseline of enacting neither. One option costs a lot of money but provides some health benefit, while the other saves money but worsens health outcomes. Which of these should we consider costs or benefits? In the first case, we clearly have a financial cost with a health benefit, but in the second we have a health cost and a financial benefit. Whether an impact counts as a “cost” or a “negative benefit” is largely in the eye of the beholder.

Cost-benefit analysis allows analysts to work through these ambiguities, but accountants need better defined terms. Certain things would need to be fixed as the “costs” of a regulation so that comparisons to the budget can be made universally.

A well‑designed regulatory budget wouldn’t replace cost‑benefit analysis, but it could complement it by forcing agencies to confront the cumulative burden of their rules. The challenge is defining costs clearly enough that budgets are comparable and enforceable. If we can get that right, regulatory budgeting could become a useful tool for managing the overall weight of federal regulation without losing sight of the benefits rules are meant to deliver.

*Some progressives have floated the idea that federal government spending can be uncapped because the federal government issues the money it borrows, but this idea has been largely dismissed by economists. Every U.S. state currently works to balance its budget, most due to constitutional provisions requiring it to do so.

How is the federal poverty line determined?

If you go to data.census.gov, you can find the number of people in poverty in your state, county, city, or even zip code. Scrolling through, you can see breakdowns of who is in poverty by race, gender, household size, and a number of other characteristics.

In 2026, the federal poverty line is $33,000 for a family of four. Does this sound reasonable to you? There are certainly people who want to make this number lower, higher, or different. But the determination of the federal poverty threshold is a bureaucratic process that is in the hands of analysts every single year.

Who sets the federal poverty line?

While the United States Census Bureau in the United States Department of Commerce is the federal agency we most associate poverty measurement with, it is the United States Department of Health and Human Services that is in charge of issuing the federal poverty guidelines. These guidelines include the thresholds for a number of household sizes and include formulas for determining poverty status for very large families as well.

The Department of Health and Human Services has been in charge of issuing poverty guidelines since 1981, when Congress tasked the Secretary of the Department with issuing them every year. Prior to 1981, the guidelines were issued by the Community Services Administration, an independent agency that was folded into the Department of Health and Human Services in 1981 under the first budget of the Reagan administration.

According to the Department of Health and Human Services, the calculation is quite simple: analysts take the previous year’s poverty thresholds then adjust them for inflation, using the Consumer Price Index for All Urban Consumers as their inflation metric.
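In code, that annual update is a one-line ratio adjustment. The threshold and CPI values below are placeholders, not actual guideline or CPI-U figures:

```python
# Scale last year's poverty threshold by the growth in the CPI-U.
def update_guideline(prior_threshold, cpi_new, cpi_old):
    return prior_threshold * (cpi_new / cpi_old)

# A $30,000 threshold after 3% measured inflation (placeholder CPI values):
print(round(update_guideline(30_000, cpi_new=309.0, cpi_old=300.0)))  # 30900
```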

But where did the federal poverty line originate?

Where the federal poverty line came from

In 1964, President Lyndon B. Johnson launched his War on Poverty with the goal of “total victory” over poverty in the United States. To wage war against poverty, Johnson needed a metric to estimate the impact policies were having on poverty. This necessitated an official poverty measure.

Social Security Administration economist Mollie Orshansky had been working on a threshold for poverty since 1963. By 1965, the Office of Economic Opportunity (which later became the Community Services Administration) adopted Orshansky’s measure.

Orshansky’s model was simple. At the time of the Great Society War on Poverty, the average American household spent one third of its income on food. Orshansky reasoned that three times the cost of a “thrifty food plan” would therefore mark a poverty-level income.
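Orshansky’s rule reduces to a single multiplication; the $1,000 annual food-plan cost below is purely illustrative:

```python
# Food was roughly one third of household spending, so the poverty
# threshold is three times the annual cost of the food plan.
FOOD_MULTIPLIER = 3

def orshansky_threshold(annual_food_plan_cost):
    return FOOD_MULTIPLIER * annual_food_plan_cost

print(orshansky_threshold(1_000))  # 3000
```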

This measure has changed very little over the six decades since its adoption. In the early days of the measure, Orshansky updated the measure annually to adjust for the cost of food. In 1969, the updates were simplified to be tied to the Consumer Price Index instead. Early measures of the official poverty threshold estimated different income needs for male- and female-headed households and farm- and non-farm households. These were all eliminated in a 1981 reorganization of the measure. Subsequent changes to the calculation of the Consumer Price Index have changed the trajectory of the measure over time. But at its core, the official poverty measure has barely changed since it was first introduced in 1965.

Problems with the federal poverty line

The world has changed over the past six decades. The core premise of Orshansky’s original official poverty measure was that multiplying a thrifty food plan by three would give a reasonable estimate of the income needed to survive in the United States.

Over the past six decades, though, the cost of food has plummeted compared to the cost of other goods. The average family now spends about an eighth of its income on food. Other costs have risen: medical expenses have increased from 7% of GDP in 1970 to 18% of GDP in 2024, and housing prices have risen at double the pace of overall inflation since the 1960s.

Changes in housing prices have led to significant divergences in cost of living across the country. While a nationwide poverty measure may have been reasonably accurate in the 1960s, there are now vast differences in the cost of living across states. Because of these changes, the assumption that escaping poverty takes the same amount of income in California as it does in West Virginia is now much weaker.

Income sources have also changed dramatically since the War on Poverty. The Official Poverty Measure was an excellent tool for measuring poverty when incomes were largely just wages and social security. The institution of large safety net programs like the Supplemental Nutrition Assistance Program (formerly “food stamps”), the Earned Income Tax Credit, free school lunches and the Child Tax Credit have led to much of income for lower-income people coming from places other than wages and social security.

Alternatives to the official poverty measure

In 1995, the National Research Council convened a consensus group to recommend updates to the federal poverty guidelines. The project culminated in a consensus report recommending changes to the federal poverty calculation: tying it to a broader range of goods than just food, adjusting for cost of living in different parts of the country, and including income from newer safety net programs in the calculation of income resources. This report sat on a shelf for over a decade until New York City became the first place to put the measure into action, calculating its New York poverty measure using the new methodology. The Census Bureau followed suit the following year, calculating the first Supplemental Poverty Measure, which is now released every year alongside the Official Poverty Measure numbers. States like California, Wisconsin, and Ohio have their own state versions of these measures.

The Supplemental Poverty Measure made major headlines in 2022 when the Census Bureau reported on the record reduction in poverty made by the 2021 expansion of the federal child tax credit. Because the Supplemental Poverty Measure included the child tax credit in its income calculation, it was able to do something the official poverty measure could not: show the impact of public programs on poverty.

Another rival to the Official Poverty Measure is “relative poverty,” which is usually defined as having less than half of the median income. This is a popular poverty measure in the Organisation for Economic Co-operation and Development since it gives a common benchmark for poverty across countries. It’s also easier to calculate and sidesteps the suspect “subsistence needs” approach to poverty that the Official Poverty Measure takes.
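The relative poverty definition is simple enough to compute directly. Here is a minimal sketch in Python; the income figures and the 50-percent cutoff are illustrative assumptions, not data from any actual survey:

```python
from statistics import median

def relative_poverty_rate(incomes, cutoff=0.5):
    """Share of people with income below `cutoff` times the median income."""
    threshold = cutoff * median(incomes)
    return sum(income < threshold for income in incomes) / len(incomes)

# Hypothetical incomes for a ten-person economy (illustrative only).
incomes = [12_000, 18_000, 25_000, 30_000, 40_000,
           45_000, 52_000, 60_000, 75_000, 120_000]

# Median is 42,500, so the threshold is 21,250; two of ten fall below it.
print(relative_poverty_rate(incomes))  # 0.2
```

Note that because the threshold moves with the median, across-the-board income growth doesn’t change relative poverty; only changes in the shape of the distribution do, which is exactly what makes it a different take from a subsistence-needs measure.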

To this day, the Supplemental Poverty Measure is the most widely accepted measure among poverty researchers, but it is still supplemental. Since the Supplemental Poverty Measure shows higher poverty rates in coastal areas and lower rates in the middle of the country than the Official Poverty Measure, adopting it as guidance for issuing federal benefits would lead to a redistribution of resources that would be politically difficult, to say the least. For the time being, Molly Orshansky’s Official Poverty Measure will continue to be the measure of poverty in America, with little change from its inception in 1965.

Can Ohio learn from an immigration crackdown a century ago?

This month, Ohio landed in international news. The Guardian reported on ICE’s new “Operation Buckeye,” an initiative to deport Somali residents of central Ohio and throughout the state.

While deportations represent the most visible and sometimes violent immigration policy being enacted in the United States today, the largest impacts are coming from limits on new immigration.

According to researchers at the Brookings Institution and the American Enterprise Institute, 100,000 more people left the United States in 2025 than in 2024. But nearly 2 million fewer people immigrated into the United States in 2025 compared to the previous year.

When all is said and done, the reduction in immigration was 19 times as large as the total increase in emigration.

This means that the United States had a historically low net immigration rate, with about as many people leaving the Land of Opportunity in 2025 as came in.
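The arithmetic behind those figures can be sketched directly. The numbers below are the rounded figures quoted above (1.9 million reconciles the “nearly 2 million” and “19 times” claims); the researchers’ exact estimates may differ:

```python
# Rounded figures quoted in this piece (not the researchers' exact estimates).
increase_in_emigration = 100_000      # more people left the U.S. in 2025 than 2024
decrease_in_immigration = 1_900_000   # "nearly 2 million" fewer arrivals in 2025

# How many times larger the immigration decline was than the emigration increase.
ratio = decrease_in_immigration / increase_in_emigration
print(ratio)  # 19.0

# Total downward swing in net migration from 2024 to 2025.
net_swing = decrease_in_immigration + increase_in_emigration
print(net_swing)  # 2000000
```

The takeaway is the asymmetry: even headline-grabbing deportations and departures move net migration far less than quietly shutting off the inflow does.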

I have written in the past about how slowing immigration will impact Ohio in the future.

I also asked a question to Scioto Analysis’s Ohio Economic Experts Panel about how the state’s immigration slowdown will impact the economy.

Most economists said it would lead to higher prices, nearly all said it would lead to less small business formation, and all of them said it would lower tax revenue.

A conversation I had on social media recently about an article written on that experts panel led me to an interesting question: has this happened before?

Douglas Buchanan of the Columbus Metropolitan Club asked me if there was economic fallout from U.S. restrictions on immigration enacted in 1924.

The Immigration Act of 1924 is considered by many to be among the most restrictionist immigration laws passed in U.S. history, often mentioned alongside the Chinese Exclusion Act of 1882.

Responding to public fears about demographic change, Congressman and eugenics advocate Albert Johnson championed legislation that enacted quotas on immigration in an attempt to keep the country’s “white” population at the level established by the 1920 census.

The economic literature shows a range of economic effects from this legislation, mostly negative.

Danish economists studying the change found the immigration crackdown led to long-term population declines for areas in the U.S. that previously relied on immigration.

They also found that manufacturing productivity dropped due to less availability of labor and that native worker job quality dropped, presumably because fewer immigrants entering the country meant fewer people buying goods.

An international team of researchers looked at how farms responded to the reduction, finding they moved from labor-intensive agricultural techniques to more reliance on technology.

In short, restricting immigration led to more jobs…for tractors.

According to one Chinese researcher, the 1924 crackdown did have one interesting side effect: spurring the Great Migration.

With many employers in large northern cities looking for workers, they turned to Black migrants from the South, which led to new opportunities for Black American workers.

On balance, restricting immigration leads to fewer consumers, fewer workers, fewer entrepreneurs, fewer inventions, fewer ideas, fewer skills, and less of an edge for Ohio’s economy.

As state and local governments figure out how much to support or oppose federal efforts to deport immigrants and restrict immigration, they cannot ignore that the damage they do to the lives of immigrants will spill into the lives of American-born residents as well.

This commentary first appeared in the Ohio Capital Journal.

Survey: Economists mixed on how H2Ohio bonds will impact Ohio’s economy

In a survey released this morning by Scioto Analysis, 11 of 17 economists agreed that statewide bonds for the H2Ohio program will reduce the cost of water treatment and public health services for local governments.

Governor DeWine is currently speaking with legislative leaders about placing a bond measure on Ohio’s fall ballot in an attempt to ensure continued funding of H2Ohio, his water quality initiative. H2Ohio includes financial incentives for farmers to reduce agricultural runoff and investments toward restoring wetlands, funding sewer and water infrastructure projects, and removing dams.

Most respondents agreed that statewide bonds for the H2Ohio program will reduce the cost of water treatment and public health services for local governments, with 5 economists uncertain and 1 disagreeing. According to Bill LaFayette of Regionomics, “The algal bloom the other year in Lake Erie that crippled Toledo's water supply is just one example of the costs of inattention to agricultural runoff.” Other economists expressed uncertainty about the cost-effectiveness and magnitude of the statewide bonds.

9 of 17 economists agreed that statewide bonds for the H2Ohio program will increase the size of Ohio's outdoor recreation industry. However, economists’ confidence in this statement varied widely. For example, Kevin Egan of the University of Toledo agreed that costs would go down “only if the program spends the money wisely and actually solves the harmful algal bloom problem in Lake Erie and other lakes.” Of the 8 remaining economists, 1 disagreed and 7 were uncertain.

Opinions on whether statewide bonds for the H2Ohio program will grow Ohio’s economy were more mixed, with 6 economists uncertain, 5 agreeing, and 3 disagreeing. Charles Kroncke of Mount Saint Joseph University, who strongly agreed with the statement, explained, “If Ohio is known as a state that thinks ahead and has infrastructure that supports generational health, companies and individuals will feel good about locating here.” On the other hand, Curtis Reynolds of Kent State University strongly disagreed, indicating the impacts would not exist “in any measurable way, just not enough to make an impact on ‘Ohio's economy’ as commonly understood by voters.”

The Ohio Economic Experts Panel is a panel of over 30 Ohio economists from over 30 Ohio higher education institutions conducted by Scioto Analysis. The goal of the Ohio Economic Experts Panel is to promote better policy outcomes by providing policymakers, policy influencers, and the public with the informed opinions of Ohio’s leading economists. Individual responses to all surveys can be found here.

Does inequality matter?

In a couple of months, Scioto Analysis will be releasing its second study on inequality in Ohio. This study will be an update of the 2022 study it produced in collaboration with graduate students at the University of California, Berkeley’s Goldman School of Public Policy.

In 2018, I wrote a post for Gross National Happiness USA’s blog Serious About Happiness. In it, I argued that there are five frameworks that can be used to assess whether a society is serving its residents well: economic growth, poverty, inequality, human development, and subjective well-being.

Of these five frameworks, inequality can raise the most objections. Only extreme partisans object to the benefits of economic growth or poverty reduction. Almost everyone supports human development (income, health, and education) and subjective well-being as social aims. But inequality can give people pause.

Philosophical approaches to inequality

In his 2018 book Why Does Inequality Matter?, American ethicist T.M. Scanlon lays out six justifications for why inequality carries moral weight. Without getting into the details of the book, here are his justifications:

  1. Inequality arises because certain people were not given proper consideration.

  2. Inequality creates unjustified inequalities in status.

  3. Inequality gives some people control over other people.

  4. Inequality makes economic institutions unfair.

  5. Inequality makes political institutions unfair.

  6. Inequality arises from unfair institutions.

Scanlon’s justifications for the moral weight of inequality make sense on their surface. They seem a step away, though, from the criticism John Rawls levels at inequality in his political philosophy classic A Theory of Justice.

In A Theory of Justice, Rawls invites readers to place themselves behind a “veil of ignorance.” Behind this hypothetical veil, people know nothing of their place in society, their race, their gender, their abilities, their family, their wealth, their height, their charisma, their anything. He then asks people to imagine what a just society would look like to someone living behind that veil.

Rawls argues that someone behind the veil would want (1) access to freedoms that allow them to experience the many lives that they may want to live beyond the veil, and (2) access to equal resources, with any inequality of resources only justified by increasing the total resources for the least well-off in society.

Arthur Okun: a policy analytic approach to inequality

Economist Arthur Okun brings up a similar thought experiment in his “leaky bucket” analogy, which undergirds the policy analytic concept of equity-efficiency tradeoffs. Okun argues that social welfare programs can be characterized as “leaky buckets,” where redistributive policies, through market distortions and administrative spending, cause “leakage” in the economy while resources are being redistributed.

He, like Rawls, argues we can use our intuition to determine how much leakage we are willing to accept to reduce inequality. To Okun, the problem with inequality is self-evident, and the problems with policies to reduce it are tradeoffs inherent in public policy.

Marx: the political problem with inequality

Marxist approaches to inequality argue that inequalities lead to social collapse. Marx argued that capitalist society was inherently unequal, that capital inherently created inequality between two classes: capital owners and workers. Marx’s perspective was that this inequality would ultimately lead to revolution.

More contemporary Marxists use this historical determinism as a justification for strong social safety net systems. As far back as the 1880s, Otto von Bismarck of Germany was credited with pioneering the first social welfare state as a tool for social stability. He was fending off challenges from socialist political rivals, but he also characterized universal health and accident insurance and old-age and disability pensions as tools for creating social and political stability.

The economic problem with inequality

More contemporary economists have argued that inequality hampers economic growth. There are a number of reasons this may be the case. Inequality could lead to more centralization of consumption or production, allowing producers to exercise market power to keep wages artificially low or consumer good prices artificially high. 

Inequality could also stifle innovation. A 2018 study provocatively titled “Lost Einsteins” found evidence that children born into the top one percent of the income distribution are ten times more likely to be inventors than children born into the bottom 50 percent of the income distribution. Unequal access to resources in childhood could be choking innovation, which, along with people and capital, is one of the three drivers of economic growth under the classical economic growth model.

The intuitive problem with inequality

Of all these approaches to understanding inequality, the ones that ring truest to me are those Rawls and Okun put forth. There is something inherently troubling about inequality.

When I consider the richest people in the world, most of them are the opposite of the “lost Einsteins” found in the U.S. innovation ecosystem: they started with a leg up. Elon Musk is an heir to an emerald fortune. Google Co-Founder Larry Page’s father Carl Victor Page Sr. was a pioneer in computer science and artificial intelligence. His co-founder Sergey Brin was a third-generation computer scientist.

The amount of their wealth is staggering. Each of these people is a centibillionaire, with net worth of $839 billion, $257 billion, and $237 billion respectively. Elon Musk’s wealth is the same as the income of 25 million families of four at the U.S. federal poverty line.

I think the strongest objection to inequality is one that Rawls builds into his “difference principle”: his claim that differences in resources are only justified by improvements in resources for the least well-off in society. Maybe inequality is not so great, but maybe it unlocks other things by giving people incentives to create social goods like Tesla, Google, Amazon, and Facebook.

But this is why I like inequality as one part of a portfolio of outcomes for a good society. Inequality should not stand on its own as the only yardstick for a good society, which is why policymakers should consider economic growth, poverty, human development, and subjective well-being alongside it. Maybe there will be policies that reduce poverty while increasing inequality, the kind of tradeoff Rawls wrestles with in his “justice as fairness.” Or maybe we are okay with a little inequality if it leads to a lot of economic growth, or to gains in education, health outcomes, or happiness.

But we’re fooling ourselves if we don’t think, all else being equal, that a more equal society is better than a more unequal one. While reducing inequality cannot be the only goal of society, it certainly should be among its goals.

What is the impact of paid paternity leave?

This year, Minnesota became the thirteenth state to offer paid family and medical leave to all workers. Paid leave is a topic we’ve been following for some time, with both my colleague Rob and me writing multiple blog posts about it.

One of the reasons I’ve been so interested in paid leave policies is because they are an interesting case study in how one policy can be viewed through different lenses. Our first time studying paid family leave was part of our work with an anti-poverty group, but most people approach it from a labor market perspective. 

The reason paid leave policies get so much attention from people studying labor markets is that they attempt to address the gender wage gap. The idea is that paid leave policies allow mothers to remain more attached to the workforce, which in turn might lead to higher wages should they choose to return. So far, results from studies of paid leave program introductions have been mixed.

However, a new study on reform in the paid leave program in Denmark offers some new insights about this issue. 

Denmark has long had a paid parental leave program that combines some weeks reserved for each parent with additional weeks that families can divide however they choose. In 2022, Denmark reformed the system to require a more even split between mothers and fathers.

Before the reform, mothers had 14 weeks of non‑transferable leave and fathers had just two, with another 32 weeks that either parent could use. After the reform, both parents received 11 weeks of earmarked leave, and the shared portion shrank to 26 weeks.

Importantly, Denmark didn’t expand the total amount of leave: families still get 48 weeks in total. The change was purely about how those weeks are allocated.
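The reallocation is easy to verify with quick arithmetic, using the week counts reported above:

```python
# Weeks of Danish parental leave before and after the 2022 reform,
# as described in the study: earmarked weeks per parent plus shared weeks.
before = {"mother_earmarked": 14, "father_earmarked": 2, "shared": 32}
after = {"mother_earmarked": 11, "father_earmarked": 11, "shared": 26}

# The totals match: the reform reallocates weeks rather than adding any.
assert sum(before.values()) == sum(after.values()) == 48

# Fathers' earmarked weeks rose from 2 to 11, mothers' fell from 14 to 11,
# and the flexible shared pool shrank from 32 to 26.
print(sum(after.values()))  # 48
```

Keeping the total fixed is what makes this a clean natural experiment: any change in outcomes comes from who takes the leave, not how much leave exists.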

The first thing the researchers find is that the reform worked exactly as intended. Fathers increased their leave by about three and a half weeks, and mothers reduced theirs by a bit more than five. These researchers weren’t just measuring labor market outcomes, though; they also surveyed parents to understand their attitudes about people taking leave.

After the reform, parents became more supportive of paternity leave and more likely to say that fathers taking leave is socially acceptable at work. They changed their responses to certain questions about traditional gender roles, for example being less likely to agree that young children suffer when mothers work full time.

These belief changes translated into behavior. The study finds that the reform narrowed gender gaps in earnings and hours worked. Some of this is mechanical in the first year because fathers are out of work more and mothers less, but the effects persist into the second year, after both parents have returned to work. The earnings gap shrank by nearly three percentage points in year two, and the hours gap by about one and a half. 

But the study also highlights a real tradeoff. Parents were less satisfied with their leave arrangements after the reform, largely because they felt mothers should have had more flexibility. By cutting the number of flex weeks from 32 to 26, the state meaningfully reduced the options families have when deciding how to split up their paid leave time.

This Danish reform is a useful reminder that the same policy can look very different depending on the lens we use. From a labor-market perspective, it clearly narrowed gender gaps and shifted norms in a more equal direction. From a family-autonomy perspective, it reduced flexibility and left many parents less satisfied. As more states adopt paid leave, this case study shows why it’s important to consider a range of relevant outcomes when evaluating policy reforms. Policy myopia can make a policy look good while ignoring broader impacts that matter just as much.

What does daylight savings time do to the economy?

My alarm went off this morning at 6:30 like it does every weekday, but something was different today. I was noticeably more tired than I normally would be, and it was much darker than usual outside.

Unless you live in Arizona or Hawaii, you probably had a similar experience this weekend as the country collectively shifted its clocks one hour forward in observance of daylight savings time, the twice-yearly tradition of causing avoidable problems and making people ornery.

Why we change our clocks

The reason we have daylight savings time is that in the summer months, there is an excess of daylight. Since the sun rose before many workers began their days, people thought it would be nice to take an hour of sunlight away from the morning and give it to the evening so people could stay out later.

Additionally, it is commonly thought that daylight savings reduces our energy consumption because we can rely less on electric light during the summer months. In fact, savings from electric light consumption were cited as the original reason Germany first implemented daylight savings time during the First World War.

However, if you live in a northern state near the western edge of a time zone, this may not be quite so appealing in the winter months. Residents of the town of Fortuna, ND wouldn’t see the sunrise until almost 10:00 am on the shortest days of the year if they had permanent daylight time.

What happens when we change our clocks

While switching between daylight and standard time seems like the best of both worlds, it comes at the cost of everyone collectively feeling more drowsy than usual twice a year. These disruptions to our circadian rhythm have very tangible costs.

Studies have found that this switch leads to modest increases in cardiovascular problems and more fatal car crashes. Add to that the short-term losses in productivity from drowsiness, and switching starts to seem like it isn’t such a good idea after all.

In 2023, we conducted a cost-benefit analysis of ending daylight savings time in Ohio. We estimated that these two switches cost the state about $40 million per year. 

Standard time or daylight time

If we decide we no longer want to change our clocks twice a year, we will then need to agree on what time it should be. In our study, we found that there are benefits to both permanent standard time and permanent daylight time. 

Permanent standard time means more light in the morning and less in the evenings. A study looking at daylight savings time in Indiana found that contrary to popular belief, daylight savings actually increased energy use. While this study did find that electric light use decreased because of daylight savings time, that reduction was outweighed by increased costs associated with heating and cooling. This means that by adopting permanent standard time, we’d expect the amount of energy we consume to actually decrease over the course of a year.

The main benefit of permanent daylight time is that sunlight lasts later into the evening. One notable effect of that later light is a reduction in evening crime.

From a cost-benefit perspective, we find that these two benefits are roughly equivalent to each other. It’s a fascinating policy analytic case where there really isn’t a clear winner, but rather a matter of preference.

Either way, it’s clear the costs of changing each year outweigh the benefits. To those who might balk at the change, worrying about the sun not rising till almost midday or setting in the early afternoon, I offer this advice: move closer to the equator. 

Until humans develop a way to slow the rotation of the earth, we won’t be able to actually change the amount of sunlight we get each day. Shifting it around might make us feel like we have some control over the sun, but at the end of the day if you live farther north in this country you get less daylight. If policymakers accept this fact and stop messing with people’s sleep schedules each year, the economy and its participants will thank them.