Cost-Effectiveness and Welfare Analysis

J-PAL’s cost-effectiveness and welfare analysis team helps policymakers incorporate practical cost considerations when seeking to act on rigorous evidence.

Our approach

J-PAL produces both cost-effectiveness analyses (CEA) and estimates of the marginal value of public funds (MVPF), a form of cost-benefit analysis.

Cost-effectiveness analysis

CEA summarizes complex programs in terms of a simple ratio of costs to impacts. This common measure allows us to compare programs that aim to achieve the same outcome, even when they were evaluated in different countries and in different years. Through this approach, we can gain insight into which programs are likely to deliver the greatest impact on a specific outcome per unit of money spent.

Cost-effectiveness ratio = impact on a specific outcome ÷ total program cost to implementer and beneficiaries

CEA is useful for comparing the costs and impacts of programs that aim to achieve the same outcome. It can be used when multiple impact evaluations measure a common outcome.
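
To make the comparison concrete, here is a minimal sketch in Python of how such a ratio could be computed and rescaled to a per-US$100 basis; the two programs and all figures are hypothetical, not J-PAL estimates.

```python
# Minimal sketch of a cost-effectiveness ratio (hypothetical figures, not J-PAL estimates).
# The ratio divides a program's total impact on one outcome by its total cost to the
# implementer and beneficiaries, so programs measured on the same outcome can be compared.

def cost_effectiveness_ratio(total_impact, total_cost):
    """Impact on a specific outcome per unit of money spent."""
    return total_impact / total_cost

# Two hypothetical programs, both measured in additional years of schooling.
programs = {
    "Program A": {"total_impact": 1200.0, "total_cost": 50_000.0},  # years, US$
    "Program B": {"total_impact": 300.0, "total_cost": 6_000.0},
}

for name, p in programs.items():
    ratio = cost_effectiveness_ratio(p["total_impact"], p["total_cost"])
    # Rescale to the per-US$100 units used for the schooling results later on this page.
    print(f"{name}: {ratio * 100:.2f} additional years of schooling per US$100")
```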

The Marginal Value of Public Funds

Policymakers may also be interested in comparing the costs and impacts of programs across sectors and multiple outcome measures. Cost-benefit analysis and welfare analysis do this by monetizing all of the benefits and costs of a program to provide estimates of its aggregate net impacts on society. At J-PAL, we often use the MVPF, which helps us understand a program's net returns to beneficiaries per dollar spent by the government.

MVPF = net benefits to recipients ÷ net government costs

By monetizing the benefits of a policy, the MVPF allows the user to compare programs that achieve different outcomes. If a program has multiple impacts (for instance, a childcare program might increase women's employment rates while also improving cognitive development for children), the MVPF combines all of these benefits.
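
As an illustrative sketch (not an actual estimate), an MVPF calculation could be organized as below; the childcare figures are invented placeholders, and the netting of recouped government revenue is only one assumed way the net government cost might be computed.

```python
# Illustrative MVPF calculation (invented numbers, not a published estimate).
# MVPF = net benefits to recipients / net government cost; benefits from different
# outcomes are first monetized so they can be added together.

def mvpf(monetized_benefits, gross_government_cost, recouped_by_government=0.0):
    """Net benefits to recipients per net dollar of government spending.

    recouped_by_government: money flowing back to the government (for example,
    extra taxes paid when employment rises), assumed here to reduce the net cost.
    """
    net_benefits = sum(monetized_benefits.values())
    net_cost = gross_government_cost - recouped_by_government
    return net_benefits / net_cost

# Hypothetical childcare program, benefits in US$ per family:
benefits = {
    "mothers_earnings_gain": 1_800.0,        # from higher employment
    "children_future_earnings_gain": 900.0,  # from improved cognitive development
}

print(f"MVPF: {mvpf(benefits, gross_government_cost=2_000.0, recouped_by_government=400.0):.2f}")
```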

Monetizing benefits (such as estimating the impact on lifetime earnings due to increased education, or calculating the net social returns from reduced CO2 emissions) requires making assumptions, for example, about the wage returns to education or the social cost of carbon. At J-PAL, we use RCT or quasi-experimental estimates wherever possible to inform these assumptions. Where such estimates do not exist, we rely on estimates from surveys conducted by universities, multilateral or international organizations like the ILO and World Bank, or government agencies. If you are interested in learning more about our estimates and sources, please reach out to [email protected].
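
For instance, a back-of-the-envelope monetization of an education impact might look like the sketch below; the wage return, baseline earnings, working life, and discount rate are placeholder assumptions standing in for the RCT, quasi-experimental, or survey estimates described above.

```python
# Sketch of monetizing an education impact (all parameters are placeholder assumptions).
# Additional schooling is converted into the present value of lifetime earnings gains.

def pv_lifetime_earnings_gain(extra_years_schooling,
                              wage_return_per_year=0.08,         # assumed return to one year of schooling
                              baseline_annual_earnings=3_000.0,  # assumed US$ per year
                              working_years=40,
                              discount_rate=0.03):
    """Present value of the earnings gain from additional schooling."""
    annual_gain = baseline_annual_earnings * wage_return_per_year * extra_years_schooling
    return sum(annual_gain / (1 + discount_rate) ** t for t in range(1, working_years + 1))

# Example: a program that adds 0.5 years of schooling per participant.
print(f"PV of earnings gain per participant: US${pv_lifetime_earnings_gain(0.5):,.0f}")
```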

Current priorities

To help decision-makers use evidence in their everyday work, we have completed dozens of analyses focused on the cost-effectiveness of improving student attendance and learning. In addition to expanding this work to include more studies on the cost-effectiveness of education programs, we are now moving into new sectors and outcomes, such as women's employment and income and household income. We are also exploring new work in the climate change space, where we are developing comparative cost-effectiveness estimates expressed as the cost per ton of CO2 emissions abated.

Finally, conducting this type of analysis requires robust cost data: all J-PAL projects funded by our initiatives are required to collect cost data, and we are collaborating with research teams to make cost data collection easier for them while maintaining data quality.

Results and policy products


Impacts on student attendance

We report results in terms of additional years of schooling per US$100 spent. This metric is calculated by multiplying the average impact on participation per student by the total number of students who received the program, and then dividing by the total cost of the program. One additional year of schooling refers to one academic year, not twelve months of classroom instruction. The chart below highlights findings from our analysis of key education programs that aim to improve student attendance.
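
To illustrate the arithmetic, the short sketch below computes this metric from made-up inputs (not figures underlying the chart).

```python
# Additional years of schooling per US$100 spent (made-up inputs, not chart values).
# Total impact = average impact on participation per student x students reached;
# the result is divided by total program cost and rescaled to per US$100.

def years_of_schooling_per_100_usd(avg_impact_per_student, num_students, total_cost_usd):
    total_additional_years = avg_impact_per_student * num_students
    return total_additional_years / total_cost_usd * 100

# Hypothetical program: +0.02 years per student, 10,000 students, US$40,000 total cost.
print(f"{years_of_schooling_per_100_usd(0.02, 10_000, 40_000):.2f} additional years of schooling per US$100")
```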


Cost-effectiveness of programs to improve student attendance

The calculations for the chart can be found here.

Our Roll Call Bulletin provides a synthesis of the effectiveness and cost-effectiveness of an array of interventions to improve student attendance.

Impacts on student learning

We report results as the average test score improvement per student multiplied by the number of students impacted and then divided by the aggregate cost of implementing the program. Impacts are measured in terms of standard deviation changes in student test scores: expressing impacts in standard deviations scales a program's effect on test scores relative to the spread of scores in the comparison group, which makes results comparable across different tests. The graph below highlights findings from our analysis of key education programs that aim to improve student learning.
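
As a rough sketch of this calculation, the example below first converts a raw test score gain into standard deviation units using a hypothetical comparison group, then divides the total effect by program cost; all numbers are invented, and the per-US$100 rescaling is only a presentation choice.

```python
# Sketch of a learning cost-effectiveness calculation (invented numbers).
# Raw score gains are expressed in standard deviations of the comparison group's scores,
# multiplied by the number of students, and divided by the aggregate program cost.

import statistics

comparison_group_scores = [42.0, 55.0, 61.0, 48.0, 67.0, 39.0, 58.0, 52.0]  # hypothetical raw scores
comparison_sd = statistics.stdev(comparison_group_scores)

avg_raw_score_gain = 4.5    # hypothetical average improvement per student, in raw points
num_students = 5_000
total_cost_usd = 120_000.0

impact_in_sd = avg_raw_score_gain / comparison_sd   # effect size per student, in standard deviations
total_sd_gained = impact_in_sd * num_students
print(f"Effect size per student: {impact_in_sd:.2f} standard deviations")
print(f"{total_sd_gained / total_cost_usd * 100:.2f} total standard deviations gained per US$100 spent")
```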

Cost-effectiveness of programs to improve student learning

The calculations for the above chart can be found here.

Our student learning policy insight synthesizes evidence and cost-effectiveness information from 27 studies.

Our team

  • Nathan Hendren, Scientific Advisor, is a professor of economics at MIT and a Faculty Research Fellow at the National Bureau of Economic Research
  • Anupama Dathan, Senior Policy Manager
  • Donald Pepka, Policy Associate

Other resources

Partners

UK aid from the British people