
Null results—when a study does not find significant impacts on its chosen outcomes—can provide valuable insights for research and policy alike. However, it can be difficult for stakeholders to identify and leverage these insights. In J-PAL's null results blog series, we highlight randomized evaluations that yielded null results in order to elevate their lessons and inform future research. In this first post of the series, J-PAL staff share three key considerations for successfully designing studies and acting on null results.
Results from a well-designed randomized evaluation are important for informing policymaking and for scientific progress. However, null results—finding no impact—can be particularly difficult for researchers, policymakers, and service providers to act on. While we may hope for positive results when launching a study, the reality is that null results are common. We’ve seen this trend across our work, particularly in health care, where a spate of well-designed randomized evaluations found that program impacts were null even when earlier observational results had been promising.
These studies motivated J-PAL to develop resources on null results, both generally and in health care specifically. We’re also launching this blog series to share lessons from J-PAL projects and updating our Research Resources to incorporate what we have learned from this work.
Across our work with researchers and implementing partners, three key considerations for successfully designing studies and acting on null results have emerged.
Researchers and implementing partners should discuss the possibility of null results during early project development conversations. Don’t wait until results are known; these conversations should be part of assessing the viability of a research partnership.
Researchers and implementing partners face different risks and incentives when committing to an evaluation. As much as we would like null or negative results to be seen as progress on the way to success, they are often perceived as failures that make it difficult to maintain the momentum, morale, or funding necessary to iterate on and improve a program. Partner organizations should be aware of these risks, and researchers should be sensitive to partner concerns. Partner organizations should also understand that researchers must have the freedom to publish and cannot shy away from particular results.
Tackling the possibility of null results head-on can help foster a commitment to the use of evidence after the study. For example, when King County’s Youth and Family Homelessness Prevention Initiative found that its program (case management plus cash assistance) had the same effect as cash assistance alone on reducing shelter entry for people at risk of homelessness, the initiative was able to leverage its strong partnership with researchers at the Lab for Economic Opportunity to identify ways to learn from the null results. When establishing a research partnership, program implementers may want to assess whether researchers are interested in this type of long-term, collaborative engagement.
When discussing evidence use, it can be helpful to ask questions like the following:
These conversations may need to be revisited throughout the life of a project, particularly when staff or leadership at an organization turn over.
The possibility of null or negative results drives home the importance of a strong measurement strategy (vital to any well-designed evaluation) grounded in a theory of change with well-powered outcomes.
Depending on the study, it may also be helpful to:
Communicating about null or negative results is particularly challenging. Researchers may have concerns about sharing disappointing news with their partners. Partners may have concerns about sharing results with funders and other stakeholders upon whom their programs rely.
We recommend researchers:
Researchers and partners may decide to communicate together or separately, depending on the message they want to share and their specific areas of expertise. Our own experience, and that of the researchers we talked to, drove home the importance of being humble and clear about the limitations of a single study. While the focus is often on the research results, a communications strategy can also situate the study within the broader body of evidence and highlight the value of the research partnership, the provider’s commitment to evidence, and the continuing importance of the problem the program aimed to tackle.
For more on communication plans, see our research resource on communicating with partners about results.