Nurturing the null: Preparing for null results to bolster evidence use
Null results—when a study does not find significant impacts on chosen outcomes—can provide valuable insights for research and policy alike. However, it can be difficult for stakeholders to identify and leverage these insights. In J-PAL's null results blog series, we highlight randomized evaluations that yielded null results in order to elevate the lessons they offer and inform future research. In this first post of the series, J-PAL staff share three key considerations for successfully designing studies and acting on null results.
Results from a well-designed randomized evaluation are important for informing policymaking and for scientific progress. However, null results—finding no impact—can be particularly difficult for researchers, policymakers, and service providers to act on. While we may hope for positive results when launching a study, the reality is that null results are common. We’ve seen this trend across our work, but particularly in health care, where a spate of well-designed randomized evaluations demonstrated that program impacts were null, even after previous observational results were promising.
These studies motivated J-PAL to develop resources on nulls, both generally and in health care specifically. We're also launching this blog series to share lessons from J-PAL projects and updating our Research Resources to incorporate what we've learned from this work.
Across our work with researchers and implementing partners, three key considerations for successfully designing studies and acting on null results have emerged.
1) Have conversations about nulls early and often
Researchers and implementing partners should discuss the possibility of null results during early project development conversations. Don’t wait until results are known; these conversations should be part of assessing the viability of a research partnership.
Researchers and implementing partners face different risks and incentives when committing to an evaluation. As much as we would like null or negative results to be seen as progress on the way to success, these results are often seen as failures that make it difficult to maintain the momentum, morale, or funding necessary to iterate and improve a program. Partner organizations should be aware of these risks, and researchers should be sensitive to partner concerns. Partner organizations should also understand that researchers must have freedom to publish and can’t shy away from particular results.
Tackling the possibility of null results head-on can help foster a commitment to the use of evidence after the study. For example, when King County’s Youth and Family Homelessness Prevention Initiative found that their program (case management plus cash assistance) had the same effect as cash assistance alone on reducing shelter entry for people at risk of homelessness, they were able to leverage their strong partnership with researchers at the Lab for Economic Opportunity to identify ways to learn from their null results. When establishing a research partnership, program implementers may want to assess whether researchers are interested in this type of long-term and collaborative engagement.
When discussing evidence use, it can be helpful to ask questions like the following:
- How will the implementing organization use the evidence from this study?
- What is feasibly changeable about the program?
- If results are null or negative, will answering this research question still lead to change? If not, what question would?
- Does the organization have the time, ability, and resources to pivot based on the evidence? How can the researchers contribute to this work?
- How might a null result impact funding?
- What other evidence or data, beyond the impact estimates from the evaluation, would support the work of the partner? Descriptive statistics and process measures are often just as valuable, and partners may not have another way to gather them outside of an evaluation.
These conversations may need to be revisited throughout the life of a project, particularly when staff or leadership at an organization turn over.
2) Create a strong measurement strategy
The possibility of null or negative results drives home the importance of a strong measurement strategy (vital to any well-designed evaluation) that is grounded in a theory of change and adequately powered on key outcomes.
- A program’s theory of change (ToC) is a causal logic model that lays out the steps through which an intervention is expected to lead to an impact. ToCs should be used to determine what constructs to measure and how to measure them. ToCs can be important tools for ensuring researchers and partners are on the same page about the goals of the program, how and why they believe it works, and the most important outcomes to include in the evaluation. In the case of null results, measuring inputs, outputs, and intermediate outcomes along the ToC can help to distinguish failures of implementation from failures of the program to produce the desired effects.
- For example, when J-PAL affiliated researchers Manasi Deshpande (UChicago) and Rebecca Dizon-Ross (UChicago) evaluated whether anticipating future benefit loss impacts human capital investments, they found that an informational intervention changed beliefs, but not behavior, and fielded a second survey to learn why. This finding led them to reassess their ToC. You can learn more about this study from this J-PAL blog.
- Sufficient statistical power helps us avoid false negatives, or concluding there is no impact when there actually is one. For null results to be informative, they must be precisely estimated. Imprecisely estimated insignificant results don't allow you to distinguish between a true lack of impact and a lack of statistical power to detect that impact. To ensure informative and actionable results, researchers should take all opportunities to increase sample size, maximize take-up, and reduce attrition. Partners can provide critical insight into the effect sizes that studies should be designed to detect by identifying what effect size would indicate success for the program. J-PAL has a number of research resources on power.
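To make the power point above concrete, here is a minimal sketch of the standard sample-size formula for a two-arm trial comparing means, assuming a two-sided test, equal variances, and 1:1 allocation. The function name and defaults are illustrative, not from any specific J-PAL resource; in practice, researchers typically use dedicated power-calculation tools that also account for clustering, take-up, and attrition.

```python
from statistics import NormalDist

def sample_size_per_arm(mde, sd, alpha=0.05, power=0.8):
    """Approximate sample size per arm to detect a minimum detectable
    effect (mde) on a continuous outcome with standard deviation sd,
    using a two-sided test at significance level alpha."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value of the test
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return 2 * ((z_alpha + z_beta) ** 2) * (sd / mde) ** 2

# Detecting a 0.2 standard deviation effect at 80% power requires
# roughly 390-400 participants per arm; a smaller MDE raises this fast.
n = sample_size_per_arm(mde=0.2, sd=1.0)
```

Note how sample size scales with the inverse square of the effect size: halving the MDE quadruples the required sample, which is why agreeing with partners on a realistic "success" effect size matters so much at the design stage.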
Depending on the study, it may also be helpful to:
- Measure the same or similar outcomes with multiple data sources (for example, collecting both survey and administrative data or measuring multiple outcomes that capture various aspects of a single construct) to increase the chance that (null) results reflect program impacts rather than particular measurement decisions.
- Establish prior beliefs or predictions about a program with an expert survey of researchers, policymakers, and/or practitioners in a field. Documenting prior beliefs can bolster the value of results that might otherwise seem obvious in retrospect.
- Commit to transparently recording measurement, analysis, and interpretation decisions before results are known, for example through a pre-analysis plan.
3) Leverage your partnerships to explain and share the findings
Communicating about null or negative results is particularly challenging. Researchers may have concerns about sharing disappointing news with their partners. Partners may have concerns about sharing results with funders and other stakeholders upon whom their programs rely.
We recommend researchers:
- Share results with partners early, before a paper is written, in person or on a call, and allow time in the research process to incorporate partner feedback into the manuscript.
- When ready to release results, discuss a joint narrative and media strategy with partners, which could include drafting talking points, developing opinion pieces, and proactively reaching out to journalists.
Researchers and partners may decide to communicate together or separately, depending on the message they want to share and their specific areas of expertise. Our own experiences and those of the researchers we talked to drove home the importance of being humble and clear about the limitations of a single study. While the focus is often on the research results, a communications strategy can also situate the study within the broader body of evidence and highlight the value of the research partnership, the provider's commitment to evidence, and the continuing importance of the problem the program aimed to tackle.
For more on communication plans, see our research resource on communicating with partners about results.