Communicating with a partner about results
Summary
Randomized evaluations require collaboration and communication among many stakeholders, including academic researchers, research staff, implementing partners, holders of administrative data, policymakers, and the community. These stakeholders have diverse viewpoints and specialized vocabularies, which can make collaboration difficult. Strong communication, leading to partner engagement, increases the chances of a smoothly run evaluation and of policy impact. This resource combines experience and guidance from numerous researchers and staff in implementing randomized evaluations.
Introduction
Communicating about results with implementing partners and other key stakeholders enables these partners to make direct changes to operations, policy, or processes, and to shape the direction of future programs. This document provides guidance for researchers on when and how to communicate with partners about results and progress measures of randomized evaluations. Thoughtful communication – considering what, when, and how to share results – is one element of fostering strong relationships with implementing partners and other key stakeholders. Strong working relationships between researchers and stakeholders may help pave a pathway to policy impact, as partners are likely to influence whether and how evidence generated by the evaluation is interpreted, disseminated, or acted on. The quality of the relationship may also influence a partner’s willingness to engage in future research. Communicating about results, specifically, may be important due to:
- Ethical best practices. Subjects and partners invest time and private information crucial to the research. Ensuring these stakeholders—or their trusted proxies such as advocates or study commissioners—are informed of research results that may affect them could help justify their involvement. The Belmont Report encourages the dissemination of research results to subjects (National Commission for the Protection of Human Subjects 1978).
- Contractual requirements. Data use agreements, grant contracts, or Institutional Review Boards may impose disclosure or communication requirements.
- Context interpretation. Partners can provide valuable context for interpreting research results by illuminating relevant mechanisms or details of the intervention, implementation, or setting.
The remainder of this resource is organized into the following sections:
- Communications planning
- Intervention phase
- Post intervention: Final results
- Embargo policies
- Elements of a communications plan
Communications planning
During initial planning stages with a new partner or in a new context, gather information and make plans to communicate with research partners in ways that are sensitive to the partner’s needs, views, and political realities. Anticipating the partner’s viewpoint, motivations, and potential differences in interpretation of outcomes can enable researchers to prepare for discussions and avoid rifts. Consider the implications of highlighting certain results, and adjust communications accordingly (when doing so does not substantively conflict with the academic findings). This preparation and consideration will set the stage for positive communications throughout the partnership.
- Identify areas of particular sensitivity and document these so study staff will be aware of them when communicating about the research with the partner or other non-academic audiences. Some areas of sensitivity may include:
- Statistics demonstrating poor performance. Researchers may find that 90% of program staff are in compliance, and highlight this as a positive result. However, a program officer might focus on the fact that 10% of their employees are engaging in fraud.
- Politically or socially sensitive outcomes. Researchers may evaluate a large number of outcomes in a single study, while shorter synthesis documents often highlight only a few “key findings.” The choice of these key findings may carry normative judgments; for example, highlighting reduced rates of single parenthood as a significant outcome may imply a normative judgment about single parenthood.
- Economic jargon and non-academic connotations. For example, “induce” is an economic term with a specific meaning, while the general connotation is to “force” someone to do something, which may be particularly inappropriate in some research settings. Similarly, economic jargon referring to “marginal” may not be appropriate in non-academic settings where “marginal” means “insignificant” or “unimportant.”
- Consider politics, power structures, and reporting lines. The order in which you inform stakeholders may affect relationships.
- For example, when working with several agencies in a government, ensure the Education Secretary hears about results relating to the education project before (or at the same time as) the Health Secretary; ensure the Health Secretary has similar precedence for the health-related results.
- Senior leadership may want to be informed prior to ground-level staff, or vice versa.
- Consider the timeline of what will be shared, when, and with whom. Subsequent sections discuss what might be shared by stage of the intervention.
Other key items of a communications plan are outlined below, in the section "Elements of a communications plan."
Intervention phase
During this phase, the researcher does not have adequate data to generate final research results. Recruitment or take-up may not yet be complete, the time period may not yet have elapsed, or endline data may not yet be available. Sharing interim impact estimates—or other information such as summary statistics, monitoring data, and process measures—can help maintain engagement with the partners; however, sharing interim impact estimates without sufficient context may endanger the evaluation.
Implementers may wish to have access to interim results in order to monitor the safety and assess the ethics of continuing the intervention, while researchers may worry about decision-making based on imprecise results. A solution to this conundrum is discussed below, in the sub-section "Ongoing monitoring of safety and ethics."
Sharing interim impact estimates
Sharing interim impact estimates may:
- Maintain buy-in and relationship with the partner.
- Facilitate improvements to or consistency of program implementation. Implementing partners may be able to interpret differences in term-by-term results by investigating differences in program implementation. This can prompt refinements to guidelines and monitoring plans, resulting in more consistent program implementation.
- Enable the partner to make necessary operational or policy decisions. Partners have to make decisions on how to operate their program, and where to allocate scarce resources. Priorities or conditions that warranted the research at the outset may no longer hold; for example, if program funding is gained or lost, costs increase or decrease, or political realities change.
“If the research is not complete or published when the policy window opens, package early insights in a visually compelling, jargon-free, one-page report for policy makers. Brief them in person and specify that findings are “off the record,” not for citation before publication” (“Connecting Research To Policy” 2018).
- Enable the partner to make an informed decision about whether it remains ethical to complete the evaluation. See below, in the sub-section "Ongoing monitoring of safety and ethics," for details.
- Provide an opportunity for the partner to contextualize and interpret data or results. This may add value to the final research product; for example, by illuminating additional pathways of impact, identifying relevant follow-up questions, or identifying data limitations.
- Facilitate presentation of final results to the partner. Reviewing and discussing interim results without a publication or review deadline approaching may engender a “learning process” between researchers and partners, rather than a series of high-stakes assessments. With a more nuanced understanding of how outcomes will be assessed and measured, partners may be more ready to accept and discuss final results.
Sharing interim impact estimates also brings risk. Researchers may mitigate these risks by providing context and caveats to the findings and proactively planning follow-ups with the partner. Research teams should weigh the risks against the benefits when planning their communications strategy.
- Reputational risk. Because the research is incomplete, the results may change. This may be due to interim results being sensitive to each new data point, results manifesting over time, the receipt of new or higher quality data, updated analysis techniques, or the correction of coding errors. Discussion of "results in progress" can undermine a researcher’s reputation for producing high quality, correct work.
- Mitigation: Caveat the results by explaining that they are preliminary, explaining statistical power (see the sketch following this list), and reviewing other factors that may affect results.
- Partner response. Whether consciously or unconsciously, the partner may respond to interim results in ways that undermine or bias the study. An indication of null results might cause the partner to change the intervention to try to improve it. An indication that the intervention is effective might cause the partner to stop the study so that members of the comparison group can benefit immediately.
- Mitigation: Anticipate potential actions of the partner, discuss concerns, and jointly plan follow-up strategies. These discussions can help distinguish between essential operational decisions and emotional responses to inconclusive indicators.
- Publication rules. The Ingelfinger rule and related embargo policies apply to studies targeting medical journals. Embargo policies may be violated if the media reports on study results prior to publication in an academic journal, jeopardizing academic publication. See the section "Embargo policies" below for more information on this topic.
- Mitigation: Discuss publication rules or other contractual limitations up front, and agree on a communication strategy with the partner.
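To make the statistical power caveat above concrete, the following minimal sketch shows how wide a 95% confidence interval around an interim treatment effect can be at a partial sample. It is written in Python; the two-arm design, binary outcome, and sample sizes are hypothetical illustrations, not drawn from this resource.

```python
import math

# Hypothetical two-arm study with a binary outcome and equal-sized arms.
# The 95% CI half-width for a difference in proportions is roughly
# z * sqrt(p(1-p)/n_t + p(1-p)/n_c). Using p = 0.5 maximizes the
# variance, giving a conservative (widest) interval.
p = 0.5   # assumed outcome rate; 0.5 is the worst case for precision
z = 1.96  # critical value for a two-sided 95% confidence interval

for n_per_arm in [100, 250, 500, 1000]:
    se = math.sqrt(2 * p * (1 - p) / n_per_arm)
    print(f"n = {n_per_arm:>4} per arm: effect estimate is +/- {z * se:.1%}")
```

At 100 participants per arm, effects smaller than roughly 14 percentage points are statistically indistinguishable from zero; this is exactly the kind of context worth walking a partner through alongside any interim estimate.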
Ongoing monitoring of safety and ethics
Implementing partners or ethical review boards may be uncomfortable with being blind to treatment assignments or interim outcomes. Both the researcher and the partner may bring subconscious biases: a researcher’s desire to publish or an implementer’s faith in their program may lead them to downplay risks or negative events. An implementer may take spurious positive findings as evidence of overwhelming success and seek to terminate the study. And even if the researcher and implementer are completely objective, they cannot demonstrate that objectivity to outsiders.
In this case, a pre-specified agreement to certain disclosure conditions or stopping rules, or the convening of an independent Data Safety and Monitoring Board (DSMB), may alleviate concerns. Members of a DSMB have full access to treatment assignments, data, and interim impact estimates in order to monitor client safety and treatment efficacy during the study period. Unlike implementers, funders, or investigators, DSMB members have a single mandate: ensuring safety. Following pre-specified protocols and expert statistical judgment, they determine whether interim results indicate that an intervention is too dangerous to continue, or too beneficial to be withheld.
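As an illustration of what a pre-specified stopping rule can look like, the sketch below checks an interim estimate against a Haybittle-Peto-style boundary, one common convention chosen here purely for simplicity; a real DSMB would select and pre-register its own boundaries. The estimate and standard error are hypothetical inputs of the kind a DSMB would compute from unblinded interim data.

```python
# A minimal sketch of a pre-specified interim stopping check, assuming a
# Haybittle-Peto-style rule: stop early only if |z| exceeds a deliberately
# high boundary (3.0) at interim looks, reserving the conventional 1.96
# threshold for the final analysis.
INTERIM_BOUNDARY = 3.0   # conservative threshold for early looks
FINAL_BOUNDARY = 1.96    # standard two-sided 5% test at the final look

def stopping_recommendation(effect_estimate: float, standard_error: float,
                            final_look: bool) -> str:
    z = effect_estimate / standard_error
    boundary = FINAL_BOUNDARY if final_look else INTERIM_BOUNDARY
    if z >= boundary:
        return "benefit boundary crossed: consider stopping early"
    if z <= -boundary:
        return "harm boundary crossed: consider stopping early"
    return "continue: interim evidence does not justify stopping"

# Hypothetical interim look: a 5-point gain with standard error 2.2 (z of about 2.3)
print(stopping_recommendation(5.0, 2.2, final_look=False))
```

The hypothetical interim z-statistic of about 2.3 would clear the conventional 1.96 threshold at a final analysis, yet the rule says continue: the deliberately higher interim boundary is what keeps repeated looks at accumulating data from inflating the false positive rate.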
Borg Debono et al. (2017) compiled a set of circumstances under which interim data might be shared, with whom they should be shared, and the risks of sharing such data (see Table 2 of the article).
Other data analysis to share
Research, planning, and monitoring activities conducted in the course of a randomized evaluation typically produce or enable a range of outputs and byproducts. While these may be of minimal importance to a researcher until publication, they may help implementing partners understand their programs and their constituencies. Sharing these outputs may foster partner engagement and enable decision-making without jeopardizing study integrity.
“These can include: a write-up of a needs assessment in which the researcher draws on existing data and/or qualitative work that is used in project design; a description of similar programs elsewhere; a baseline report that provides detailed descriptive data of the conditions at the start of the program; or regular reports from any ongoing monitoring of project implementation the researchers are doing” (Glennerster 2015).
Consider sharing the following types of data or information:
- Needs assessment or summary of existing data or qualitative work
- Related research or literature discovered during literature reviews
- Baseline data providing descriptive statistics of the program, problem, or population
- Monitoring and implementation data (see the sketch following this list), such as:
- Enrollment rates, retention, quality metrics, adherence, compliance, attrition
- The above rates by subgroup or geographical area
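As one illustration of how simple these monitoring outputs can be, the sketch below computes enrollment and attrition rates overall and by subgroup from a participant-level table using pandas. The column names and values are hypothetical, not a prescribed schema.

```python
import pandas as pd

# Hypothetical participant-level monitoring data; columns are illustrative.
df = pd.DataFrame({
    "region":   ["North", "North", "South", "South", "South", "North"],
    "enrolled": [1, 1, 1, 0, 1, 1],  # 1 = completed enrollment
    "attrited": [0, 1, 0, 0, 1, 0],  # 1 = lost to follow-up
})

# Overall rates across the full sample
print(f"Enrollment rate: {df['enrolled'].mean():.0%}")
print(f"Attrition rate:  {df['attrited'].mean():.0%}")

# The same rates broken out by subgroup (here, geographic region)
print(df.groupby("region")[["enrolled", "attrited"]].mean())
```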
Post intervention: Final results
At this phase, researchers have results and a draft manuscript or presentations ready to share at workshops or conferences, or to submit to a journal or post as a working paper.
The framing and discussion of academic results typically differs widely from an implementing partner’s perspective and expectations. Sending a manuscript to a partner without sufficient framing or explanation of academic jargon may lead them to interpret results differently or negatively, and cause undue friction. To provide a platform for questions, clarification, and review of the academic context and framing, we suggest the following steps be completed prior to publication:
- Share final results in person or through a phone call. Sharing results in a conversation or presentation will allow researchers and partners to work through the implications of the result, rather than leaving partners to interpret results without explanation.
- Discuss the partner’s perspective and concerns. Consider making relevant and appropriate edits to the draft or documents.
- Send manuscripts or written products after talking through results. Hearing results described in conversational language may help partners contextualize and interpret academic products.
In the case of a null or “disappointing” result, it is especially important to provide guidance and perspective to the partner. For example, partners may frame themselves as committed to learning and refining their program, rather than defining their program as a failure. Researchers may also offer to help the partner develop a learning agenda and conduct or incorporate other research to improve their program. Implementers are committed to a goal and to their constituents—not always to a particular method of operation—and are well positioned to see the good in their own work.
Researchers may also consider a “data walk” format, used by some community-based participatory research projects to facilitate in-depth engagement and understanding from partners and community members (Murray, Falkenburger, and Saxena 2015).
In addition to sharing academic products such as final analytic results, manuscripts, and presentations, researchers should provide information and results targeted to the partner’s initial research questions and interests, and help the partner prepare a summary of the results from their perspective. This may include:
- A summary of results from the partner’s perspective. The partner’s research questions or perspective may differ significantly from an academic researcher’s. For example, they may be less interested in mechanisms or underlying aspects of human behavior, and more interested in the impact of their program, how to improve their operations, or how to better target their limited resources.
- Talking points to aid the partner in external communications. The partner may receive inquiries from constituents, news agencies, or funders in response to a new publication. Talking points can help ensure the partner is equipped to respond, and reduce the burden involved in preparing such responses.
- Assistance with creating written materials, visualizations, or graphics targeted to news outlets, funders, participants, or other external stakeholders. While the partner may add context or editorial content that extends past the comfort level of an academic researcher, researcher involvement can help to produce an accurate representation of the academic results.
“Be clear and direct about the findings and what they mean in the policy maker’s jurisdiction, while still remaining neutral enough to avoid bias—or any perception of bias—and to abide by lobbying rules” (“Connecting Research To Policy” 2018).
Embargo policies
Franz J. Ingelfinger, editor-in-chief of the New England Journal of Medicine, defined the Ingelfinger rule in an editorial (“Definition of Sole Contribution” 1969). With slight variations in interpretation by journal editors, US biomedical journals abide by this rule (“Uniform Requirements” 1991), which has two primary goals:
- Preventing news outlets from reporting on results that have not yet been published by a peer-reviewed journal
- Preventing researchers from publishing the same findings in multiple peer-reviewed journals
This rule does not intend to limit conference presentations or dialogue amongst researchers (including implementing partners):
“First, we exempt from the Ingelfinger Rule all presentations at scientific meetings and all published abstracts, as well as any media coverage based on them. But we discourage authors from giving out more information, particularly figures and tables, than was presented at the meeting to their scientific peers” (Angell and Kassirer 1991).
Further policies are documented in the “Uniform Requirements”, also known as the Vancouver style. These requirements also provide for academic dialogue:
“Nor does it prevent consideration of a paper that has been presented at a scientific meeting if not published in full in a proceedings or similar publication. Press reports of the meeting will not usually be considered as breaches of this rule, but such reports should not be amplified by additional data or copies of tables and illustrations” (“Uniform Requirements” 1991).
For particularly urgent or policy-relevant findings, researchers may consider asking journals whether special provisions may be allowed for releasing pre-publication findings to policymakers or journalists (“Connecting Research To Policy” 2018).
Elements of a communications plan
Prospective development of a communications plan between researchers and partners ensures a common understanding and agreement, and may prevent tension from misaligned expectations. Teams may decide to formalize this plan in a written memo, in addition to discussing it in meetings or phone calls. Documenting the partner’s expectations about outcomes of interest and expected results may facilitate productive conversations about research results by cutting down on scope or mission creep.
A communications and results plan might include:
- Who will have access to:
- Treatment assignments
- Individual/study unit-level outcomes
- Aggregate outcomes to-date
- Enrollment data
- Monitoring or process data
- What, when, and with whom results will be shared during the study:
- Timelines for results sharing
- Outcomes to be shared
- External parties to consider (or embargo): news outlets, data agencies, funders, participants
- Timeline of academic publication process and influence on external communications timelines
- Reasonable effect sizes based on past literature or previous programs (see the minimum detectable effect sketch following this list)
- Rationale for or descriptions of:
- Blinding certain members of the study team or partners from assignments or outcomes
- What is reasonable to infer from “interim results”
- Importance of completing the planned study time period
- Importance of meeting enrollment/recruitment targets
- Importance of communicating about any program changes
- Agreement on study motivations and definitions of “success”:
- Research questions of the partner and researcher. For example, an implementing partner may focus on understanding whether their program works, while an academic researcher may focus on understanding why participants engage or respond in a certain way, or on broader theoretical questions.
- Which outcomes the partner cares most about, defined as specifically as possible. For example, for an education program, this could be: number of college credits earned, degree completion, number of years completed, starting salary, etc.
- Which outcomes the researcher cares most about, defined as specifically as possible.
- Agreement on academic and other publications:
- Whether the partner has a right to preview manuscripts prior to publication in order to check for inadvertent disclosure of confidential information
- Confirmation that researchers have the right to publish results and have discretion over accepting any suggestions from the partner with respect to confidential information, description of the program, or interpretation of results
- Whether and under what conditions the partner will remain anonymous or identified, e.g., “a large university” versus “MIT”
- Guidelines for branding and approvals of public materials, like project signage, event agendas, and research and policy briefs
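One way to ground the reasonable effect sizes discussion flagged earlier in this list is a back-of-the-envelope minimum detectable effect (MDE) calculation, which translates the planned sample size into the smallest effect the study can reliably distinguish from zero. The sketch below uses the standard two-arm approximation MDE ≈ (z_{1−α/2} + z_{1−β}) · SE; every input value is hypothetical.

```python
import math

# Minimum detectable effect for a two-arm trial with a continuous outcome:
# MDE ~= (z_{1-alpha/2} + z_{1-beta}) * sqrt(sigma^2 * (1/n_t + 1/n_c))
# All inputs below are hypothetical illustrations.
z_alpha = 1.96    # two-sided 5% significance level
z_beta = 0.84     # 80% power
sigma = 10.0      # assumed standard deviation of the outcome
n_t = n_c = 500   # participants per arm

se = math.sqrt(sigma**2 * (1 / n_t + 1 / n_c))
mde = (z_alpha + z_beta) * se
print(f"MDE: {mde:.2f} outcome units ({mde / sigma:.2f} standard deviations)")
```

If past programs or the prior literature suggest effects smaller than the computed MDE, that is worth surfacing at the planning stage: the partner should understand that the study may return a null result even if the program works.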
Last updated March 2021.
These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form.
Thanks to Mary Ann Bates, Amy Finkelstein, John Floretta, Sara Heller, Sally Hudson, Rohit Naimpally, Jim Sullivan, Anne Thibault, Magdelena Valdes, and Bridget Wack for their thoughtful discussions and contributions. Chloe Lesieur copy-edited this document. This work was made possible by support from the Alfred P. Sloan Foundation and Arnold Ventures.
Additional Resources
- Uniform Requirements for Manuscripts Submitted to Biomedical Journals, 1991. This report, published by the International Committee of Medical Journal Editors, outlines the standard requirements for manuscript submissions to biomedical journals.
- The Ingelfinger Rule Revisited, 1991. An editorial explaining when and why the NEJM follows the Ingelfinger rule.
References
Angell, Marcia, and Jerome P. Kassirer. 1991. “The Ingelfinger Rule Revisited.” New England Journal of Medicine 325 (19): 1371–73. https://doi.org/10.1056/NEJM199111073251910.
Borg Debono, Victoria, Lawrence Mbuagbaw, and Lehana Thabane. 2017. “Sharing Interim Trial Results by the Data Safety Monitoring Board with Those Responsible for the Trial’s Conduct and Progress: A Narrative Review.” Trials 18 (March). https://doi.org/10.1186/s13063-017-1858-y.
“Connecting Research To Policy: Community Collaboration, Policy-Relevant Findings, And Insights Shared Early And Often.” 2018. Health Affairs Blog, April. https://www.healthaffairs.org/do/10.1377/hblog20180403.254308/full/.
“Definition of Sole Contribution.” 1969. New England Journal of Medicine 281 (12): 676–77. https://doi.org/10.1056/NEJM196909182811208.
“Embargo: Authors & Referees @ Npg.” n.d. Accessed April 12, 2018. https://www.nature.com/authors/policies/embargo.html.
Glennerster, Rachel. 2015. “What Can a Researcher Do to Foster a Good Partnership with an Implementing Organization?” Running Randomized Evaluations: A Practical Guide (blog). April 9, 2015. http://runningres.com/blog/2015/4/8/what-can-a-researcher-do-to-foster-a-good-partnership-with-an-implementing-organization.
Murray, Brittany, Elsa Falkenburger, and Priya Saxena. 2015. “Data Walks: An Innovative Way to Share Data with Communities.” Urban Institute. https://www.urban.org/research/publication/data-walks-innovative-way-share-data-communities/view/full_report.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1978. “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research.” Bethesda, Md.: Superintendent of Documents.
“Uniform Requirements for Manuscripts Submitted to Biomedical Journals.” 1991. New England Journal of Medicine 324 (6): 424–28. https://doi.org/10.1056/NEJM199102073240624.