Implementing partners and researchers should work closely together during the study design phase of a randomized evaluation to create a feasible implementation strategy. This resource is intended to provide a framework for researchers making study design decisions with their partners. The general approach covered here is outlined below:
Before making study design decisions, identify which stakeholders should be aware of or involved in the decision-making process. Since stakeholders’ roles vary by intervention and by study, partners can advise on who should be involved and what kind of input each stakeholder can offer.
Questions to ask at the early stages with a partner to identify stakeholders might cover topics such as organizational background, program history, and preparation necessary for an evaluation (Brown et al. 2015). Examples of questions might include:
In meetings with new stakeholders, groups, or influential individuals, get buy-in before asking for feedback on the research design (Glennerster 2015). Consider meeting in small groups with stakeholders to allow for more candid responses. In these discussions, researchers should consider the provider’s typical program procedures, including enrollment, recruitment, and the unit of program delivery (Arnold Ventures 2016). It is important to secure buy-in from stakeholders so that 1) they understand the purpose of any disruptions to their typical processes, and 2) they have an opportunity to point out aspects of those processes that make the proposed design infeasible, so the research team can make adjustments. While it may not be possible to ensure that all stakeholders are entirely enthusiastic about implementing the study design, researchers should make every effort to incorporate feedback, respond to ethical and practical considerations, and highlight the benefits of the study’s results.
Case study: In an evaluation of a summer jobs program in Philadelphia, the implementing partner WorkReady, its providers, and the research team placed paramount importance on running the lottery in a way that matched youth to appropriate jobs while retaining random assignment. Youth who received jobs would need a reasonable commute to their workplace, so assigning individuals to difficult-to-reach positions could not only create obstacles for the youth and the providers, but also negatively affect the research. For example, far-flung job placements could lower compliance with the program (i.e., increase dropout), reducing the researchers’ ability to estimate the impact of the program.
To address this potential issue, researchers designed a randomization strategy that included geographic blocking based on the preferences of each provider. Applicants were subdivided into pools by the geographic catchment area appropriate for specific jobs and then randomized to the treatment or control group within each pool.
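To illustrate the blocking logic described above, the sketch below randomizes applicants to treatment or control separately within each geographic pool. This is a minimal Python sketch rather than the study’s actual procedure; the applicant records, catchment-area names, and the 50/50 assignment ratio are hypothetical.

```python
import random

def blocked_randomization(applicants, block_key, treat_share=0.5, seed=42):
    """Randomly assign applicants to treatment or control within blocks."""
    rng = random.Random(seed)  # fixed seed so the assignment can be reproduced

    # Group applicants into pools (blocks), e.g., by geographic catchment area
    blocks = {}
    for person in applicants:
        blocks.setdefault(person[block_key], []).append(person)

    # Randomize separately within each pool
    assignments = {}
    for pool in blocks.values():
        rng.shuffle(pool)
        n_treat = round(len(pool) * treat_share)
        for i, person in enumerate(pool):
            assignments[person["id"]] = "treatment" if i < n_treat else "control"
    return assignments

# Hypothetical applicant records with catchment areas
applicants = [
    {"id": 1, "catchment": "Area A"},
    {"id": 2, "catchment": "Area A"},
    {"id": 3, "catchment": "Area B"},
    {"id": 4, "catchment": "Area B"},
]
print(blocked_randomization(applicants, block_key="catchment"))
```

Because randomization happens within each catchment-area pool, a provider’s treatment and control groups are drawn only from applicants who could plausibly reach its job sites, preserving comparability while respecting geographic constraints.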
Researchers or implementing partners may be nervous about speaking to service providers before the research questions and design are fully thought out. Be sure to get partners’ input on the appropriate timing to start presenting the project, but also stress the importance of these early conversations. Plan to hold discussions with smaller groups of people and use them as opportunities to get feedback and learn more about program operations and needs, rather than to present how the study “will work.”
Below are a framework and questions to consider during these smaller meetings with stakeholders:
Knowledge from different stakeholders about the intervention and population will be essential for a successful study design. Researchers should engage in careful and consistent communication to ensure that study design plans make sense in the context of the intervention and adjust course when necessary.
Case study: Researchers in Ontario evaluated a program that guided students at high schools with low college transition rates through the college application and financial aid process. While outcomes were measured at the individual student level, the evaluation used a school-level randomization strategy. The researchers chose this approach, rather than targeting individual students who were more likely to graduate, because it was more in line with the program’s inclusive mission of helping all graduating seniors at schools with low transition rates to college. Additionally, a school-level strategy was less burdensome to implement than targeting individual students because whole classes could be scheduled to participate at once (Oreopoulos and Ford 2016).
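The sketch below illustrates what school-level (cluster) randomization looks like in practice: whole schools are assigned to treatment or control, and each student inherits their school’s assignment. It is an illustration only; the school names, student records, and even split are hypothetical and not taken from the Ontario study.

```python
import random

def cluster_randomization(schools, seed=7):
    """Assign whole schools (clusters) to treatment or control."""
    rng = random.Random(seed)      # fixed seed for reproducibility
    schools = list(schools)
    rng.shuffle(schools)           # random ordering of schools
    half = len(schools) // 2
    return {school: ("treatment" if i < half else "control")
            for i, school in enumerate(schools)}

# Hypothetical high schools with low college transition rates
school_assignment = cluster_randomization(
    ["School A", "School B", "School C", "School D"])

# Outcomes are measured per student, but each student's assignment
# comes from their school rather than an individual-level draw
students = [("student_1", "School A"), ("student_2", "School C")]
for student, school in students:
    print(student, school_assignment[school])
```

Because treatment varies at the school level rather than the student level, the analysis would need to account for clustering when estimating standard errors.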
Engaging stakeholders in discussion is a crucial first step, but designing the study will likely involve many subsequent discussions and revisions to the proposed experimental design. Researchers should expect to repeat the process of identifying stakeholders, securing buy-in, getting feedback, and proposing revised study design decisions as the team encounters new stakeholders, perspectives, and information.
Case study: Sometimes the process of getting feedback and stakeholder buy-in reveals that a randomized evaluation will be infeasible. In one such case, J-PAL North America partnered with the Commonwealth of Pennsylvania to explore the possibility of conducting a randomized evaluation of the Centers for Excellence (COEs), a coordinated care initiative for individuals with opioid use disorders. Discussions with service providers and staff at the COEs revealed wide variation in care coordination practices. Because a randomized evaluation would estimate the average effect across different COEs, this variation would make the results difficult to interpret. After working with J-PAL North America staff and researchers, Pennsylvania ultimately decided that a randomized evaluation of the COEs would not be feasible at that time. Even though a randomized evaluation was not launched, the staff’s initial work to develop one proved useful in thinking about how to measure the impact of the state’s many efforts to address the opioid epidemic. For example, in the process of scoping a randomized evaluation of the COEs, staff from Pennsylvania discussed how to measure outcomes such as persistence in treatment and health care utilization.
Last updated September 2021.
These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form.
We are grateful to Noreen Giga and Emma Rackstraw for their insight and advice. Chloe Lesieur copy-edited this document. This work was made possible by support from the Alfred P. Sloan Foundation and Arnold Ventures. Any errors are our own.
"San Code of Research Ethics" | South African San Institute | Accessed August 30, 2018.
Developed by the South African San Institute, this code of ethics for considering and implementing research projects discusses the ways in which researchers should respectfully engage with the San in South Africa. It details guidelines for respect, honesty, justice and fairness, and care that researchers wishing to engage with the San should follow to carry out a successful research project. This is a helpful overview on topics related to ensuring respectful engagement with communities participating in research and engaging communities in decisions about designing research studies.
"The Politics of Random Assignment: Implementing Studies and Impacting Policy" | Journal of Children’s Services, Vol. 3, No. 1 | Accessed August 30, 2018.
This article discusses challenges related to implementing randomized evaluations drawing from years of experience at the Manpower Demonstration Research Corporation (MDRC). The “Lessons on How to Behave in the Field” section shares approaches to addressing particular questions and concerns that researchers may face when developing a randomized evaluation, including making language choices that are sensitive to the partner’s perspective, not giving evasive responses to questions about random assignment, and recognizing that different staff members throughout the organization will offer different perspectives.
"'Walk Softly and Listen Carefully': Building Research Relationships with Tribal Communities" | NCAI Policy Research Center and MSU Center for Native Health Partnerships | 2012.
The Center for Native Health Partnerships (CNHP) and the National Congress of American Indians (NCAI) Policy Research Center created this guide as a resource for those engaging in research partnerships with tribal communities. The document presents values guiding research partnerships with tribes as well as helpful context and considerations related to research with Native communities.
"Real-World Challenges to Randomization and Their Solutions" | J-PAL North America.
This resource is intended for policymakers and practitioners generally familiar with randomization who want to learn more about how to address six common challenges. The document draws from Running Randomized Evaluations: A Practical Guide by Rachel Glennerster and Kudzai Takavarasha.