Ethical conduct of randomized evaluations
Summary
This resource is intended as a practical guide for researchers to use when considering the ethics of a given research project. It draws heavily from J-PAL’s own ethics training for research staff and Rachel Glennerster and Shawn Powers’s chapter in the Oxford Handbook of Professional Economic Ethics (2016).
Readers who are familiar with the framework of Respect for Persons, Beneficence, and Justice in evaluating research ethics and are primarily looking for concrete implementation tips may jump to the sections titled “Implementing… in practice” towards the end of each main topic. Readers interested in J-PAL's activities to ensure ethical conduct in our work can jump to the discussion beginning at Ethics and research at J-PAL.
For a detailed discussion of the ethics of randomization as a research tool, we refer the interested reader to a number of excellent resources on this topic, including Glennerster (2014), Glennerster & Powers (2016) (starting on page 12 of the ungated version), and Glennerster (2017). J-PAL’s resources on the ethics of RCTs include a discussion in Introduction to randomized evaluations, a discussion in the IRB resource under the justice principle, and J-PAL North America’s Common questions and concerns about randomized evaluations.
Please note that this resource is not comprehensive, and the authors are not ethicists. Additional resources on ethics in social science research are listed at the end of this guide. For a practical guide on the process of submitting an Institutional Review Board (IRB) proposal, please see the corresponding resource.
Introduction: Ethical framework
Field staff and researchers may face all manner of complex situations in both project development and implementation that present an ethical or moral dilemma.
- For example, a study interviewing doctors may discover that some of them are acting against the ethics of their profession; reporting this misconduct to their employer could harm the study participants (the doctors), while not reporting it could harm their patients.1
- A surveyor may return to interview a household and discover that the family has just become homeless because they were unable to pay their rent, and that they were hoping for financial support from the research team, whom they had invited into their home regularly for several weeks.
- A researcher working with refugees may have to decide how much cost to incur to track down participants and make sure that inclusion in the study is truly equitable.
Ultimately, these situations occur because the social sciences study the behavior and life circumstances of human beings. What is the researcher’s best course of action? What are their ethical obligations?
In a number of countries, ethics review boards verify a study’s overall ethical acceptability and its compliance with legal regulations on research with human subjects. In the US, this task falls to an Institutional Review Board (IRB); in addition, studies conducted by US researchers but taking place in a different country must typically also be reviewed by an ethics commission in the country of research and follow local regulations for subject protection.
That said, approval by an IRB or other ethics commission does not necessarily mean that all ethical dilemmas are resolved. For example, many real-world, unanticipated situations are not covered by IRB approval. A study may, to the IRB, appear to provide the right balance between benefits and risks for the sample as a whole, but may not look right to surveyors or individual participants. A feature that is important to the research design, such as randomly drawing payment amounts, may appear unethical or unfair to field staff.2 Local perceptions of what is moral, fair, or just may also differ across contexts, sometimes in unanticipated ways.
As such, IRB review cannot substitute for the researcher’s responsibility to consider the ethical implications of their research and ask themselves, as well as members of the communities they work in, whether they are comfortable with their research protocols. This includes carefully thinking through the research design to anticipate how subjects will feel about the research (and those carrying it out), as well as any moral dilemmas that staff may find themselves in, and addressing these issues in study design and staff training.
To do so, it is important to have an ethical framework for ensuring the study is both designed and implemented ethically at every stage. This involves being proactive by addressing ethical flaws in study design early and making plans for things that might go wrong (known as procedural ethics) (Guillemin & Gillam 2004). It also involves being reactive by implementing ethical practices and responding appropriately when the unexpected happens (known as reflexivity) (Ibid.). The purpose of this resource is to help researchers develop a systematic framework for designing and implementing ethical procedures in field research.
How research ethics contribute to research quality
Ethics considerations are sometimes discussed as an obstacle to, or an extraneous burden on, research. In truth, the opposite is the case: ethical considerations are deeply enmeshed with good research design—though this should not be the main motivation for conducting ethical research.
Good study design (such as sufficient statistical power) is more likely to be ethical, as participants are more likely to benefit from the research. Well-designed studies with reliable results are a better use of limited resources than those that are poorly designed.
A study that is ethical in its study procedures and implementation approach is often more credible. Subjects who feel they are treated with respect, adequately compensated, asked to comply with study procedures that are designed to minimize the burden on participants, and otherwise treated well, are less likely to respond in ways that would compromise the validity of the information collected. These responses may include becoming upset or angry, giving false or no answers, or refusing to comply with intervention protocols. Burdensome surveys may have higher rates of attrition, higher measurement error, and more missing data. Implementing partners may also withdraw cooperation or withhold information, or be less inclined to trust researchers in the future.
Unethical research procedures may also have long-term effects. For example, Jamison, Karlan, and Schechter (2008) show that deception introduces selection bias into the subject pool by causing certain types of participants to return at higher rates (specifically, men who had positive outcomes in previous studies) and by shifting participants’ behavior, such as by making their choices more erratic/irrational (exemplified by higher variance and inconsistent behavior in the study’s risk aversion game).
At the extreme, unethical research could compromise access to the population under study and ultimately forestall efforts to address the problem under investigation. For example, high rates of medical mistrust among African Americans are related to the long history of “dangerous, involuntary, and unethical experiments...carried out on African American subjects since the eighteenth century,” including the Tuskegee syphilis study, and have continued to have lasting, negative impacts on public health for these communities (Ball et al. 2013, Alsan and Wanamaker 2018).
Carefully consulting and including feedback and consent from local partners, community members, and government in research design and implementation is critical to conducting high-quality, ethical research that aims to truly understand and effectively address a policy issue affecting the target population. It is particularly valuable in international research, where there may be language and background differences and potentially a power differential between foreign researchers from high-income countries and the local population that limit the exchange of information and learning.
Ethical principles and the Belmont Report
In 1974, prompted by the atrocities committed by US researchers against African American men in the Tuskegee syphilis study, and remembering the many cruel acts committed by the Nazis in research, the US government commissioned a group to "identify the basic ethical principles that should underlie the conduct of biomedical and behavioral research involving human subjects and to develop guidelines which should be followed to assure that such research is conducted in accordance with those principles" (Belmont Report 1979).
The resulting 1979 Belmont Report summarizes these guidelines. It outlines principles, rather than specific rules, in order to provide an analytical framework to help researchers and reviewers assess cases. It draws and builds on prior international research ethics agreements, including the Nuremberg Code of 1947 and the Helsinki Declaration of 1964. For concreteness, and given its role as the main organizing framework for IRBs and maintaining ethical practices in research in the United States, we use the US Belmont Report and its three pillars of Respect for Persons, Beneficence, and Justice as a base from which to build, while recognizing the existence of comparable regulations in other countries.3
Respect for persons
The right to decide about being involved in research
The respect for persons principle states that individuals should be treated as autonomous agents. This means recognizing individuals’ rights and abilities to decide for themselves what to do, what to participate in, and what to allow their personal information to be used for in research. This principle also recognizes that some people—such as minors, prisoners, and those who are otherwise vulnerable—may have diminished autonomy and that providing information may not be enough to allow them to make an informed decision in their best interest about their participation in research. As such, this principle also states that persons with diminished autonomy are entitled to additional protections.
Implementing respect for persons in practice
- Seeking individuals’ informed consent (and assent, if working with minors) and ensuring assent/consent is truly clear and easy to understand, thereby allowing them to independently decide and volunteer to participate in a study. Consent may be required to administer a program or procedure, or to collect personal data, whether through interviews, secondary or administrative data, or other means. It is a requirement for most studies. Under certain circumstances, such as when potential risks to participants are minimal, researchers may seek a waiver from the IRB of some or all aspects of informed consent, but consent can only be waived with explicit IRB approval. See the end of this subsection for useful resources on informed consent and our IRB resource for more information.4
- Compensation, possibly in the form of in-kind gifts such as soap or sugar, should offset the time and inconvenience of participation. It is not a benefit of research and is in addition to reimbursement of direct expenses. It should not be exploitatively low, but neither should it be so high as to undermine participants’ ability to make a rational decision about the risks and benefits of the study (Largent and Lynch 2017). Gelinas et al. (2018) provide a framework for identifying the risk of coercion and undue influence and designing payments to respondents. See also the discussion in the Define intake and consent process resource.
- Additional protections for individuals who are considered vulnerable to coercion or undue influence or who may have diminished autonomy (and so are not fully free to or capable of self-determination). In the US, Federal regulation 45 CFR 46 imposes specific additional requirements for researchers to protect members of vulnerable populations, such as those with impaired decision-making capacity, or economically or educationally disadvantaged individuals. As above, these populations may include minors, prisoners, and those with diminished autonomy.
- When working with vulnerable populations, there is a balance between providing individuals the opportunity to participate in research should they so choose out of respect for them and their autonomy, versus ensuring they have adequate protections against coercion and undue influence. Extra requirements should not be so burdensome that they in practice preclude anyone who is “vulnerable” from participating in the research, but it is still necessary to have some additional checks to ensure that research participation is informed and voluntary (Singer and Bossarte 2006).
- For example, in South Africa, many people are suspicious of the government and of signing official-looking forms. They may agree to participate in a study but refuse when written consent (a signature) is required. In particular, women often say they cannot sign anything without their husband approving first. This creates a dilemma: enabling women to participate and be heard requires moving away from written consent. In this instance, the researchers obtained IRB approval to use verbal consent in this population (Jack, McDermott, and Sautmann, mimeo).
- When working with populations who cannot legally give consent (such as minors or the cognitively impaired), it is critical to obtain individuals’ assent in addition to consent from a legal guardian. Whether assent must be documented is up to the IRB, but children as young as seven (OHRPc) or even younger can provide basic forms of assent. For example, field staff can ask for participants’ assent about being touched for height/weight measurements and get clear verbal assent before proceeding.
Additional resources on informed consent
- Anja Sautmann's annotated informed consent checklist and an example informed consent checklist from the US Office for Human Research Protections (OHRPb)
- Our resource on IRB procedures details the processes and requirements for informed consent and assent.
- A detailed discussion of informed consent and compensation in Laura Feeney’s/J-PAL North America’s Define intake and consent process resource.
- A chapter by Rachel Glennerster and Shawn Powers (2016) in the Oxford Handbook of Professional Economic Ethics covers the same issues discussed in this resource in some depth; the chapter includes discussions of informed consent, as well as when informed consent is not sufficient.
- Pages 27-33 of Glennerster (2017) draw on the above chapter and discuss informed consent.
- Alderman et al. (2016)’s nuanced take on confidentiality and informed consent, particularly in low-literacy populations.
- Further guidance from the OHRP’s 45 CFR 46 2018 requirements, particularly §46.116 and §46.117.
Beneficence principle
Under the principle of beneficence, the two key points are (1) do not harm, and (2) maximize benefits while minimizing risk. In particular, (1) serves as a check for whether a study should take place at all by asking whether the potential study benefits sufficiently outweigh the risks.
“Do not harm” and “minimize risks, maximize benefits”
Research involves trade-offs. For example, participants may be exposed to a risk of harm in a study where the harmfulness of an intervention is not yet known. The beneficence principle of “do not harm” requires researchers to evaluate, for each study, whether the future benefits from the research justify the risks to the subjects; that is, it determines whether the study can go forward at all.
Though not one of the Belmont principles, a related concept is equipoise, the idea of “genuine uncertainty within the expert community” about the preferred treatment (Freedman 1987). Frequently cited in the medical community, the requirement of clinical equipoise suggests that researchers may continue a trial only until they have enough statistical evidence to convince other experts of the validity of their results. When applied to social science, this concept has been modified to indicate uncertainty about “the extent to which the intervention being tested should be made accessible to the population that falls under the scope of the research” (Kukla 2007). This modification takes local context into account and indicates that we may be in a state of equipoise if we don’t know how to effectively target a program, or how to allocate resources between one program and others.
David McKenzie (2013) argues that the application of equipoise in development economics should also consider whether the study’s use of funds makes people better off than they would be under other possible uses of funds. More recently, MacKay (2018) proposes the concept of policy equipoise as a tool for designing RCTs evaluating policy interventions. Policy equipoise expands on the concept of clinical equipoise by taking into account resource constraints and requires that, for each arm of the study, there be genuine uncertainty about how it compares with the other arms as well as the best viable alternative policy. That is, in no treatment arm can a participant be predicted to fare better or worse than they would have in a different arm or under the counterfactual policy (which could be the status quo or an alternative, viable policy that is proven better but is not the status quo). The implications of policy equipoise are discussed at greater length in MacKay (2018, 2022) and Asiedu et al. (2021). MacKay (2022) discusses the considerations necessary for a design to be distributively fair in cases where randomization is necessary due to scarcity.
If it is decided that the study can go forward, beneficence requires researchers to design study procedures, including the intervention itself, with the goal of “minimizing potential harms while maximizing benefits”.
Practical steps to minimize the risks associated with a study are described below. The Belmont Report considers a research benefit to be “something of positive value related to health or welfare.” Note that under this definition, compensation does NOT count as a study benefit, as it is intended to make respondents whole (e.g., to offset time and other costs that are incurred by participating in the study) or to serve as an incentive for participation.
Knowledge gains, for instance about the nature and extent of a particular problem and the effectiveness of potential solutions to address it, are typically a key benefit to the study. Potential benefits must therefore be evaluated in the context of the credibility of the research results. For example, a study that is so poorly designed that the results are not credible has no benefit and would thus fail a risk/benefit ratio assessment because it would expose individuals to the burdens of participation without any knowledge gains (Kukla 2007). Similarly, studies that are underpowered will not have credible results.
In general, any measures that maximize data quality increase the potential benefits from a study. It is therefore the responsibility of the researcher to avoid poorly worded surveys, data loss, high levels of unexplained non-compliance or attrition, and so on. Maximizing benefits also has implications for sharing de-identified data and disseminating findings beyond the academic community so that the data and results can also benefit other researchers, policymakers, and, in particular, the population from which the data was collected.
The following subsections describe these goals in greater detail; see also Alderman et al. (2016) for case studies that illustrate how potential harm can arise even with carefully designed, well-intentioned studies.
Implementing the “do not harm” principle in practice
The do not harm principle has two primary implications:
- Do not administer an intervention or treatment or conduct data collection that is known or highly likely to be harmful.
In some cases, the benefits to the research may be sufficiently large that an IRB would consider them to outweigh the potential harm to participants. A similar case could arise when the study has an expected small negative impact on some people but an expected positive impact for society.
Note that “small negative impact” can include the inconvenience of sitting through a long survey; the modalities of data collection (with appropriate compensation, see below) are typically considered an acceptable burden. However, what is “small” must of course be carefully weighed by researchers and evaluated by an IRB.
More importantly, in a randomized study the impact of the intervention itself must be separately evaluated. Are subjects receiving the intervention only because of the study, and does this intervention carry a risk of harm?
Note that this does not imply that interventions you suspect to be harmful should not be studied—IF they would be implemented anyway by an external party. On the contrary, we need to know about the effects of such interventions. However, the researcher cannot create or administer a harmful intervention for the purpose of studying it.
Given sufficient information by the researchers on potential harms, benefits, risks, and externalities, IRBs are well-placed to assess these types of scenarios.
- Do not deny participants an intervention, treatment, or services they would otherwise be entitled to receive. Entitlements usually arise out of a law, and so there would be legal on top of ethical barriers to denying such rights. Examples of entitlement programs might be subsidized health care services or in-kind benefits for households below a certain income threshold. See also challenges #2 and #3 in J-PAL North America's Real-world challenges to randomization resource.
Note, however, that a treatment that you, the researcher, are paying for and implementing is not an entitlement that all participants have a right to receive: without the research, the treatment would not exist at all.
Implementing the “minimal risk” principle in practice
In some cases, an intervention that could result in harm may still be approved for research if that harm is highly unlikely. The research team can (and should) provide protections for these cases by preparing to mitigate the harm ex post. Possible measures include conducting follow-ups by phone or in person or providing vouchers or contact information in the case of adverse events. For example, one could have a doctor on call to provide medical help in case of an adverse reaction to a vaccine, or financial means to "bail out" a participant who took a loan they cannot service.
Aside from any risks from the treatment in a randomized study, risks in research also arise from the data collection itself. These can be psychological, social, and/or physical (e.g., shame, embarrassment, or physical retaliation). Even with carefully designed procedures, mistakes can and do happen, and unexpected events can occur, so it is important to prepare for worst-case scenarios, closely monitor for them, and proactively address any issues as they arise. Protocols should aim to reduce both the likelihood of harm occurring and the magnitude of harm that could occur.
Exposure of sensitive or private data is the most common risk in social science studies, and research protocols should carefully guard against this risk.5 Researchers need to protect participants’ data by developing clear and enforceable data security protocols and protecting personally identifiable information (PII). For guidance on these topics, please see our resources on data security procedures and protecting personally identifiable information. Protecting participants’ data (and conveying that it will be protected) is likely to improve the quality of information collected.
Working with local collaborators can be invaluable here. Local researchers and implementation partners will typically be much more familiar with the context in which the study is conducted and attuned to potentially sensitive questions or context-specific concerns, including those about the intervention itself. Often, they will be able to suggest solutions that are not easily available to international researchers.
Examples of protocols in research and survey design to minimize potential harms include, but are not limited to, the following:
- Conduct interviews in private whenever possible and contextually appropriate, especially when the interview concerns sensitive topics. This is both the most ethical path and is also likely to give the highest quality data: respondents may be unwilling to share truthful answers to sensitive questions if they believe others may be able to hear.
- Reduce the magnitude and probability of harm with careful construction of sensitive survey questions and procedures. What is considered sensitive information varies by context, and it is useful to consult with local collaborators about sensitivity before finalizing a particular research strategy or question format (Dickert & Sugarman 2005), then pilot interviews with field staff and in focus groups with out-of-sample community members to make sure they are appropriate to ask and worded appropriately.
- Consider the psychological or emotional burden of the survey. For example, in studying ways to improve women's reproductive health, researchers may ask participants about their pregnancy history, including whether they have experienced miscarriages. It may be emotionally taxing for participants to recall these events, so researchers should minimize the number of questions related to them, word sensitive questions carefully, and put follow-up procedures in place. Researchers should also carefully consider the demographics of the field team and whether it is appropriate for the context. For example, it may be less emotionally burdensome for respondents to talk to female surveyors about topics related to domestic violence or sexual and reproductive health.
- Train staff on reacting appropriately to respondent answers, especially if the topics are potentially sensitive or emotional. Develop a protocol for identifying when people are overly distressed, as well as a reporting and escalation plan for when this occurs. A common requirement for research on sensitive topics, such as domestic violence, teenage pregnancy/sexual activity, or depression/suicide, is that survey staff have the ability to refer respondents to free/subsidized support services or counseling.
- Consider the cognitive burden of questions: even questions that are not emotionally charged, such as going through the costs of all items purchased in the past week, may be psychologically taxing. More on minimizing cognitive burden is covered under Survey design.
- Consider the time required for the study. This includes being cognizant of the time per survey and, for example, offering light refreshments and/or scheduling small snack breaks if it will be very long (more than 2 hours), which both relieves the burden on participants and helps maintain data quality. Compensate participants for their time.
- Factor in how the location of the study affects the burden to participants: are enumerators visiting homes where respondents may feel obligated to provide refreshments? Are participants asked to go to a certain location (e.g., a clinic or office) for a doctor’s appointment or interview? Location choice affects participants’ direct expenses and time, and can affect participant and surveyor comfort, safety, and visibility.
- Consider the sample size: is the study so well-powered that you are subjecting more people than necessary to your research procedures? If so, consider reducing the sample size such that the study is still sufficiently powered, without an unnecessarily large sample; a rough power check is sketched below.
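As a minimal sketch of such a check (assuming a two-arm, individual-level design with a standardized minimum detectable effect and no clustering or covariate adjustment; all numbers are illustrative, not recommendations), the standard two-sample power formula shows roughly how many participants are actually needed:

```python
# Rough sample size check for a two-arm, individual-level RCT (illustrative only).
# Assumes a two-sided test of means with the minimum detectable effect (MDE)
# expressed in standard-deviation units; ignores clustering, attrition, and
# covariate adjustment, all of which change the answer in practice.
from scipy.stats import norm

def n_per_arm(mde_sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = norm.ppf(power)          # quantile corresponding to the desired power
    return int(round(2 * (z_alpha + z_power) ** 2 / mde_sd ** 2))

# If the smallest effect worth detecting is 0.2 SD, about 392 people per arm
# achieve 80% power; enrolling far more than this adds participant burden
# without a corresponding gain in knowledge.
print(n_per_arm(0.2))  # ≈ 392
```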
Justice: Distribution of burdens and benefits
The justice principle indicates that those who take the risks should receive the benefits; in other words, it concerns participant selection. However, determining what constitutes a just selection (and distribution of burdens and benefits) is complex.
Justice could simply mean giving every participant an equal share of the intervention, say, a cash grant to small businesses. Alternatively, it could be that benefits should be distributed based on individual need (businesses earning the lowest profits), merit (the most successful small businesses below a certain threshold), effort (owners who are willing to extend hours and engage in extra advertising), societal contribution (owners who pay employees higher wages), or even expected returns to the intervention (businesses that will see the largest growth as a result of the grant). An obstacle here is of course that many of these factors may not be known before the study takes place.
Randomization can be one way to eliminate some types of bias, such as selecting only friends for an intervention believed to be beneficial, and can thus be aligned with justice in research (both steps are sketched in the code after this list):
- Random sampling from the population leads to an equal expected distribution of burdens and benefits from being included in the research.
- Random assignment to the treatment or comparison groups leads to equal expected distributions of burdens and benefits from the intervention.
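As a purely illustrative sketch (with a hypothetical sampling frame, sample size, and treatment share), these two steps can be written in a few lines of code:

```python
# Illustrative only: random sampling from a frame, then random assignment.
# The frame, sample size, and treatment share are assumptions for the sketch.
import random

random.seed(20221001)  # fixing the seed makes the draw reproducible and auditable

frame = [f"household_{i:04d}" for i in range(2000)]  # hypothetical sampling frame

# Random sampling: every household in the frame has the same chance of inclusion.
sample = random.sample(frame, k=600)

# Random assignment: every sampled household has the same chance of treatment.
shuffled = random.sample(sample, k=len(sample))
treatment = set(shuffled[:300])
assignment = {hh: ("treatment" if hh in treatment else "comparison") for hh in sample}
```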
Randomization does not necessarily address concerns of justice at the broader societal level of how the population/location of study was selected, however. For example, it cannot help with challenges such as obvious and measurably different levels of need or vulnerability. It could also be that people in a certain area or about whom reliable administrative data exists are often the subject of studies because they are accessible by researchers—is it “just” that they are always the ones to bear the burden of research? Full informed consent and an evaluation by the IRB are particularly important in this context.
Finally, random assignment also does not address concerns of justice if, without the study, all participants would be eligible for the intervention. Some intervention designs, such as phase-in designs, allow for greater justice in the sense that no one is permanently denied access (Glennerster & Takavarasha 2013).6
Implementing the justice principle in practice
- Consider who is targeted for inclusion in the research, and who is excluded. Researchers should ensure that the study population represents the population experiencing the problem and the population that stands to benefit, either as a direct result of participation or as result of knowledge generated from the study. This is important from both an ethical perspective and in terms of maximizing knowledge gains.
- This is also important to consider when using administrative data. For example, researchers may use data from programs that mostly target lower income groups, such as Medicaid in the United States. Yet the people in such datasets are not a random subset of the population and, depending on the research, may be less likely to actually benefit from policy change. A question then arises of whether it is fair to use their data, when higher income people do not need to provide such detailed data to the government.7
- Do not exclude certain subpopulations, unless they will never receive the research benefits. For example, it may be more expensive or less convenient to include individuals who are not in official records or other administrative data (perhaps because they don’t file tax returns), those who live in geographically distant areas that are difficult to access, or those who are difficult to reach by phone due to poor cell phone service. Excluding certain groups results in unrepresentative (and therefore less useful) data and can have unintended, negative consequences.8
- As an example, for many years women were excluded from clinical trials due to concerns about potential adverse reproductive effects, other seemingly benevolent and protective reasons, and the (incorrect) assumption that medical treatments would work the same way in women as they do in men. The result was a lack of evidence on the safety and pharmacodynamics of many drugs for conditions that substantially affected women, leaving women in the general population at risk of inappropriate dosage (Liu and Mager 2016).
- Consider designing a study that is powered to detect heterogeneous treatment effects for certain subgroups. For example, you may expect that the treatment will have heterogeneous effects based on gender of the household head, size of the firm, or certain medical conditions such as diabetes.9 While subpopulations should not be oversampled for convenience reasons, oversampling or stratified random sampling (with strata for the groups of interest) may be justified if you believe there will be heterogeneous effects for certain subgroups and want to ensure you are powered to detect these effects. This may also reduce the overall sample size. In determining whether oversampling is just, consider whether it is truly necessary (rather than just administratively convenient), weigh the alternatives, and make sure you are able to make an ethical justification for oversampling. A sketch of stratified sampling with oversampling follows this list.
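A minimal sketch of stratified sampling with oversampling of a subgroup of interest (the "female_headed" indicator, group sizes, and sampling fractions below are hypothetical, chosen only for illustration); treatment could then be randomly assigned within each stratum as in the earlier sketch:

```python
# Illustrative stratified sampling that oversamples a subgroup of interest so
# the study remains powered to detect heterogeneous effects for that subgroup.
import random
import pandas as pd

random.seed(7)

# Hypothetical sampling frame with a binary subgroup indicator.
frame = pd.DataFrame({
    "unit_id": range(1000),
    "female_headed": [random.random() < 0.3 for _ in range(1000)],
})

parts = []
for stratum, group in frame.groupby("female_headed"):
    frac = 0.6 if stratum else 0.3          # higher sampling fraction for the subgroup of interest
    ids = group["unit_id"].tolist()
    random.shuffle(ids)
    n_sampled = round(frac * len(ids))
    parts.append(group[group["unit_id"].isin(ids[:n_sampled])])

sample = pd.concat(parts)

# The subgroup's share in the sample is now higher than in the frame,
# helping preserve power for subgroup-specific estimates.
print(frame["female_headed"].mean(), sample["female_headed"].mean())
```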
When evaluating justice especially, keep in mind that these are all questions of degree. There is a general recognition that grant-funded research is subject to resource constraints. The critical thing is to carefully consider who you are targeting and whether that adequately aligns with the population that is affected by the problem you are studying and the potential beneficiaries of related policy change.
Considerations for other populations
The Belmont principles are written from a medical perspective. They focus on the direct subjects of research—in line with the patient model for medical studies—but provide less guidance about risks to non-subjects. In economic or other social science studies, there is a greater chance of people other than those directly involved in the study being affected by the research. As a general rule, researchers should try to extend the beneficence principle to all populations involved and reduce the burden of research wherever possible. This includes additional considerations for research staff, as well as community members who are not directly involved in the study but affected by it nonetheless (Glennerster & Powers 2016).
Research staff
Research staff are subject to both physical and psychological risks while carrying out research studies. Physical risks can include violence, as well as heightened exposure to everyday risks such as communicable diseases or road accidents. These risks are present across study contexts. For example, during the 2010 US Census, there were over 430 reports of violence against census workers (Morello 2010; Conan 2010). Traffic safety is a concern anywhere and is heightened by research staff traveling frequently during the study (Ali 2010). In some areas, it is unsafe for women to travel alone, and studies on certain topics such as domestic violence put surveyors at additional risk (Ellsberg & Heise 2002). Steinert et al. (2021) review the physical, emotional, and psychological effects of field research on research staff. Field team safety and security should be protected and is discussed in greater detail under field team management.
Furthermore, research procedures can be draining for research staff. If the study concerns sensitive topics such as prior trauma and abuse, surveyors may need additional training and support throughout the survey to prevent potential negative effects such as secondary traumatization or burnout from hearing a trauma victim’s story—just as professional counselors need professional support systems (Canfield 2005).10
Another potentially emotionally draining procedure is that of notifying participants of their treatment versus comparison group status. For example, enrollment for an RCT evaluating the Nurse Family Partnership in South Carolina was done by nurses, rather than an external research team, in a setting where nurses told participants in person whether they had been selected for the intervention (Baicker et al., ongoing). In such instances, additional support may be needed to minimize the burden on the team carrying out the research (Baicker 2016).11
As in the three principles detailed above, supporting research staff is important not just for conducting ethical research, but for conducting quality research. Staff who are burnt out from emotionally intense interviews, or even from conducting many interviews without a break, will lead interviews less effectively. Unsafe travel modalities for enumerators may lead to difficult-to-reach subpopulations being underrepresented in the study. It is also important to discuss moral dilemmas that arise from the research protocols carefully with research staff, address their concerns, and equip them to answer questions from external parties. Otherwise, a design feature such as drawing payments randomly will appear unfair if no justification is given. A consequence might be that enumerators are less willing to comply with the research protocol, for example, by letting people with a low payment draw a second time—so that what seems to field staff to be the "right" thing to do ends up harming the research.12
Implementing ethics protections for research staff in practice
- While violent incidents are very rare, it's important to ensure that staff are prepared for worst-case scenarios and are able to access support. Discuss violence during interview training to reduce distress in the field, and provide an opportunity for surveyors to discuss their own experiences of violence as appropriate (Ellsberg & Heise 2002).
- Teach field staff de-escalation strategies and basic self-defense.
- Provide regular opportunities for debriefing and individual counseling when dealing with trauma and abuse (Ellsberg & Heise 2002).
- Explain the importance of adhering to sampling protocols or treatment assignment protocols to staff, in language they can understand, to maintain motivation and engagement. The point that a poorly done study might not have any benefits, and all the effort from both subjects and surveyors was in vain, can help here.
- Provide surveyors with talking points and good explanations for procedures that may be upsetting or appear unfair to subjects. This will make it easier for field staff to explain and defend their work.
Other populations: Consider the community
Depending on the research design, the people who face a risk of harm may not be those participating in the study. IRBs and regulations focus on study participants (those whose data is used in the research or who receive the study intervention), making it particularly important that the research team carefully consider who else might be affected by the research and take steps to protect them from harm. This is particularly true if it is not feasible to get consent from everyone who could be affected, for example because it is difficult to identify them.
Spillovers to the rest of the community can occur as a result of the research protocols or the act of surveying. For example, focus group discussions about who is wealthy or poor in the community could lead to information about some non-participating households being shared—information that these households may prefer to keep private. It is important to proactively consider how study protocols can be designed to mitigate potential negative spillovers. In the example described, this could mean having focus group participants sign confidentiality agreements or conducting one-on-one interviews instead of group conversations. Note also that in this case, the households on whom there is spillover could be considered study participants, and as such this risk of harm should also be evaluated by the IRB.
Spillovers may also occur as a result of the intervention. For example, a job search support program may generate a negative spillover to non-participants, who face increased competition in finding a job. This is a situation where the requirements of research ethics and research quality coincide. If there is an expectation that spillovers could be significant, the study should be designed to measure them to understand the full impact of the intervention,13 as accounting for spillovers can drastically change the effect (and cost effectiveness) of an intervention.14 This means that the research will include those for whom there is a potential for negative spillovers as study subjects. This is particularly true if the expected spillovers are likely to be negative. If these spillovers cannot be measured, researchers should strongly reconsider whether the study should occur in the first place.
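One common design for measuring spillovers (only one of several possible approaches, sketched here with cluster counts, sizes, and saturation levels assumed purely for illustration) is a two-stage, randomized-saturation assignment: clusters are randomly assigned a treatment saturation, and individuals are then randomized within clusters at that rate, so that untreated individuals in partially treated clusters can be compared with individuals in pure-control clusters.

```python
# Illustrative two-stage (randomized saturation) assignment for measuring spillovers.
# Cluster names, counts, household numbers, and saturation levels are assumptions.
import random

random.seed(11)

clusters = [f"village_{c:02d}" for c in range(30)]   # hypothetical clusters
saturations = [0.0, 0.5, 1.0]                        # pure control, partial, full

# Stage 1: randomly assign each cluster a saturation level (balanced across levels).
random.shuffle(clusters)
cluster_saturation = {c: saturations[i % len(saturations)] for i, c in enumerate(clusters)}

# Stage 2: within each cluster, randomly treat the assigned share of households.
assignment = {}
for c in clusters:
    members = [f"{c}_hh_{i:03d}" for i in range(40)]  # assumed 40 households per cluster
    random.shuffle(members)
    n_treated = round(cluster_saturation[c] * len(members))
    for j, hh in enumerate(members):
        assignment[hh] = "treatment" if j < n_treated else "comparison"

# Untreated households in 0.5-saturation clusters capture spillover exposure;
# households in 0.0-saturation clusters serve as the pure comparison group.
```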
Communication of results
How results are interpreted, used, or acted upon can have implications for the overall ethical impact of a study. While researchers cannot directly control what decisions or policies are made based on the research results, they can take steps to ensure the results of the research are communicated in a way that is accessible, both in terms of level and platform, to relevant non-academic audiences. These steps are even more important when results are difficult to interpret—for example, studies with null results or complex mechanisms—or when describing a sensitive topic or vulnerable population. Sturdy (2022) discusses a potential framework for evaluating and communicating the ethical implications of a study.
Asiedu et al. (2021) propose the use of a structured ethics appendix to proactively communicate the ethical ramifications of the research. Their suggested framework prompts authors to discuss policy equipoise15; potential and foreseeable harms of the research to participants, nonparticipants, and research staff; plans to provide feedback to participants; researchers’ intellectual freedom; and the role of the researchers with respect to research design.
Proactive measures to facilitate ethical communication about the research
- Ensure the study is well-designed and well-powered: this reduces the likelihood that results will be misinterpreted, used to support contradictory arguments, or even cited to imply that there is more expert disagreement on an issue than there actually is.16
- Select a partner whose goals for participating in the research include using the research results for decision-making.
- Communicate carefully with the partner about what the study will be able to measure in order to manage expectations from the start. In addition to being key decision-makers of whether and how to act on results, partners may have connections with other key decision-makers. Communicating clearly with partners about how to interpret results can therefore have effects well beyond the partner itself (Glennerster & Powers 2016).17
- Consider providing distilled, accessible versions of the results that can be disseminated to the wider public and other stakeholders. For example, Frequently Asked Questions (FAQs) can be a useful medium for communicating the results, and it is also important to work with news and media outlets on making sure the public message matches the research results (Rao 2018).
- Consider communicating the study results back to the participants or the communities that were involved in the study.
- Consider addressing anticipated questions through an ethics appendix to the study’s working/published paper (Asiedu et al. 2021).
Ethics and research at J-PAL
J-PAL works to reduce poverty by ensuring that policy is informed by scientific evidence, and we hope that the research conducted at J-PAL ultimately serves the communities who participate in it. Ethical conduct of this research is therefore core to J-PAL’s mission, and it is of utmost importance that every study is done ethically from start to finish.
We ensure this is the case through a number of proactive measures implemented across all of our offices. This begins with, but is not limited to, the requirements specified in our Research Protocols (see below), which must be followed by every project funded or implemented by J-PAL. Projects carried out by J-PAL offices are regularly audited for adherence to these protocols.
J-PAL affiliates and invited researchers are also expected to adhere to J-PAL’s code of conduct and that of their home institutions. We provide additional resources on ethics through written materials linked on the Information for Affiliates page, and researchers can draw on the expertise of our experienced local staff.
Ethics training for research staff
Research staff at all levels of the organization work to ensure that the Research Protocols are followed throughout the project lifecycle. At the time of hiring, all J-PAL staff members complete a Human Subjects Research training approved by the Massachusetts Institute of Technology IRB.18 New research staff at J-PAL also attend a week-long Research Staff Training course run jointly with Innovations for Poverty Action (IPA) that introduces J-PAL’s Research Protocols and includes detailed, hands-on lectures on ethics, IRB, field team management, and data security.
Ethics in project inception and implementation
J-PAL Research Protocols require that all projects obtain IRB approval and follow all procedures approved by the IRB. This includes either developing an informed consent procedure, or obtaining an explicit waiver of consent, for any data collection.
Surveyor training covers informed consent processes in depth, and surveyors sign certificates of confidentiality before fieldwork begins. As needed, surveyor training also includes sessions such as de-escalation training and special instructions for interviews involving sensitive topics.
We additionally provide resources for staff and surveyors, such as support for those doing emotionally intense interviews. More details on these practices are covered under J-PAL’s guides to Field team management and Surveyor hiring and training.
While PIs are ultimately responsible for designing their research projects, J-PAL staff and surveyors are available to advise on cultural context, and we work closely with local partners to ensure community voices are heard, that questions are asked in a culturally appropriate way, and that projects are sensitive to local context.
Protections for private data
J-PAL emphasizes data security at all phases of the project lifecycle and supports projects and J-PAL affiliates to this end. J-PAL Research Protocols include explicit guidance on steps to take in order to protect participants’ data. Data security lectures (J-PAL internal resource) and hands-on labs in our annual research staff training give project staff the tools needed to set up data flow processes, separate personally identifiable information (PII) from the main data as soon as possible, and encrypt all PII, especially when shared via a cloud service, using software such as Veracrypt and Cryptomator. After data collection has concluded, staff remove all data from the devices used for surveying to prevent accidental access to confidential information.
J-PAL also supports affiliated researchers in preparing data for publication in repositories such as J-PAL and IPA’s Datahub for Field Experiments in Economics and Public Policy. An important step of this process is to remove or mask both direct and indirect identifiers, as described in J-PAL’s guide to de-identifying data. For more information on data security, please see our written guides on data security and data publication.
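As a minimal sketch of this separation step (the column and file names are hypothetical; in practice the linking key would be kept in an encrypted container with restricted access, and the data flow documented in the IRB protocol), direct identifiers can be split off and replaced with a random study ID:

```python
# Illustrative only: replace direct identifiers with a random study ID, keep the
# identifiers in a restricted linking key, and work from the de-identified file.
# Column names and file names are hypothetical.
import uuid
import pandas as pd

raw = pd.DataFrame({
    "name": ["A. Example", "B. Example"],
    "phone": ["555-0100", "555-0101"],
    "village": ["V1", "V2"],
    "outcome": [12.5, 9.8],
})

raw["study_id"] = [uuid.uuid4().hex for _ in range(len(raw))]

pii_columns = ["name", "phone"]                 # direct identifiers (assumed)
linking_key = raw[["study_id"] + pii_columns]   # store encrypted, with restricted access
deidentified = raw.drop(columns=pii_columns)    # used for analysis and, later, publication
# Indirect identifiers (e.g., village) may still need masking or coarsening before publication.

linking_key.to_csv("linking_key.csv", index=False)     # keep this file inside an encrypted volume
deidentified.to_csv("analysis_data.csv", index=False)
```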
Institutional commitment to ethical research
J-PAL staff and offices have established internal processes for project approval and start-up that ensure IRB compliance; for example, the South Asia office only begins work on a project after receiving documentation of IRB approval. Several offices have also advised local institutions on setting up ethics review boards or have directly supported the creation of an IRB. All J-PAL-funded projects fall under the purview of the MIT IRB and are either reviewed by that IRB, or review is formally ceded to another US-accredited IRB. As of March 2021, J-PAL requires that all research studies published on the Evaluations database of our website provide confirmation of having received an IRB approval or an exemption. Effective for all rounds of J-PAL funding starting October 1, 2022, and for all projects where data collection is scheduled to start after January 1, 2023, J-PAL requires that the project be reviewed by an IRB with status as an IRB Organization (IORG). An IRB’s status can be found by consulting the database of IORGs.
Last updated October 2022.
We thank William Pariente, Claire Walsh, and Sabhya Gupta for thoughtful suggestions and comments. Evan Williams formatted this resource. All errors and omissions are our own.
See, e.g., Sussman (2021) for a discussion of how the exclusion of pregnant women from vaccine trials can limit our knowledge about a vaccine’s adverse effects.
Additional Resources
- Feeney, Laura, and Stephanie Lin. 2019. “Ethics in Research with Humans.” Delivered in J-PAL North America’s 2019 Research Staff Training USA. (J-PAL internal resource)
References
Alderman, Harold, Jishnu Das, and Vijayendra Rao. 2016. “Conducting Ethical Economic Research: Complications from the Field.” In The Oxford Handbook of Professional Economic Ethics, edited by George F. DeMartino and Deirdre McCloskey. New York: Oxford University Press.
Ali, Galal A. 2010. “Traffic Accidents and Road Safety Management: A Comparative Analysis and Evaluation in Industrial, Developing, and Rich-Developing Countries.” 29th Southern African Transport Conference, 16-19 August 2010, Pretoria, South Africa.
Alsan, Marcella, and Marianne Wanamaker. 2018. “Tuskegee and the Health of Black Men.” The Quarterly Journal of Economics, 133, no. 1: 407–455. https://doi.org/10.1093/qje/qjx029
Asiedu, Edward, Dean Karlan, Monica P. Lambon-Quayefio & Christopher R. Udry. 2021. “A Call for Structured Ethics Appendices in Social Science Papers.” NBER Working Paper No. 28393
Baicker, Katherine. 2016. “Evaluating Innovative Programs for Low-Income, First-Time Mothers: Katherine Baicker, Harvard.” YouTube video, posted by J-PAL December 2, https://www.youtube.com/watch?v=mjJvrdWM54Y.
Baicker, Katherine, Mary Ann Bates, Margaret McConnell, Annetta Zhou, and Michelle Woodford. Ongoing. “The Impact of a Nurse Home Visiting Program on Maternal and Child Health Outcomes in the United States.” https://www.povertyactionlab.org/evaluation/impact-nurse-home-visiting-program-maternal-and-child-health-outcomes-united-states.
Ball, Kelsey, William Lawson, and Tanya Alim. 2013. “Medical Mistrust, Conspiracy Beliefs, & HIV-Related Behavior Among African-Americans.” Journal of Psychology & Behavioral Science, 1, no. 1.
Banerjee, Abhijit, Dean Karlan, and Jonathan Zinman. 2015. "Six Randomized Evaluations of Microcredit: Introduction and Further Steps." American Economic Journal: Applied Economics, 7, no. 1: 1-21. doi: 10.1257/app.20140287
The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects Research. 1979.
Canfield, Julie. 2005. “Secondary Traumatization, Burnout, and Vicarious Traumatization.” Smith College Studies in Social Work, 75, no. 2: 81-101. https://doi.org/10.1300/J497v75n02_06
Conan, Neal. 2010. “Census workers face vitriol and violence.” NPR: Talk of the Nation, June 21, https://www.npr.org/templates/story/story.php?storyId=127988332.
Dickert, Neal, and Jeremy Sugarman. 2005. “Ethical Goals of Community Consultation in Research.” American Journal of Public Health, 95, no. 7 (July): 1123-1127. doi: 10.2105/AJPH.2004.058933
Duflo, Esther, Rachel Glennerster, and Michael Kremer. 2007. “Using Randomization in Development Economics Research: A Toolkit.” CEPR Discussion Paper No. 6059.
Ellsberg, Mary, and Lori Heise. 2002. “Bearing Witness: Ethics in domestic violence research.” The Lancet, 359, no. 9317: 1599-1604. doi: 10.1016/S0140-6736(02)08521-5.
Feeney, Laura. “Define intake and consent process.” J-PAL North America Evaluation Toolkit. Last accessed May 28, 2020.
Feeney, Laura, and Stephanie Lin. 2019. “Ethics in Research with Humans.” Delivered in J-PAL North America’s 2019 Research Staff Training USA. (J-PAL Internal Resource)
Gelinas, Luke, Emily A. Largent, I. Glenn Cohen, Susan Kornetsky, Barbara E. Bierer, and Holly Fernandez Lynch. 2018. “A Framework for Ethical Payment to Research Participants.” New England Journal of Medicine, 378, no. 8: 766-771. doi: 10.1056/NEJMsb1710591
Glennerster, Rachel. “The complex ethics of randomized evaluations.” Running Randomized Evaluations (blog), April 14, 2014. http://runningres.com/blog/2014/4/9/the-complex-ethics-of-randomized-evaluations. Last accessed July 16 2020.
Glennerster, Rachel. 2017. “The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency.” In Handbook of Economic Field Experiments, 1: 175-243.
Glennerster, Rachel, and Kudzai Takavarasha. 2013. “Randomizing.” In Running Randomized Evaluations: A Practical Guide. Princeton; Oxford: Princeton University Press.
Glennerster, Rachel, and Shawn Powers. 2016. “Balancing Risk & Benefit: Ethical Trade-Offs in Running Randomized Evaluations.” In The Oxford Handbook of Professional Economic Ethics, edited by George F. DeMartino and Deirdre McCloskey. New York: Oxford University Press.
Guillemin, Marilys, and Lynn Gillam. 2004. “Ethics, Reflexivity, and ‘Ethically Important Moments’ in Research.” Qualitative Inquiry, 10, no. 2: 261-280. https://doi.org/10.1177%2F1077800403262360
Heard, Kenya, Elisabeth O’Toole, Rohit Naimpally and Lindsey Bressler. 2017. “Real-world Challenges to Randomization and Their Solutions.” J-PAL North America.
Jamison, Julian, Dean Karlan, and Laura Schechter. 2008. “To deceive or not to deceive: The effect of deception on behavior in future laboratory experiments.” Journal of Economic Behavior & Organization, 68, no. 3-4: 477-488. https://doi.org/10.1016/j.jebo.2008.09.002
J-PAL. 2020. “J-PAL Research Protocol Checklist.” Last updated February, 2020. https://drive.google.com/file/d/0B97AuBEZpZ9zZDZZbV9abllqSFk/view.
Kukla, Rebecca. 2007. “Resituating the Principle of Equipoise: Justice and Access to Care in Non-Ideal Conditions.” Kennedy Institute of Ethics Journal, 17, no. 3: 171-202. doi:10.1353/ken.2007.0014
Largent, Emily A., and Holly Fernandez Lynch. 2017. “Paying Research Participants: Regulatory Uncertainty, Conceptual Confusion, and a Path Forward.” Yale Journal of Health Policy, Law, and Ethics, 17, no. 1: 61-141.
Liu, Katherina, and Natalie Dipietro Mager. 2016. “Women’s involvement in clinical trials: Historical perspective and future implications.” Pharmacy Practice, 14, no. 1: 708. doi: 10.18549/PharmPract.2016.01.708
MacKay, Douglas. 2018. “The ethics of public policy RCTs: The principle of policy equipoise.” Bioethics, 32, no.1: 59–67. https://doi.org/10.1111/bioe.12403
MacKay, Douglas. 2022. “How does scarcity inform ethical withholding of treatment?.” International Initiative for Impact Evaluation blog, February 11.
MacKay, Douglas. 2022. “Policy equipoise and ethical implementation experiments: Evidence of effectiveness, not merely efficacy.” International Initiative for Impact Evaluation blog, February 16.
McKenzie, David. 2013. “How should we understand ‘clinical equipoise’ when doing RCTs in development.” World Bank Development Impact (blog), September 3, 2013. https://blogs.worldbank.org/impactevaluations/how-should-we-understand-clinical-equipoise-when-doing-rcts-development. Last accessed July 15, 2020.
Morello, Carol. 2010. “An unexpected result for some census takers: The wrath of irate Americans.” The Washington Post, June 20. http://www.washingtonpost.com/wp-dyn/content/article/2010/06/19/AR2010061901896_pf.html
Rao, Kirthi. 2018. “Not lost in translation: Ethical research communication to inform decision-making.” 3ie Evidence Matters blog, September 13. https://www.3ieimpact.org/blogs/not-lost-translation-ethical-research-communication-inform-decision-making. Last accessed June 11, 2020.
Singer, Eleanor, and Robert M. Bossarte. 2006. “Incentives for Survey Participation: When Are They ‘Coercive’?” American Journal of Preventive Medicine, 31, no. 5: 411-418. https://doi.org/10.1016/j.amepre.2006.07.013
Steinert, Janina Isabel, David Atika Nyarige, Milan Jacobi, Jana Kuhnt, and Lennart Kaplan. 2021. “A systematic review on ethical challenges of ‘field’ research in low-income and middle-income countries: respect, justice and beneficence for research staff?” BMJ Global Health, vol. 6, issue 7. http://dx.doi.org/10.1136/bmjgh-2021-005380