
Implementing qualitative methods in the field

Summary

This resource provides an overview of commonly used qualitative tools and details how to integrate them into an RCT. It also covers key considerations when hiring and training qualitative field staff, sampling, and data analysis. Qualitative research includes a broad range of methods that we cannot cover comprehensively within a short resource. More exhaustive resources on using qualitative methods in the field are linked throughout and in the Resources section at the bottom of this page. Readers already familiar with qualitative methods and looking to integrate them into RCT research design may refer to the corresponding resource on integrating qualitative methods throughout the project cycle.

Selecting an appropriate qualitative tool

Deciding on the right qualitative approach to implement is a matter of understanding the study context (i.e., knowing what kind of qualitative data you might be able to collect), knowing which questions you want to answer using qualitative data, and having the time and resources to carry out each method successfully. Here, qualitative data refers to descriptive, non-numerical information capturing observed patterns that can be recorded as text, audio, or images. This type of data is typically collected through interviews, group discussions, or direct observations. This section will detail considerations for choosing a specific qualitative method in the context of running a randomized controlled trial and will cover some of the most commonly used tools.1

In-depth interview

In-depth interviews use open-ended questions to draw out rich and detailed information about an interviewee’s experiences. This type of interview is normally conducted as a one-on-one guided conversation between an interviewee and a highly trained interviewer. Information gathered through in-depth interviews may allow you to explore variations in program delivery as well as examine differences in outcomes between individuals (Sewell 1999). They can be useful if you require a great depth of information from your sample, your research covers a sensitive topic, you have several skilled and well-trained interviewers on your team, and your project timeline allows for the intensive amount of time it takes to conduct the appropriate number of interviews and analyze the data (Frechtling and Sharp 1997). Below are general guidelines for conducting in-depth interviews with your study sample.

Steps for conducting in-depth interviews:

  1. Selecting interviewees: When selecting your in-depth interview sample, you can conduct random sampling or non-random sampling (also referred to as non-probability sampling) depending on the goals of your analysis. When doing non-random sampling, it is important to identify key informants who are likely to give you the richest information about your research question. Sampling is covered in further detail below in the Qualitative sampling section.
  2. Developing an interview guide: An interview guide lists key questions and possible probes for the interview. It should serve as the framework for delivering the interview but is not meant to restrict the interviewer in terms of order or topics covered. Questions should fall under the broad research questions of the study and should be worded to elicit the most detailed answers from interviewees. 
  3. Conducting the interview: Before starting the interview, it is important to first build good rapport with the interviewee so they feel comfortable talking freely. It’s the interviewer’s job to listen intently and to probe appropriately to elicit the most relevant information. Divergence from the interview guide is encouraged when pursuing unexpected but relevant ideas that emerge in the course of the interview. Interviewers can probe the interviewee for more details by encouraging them to continue speaking or asking clarifying questions, but should avoid steering the interviewee’s answers (Curry 2015a, 12:19).

Resources for conducting in-depth interviews

Focus group discussion

Focus group discussions (FGDs) typically consist of 6-10 participants, with a moderator posing a prepared list of open-ended questions to the group. The order and emphasis of the discussion are shaped by participants, which can have the advantage of revealing tensions and divisions among group members but risks leading to groupthink if the discussion is not moderated properly. FGDs can allow you to explore social norms and shared experiences among participants, as well as evaluate program processes and implementation (Curry 2015b, 1:35). They are especially useful when the information you want to collect is best drawn out through group interaction, the subject of discussion is not especially sensitive, or when data must be collected quickly with limited staff or funding (Frechtling and Sharp 1997). Below are general guidelines for conducting focus group discussions with your study sample.

Steps for conducting FGDs:

  1. Selecting FGD participants: Participants in each FGD should have a common characteristic, interest, or a shared experience relevant to your research question. It is important to set and follow clear selection guidelines to identify people who are the most knowledgeable about the FGD topic and are sufficiently homogeneous (Krueger 2017b). Homogeneity is important in order to avoid group dynamics that might make certain kinds of participants less comfortable sharing their opinions or experiences. For example, if you want to understand the experiences of both children and their parents, it is advisable to conduct FGDs with each group separately. One way to select FGD participants is to identify a number of individuals from the larger sample who meet your selection criteria and then randomly select invitees from that group, as illustrated in the sketch after this list (Hofstedt et al. 2022).
  2. Developing a topic guide: The topic guide or discussion guide is a list of protocols and open-ended questions the moderator will use to carry out the FGD. It should contain an introduction explaining the purpose of the research study, procedures for how the FGD will be conducted, and a list of open-ended questions, probes, and notes for the moderator to keep in mind while guiding the discussion. It is important to pilot your topic guide and make any adjustments before delivering it to FGD participants (Krueger 2017a).
  3. Conducting the FGD: The location should be comfortable, easily accessible, and non-threatening to all participants. As with in-depth interviews, the moderator should aim to build rapport from the beginning of the discussion so that all participants feel comfortable speaking. When starting the discussion, it is important to clearly introduce the purpose of the FGD and lay ground rules for discussion (Krueger 2017c). It is the moderator’s job to facilitate discussion among participants and minimize group pressure (USAID 2011a). In some cases, it may be beneficial to also have an assistant moderator present to handle logistical considerations such as recording the session and arranging the discussion space (Hofstedt et al. 2022). The discussion typically lasts about 1.5 to 2 hours but can be as short as 40 minutes depending on the topic (Frechtling and Sharp 1997).
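
As noted in step 1, one simple way to implement the random-selection step is sketched below in Python. The file name, column names (study_sample.csv, is_parent, respondent_id), and the group size of eight are illustrative assumptions, not part of any standard workflow.

```python
# Minimal sketch: filter the study sample on the selection criterion,
# then randomly draw invitees for one focus group.
# File and column names below are hypothetical placeholders.
import pandas as pd

sample = pd.read_csv("study_sample.csv")          # hypothetical sample frame
eligible = sample[sample["is_parent"] == 1]       # participants meeting the criterion
invitees = eligible.sample(n=8, random_state=42)  # randomly invite ~8 people

print(invitees["respondent_id"].tolist())
```

Setting a random seed (random_state) makes the draw reproducible and easy to document alongside your other sampling decisions.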

Considerations for moderators of in-depth interviews and FGDs:

  • Familiarize yourself with the question guide so you are prepared to adapt the sequence of questions and add follow-up questions in real time based on participant responses. Leave room to explore relevant topics raised by participants that go beyond your original set of questions.
  • Generate a comfortable atmosphere among participants to encourage spontaneous opinions and uncensored expressions.
  • Show interest, courtesy, and respect for participants' input while remaining neutral, respecting diverse opinions, and valuing participants’ lived experiences.
  • Ground interview questions in theory, but phrase them in plain, everyday language.
  • Avoid overstepping into psychological support; the focus is on insights, not problem-solving.
  • Refrain from making assumptions; ask for clarification.

Observation

Observation uses the natural research setting to collect firsthand data on program implementation and participant behavior. This method is ideally conducted by a highly trained observer who enters the study community to take structured notes about what they observe. Observation can allow you to collect data on the study setting, how the program itself is being delivered, and how study participants interact with the program, as well as behaviors that might be unintentional or that participants are unwilling to speak about (Frechtling and Sharp 1997). This method can be useful when you want to gather information about the study setting or program implementation, you have well-trained observers on your research team, and your project allows for the intensive amount of time and funds it takes to conduct and analyze the observations. Below are general guidelines for conducting observations in your study.

Steps for conducting observation:

  1. Deciding on the level of participation: In ethnography, qualitative researchers may engage in community activities as direct participants (Frechtling and Sharp 1997). However, for qualitative observation conducted in the context of development economics research, observers are usually spectators who refrain from direct participation in the activities of research subjects. As with other data collection methods, observation activities should be reviewed by an IRB to determine the type of informed consent required.
  2. Developing an observation protocol: The observation protocol is a guide used to focus the topics of observation, keep the observation centered on the research question, and standardize the data collected across observations. It may include prompts to take detailed notes on the study setting, the people present at the observation site, interactions between individuals, delivery of the program, specific behaviors relevant to the research question, and any unanticipated events (Frechtling and Sharp 1997). The protocol may also include checklists and rating scales for anticipated events and behaviors.
  3. Conducting the observation: Observers must be well-trained and somewhat familiar with the study community, setting, and research topic in order to pick up on and correctly interpret subtle cues and non-verbal communication from participants. It is important to take notes on what is included in the observation protocol, but also to take plenty of additional field notes, which can contain rich information about unanticipated events. If participants know they are being observed, they might alter their normal behavior, leading to inaccurate accounts; observations may need to be done over long periods of time to mitigate these effects (Price et al. 2017).

Considerations when conducting observation:

  • The observer’s presence might affect participant behavior, leading to Hawthorne effects.
  • Observations may need to be conducted over long periods of time to gain a holistic understanding of the setting or program.
  • Conducting systematic observation and analyzing data from observation can be costly and time consuming.

Additional considerations when conducting qualitative research in the field

  • When drafting your qualitative instrument, clearly define the goal or target information that you wish to get from each question asked. This is especially helpful in in-depth interviews or focus group discussions, as interviewers are encouraged to go off script and probe based on the interviewee’s answers. Having the interviewer clearly understand each question's intended goal helps them stay on track when probing and continue to draw out the most relevant information.
  • It’s important to understand and note the nuance of how interviewees answer questions. Interviewers should pay attention not only to what the interviewee says but also to how they convey it. Were they confident or nervous when answering questions? Were they interviewed in a setting where they might be uncomfortable providing specific answers? Were there other people present who might influence how they answer your questions? It is essential to note these details and take them into consideration when analyzing the results of your qualitative work.
  • Capture as much detail as possible. Qualitative methods can allow researchers to highlight heterogeneity in greater detail, offering more personalized or unique insights into a study. Audio recordings and quality transcripts (the more detailed, the better) will help teams dig deeper into the information provided by interviewees. Bailey (2008) gives some practical guidance on transcribing interviews in detail. After interviews or focus groups, it is also considered best practice for interviewers and moderators to note down impressions, ideas of key themes, and other relevant information that may not be captured in the transcript itself.

Hiring and training qualitative field staff

It is important to have skilled and well-trained researchers or research staff carry out any of the qualitative tools mentioned above. This type of data collection relies heavily on the competencies of the interviewer/facilitator/observer to elicit or observe relevant and detailed information from study participants. Successful qualitative researchers/research staff should be knowledgeable about the research topic, listen intently, clearly convey questions and ideas, understand which key events to observe, and be empathetic and sensitive when interacting with participants.

This requires training specifically on (Boyce and Neale 2006):

  • How to establish rapport with participants
  • Effective and appropriate probing
  • How to steer conversations without leading
  • Detailed, accurate, and objective note taking while in the field
  • Managing group dynamics in the case of FGDs
  • How to study and take notes on a setting in the case of observations

Feedback on these skills during training is crucial to ensure that staff are proficient and confident in successfully implementing the selected method in the field. Compared to closed-ended surveys, where staff are trained to deliver the instrument as written, qualitative data collection gives much more discretion to field staff to pursue off-script topics as they come up. Other processes for hiring and training qualitative field staff are similar to those for training surveyors to collect quantitative data, including covering research protocols, familiarizing staff with the data collection instrument, and practicing in the field (please see the Surveyor hiring and training resource for more guidance).

Collaborating with data collection companies or consultants experienced in qualitative research can also help build strategies for conducting qualitative work, particularly for researchers who are unfamiliar with or have limited experience using such methods. Working with these firms can offer the research team valuable insights into how qualitative studies are designed and conducted in a field setting, and how this process differs from quantitative field surveys.

Qualitative sampling

Qualitative samples can be selected randomly or non-randomly depending on the goals of the qualitative research (e.g., to measure outcomes or to contextualize quantitative findings). Your qualitative sample may also differ significantly in size from your main sample.

If your aim is to investigate causal explanations, drawing a random sample may be most appropriate. Keep in mind that if you are using qualitative data as outcome measures in the RCT, you'll need to ensure adequate statistical power to detect significant effects, just as you would with quantitative outcome measures (see the Power calculations resource for guidance on determining the appropriate sample size). Alternatively, it may be appropriate to draw a smaller random sample if the goal is to conduct thematic analyses to explore mechanisms or simply to ground quantitative findings with qualitative data. Though the required number may be difficult to determine a priori, a few quantitative tools have been developed to help set randomly selected qualitative sample sizes for thematic analysis (Fugard and Potts 2015; Lowe et al. 2018).
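
To illustrate the logic behind such tools, the sketch below (Python, using scipy) finds the smallest randomly drawn sample for which a theme of a given prevalence is likely to appear a minimum number of times. This is a simplified binomial calculation in the spirit of Fugard and Potts (2015), not a reproduction of their tool; the prevalence, instance, and power values are illustrative.

```python
# Simplified binomial logic behind sample-size tools for thematic analysis.
# Assumes a theme appears independently across interviews with fixed prevalence.
from scipy.stats import binom

def interviews_needed(prevalence, min_instances=1, power=0.8, max_n=500):
    """Smallest n such that P(theme observed >= min_instances times) >= power."""
    for n in range(1, max_n + 1):
        # P(X >= min_instances) when X ~ Binomial(n, prevalence)
        if 1 - binom.cdf(min_instances - 1, n, prevalence) >= power:
            return n
    return None  # not reachable within max_n interviews

# Example: a theme held by roughly 10% of the population, and we want an
# 80% chance of hearing it from at least two interviewees.
print(interviews_needed(prevalence=0.10, min_instances=2, power=0.8))  # 29
```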

In other cases, you may want to identify a theoretically motivated set of target individuals to generate hypotheses or explore program processes. Here, you might want to pursue non-random sampling and cover a breadth of participants (Lynch 2013). Non-random populations of interest for qualitative data collection may include outliers, individuals with specialized expertise, individuals who can inform about specific events or outcomes, or voices missing from existing accounts. 

Methods that use this sort of non-random sampling include (Magsamen-Conrad 2023):

  • Purposive sampling: Individuals are selected based on characteristics relevant to the analysis. For example, extreme-case sampling selects outliers in an effort to explain why they responded differently (Spillane et al. 2010).
  • Quota sampling: Proportions are set so that the sample includes specific numbers of certain segments of the population.
  • Convenience sampling: Individuals who are easily accessible to the researcher are included in the sample.
  • Snowball sampling: Participants refer or recruit other individuals to be part of the sample.

Your qualitative sample should be selected from the study’s broader population of interest. If it includes individuals from your study sample, be sure to consider how participation in the qualitative research may impact participants’ behavior or responses to future survey questions. It is also important to consider cultural etiquette when recruiting respondents through networks and referrals (e.g., in some contexts, it may be advisable to seek permission from community leaders before contacting community members).

Sample size can be determined dynamically based on reaching a point of theoretical saturation, which occurs when enough data is gathered to fully support a theoretical model or when no new themes emerge from additional data (Saunders et al. 2018). In the case of FGDs, you may aim to conduct around three discussion rounds per research question: although a few rounds might be sufficient to identify key themes, additional rounds might be needed to fully understand those themes (Hennink et al. 2019). Guest et al. (2020) have developed a method for determining a point of theoretical saturation using a flexible model that can be implemented during or after data collection. Although theoretical saturation can serve as a helpful framework for determining an appropriate qualitative sample size, the concept has been debated in recent years among qualitative researchers (Tight 2023).
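
For intuition, the sketch below implements a simplified batch-based saturation check loosely inspired by Guest et al. (2020): after a base set of interviews, it flags the point at which an additional run of interviews adds little new thematic information. The base size, run length, 5% threshold, and theme lists are illustrative assumptions, not the authors' exact procedure.

```python
# Simplified batch-based saturation check (illustrative, not Guest et al.'s exact method).
def saturation_point(themes_per_interview, base_size=4, run_length=2, threshold=0.05):
    """Return the interview count at which saturation is reached, or None."""
    base = set().union(*themes_per_interview[:base_size])   # themes in the base set
    seen = set(base)
    for start in range(base_size, len(themes_per_interview), run_length):
        run = set().union(*themes_per_interview[start:start + run_length])
        new_themes = run - seen
        # Proportion of new information this run adds, relative to the base themes
        if len(new_themes) / len(base) <= threshold:
            return start + run_length
        seen |= new_themes
    return None  # saturation not reached with the available interviews

# Hypothetical themes identified in each of eight interviews
themes = [
    {"cost", "access"}, {"cost", "trust"}, {"access"}, {"quality"},  # base interviews
    {"cost"}, {"trust", "quality"},                                  # first run: nothing new
    {"cost"}, {"access"},
]
print(saturation_point(themes))  # 6
```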

Data quality checks

Similar to data quality checks for survey and administrative data, it is important to check the quality of qualitative data (e.g., transcripts from interviews or focus groups) to ensure the recorded data accurately reflect statements or behaviors of research participants. However, due to differences between quantitative and qualitative data collection processes, appropriate techniques for qualitative data quality checks may differ. 

  • High-frequency checks (HFCs): HFCs are normally used to check incoming survey data in order to detect errors or data fraud, monitor survey progress, and track respondents. For smaller-scale qualitative data collection, HFCs may not be necessary if the research team can directly supervise interviewers and review data as it is collected. However, for large-scale qualitative data collection (e.g., when open-ended questions are asked of all participants in a full RCT sample as part of baseline and/or endline data collection), HFCs can be a useful tool to diagnose potential errors such as missing data, duplicate respondent IDs, and irregular response length (see the sketch after this list). Location and audio audits can be especially useful for qualitative data collected with tablets or smartphones. For more guidance on HFCs, see our research resource on data quality checks.
  • Back-checks: Back-checks are short, audit-style surveys of respondents who have already been surveyed. For a closed-ended quantitative survey, we generally expect respondents to answer factual questions similarly in follow-up back-check interviews as they did in the original survey. Due to the open-ended nature of qualitative research, we don’t always expect a respondent to answer a question in the same way in a follow-up interview. Therefore, back-checks should not be used to confirm the consistency of qualitative responses. Instead, back-checks can be used to confirm that the respondent was actually interviewed and that key topics were covered. If audio is recorded (with informed consent) in the initial interview or focus groups, audio audits would be preferable to back-checks.
  • Spot-checks: Spot-checks are unannounced visits by senior field staff to observe the quality of interviews. Spot-checks can be useful in qualitative data collection to ensure interviewers are interviewing the correct respondents in the correct location and are following proper qualitative protocols. In particular, spot-checks can be used to provide interviewers with feedback on how they’re conducting the interviews or focus groups. For example, are they leaving enough space for respondents to share potentially relevant information that goes beyond the initial set of questions? Are they careful to avoid asking leading questions?
  • Member checking: Member checking involves asking participants in qualitative research to validate their own data, checking for accuracy and validity (Kallos 2023). Member checking activities can include returning interview transcripts to participants, re-interviewing respondents using the interview transcript data, or returning analyzed data (Birt et al. 2016). Birt et al. (2016) summarizes ways in which member checking has been used in health and educational research and introduces the Synthesized Member Checking (SMC) technique. SMC enables participants to comment on analyzed data to check whether the analysis resonates with the participants’ experiences.
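
As referenced in the high-frequency checks bullet above, the sketch below (Python with pandas) flags three of the issues mentioned there: missing responses, duplicate respondent IDs, and unusually short answers. The file name, column names (respondent_id, response_text), and the 10-word cutoff are illustrative assumptions to adapt to your own survey export.

```python
# Minimal sketch of high-frequency checks on incoming open-ended responses.
# Column names and the word-count cutoff are hypothetical placeholders.
import pandas as pd

def run_hfcs(df: pd.DataFrame) -> dict:
    """Flag missing text, duplicate respondent IDs, and unusually short answers."""
    word_counts = df["response_text"].fillna("").str.split().str.len()
    return {
        "missing_responses": df[df["response_text"].isna()],
        "duplicate_ids": df[df["respondent_id"].duplicated(keep=False)],
        "suspiciously_short": df[word_counts < 10],  # e.g., under 10 words
    }

df = pd.read_csv("qualitative_responses.csv")  # hypothetical daily export
for check, flagged in run_hfcs(df).items():
    print(check, len(flagged))
```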

Analyzing qualitative data

Qualitative data analysis does not need to occur at distinct research stages and can depend on how qualitative methods are integrated into the randomized evaluation. Rigorous qualitative data analysis is systematic: it starts with condensing the qualitative data into a meaningful form, then arranging the condensed data to draw out patterns, and finally drawing conclusions from the observed patterns (Frechtling and Sharp 1997). When condensing the data, audio recordings and field notes are transcribed and organized. The transcripts are then coded, meaning short labels are assigned to key themes that emerge from the data. These codes are then collapsed into categories and organized into a code structure, which is used to guide further coding and analysis (Curry 2015c, 2:24). In recent years, thematic coding has increasingly been done using qualitative data analysis software such as NVivo, ATLAS.ti, or MAXQDA.
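
As a simple illustration of what a code structure can look like once codes are collapsed into categories, the sketch below uses a plain Python dictionary; the codes and categories are hypothetical examples, and in practice this structure usually lives inside software such as NVivo or ATLAS.ti.

```python
# Illustrative code structure: short codes grouped under broader categories.
# All codes, categories, and the quoted segment are hypothetical examples.
code_structure = {
    "barriers_to_access": ["cost", "distance", "waiting_time"],
    "trust_in_providers": ["past_negative_experience", "peer_recommendation"],
    "program_delivery": ["staff_absence", "unclear_instructions"],
}

# A coded transcript segment can then be stored alongside the codes applied to it.
coded_segment = ("We stopped going because the bus fare doubled.", ["cost", "distance"])
print(code_structure["barriers_to_access"], coded_segment)
```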

Other qualitative analysis methods, such as narrative or discourse analysis, place less emphasis on coding qualitative data to formally categorize and count pre-specified themes and instead analyze the data within the narrative or social context in which it was recorded to derive meaning (Josselson and Hammack 2021). Analysis is normally an iterative process; as new patterns emerge, previous patterns or assumptions are updated. You may also want to update the interview guide with new questions to explore emerging themes or to sample additional participants based on newly identified relevant characteristics. For more guidance on qualitative data analysis, please see chapter 4 of the NSF’s guide on Analyzing Qualitative Data.


Intercoder reliability
When performing human coding of qualitative texts, it is best practice to have multiple individuals code the same text. Intercoder reliability (ICR) can then be measured to check the consistency of codes. ICR measures the degree to which different coders assign the same code to various units of data. It can be calculated as the percentage of times coders agree on a code for specific data units, though there are other statistical tests that produce less biased estimates of ICR. O’Connor and Joffe (2020) have created a detailed guide on ICR procedures and considerations. While there is no consensus on a minimum ICR, many researchers use 80 percent agreement as a minimally satisfactory threshold.

When using ICR to assess the quality of data analysis, it is important to decide on:

  • the number of coders to independently code the data–you will need at least two to determine ICR;
  • the unit of analysis within the qualitative data–examples of possible units may include every response to a question or every recorded sentence; and
  • how much data you would like to be coded multiple times.

While ICR estimates can be helpful to determine the quality of analysis, O’Connor and Joffe (2020) suggest that ICR can also be used to “foster reflexivity and dialogue within the research team” and to iterate on and refine concepts that have low consensus among coders.
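
To make the calculation concrete, the sketch below (Python) computes simple percent agreement for two coders and, as one example of a chance-corrected statistic, Cohen's kappa via scikit-learn. The coded labels are illustrative, and kappa is only one of several statistics discussed by O’Connor and Joffe (2020).

```python
# Minimal sketch of two common ICR measures for two coders applied to the
# same sequence of data units (e.g., responses to one question).
# The code labels below are illustrative placeholders.
from sklearn.metrics import cohen_kappa_score

coder_a = ["cost", "access", "cost", "trust", "access", "cost"]
coder_b = ["cost", "access", "trust", "trust", "access", "cost"]

# Simple percent agreement: share of units receiving the same code
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa adjusts agreement for what would be expected by chance
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {agreement:.0%}; Cohen's kappa: {kappa:.2f}")
```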


Natural language processing
As an alternative to human coding of qualitative data, natural language processing (NLP) can be used to classify texts. NLP is a branch of artificial intelligence that focuses on teaching machines to understand human language. It can be especially useful when working with a corpus of qualitative data that would be too large for most research teams to code manually. NLP models can also be used to discover patterns in the data that human coders might miss. For example, an unsupervised machine learning approach such as structural topic modeling can allow researchers to gain insights into features of text without imposing assumptions or biases that may come from the researchers’ priors. A common application of NLP in social science research is to identify topics from clusters of words that commonly co-occur in the data.
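
To illustrate topic discovery from co-occurring words, the sketch below fits a basic latent Dirichlet allocation (LDA) model in scikit-learn; this is a simpler relative of structural topic modeling (which additionally incorporates document-level covariates), and the transcript snippets are invented placeholders.

```python
# Minimal sketch of unsupervised topic discovery with LDA in scikit-learn.
# The four "transcript" snippets below are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "clinic fees were too high so we skipped the visit",
    "the clinic is far and transport costs too much",
    "the teacher was absent and classes were cancelled",
    "children missed school because the teacher did not come",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)  # document-term matrix of word counts

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Print the highest-weight words for each discovered topic
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```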

There are many NLP programs researchers can use to process their qualitative data, including open-source and subscription-based software. One of the most widely used programs is NVivo, which can help organize and analyze a wide range of data (e.g., text, audio, image, and video) (Friedman School of Nutrition Science & Policy 2023). With the proliferation of AI tools in recent years, NLP is a growing field and can allow for quicker analysis of larger amounts of qualitative data. For more discussion of machine learning approaches to analyzing text as data in the social sciences, see Grimmer, Roberts, and Stewart (2021) and Ash and Hansen (2023).

Resources for NLP

Acknowledgments

We thank David Torres Leon, Nicolas Romero Bejarano, William Parienté, Sarah Kopper, Diana Horvath, and JC Hodges for helpful comments. All errors are our own.

1. For general advice on selecting an appropriate qualitative tool for program evaluations, see the National Science Foundation’s (NSF) Overview of Qualitative Methods and Analytic Techniques.

    Additional Resources
    1. Frechtling, Joy, and Laurie Sharp. 1997. “User-Friendly Handbook for Mixed Method Evaluation.” Alexandria, VA: National Science Foundation.

    2. Mack, Natasha, Cynthia Woodsong, Kathleen M Macqueen, Greg Guest, and Emily Namey. 2005. “Qualitative Research Methods: A Data Collector’s Field Guide.”

    3. Denzin, Norman K., and Yvonna S. Lincoln, eds. 2011. “The SAGE Handbook of Qualitative Research.” Thousand Oaks: SAGE.

    4. Bleich, Erik, and Robert Pekkanen. 2013. "How to Report Interview Data." In Interview Research in Political Science, edited by Layna Mosley, 84-105. Ithaca: Cornell University Press. http://www.jstor.org/stable/10.7591/j.ctt1xx5wg.

    5. ATLAS.ti. (n.d.). Video Tutorials. Retrieved from https://atlasti.com/video-tutorials

    6. Kapiszewski, Diana, Lauren M. MacLean, and Benjamin L. Read. 2015. "Interviews, Oral Histories, and Focus Groups." In Field Research in Political Science: Practices and Principles, 180-210. Cambridge: Cambridge University Press.

    7. MITx course Qualitative Research Methods: Conversational Interviewing (MITxT 21A.819.1x)

    8. MITx course Qualitative Research Methods: Data Coding and Analysis (MITxT 21A.819.2x)

    Ash, Elliott, and Stephen Hansen. 2023. “Text Algorithms in Economics.” Annual Review of Economics 15: 659–88. https://doi.org/10.1146/annurev-economics-082222-074352.

    Bailey, Julia. 2008. “First Steps in Qualitative Data Analysis: Transcribing.” Family Practice 25 (2): 127–31. https://doi.org/10.1093/fampra/cmn003.

    Birt, Linda, Suzanne Scott, Debbie Cavers, Christine Campbell, and Fiona Walter. 2016. “Member Checking: A Tool to Enhance Trustworthiness or Merely a Nod to Validation?” Qualitative Health Research 26 (13): 1802–11. https://doi.org/10.1177/1049732316654870.

    Boyce, Carolyn, and Palena Neale. 2006. “Conducting In-depth Interviews: A Guide for Designing and Conducting In-Depth Interviews for Evaluation Input.”

    Centers for Disease Control and Prevention. 2018. “Data Collection Methods for Program Evaluation: Observation.” Evaluation Briefs No. 16. https://www.cdc.gov/healthyyouth/evaluation/pdf/brief16.pdf

    Curry, Leslie. 2015a. “Fundamentals of Qualitative Research Methods: Interviews (Module 3).” Yale University. June 24, 2015. Educational video, 22:16.

    Curry, Leslie. 2015b. “Fundamentals of Qualitative Research Methods: Focus Groups (Module 4).” Yale University. June 24, 2015. Educational video, 21:35.

    Curry, Leslie. 2015c. “Fundamentals of Qualitative Research Methods: Data Analysis (Module 5).” Yale University. June 24, 2015. Educational video, 17:11.

    Dawson, Susan, Lenore Manderson, Veronica L. Tallo. 1993. “A Manual for the Use of Focus Groups.” International Nutrition Foundation for Developing Countries & UNDP/World Bank/WHO Special Programme for Research and Training in Tropical Diseases.

    Friedman School of Nutrition Science & Policy. 2023. “Qualitative Data Analysis and Natural Language Processing with NVivo and Research AI.” September 28, 2023. Educational video, 1:28:27.

    Frechtling, Joy, and Laurie Sharp. 1997. “User-Friendly Handbook for Mixed Method Evaluation.” Alexandria, VA: National Science Foundation. https://www.nsf.gov/pubs/1997/nsf97153/start.htm.

    Fugard, Andrew J.B., and Henry W.W. Potts. 2015. “Supporting Thinking on Sample Sizes for Thematic Analyses: A Quantitative Tool.” International Journal of Social Research Methodology 18 (6): 669–84. https://doi.org/10.1080/13645579.2015.1005453.

    Grimmer, Justin, Margaret E. Roberts, and Brandon M. Stewart. 2021. “Machine Learning for Social Science: An Agnostic Approach.” Annual Review of Political Science 24: 395–419. https://doi.org/10.1146/annurev-polisci-053119-015921.

    Guest, Greg, Emily Namey, and Mario Chen. 2020. “A Simple Method to Assess and Report Thematic Saturation in Qualitative Research.” PLoS ONE 15 (5): e0232076. https://doi.org/10.1371/journal.pone.0232076.

    Hennink, Monique M., Bonnie N. Kaiser, and Mary Beth Weber. 2019. “What Influences Saturation? Estimating Sample Sizes in Focus Group Research.” Qualitative Health Research 29 (10): 1483–96. https://doi.org/10.1177/1049732318821692.

    Hofstedt, Brandon, Bill Ryan, and Mohammad Douglah. 2022. “Focus Groups.” Community Economic Development. Accessed August 6, 2024.

    Josselson, Ruthellen, and Phillip L. Hammack. 2021. “Conceptual Foundations for the Method.” In Essentials of Narrative Analysis, 3–15. Essentials of Qualitative Methods. Washington, DC, US: American Psychological Association. https://doi.org/10.1037/0000246-001.

    Kallos, Alecia. 2023. “What You Need to Know about Member Checking.” Eval Academy. February 27, 2023.

    Krueger, Richard. 2017a. “06 Examples of Focus Group Questions.” Richard Krueger. May 1, 2017. Educational video, 7:28.

    Krueger, Richard. 2017b. “07 Locating Focus Group Participants.” Richard Krueger. May 1, 2017. Educational video, 9:38.

    Krueger, Richard. 2017c. “10 Moderating Skills During the Group.” Richard Krueger. May 1, 2017. Educational video, 9:38.

    Lowe, Andrew, Anthony C. Norris, A. Jane Farris, and Duncan R. Babbage. 2018. “Quantifying Thematic Saturation in Qualitative Data Analysis.” Field Methods 30 (3): 191–207. https://doi.org/10.1177/1525822X17749386.

    Lynch, Julia F. 2013. “Aligning Sampling Strategies with Analytic Goals.” In Interview Research in Political Science, edited by Layna Mosley. Ithaca: Cornell University Press. http://www.jstor.org/stable/10.7591/j.ctt1xx5wg.

    Mack, Natasha, Cynthia Woodsong, Kathleen M Macqueen, Greg Guest, and Emily Namey. 2005. “Qualitative Research Methods: A Data Collector’s Field Guide.”

    Magsamen-Conrad, Kate. 2023. “Non-Probability Sampling.” Introduction to Social Scientific Research Methods in Communication (3rd Edition). University of Iowa.

    O’Connor, Cliodhna, and Helene Joffe. 2020. “Intercoder Reliability in Qualitative Research: Debates and Practical Guidelines.” International Journal of Qualitative Methods 19 (January). https://doi.org/10.1177/1609406919899220.

    Price, Paul C., Rajiv Jhangiani, I-Chant A. Chiang, Dana C. Leighton, and Carrie Cuttler. 2017. “Observational Research.” Washington State University.

    Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. “Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization.” Quality & Quantity 52 (4): 1893–1907. https://doi.org/10.1007/s11135-017-0574-8.

    Sewell, Meg. 1999. “The Use of Qualitative Interviews in Evaluation.” The University of Arizona.

    Spillane, James P., Amber Stitziel Pareja, Lisa Dorner, Carol Barnes, Henry May, Jason Huff, and Eric Camburn. 2010. “Mixing Methods in Randomized Controlled Trials (RCTs): Validation, Contextualization, Triangulation, and Control.” Educational Assessment, Evaluation and Accountability 22 (1): 5–28. https://doi.org/10.1007/s11092-009-9089-8.

    Stanford University National Center for Postsecondary Improvement. 2003. “Tools for Qualitative Researchers: Interviews.” Accessed August 6, 2024.

    Taylor-Powell, Ellen, and Sara Steele. 1996. “Collecting Evaluation Data: Direct Observation.” University of Wisconsin Extension.

    Tight, Malcolm. 2023. “Saturation: An Overworked and Misunderstood Concept?” Qualitative Inquiry, July, 10778004231183948. https://doi.org/10.1177/10778004231183948.

    USAID Center for Development Information and Evaluation. 2011a. “Performance Monitoring and Evaluation Tips: Conducting Focus Group Interviews.” 

    USAID Center for Development Information and Evaluation. 2011b. “Performance Monitoring and Evaluation Tips: Using Direct Observation Techniques.”
