
Randomization’s capacity for flexibility: How to build randomization into a competitive, multi-step application process

When demand for a program exceeds capacity, embedding randomization into an intake process can be fairly straightforward. However, the path to randomization may be less clear when carefully selecting participants is considered key to a program’s success.
Researchers at J-PAL North America worked with Pursuit to design a randomized evaluation of its sectoral employment program, which equips low-income adults in the New York Metropolitan Area with skills to build careers in the technology sector. An earlier post discussed Pursuit’s program model, which includes a competitive application and selection process. This post describes the proposed evaluation design for the Pursuit Fellowship and may serve as a guide for other programs with similar application processes that want to measure impact.
When a program is selective about enrollment, randomization can occur at the end of its application process to maintain selectivity.
Pursuit’s mission is to provide participants with the technical and practical skills needed to build high-wage, high-growth careers as software engineers. Applicants are not required to have a background in technology beyond basic computer literacy, and there are no formal educational requirements. Instead, candidates are assessed on characteristics that have historically predicted success in the program, such as determination, passion, and commitment.
Pursuit puts substantial effort into identifying the people best suited for its program. Its application process starts with an eligibility screening, followed by online assessments, a workshop, a team exercise, and an interview. To preserve this selectivity, only applicants who make it through the final interview and are deemed qualified would be randomized into the study.

The study design puts informed consent at the beginning of Pursuit’s application process and randomization at the end.
To generate enough qualified participants to run a well-designed study, organizations can relax eligibility criteria or ramp up recruitment efforts.
Studies need a sufficiently large sample of participants to detect differences in outcomes between intervention and comparison groups. Since Pursuit has more applicants than program spots, one option would have been to relax eligibility criteria at the end of the application process, leaving enough people to randomize into an intervention group and a comparison group. This approach would have allowed Pursuit to randomize without changing its recruitment efforts while also learning about the efficacy of its application threshold. However, Pursuit preferred to maintain its high bar for acceptance so that the evaluation would reveal the program’s impacts on its intended recipients.
Pursuit instead opted to expand its recruitment efforts. Randomized evaluations do not need an even split between intervention and comparison groups, so Pursuit would not need to double its initial applicant numbers. Pursuit planned to randomly assign one-third of study participants to the comparison group and two-thirds to the intervention group, and to scale outreach accordingly. Beyond reducing the burden of added recruitment, allocating more of the study population to the intervention group also means fewer people may be disappointed by a comparison group assignment.
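To make the mechanics concrete, here is a minimal Python sketch of random assignment with an uneven two-thirds/one-third split. The helper name, the fixed seed, and the applicant IDs are illustrative assumptions for this post, not details of the actual study:

```python
import random

def assign_groups(applicant_ids, treatment_share=2/3, seed=42):
    """Randomly assign qualified applicants with an uneven split:
    by default, two-thirds intervention and one-third comparison."""
    rng = random.Random(seed)             # fixed seed keeps the draw reproducible
    ids = list(applicant_ids)
    rng.shuffle(ids)                      # a random ordering is the random assignment
    n_treat = round(len(ids) * treatment_share)
    return ids[:n_treat], ids[n_treat:]   # (intervention, comparison)

# Illustrative use: 9 qualified applicants -> 6 intervention, 3 comparison
intervention, comparison = assign_groups([f"applicant_{i}" for i in range(9)])
```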
Of course, not everyone who is admitted to Pursuit matriculates. To account for this, the program makes more offers than its intended number of spots. Following this practice, the study also planned to factor expected matriculation rates into the size of the intervention group.
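The sizing arithmetic is simple: divide the target number of seats by the expected matriculation rate. A short sketch with hypothetical numbers (the seat count and rate below are not Pursuit’s actual figures):

```python
import math

def intervention_group_size(target_seats, matriculation_rate):
    """Number of applicants to place in the intervention group so that,
    at the expected matriculation rate, the cohort still fills."""
    return math.ceil(target_seats / matriculation_rate)

# Hypothetical example: 100 seats, 80 percent of admits expected to matriculate
intervention_group_size(100, 0.80)   # -> 125
```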
Stratification can help ensure programs build representative cohorts.
Pursuit is committed to training people who are underrepresented in the technology sector. The average salary of incoming participants is US$18,000. In a typical cohort, 70 percent of participants are Black or Latino/a, 50 percent are women or non-binary, 25 percent are first-generation immigrants, and 70 percent do not have a bachelor’s degree.
How can randomization sustain these demographic targets? If these numbers reflect the composition of qualified applicants who make it through to the end of the application process, this will happen naturally: if 25 percent of all study participants are first-generation immigrants, then after randomization roughly 25 percent of the intervention group and 25 percent of the comparison group should be first-generation immigrants.
If demographic targets would not otherwise be achieved, the randomization can be stratified: applicants are randomized within demographic groups, and different groups can be assigned to the intervention group at different rates. If only 15 percent of qualified applicants were first-generation immigrants but the program’s representation target was 25 percent, stratification could be used to overrepresent these participants in the intervention group.
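Here is a minimal sketch of stratified assignment, assuming each applicant carries a stratum label; the stratum names and per-stratum intervention shares are illustrative, not the study’s actual parameters:

```python
import random
from collections import defaultdict

def stratified_assignment(applicants, share_by_stratum, seed=7):
    """Randomize within each demographic stratum; a higher intervention
    share for a stratum overrepresents it in the intervention group."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for applicant_id, stratum in applicants:       # group applicants by stratum
        by_stratum[stratum].append(applicant_id)

    intervention, comparison = [], []
    for stratum, ids in by_stratum.items():
        rng.shuffle(ids)                           # randomize within the stratum
        n_treat = round(len(ids) * share_by_stratum[stratum])
        intervention += ids[:n_treat]
        comparison += ids[n_treat:]
    return intervention, comparison

# Illustrative shares: assign first-generation applicants at a higher rate
shares = {"first_gen": 0.80, "other": 0.60}
applicants = [("a1", "first_gen"), ("a2", "other"), ("a3", "other"),
              ("a4", "first_gen"), ("a5", "other"), ("a6", "other")]
treat, comp = stratified_assignment(applicants, shares)
```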
Organizations engaged in evaluation can still innovate. There is flexibility to accommodate changes to application processes and program models throughout the study.
To achieve a sufficient sample size, this evaluation was designed to include multiple cohorts over multiple years, and Pursuit may want to modify its application process during the study. For example, Pursuit recently shifted from a cohort-based model (where the application window was open for a few months and all applicants for a given cohort were assessed together) to a rolling model (where applicants are reviewed in small batches and decisions are made with quicker turnaround times). An evaluation can incorporate these kinds of changes: had this shift happened during the study, randomization would simply have changed cadence, occurring in smaller batches more frequently.
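Mechanically, a rolling model only changes how often the draw happens: the same assignment routine runs on each small review batch rather than once per cohort. A sketch reusing the assign_groups helper from the earlier snippet, with hypothetical batches:

```python
# Under a rolling model, each review batch gets its own draw; varying the
# seed per batch keeps the full sequence of assignments reproducible.
batches = [["a01", "a02", "a03"], ["a04", "a05", "a06", "a07"]]

intervention, comparison = [], []
for i, batch in enumerate(batches):
    treat, comp = assign_groups(batch, treatment_share=2/3, seed=100 + i)
    intervention += treat
    comparison += comp
```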
The program itself might also change over the course of the study, and this, too, can be fine for study purposes. Pursuit is always working to improve the participant experience and to adapt alongside the rapidly changing industry it trains participants to enter. It would not need to stop innovating or artificially freeze the program for consistency during the study period. The evaluation would simply yield the average effect of the program’s various forms over the study period.
While the Pursuit randomized evaluation has not yet launched, the study design offers an example of how embedding randomization into a program’s intake process can be flexible and uphold enrollment preferences. For more insights into the randomization design and considerations that motivated design choices, see this report on the Pursuit website. This research was supported by a grant from WorkRise, a project of the Urban Institute. WorkRise is a research-to-action network on jobs, workers and mobility, and is funded by a collaborative. For more information, please visit www.workrisenetwork.org.