Humanitarian Protection Initiative (HPI) Scholars Program
The Humanitarian Protection Initiative (HPI) is invested in creating more opportunities for researchers based in or from low- or middle-income countries (LMIC). Through the HPI Scholars Program, we will make funding and mentoring opportunities available to researchers interested in conducting randomized impact evaluations that fall within HPI’s scope. These opportunities are designed for researchers who hold a PhD, whose primary affiliation is with a university, who are based in or from an LMIC, and who are not yet part of J-PAL’s network.
Announcements
- General information on the scope, funding criteria, and timeline of HPI’s Requests for Proposals (RFP) is available on the HPI RFP webpage; Scholar applicants should consult it carefully.
- In our HPI Scholars Program launch webinar on March 6, 2024, we outlined HPI’s scope, Scholars Program details, and tips for applying for funding from HPI. You can find the recording here.
- Register your general interest in J-PAL’s Scholars Programs and related opportunities here.
Research Funding Opportunities
Researchers whose primary affiliation is with a university, who hold a PhD, and who are based at a university in an LMIC or have completed high school or undergraduate studies in an LMIC are eligible to apply for HPI proposal development (up to $10,000) and pilot grants (up to $75,000) as part of HPI’s bi-annual RFP. Please carefully review the information on HPI’s scope and application timelines and process available on the RFP website.
Resources to Develop a Proposal
Applicants are highly encouraged to consult J-PAL’s library of practical research resources on designing and running randomized evaluations while developing their proposal.
J-PAL Staff Support During the Application Process
If you have questions at any stage of the process, please contact [email protected] and we can assist you.
Scholar applicants who are successful at the letter of interest stage will be invited to discuss their proposal with J-PAL staff members to strengthen their application. In these calls, staff provide initial feedback on the project’s scope and design, and applicants are encouraged to bring questions on the further development of their proposal.
Targeted Mentorship for Funded Research Projects
Researchers based in an LMIC whose proposals receive funding through HPI’s Scholars Program will be paired one-on-one with mentors drawn from J-PAL’s network of affiliated researchers. Depending on the nature of the grant received by Scholars, mentors will provide active support in proposal generation and the implementation of pilot projects, and may advise on topics including effective communication and partnership-building, grant applications, or peer review processes during monthly conversations.
In addition, J-PAL staff will be available to Scholars and their staff to discuss potential challenges and guide them to useful resources available through J-PAL and IPA offices in support of high-quality research project implementation.
Capacity Building Opportunities and Additional Materials on Randomized Impact Evaluations
What are randomized evaluations? How are they different from impact evaluations?
A randomized evaluation is a type of impact evaluation that uses random assignment to allocate resources, run programs, or apply policies as part of the study design. Like all impact evaluations, the main purpose of randomized evaluations is to determine whether a program has a causal impact and, more specifically, to quantify how large that impact is.
Impact evaluations measure program effectiveness typically by comparing outcomes of those (individuals, communities, schools, etc.) who received the program against those who did not or those who received a different type of program. There are many methods of doing this, but randomized evaluations have the benefit of ensuring that there are very limited systematic differences between those who receive the program and those who do not, thereby producing accurate (unbiased) results about the effects of the program. For more information, see J-PAL’s introduction to randomized evaluations.
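The logic above can be illustrated with a minimal simulation in Python (hypothetical data; the numbers below are invented for illustration). Because the program is assigned by a coin flip, the treated and comparison groups are similar on average, so a simple difference in mean outcomes recovers the true causal effect:

```python
# Illustrative sketch with simulated (hypothetical) data: random assignment
# balances the groups on average, so the difference in mean outcomes is an
# unbiased estimate of the program's causal effect.
import random
import statistics

random.seed(42)

n = 10_000
true_effect = 2.0  # the causal impact we want the evaluation to recover

# Each unit has an unobserved baseline outcome (e.g., a test score).
baseline = [random.gauss(50, 10) for _ in range(n)]

# Random assignment: each unit receives the program with probability 0.5.
treated = [random.random() < 0.5 for _ in range(n)]

# Observed outcome: baseline, plus the true effect for treated units.
outcome = [b + true_effect * t for b, t in zip(baseline, treated)]

treat_mean = statistics.mean(y for y, t in zip(outcome, treated) if t)
control_mean = statistics.mean(y for y, t in zip(outcome, treated) if not t)
estimate = treat_mean - control_mean
print(f"estimated effect: {estimate:.2f}")  # close to true_effect for large n
```

With a large sample, the estimate converges on the true effect; a non-random assignment rule (for example, giving the program to units with high baselines) would instead confound the comparison.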
Introductory Training in Randomized Evaluations
J-PAL runs Evaluating Social Programs (ESP) courses in different locations around the world. These courses give an overview of randomized evaluations for a policy practitioner audience. Please check this page for updates on the next course. For those who cannot attend in person, lecture recordings, slides, and case study materials for most ESP sessions are available in J-PAL's Teaching Resources.
J-PAL also hosts an online training course in designing and running randomized evaluations, J-PAL102x. This course teaches learners how to both design randomized evaluations and implement them in the field to measure the impact of social programs. It is a twelve-week long course and can be audited for free. The course runs three times a year—in spring, summer, and fall.
Take the J-PAL MicroMasters Course
J-PAL and MIT’s Department of Economics designed the MicroMasters® Program in Data, Economics, and Design of Policy (DEDP) to equip learners with the practical skills and theoretical knowledge to tackle poverty alleviation using evidence-based approaches. Through a series of online graduate-level courses, the content combines tools in program evaluation and policy design with a deep understanding of the economics and mathematical principles behind them. The program is unique in its focus on designing and running randomized evaluations to assess the effectiveness of social programs and its emphasis on hands-on skills in data analysis.
Anyone is welcome to audit these online courses for free. However, participants are required to pay a fee to take the course exam, which is required for course credit.
Further reading on randomized evaluations
The J-PAL team has identified a number of resources that may be useful for Scholars who would like to refine their skills in randomized evaluation and research design.
- For a broad overview of how to design an RCT, please refer to “Using Randomization in Development Economics Research: A Toolkit” by Esther Duflo, Rachel Glennerster, and Michael Kremer (January 2007).
- We encourage all interested individuals to read Rachel Glennerster and Kudzai Takavarasha's book Running Randomized Evaluations: A Practical Guide. Further information can be found on this website.
- Another good resource is “Impact Evaluation in Practice” by Paul Gertler, Sebastian Martinez, Patrick Premand, Laura Rawlings, and Christel M. Vermeersch (September 2016).
- For a compilation of resources that can help researchers run their projects in line with high ethical standards, please refer to the annexes of HPI’s RFP.