Scaling down an evaluated program
Incorporating evidence into decision-making is not only about scaling up effective programs. Results showing that a program does not work can be just as critical. Null results can change our beliefs or reveal implementation problems, and understanding why a social program was not effective can be equally important for policy. Scaling down, modifying, or deciding not to scale up an intervention shown to have null or negative effects can free up valuable time and resources and create the opportunity to try something new.
For example, a program in Karnataka, India, aimed to reduce health worker absenteeism by introducing a biometric monitoring system that provided attendance data to supervisors in real time, combined with incentives and penalties for unauthorized absences. The government expected the system to deter absences, but researchers found that, due to imperfect enforcement, the monitoring system and the data it generated had limited impacts on attendance. These results contributed to the government’s decision to cancel the planned scale-up, saving millions of dollars and countless hours of staff time that would have been needed to run the program.