Affiliate Spotlight: Elizabeth Linos on understanding evidence use in government
In this Affiliate Spotlight, Elizabeth Linos, Faculty Director of The People Lab at the Harvard Kennedy School of Government, discusses her research interests in understanding the government workforce, improving service delivery, and improving the process of evidence-based policymaking.
What research topics are most interesting to you and The People Lab?
The questions that get me excited are the “how” questions. Once we have a policy design or an evidence-based approach to address large social challenges, how are we actually going to implement it? How are we going to support the public sector in scale-up?
Most of my research is through The People Lab, which was set up to support government by doing cutting-edge research on the people of government and the communities they're called to serve. We do research in three big spaces. The first is thinking about the government workforce: how to recruit, retain, and support government and frontline workers. The second is thinking about how to improve service delivery, primarily how to reduce the burdens that low-income households face when interacting with the government. The third bucket of research takes a step back and asks how we can improve the process of "evidence-based policymaking," including how evidence actually gets adopted by policymakers.
What kind of gaps have you seen between evidence and policy implementation?
Until recently, the big gap was, "Do we even have the evidence?" We needed to figure out what would be effective using rigorous methodology, like randomized evaluations, to determine causal effects. But now, with centers like J-PAL and The People Lab, we're starting to build an evidence base about what works in various policy areas. So now the question shifts to, "What do we do with that information?"
There are many potential hurdles or barriers that policymakers face when trying to adopt existing evidence. Those barriers are part of a bigger question of how we get policymakers, political actors, and practitioners to use that evidence, appreciate differences in research quality, translate it into their own context, and ultimately adopt it at scale.
Your recent study on bottlenecks for evidence adoption investigates this gap between evidence and implementation. What motivated this study and what were the high-level findings?
From previous work with my co-authors, we knew that on average, across different randomized evaluations at the federal level and at the local level, the impacts of behavioral nudges run by the government were positive. That was an exciting finding—it meant that these approaches were probably good for policymakers to integrate into their work. So the next questions were, “Are policymakers actually adopting these interventions? Why or why not?”
To answer this, we analyzed the adoption of evidence from seventy randomized evaluations across the United States. Our first finding was that about 30 percent of treatments were adopted within the first five years after the trial was run. This was surprising: the government agencies themselves had run these trials, which should have created favorable conditions for adoption, yet adoption remained low.
Second, we found that the main predictor of evidence adoption was whether the intervention was delivered through existing channels—not the strength of the evidence, as we may have hoped. This finding indicates that the kinds of interventions that are more likely to be adopted are often not the newest or most innovative interventions. Instead, they are incremental improvements to existing structures.
What are the broader implications of these findings?
First of all, it shows us that we need more evidence on how we can ensure that good ideas are scaled or adopted, and I hope to explore this further in the future.
It also demonstrates that researchers have a role to play in bridging the gap between evidence and implementation. Even though evidence exists, that doesn't always mean that it is accessible for policymakers to implement. We have to recognize the existence of other barriers, which can be political, behavioral, or otherwise. It's the researcher's role to consider those barriers, both in the kinds of trials they design and in how they support governments throughout the implementation process. On the front end, we need to ensure that organizational partners have the infrastructure and the buy-in to take an idea to scale before we test it. We also need to do the work after evidence is generated to present results in a way that is relevant and easy for policymakers to digest.
Do you have any ongoing research projects that you’re excited about?
Currently, we’re working more on psychological barriers—the stigmas or shame that people face when interacting with their government. While there is existing work on how to reduce the complexity of administrative and bureaucratic processes when trying to access programs, we don’t have substantial evidence on trust or stigma. This is problematic, since the pervasive societal stigma against people who are perceived as poor and against people who use government assistance deeply affects the delivery of government services.
We're also doing work on frontline workers' burnout: how to reduce it, and how it affects turnover, service delivery, and workers' mindsets about the people their agencies serve.
Do you have any final thoughts to share?
For a long time, people of my generation were told that we had to pick between doing interesting and difficult scientific work or making a difference in the “real world.” It’s really exciting to be at a point where both are possible, particularly at places such as The People Lab and J-PAL. There are scholars who are just as relevant in the applied policymaking world as they are in the academic community, and that’s really exciting to me.