How do we know if education technology even works?


This post first appeared as commentary in Education Week.

In April, the newest National Assessment of Educational Progress scores once again showed minimal progress in U.S. math and reading achievement and a widening achievement gap between our highest and lowest performers. Against this backdrop, educators today are eager for solutions to age-old challenges in education, solutions that have long seemed elusive.

Education technology will be part of the solution. Technology today allows teachers to adapt instruction to wide-ranging student needs. Students can use software to receive rapid, specific feedback and to work through richer, more realistic problems. Education technology is now ubiquitous in U.S. classrooms—with annual pre-K-12 spending on software reaching a staggering $8.3 billion, according to a 2015 estimate from the Software and Information Industry Association.

Yet we cannot be blind to the risks. Cash-strapped schools may be tempted to purchase software to keep their most troublesome students occupied rather than to actually teach them. Today, the majority of products are purchased with no evidence of efficacy: according to a survey of more than 500 school and district leaders conducted by a working group of the 2016 EdTech Efficacy Research Academic Symposium, only 11 percent of those responsible for education-technology adoption decisions demand peer-reviewed research.

Too often, there is no plan to learn whether the implemented technology is helping or hurting student learning. Our collective failure to learn what is and is not working may be one of the largest barriers to more rapid progress in U.S. education today.

But this problem can be solved. In 1962, the federal government revolutionized medicine by requiring that drugs undergo rigorous testing before they could be marketed and sold. We have made rapid progress in medicine because we have committed ourselves to systematically evaluating medical treatments, winnowing the field to those with proven benefits, and scaling up only the interventions that work. Drug developers always think their new treatments work—until those treatments are found to be ineffective or even harmful.

Likewise, education leaders, developers, and philanthropists are convinced their ed-tech products will help. But when they are wrong, it is our children who pay the price. Particularly for vulnerable students, every hour of instructional time counts. If schools invest resources and time in software that isn't working, some students may not have a second chance. On the flip side, truly effective programs may languish unnoticed, even when they could help students at massive scale.

If we are serious about helping students learn, we have an obligation to test educational technology before widespread adoption. But a lack of analytic expertise within school agencies, coupled with the misperception that evaluations are too expensive or take years to produce results, may make districts wary of evaluation.

This is beginning to change, as leading school districts partner with university-based research centers to conduct rigorous evaluations that capture both timely, short-term outcomes and critical longer-term metrics like graduation rates and college attendance. Over the past 15 years, schools have also built advanced data systems that can drive down the costs of data collection and facilitate comparisons between students who are using a specific program and similar students who are not.

Another barrier to ed-tech evaluation has been the classic free-rider problem: Although every school would benefit from knowing whether something works, no single district wants to bear the cost of finding out. Fortunately, a growing number of foundations have taken the lead in trying to correct this problem. Among them, the Overdeck Family Foundation (where Britt Neuhaus serves as a program officer), the Laura and John Arnold Foundation, and the Bill & Melinda Gates Foundation are all funding efforts to evaluate what is effective—and what is not.

Finally, there is a pressing need for cultural change among school district leaders: they must recognize their obligation to rigorously evaluate the education technologies they adopt. As gatekeepers of school procurement, district leaders should insist that contractors provide rigorous evidence. Even when presented with such evidence, they should build rigorous evaluation and performance targets into any contract before implementing a new product at scale. They should initially pilot the software in a subset of schools, establish a comparison group, and pay the full contract amount only if pilot students outperform the comparison group by the promised margin, as in the sketch that follows.
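
To make that final step concrete, here is a minimal sketch, in Python, of the payment check at the end of such a pilot. Every name and number in it is a hypothetical illustration, not a prescribed method; a real contract would rest on a pre-specified evaluation design and account for sampling error, rather than comparing raw averages.

    from statistics import mean

    def meets_performance_target(pilot_scores, comparison_scores, promised_gain):
        """Return True if pilot students outperform the comparison group
        by at least the average gain promised in the contract."""
        observed_gain = mean(pilot_scores) - mean(comparison_scores)
        return observed_gain >= promised_gain

    # Hypothetical end-of-year scores from a pilot in a subset of schools
    # and from a matched comparison group (illustrative numbers only).
    pilot_scores = [74, 71, 78, 69, 75, 72]
    comparison_scores = [70, 68, 73, 66, 71, 69]

    # Suppose the contract promised a 3-point average gain. A real
    # evaluation would also test whether the observed gain is
    # statistically distinguishable from zero before triggering payment.
    if meets_performance_target(pilot_scores, comparison_scores, promised_gain=3.0):
        print("Target met: pay the full contract amount.")
    else:
        print("Target not met: renegotiate, scale back, or discontinue.")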

If we commit to systematic and rigorous evaluation, we may find that certain technologies truly can make massive inroads in learning. We already have promising evidence that some programs, when paired with thoughtful implementation and supports, can have powerful impacts. For example, randomized evaluations found that ASSISTments—a virtual math-homework support tool—significantly improved math scores despite being used for only about 10 minutes on several days a week. Students with low prior math achievement benefited the most.

Imagine what we might uncover if we committed to evaluating the impact of every technology that comes through our schools and scaling up the most effective programs to the benefit of all our students.

Authored By

  • Thomas Kane

    Walter H. Gale Professor of Education and Economics

    Harvard University Graduate School of Education

  • Philip Oreopoulos

    Co-Chair, Education

  • Britt Neuhaus

    Program Officer, Overdeck Family Foundation