The Impact of Feedback on University Student Performance in Spain
University students are increasingly demanding more feedback on their performance relative to their peers, yet little is known about the impact of such feedback on student performance. Researchers shared information on relative academic standing with university students in Spain to evaluate the impact of this information on student performance and satisfaction. Prior to the information intervention, students tended to underestimate their relative academic standing. Providing students with information on their relative standing led to a short-term decrease in academic performance and an increase in satisfaction.
Policy issue
It is common practice to provide individualized feedback in a variety of settings, including education, where students often receive information on their absolute performance or their performance relative to peers. At university, where cohorts are large, students are often unaware of their academic standing. As a result, students are increasingly demanding more feedback and greater transparency about their individual performance. This information might be useful when selecting courses or majors, or when deciding how much effort to put into their studies.
Despite the importance of university education for students' later outcomes, few studies have explored the impact of ranking information on student performance, especially when accounting for students' prior beliefs about their performance relative to others in their cohort. Can providing university students with relative performance feedback help improve student achievement in Spain?
Context of the evaluation
The study took place among students at the Universidad Carlos III de Madrid in Spain, which offers the most selective degrees in the region, as measured by its required minimum entry grade. Just over half of the students in the evaluation were women, and nearly all were Spanish nationals. Students were enrolled in the Spanish track of one of four four-year degrees (Business, Economics, Finance, or Law) or a six-year degree in Business and Law.
At the end of each semester, students received information on the grades they obtained in each subject. The university summarized this information through an official measure of accumulated GPA (AGPA), which students could also access at any time on the university's online platform. Students did not receive information on their position in the AGPA distribution relative to other students, nor on any other student's AGPA.
While students were not explicitly rewarded for their relative performance, performing well could lead to better placement in study-abroad programs, a strong reference letter that could help secure an internship, or better labor market outcomes.
Details of the intervention
Researchers partnered with the Universidad Carlos III to evaluate how providing students with information on their relative academic standing affected subsequent performance and satisfaction. Out of 977 students, they randomly assigned 354 to an intervention group and the remaining 623 to a comparison group that received information only on their own performance (as is the norm). The study took place over three academic years.
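As an illustration of the assignment above, the following is a minimal sketch of simple random assignment in Python. It is hypothetical code, not the study's actual procedure; details such as stratification, if any, are not described in this summary.

```python
import random

# Hypothetical sketch of the random assignment described above,
# ignoring any stratification the researchers may have used.
rng = random.Random(42)          # fixed seed so the example is reproducible

student_ids = list(range(977))   # the 977 students in the study
intervention = set(rng.sample(student_ids, 354))  # 354 drawn at random

assignment = {
    sid: "intervention" if sid in intervention else "comparison"
    for sid in student_ids
}

print(sum(g == "intervention" for g in assignment.values()))  # 354
print(sum(g == "comparison" for g in assignment.values()))    # 623
```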
Every semester, in addition to information on their own performance, students in the intervention group also received feedback on their performance relative to peers in the same degree. To access this information, students received an email with a link to a personalized web page showing their relative performance. Emails were sent out between the end of classes for the semester and the beginning of the exam period.
To measure the impact of the intervention, researchers used each student's full anonymized administrative record at the university. In addition, university staff conducted teaching evaluations at the tutorial-group level covering course satisfaction, grading standards, and self-reported effort. Researchers also surveyed second-year and graduating students on their knowledge of their relative position in the grade distribution.
Results and policy lessons
Prior to the information intervention, students tended to underestimate their relative academic standing. Providing students with information on their relative standing led to a short-term decrease in academic performance and to an increase in satisfaction.
Students showed interest in knowing their relative standing: Most students, especially those with higher AGPAs, demonstrated an interest in knowing their relative ranking. Within the intervention group, 72 percent of students opened the link provided in the email and viewed their relative standing at least once, and the average student checked the ranking four times during the evaluation. In the top quartile, nearly 90 percent of students accessed the information, while in the bottom quartile fewer than half did. This might be explained in part by high-performing students having stronger incentives to check their relative standing: being top of the class is realistically attainable for them, which makes information on their relative standing more valuable.
Many students underestimated their relative performance: At the beginning of their second year, students tended to be poorly informed about, and to underestimate, their position in the grade distribution. The average student's guess was off by 22 percentiles, and the average student underestimated her relative ranking by 18 percentiles. Underestimation was more common among women, students with low high school grades, and students who received relatively higher grades during their first year of university.
Even in the absence of any intervention, students improved their knowledge of their relative ranking over time. However, the intervention further narrowed the gap between students' expected and true positions in the ranking. At the end of students' fourth year, the average error was 9 percentiles in the intervention group, compared to 15 percentiles in the comparison group.
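The two figures above plausibly measure different things: an average error size regardless of direction (22 percentiles) versus an average signed error, where underestimation dominates (18 percentiles). The following minimal sketch, using hypothetical numbers rather than the study's data, shows how the two measures can differ:

```python
# Hypothetical believed vs. true percentile ranks for five students;
# these numbers are illustrative, not the study's data.
believed = [40, 55, 30, 70, 20]
true_rank = [60, 80, 45, 65, 50]

signed_errors = [b - t for b, t in zip(believed, true_rank)]

mean_abs_error = sum(abs(e) for e in signed_errors) / len(signed_errors)
mean_signed_error = sum(signed_errors) / len(signed_errors)

print(f"Mean absolute error: {mean_abs_error:.0f} percentiles")   # 19
print(f"Mean signed error: {mean_signed_error:.0f} percentiles")  # -17 (underestimation)
```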
Student performance decreased in the short term: In the short term, students who received access to information on their relative standing performed worse than those in the comparison group: they passed 0.36 fewer exams than their counterparts who did not receive the information and received lower grades (a decrease of 0.20 standard deviations). This worsened performance was concentrated among students who found out that their ranking was higher than they originally thought: students who received this "positive" news passed 0.47 fewer exams during their second year relative to similar students in the comparison group.
Researchers found no evidence of an effect on performance in the long run. By the third and fourth years, there were no differences in performance between the intervention and comparison groups. The fading impact may be partly explained by the informational content of the intervention: the first round of feedback conveyed a large amount of new information, since students were initially unaware of their relative ranking, but later rounds added little. In addition, the comparison group tended to acquire more accurate information about their relative performance over time.
Student satisfaction increased: Students who received the information reported higher satisfaction with the quality of their courses, approximately one-third of a standard deviation above the comparison group (against an average satisfaction rating among all students of 3.63 out of 5). This suggests that students' satisfaction increased when they learned that their relative performance was better than they had expected.
These results suggest that in an environment with growing pressure to provide more information, ranking information might have negative consequences for student performance. Differences in prior beliefs can play a key role in how students respond to performance feedback.
Given the negative impact on student performance, the university decided not to scale up the program. While the university appreciated that the intervention improved satisfaction, officials considered it risky to improve satisfaction at the expense of performance, even if the negative effect was, on average, not long-lasting.