Dylan Wiliam: Let’s look again at research on feedback

Teachers are bombarded with research on feedback – but is it really relevant? Dylan Wiliam gives his expert view
11th June 2021, 10:00am


Advice for teachers about feedback is not hard to find.

Books, articles, postings on social media and professional development sessions confidently assure teachers that feedback should be specific, positive and immediate. 

While effective feedback does often have these features, a quick survey of the research on feedback shows that research in this area is nowhere near as clear cut as these assurances would suggest, for several reasons. 

Research on feedback: The age-group issue

The first, and possibly most important, reason why feedback research often fails to provide useful guidance for teachers is that most published feedback studies - Maria Ruiz-Primo and Min Li estimate about 75 per cent - are conducted on university students.

In addition, in most of these studies, feedback is a single event, lasting minutes. Students come into a laboratory, are tested, given feedback, tested again and dismissed. 

While such studies might provide useful insights into what kinds of feedback are likely to be effective, generalisations as to what might work with five-year-olds, as opposed to 18-year-olds, are difficult, if not impossible.

Short timespans

A second problem is that most feedback studies take place over relatively short periods of time (often only a few weeks). This is an issue for a couple of reasons.

Firstly, studies conducted over short time periods lend themselves to measuring the impact of the feedback on performance, rather than learning. 

As the work of Elizabeth and Robert Bjork and others has shown, performance and learning are different. Performance describes how well a student completes a learning task, while learning describes long-term changes in capability. 

The two are also sometimes inversely related. High levels of performance in the learning task often result in less long-term learning, while relatively poor performance in the learning task can produce greater long-term learning.

As Kluger and DeNisi pointed out in their 1996 review of more than 3,000 feedback studies published between 1905 and 1995, feedback that improves student performance by making students more dependent on feedback is unlikely to be successful. Feedback that is too specific, telling a student exactly what to do, may improve the work but is unlikely to improve the learner.

Yet even where research studies do involve school-age students, and last long enough to measure lasting effects, drawing conclusions is still difficult.

Many studies look only at correlations between feedback and student achievement. This means that even if increases in achievement follow the feedback, it is impossible to determine whether the feedback caused the change. 

Furthermore, even when a randomised controlled study shows that a feedback intervention increases student learning, such studies are rarely reported in enough detail for us to be sure exactly what form the feedback took. And even the best-designed studies find that feedback is more effective for some students than for others, without providing any clues about the cause of such differential effects.

Should we disregard research on feedback?

At this point, it would be understandable to conclude that the research on feedback is such a mess that it is of no use at all, and that teachers would be better advised simply to go with their hunches. I believe that such a conclusion would be wrong, and harmful to our students.

We can draw powerful, useful messages from the existing research on feedback, but we need to do so thoughtfully. While feedback has been shown to improve achievement even over the longer term on average, the effects in different studies vary widely. 

A particularly surprising finding from Kluger and DeNisi’s review was that in 38 per cent of the well-designed studies they analysed, giving people feedback actually made things worse; the learners would have done better if they had received no feedback at all. 

This is why we need to look at relevant studies in depth, looking in detail at whether the findings could be expected to apply in typical classrooms. This process does entail a degree of subjectivity, but in a field as complex as feedback, that is inevitable.

An accessible guide

And this is why the latest guidance report from the Education Endowment Foundation, Teacher Feedback to Improve Pupil Learning, is so welcome. By drawing together the relevant research, focusing in particular on those studies that are most likely to be applicable to compulsory schooling, the report provides teachers with an easily accessible guide to harnessing the power of feedback to improve classroom learning.

How should teachers go about doing that? The starting point is to realise that effective feedback requires a teacher to carefully lay the foundations, both by making sure that the initial teaching is as good as it can be, and by making sure that teaching is designed with feedback in mind. 

In other words, we need to anticipate the inevitable - that our teaching may not work well for all students - and make sure that our teaching generates evidence that we can actually use to help our students, rather than just telling us that our teaching wasn’t successful, and that we had better do it again, but better. 

We then need to provide the right kind of feedback - focusing on how the learner can move their learning forward - at the right time. 

Sometimes, especially with learners who lack confidence, it may be important to provide feedback rapidly, to reassure the learner that they are “on the right track”. However, over time, as students develop confidence, we can provide feedback that develops their ability to manage their own learning - what psychologists call self-regulated learning. In other words, good feedback works towards its own redundancy.

However, no matter how good the feedback is, we need to ensure that the feedback is used. As Richard Stiggins reminds us, in classrooms the most important decisions are not taken by teachers but by learners, and that is another reason why much feedback research falls short. 

Researchers worry about the issues raised at the beginning of the article. Should feedback be immediate or delayed? Should it be specific or generic? Positive or critical? Such issues matter, of course, but they are far less important than the reactions of the recipient, and that is why the relationships between teachers and students are so important. 

Teachers need to know their students: when to push and when to back off. And students need to trust their teachers. They need to believe that their teachers have their best interests at heart, and that their teachers know what they are talking about. Where teachers create a classroom culture in which students want to improve - and know they can - feedback will be welcome, because feedback can help to direct that improvement.

However, perhaps the most important aspect of the EEF’s guidance report is that it clearly addresses what I think is the most important issue in the improvement of education: opportunity cost. Every hour that teachers spend giving their students feedback is an hour they cannot spend on something else that might have an even bigger impact on student learning. 

One final point. Given that the research on feedback yields so few clear conclusions, the sceptical reader may worry that there is no reason to believe that these recommendations will be effective in practice. 

It is true that we have little evidence on the effectiveness of particular kinds of feedback practices in school, but we do know that attention to these processes improves learning. 

One particularly important example is an independent evaluation of the Embedding Formative Assessment programme commissioned by the EEF and carried out by the National Institute for Economic and Social Research. 

In this cluster-randomised trial, involving 140 secondary schools, researchers found that giving schools resources to help teachers develop their practice of feedback and other aspects of formative assessment resulted in two months more progress for students (as measured by their GCSE grades), at a cost of £1.20 per student per year.

As I often comment, research can never tell teachers what to do. Classrooms are just too complex for that ever to be possible. But research can tell teachers where their efforts might be most fruitfully directed, and right now there does not appear to be any more cost-effective way to improve achievement than helping teachers to make their feedback more effective.

Dylan Wiliam is emeritus professor of educational assessment at the UCL Institute of Education
