A long time ago, a statistician was attempting to cross a river with his son.
The statistician wasn’t sure how deep the river was, so he decided to measure it. Finding the depth to be variable, he settled for taking the average depth.
As this measurement was less than his son’s height, he told his son that it was safe to cross.
Unfortunately, the boy drowned.
This proverb, which seems to originate from India, demonstrates how misleading statistical averages can be. Rather than just taking the average, it is important to look at the spread of data, as well.
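To see the proverb in miniature, here is a minimal sketch, with depth measurements I have invented purely for illustration: the mean can sit comfortably below a child’s height even when the deepest point does not.

```python
# Invented depth measurements (metres) at points across a river.
depths = [0.4, 0.6, 0.9, 1.8, 0.7, 0.5]

mean_depth = sum(depths) / len(depths)  # ~0.82 m: below a 1.2 m child's height
max_depth = max(depths)                 # 1.8 m: well over the child's head

print(f"mean depth: {mean_depth:.2f} m")  # mean depth: 0.82 m
print(f"max depth:  {max_depth:.2f} m")   # max depth:  1.80 m
```

The average says the crossing is safe; the spread says otherwise.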
The problem with averages
Recently, I encountered another example of how important this is, when I came across evidence being cited to support the practice of “fast reading”: an approach in which a class reads a book at a faster pace than usual, without stopping to analyse the text.
Now, I’m mainly a specialist in mathematics education, so I’ll gladly leave the final say to language experts. However, I do know how to read research papers and interpret statistical results, and it is here that I noticed something interesting.
Support for fast reading is based on a 2018 paper by Jo Westbrook and colleagues.
The study was a “mixed methods study in which 20 English teachers in the South of England changed their current practice to read two whole challenging novels at a faster pace than usual in 12 weeks with their average and poorer readers ages 12-13”.
Reading proficiency was measured in different ways before and after the 12 weeks.
The first thing to notice about the study is that all of the students did fast reading, so there was no comparison with a different approach. The only difference between the groups was that 10 of the teachers received additional training in teaching comprehension.
The authors found that “students in both groups made 8.5 months’ mean progress on standardised tests of reading comprehension, but the poorer readers made a surprising 16 months’ progress”.
The progress figures are based on reading ages obtained from reading tests carried out at the start of the study, which also gave a score for “story comprehension”.
Why we need to look beyond headline figures
So, what can we take from the results of this study?
On the surface, they seem positive. The problem is that if the poorer readers made 16 months’ progress while the average across all students was just 8.5 months, then the remaining students must have made far less progress.
Indeed, when we look more closely at the data tables, we see that the 12 weeks of intervention led to only around two months’ progress for what the researchers deem “average + readers”.
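As a rough sanity check (the group split below is my assumption, not a figure from the paper), the overall mean is just a weighted average of the subgroup gains, so a headline of 8.5 months is entirely compatible with one group gaining 16 months and the other gaining around two:

```python
# Hypothetical weighted-average check; the 45/55 split is an assumption,
# not a figure reported in the paper.
poorer_gain = 16.0   # months' progress reported for poorer readers
average_gain = 2.0   # approximate months' progress for "average+" readers
poorer_share = 0.45  # assumed fraction of the sample who were poorer readers

overall_mean = poorer_share * poorer_gain + (1 - poorer_share) * average_gain
print(f"overall mean progress: {overall_mean:.1f} months")  # ~8.3 months
```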
A two-month improvement for 12 weeks of work sounds less impressive than the headlines.
Based on this study, I think you could say that when applying this approach across a whole class, you need to be mindful that it might work well for some students, but less so for others.
There are other elements of the research we need to consider, too: how the schools were chosen, the extent of attrition, and the fact that the recommendations on the duration of fast reading rest on relatively few interviews.
To be clear, I’m not trying to undermine the quality of the research. This was an interesting and comprehensive mixed-methods study.
However, as so often with research, you really have to look under the surface to appreciate the findings. And just like the statistician who wanted to cross the river with his son, it is best to consider the whole picture before taking the plunge.
Christian Bokhove is professor in mathematics education at the University of Southampton and a specialist in research methodologies