John Hattie’s ranking of teaching strategies has influenced pedagogy across the world – but there are three important issues that teachers should bear in mind, says Jared Cooney Horvath
As you’re likely aware, Professor John Hattie is the creator of Visible Learning: an educational product that ranks teaching strategies according to the impact each has demonstrated in academic literature.
Importantly, Hattie does not conduct research himself, nor does he pool data from previously published studies (meta-analysis); he arrives at his conclusions by pooling data that other researchers have already pooled. In other words, Hattie combines many meta-analyses in order to conduct a meta-synthesis.
Although meta-synthesis is a valid statistical technique, there are three issues to keep in mind when engaging with Visible Learning.
The first issue concerns repetition. The same research paper often appears in several different meta-analyses, which means that when Hattie pools these analyses together, the same paper is often counted several times. In fact, in the original Visible Learning analysis of feedback, more than 118 of the included studies were duplicates.
Unfortunately, counting the same paper multiple times in this way introduces statistical bias that can greatly skew the final results.
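The effect of duplication is easy to see with a toy calculation. The effect sizes below are entirely hypothetical, chosen only to illustrate the mechanism: counting one study twice pulls the pooled average towards that study's result.

```python
# Hypothetical effect sizes for three distinct studies
studies = [0.2, 0.4, 0.9]

# The same pool, but with the 0.9 study accidentally counted twice,
# as happens when it appears in two of the combined meta-analyses
with_duplicate = studies + [0.9]

def mean(values):
    return sum(values) / len(values)

print(round(mean(studies), 2))         # 0.5
print(round(mean(with_duplicate), 2))  # 0.6  <- biased towards the duplicate
```

The duplicated study effectively votes twice, so the pooled estimate drifts towards it; with 118-plus duplicates, that drift can be substantial.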
The second issue concerns weighting. Not all data sets are created equal; where one meta-analysis may pool together four research studies, another may pool together 400 research studies.
Visible Learning gives equal weighting to every included meta-analysis. For instance, when determining the impact of prior achievement on learning, Visible Learning includes one analysis that combines data from six studies and another that combines data from 1,077; yet both have an identical impact on the final outcome.
This equal weighting greatly changes Hattie’s final rankings. Take a single measure, Visual-Perception Programs, currently ranked at 35: when the relevant analyses are weighted to reflect the amount of data each includes, this teaching strategy moves up to rank 7.
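The weighting argument can also be sketched numerically. The effect sizes below are hypothetical, but the study counts mirror the 6-study versus 1,077-study contrast mentioned above: an unweighted average treats both analyses as equals, while a sample-size-weighted average lets the larger body of evidence dominate.

```python
# (hypothetical effect size, number of included studies) for two meta-analyses
analyses = [(0.20, 6), (0.90, 1077)]

def pool_equal(effects):
    """Unweighted mean: every meta-analysis counts the same."""
    return sum(d for d, _ in effects) / len(effects)

def pool_weighted(effects):
    """Mean weighted by the number of studies each meta-analysis includes."""
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

print(round(pool_equal(analyses), 2))     # 0.55
print(round(pool_weighted(analyses), 2))  # 0.9
```

Under equal weighting, the six-study analysis moves the combined estimate just as much as the 1,077-study one; weighting by study count all but removes its influence, which is why rankings can shift so dramatically.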
The third issue concerns depth. Due to measurement and analysis constraints, most academic research defines learning as the short-term memorisation of discrete facts. Although memorisation is a wonderful place to start learning, most schools are interested in deeper levels of learning, including contextualisation and application.
Unfortunately, Visible Learning draws almost exclusively from academic research. In fact, a recent analysis from Gregory Donoghue suggests that a full 93 per cent of studies included in Visible Learning define learning as mere memorisation, with 74 per cent testing this memory over a span of less than 24 hours.
In the end, none of these issues are deal-breakers. Duplication is a common problem in all meta-syntheses (though this bias should be accounted for), equal weighting is an option some researchers employ (though this is rarely recommended), and short-term memorisation of facts is a valid definition of learning (though many teachers almost certainly demand more).
It is simply important to be aware of these considerations in order to more effectively engage with and comprehend Visible Learning and similar endeavours.
Jared Cooney Horvath is a neuroscientist, educator and author. To ask our resident learning scientist a question, please email: AskALearningScientist@gmail.com
Listen to John Hattie addressing some of the concerns raised about Visible Learning on the Tes Podagogy podcast at bit.ly/HattiePod
This article originally appeared in the 3 July 2020 issue under the headline “The problems with Visible Learning are clear to see”