The EEF view: what are the implications for education research?
An effect size is a simple way of presenting the difference between two groups of pupils, usually one that has received a particular teaching and learning approach and another that hasn’t. When something has had an impact on learning, the effect can be measured and expressed as a number. Effect sizes are standardised measures that translate impacts from different outcomes into a numerical value, allowing impacts to be compared with each other or combined in a meta-analysis.
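As a minimal sketch of what "standardised measure" means in practice, the calculation below computes one common effect size, the standardised mean difference (Cohen's d): the gap between the two groups' average scores, divided by the pooled standard deviation. The function name and the pupil scores are illustrative assumptions, not EEF data or EEF code.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference between two groups of scores."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation: each group's variance weighted by its
    # degrees of freedom, so group sizes are taken into account.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores: pupils who received the approach vs. a
# comparison group who did not.
intervention_scores = [68, 72, 75, 70, 74, 71]
comparison_scores = [65, 69, 70, 66, 68, 67]
print(round(cohens_d(intervention_scores, comparison_scores), 2))  # → 1.85
```

Because the raw difference in means is divided by the spread of scores, the result is unit-free, which is what makes effects on different tests comparable at all.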
In Education Endowment Foundation (EEF) trials, we calculate an effect size for any interventions that we test, which captures the impact that the approach being tested has had on learning, compared with a group of pupils that hasn’t received that approach. We convert effect sizes to months of progress, to help schools interpret the potential impact that a particular approach could have.
While effect sizes are crucial in allowing us to compare and combine impacts across different interventions, it’s important to consider the nuance that sits behind a standardised effect. The most common misconception is that the bigger the effect size, the better. But, as with most things in education research, it isn’t that simple.
Education research with low-quality designs, smaller samples and simple outcome measures is more likely to produce bigger effect sizes. Converting results to an effect size can also obscure what outcome was actually measured.
For example, a classic source of large effect sizes is a study that measures outcomes very closely tied to the intervention itself. If you introduce a vocabulary intervention and then calculate an effect size based on pupils’ knowledge of the specific words taught, you will likely see a big effect size. If, however, you test whether one structured small-group intervention improves GCSE English literacy results more than another, you will likely see a small effect size.