Why we have to put children first in education research
Earlier this month the Education Endowment Foundation published its long-awaited report on the effectiveness of two popular phonics programmes: Read Write Inc (RWI) and Fresh Start.
There was plenty of interest in the findings, not least because RWI is widely used across schools in England and is supported by 20 of the 34 Department for Education-funded English Hubs.
Unfortunately, those findings did not paint a very positive picture.
The EEF found that, at the end of Year 2, children following the RWI scheme were, on average, more likely to pass the phonics screening check and made one month's additional progress compared with children learning phonics in other ways, as measured by the New Group Reading Test (NGRT). The programme had no effect on writing development.
The children who took part in Fresh Start, however, made, on average, two months less progress than the children they were compared against.
We know that the research team faced several big challenges: many schools, despite being eligible, did not use Fresh Start; teachers withdrew children from the NGRT because they felt it was too hard; and many children did not score on the assessment (known as a floor effect), suggesting that perhaps the assessment was not sensitive enough.
These and many other issues mean that the trustworthiness of the evaluations is low - the EEF has awarded the RWI study a two-padlock security rating, and the Fresh Start study a three-padlock rating.
Much has already been written about all of this. But perhaps the real story here is one of competing agendas and how the research process failed to take them into consideration.
Problems with the research into phonics
When designing an evaluation, there are multiple conflicting agendas: the owners of the product want a positive outcome for their product and naturally believe that it does what they claim. The evaluators, meanwhile, are interested in ensuring that the study itself is methodologically sound.
For a school and its teachers, though, the main responsibility is to teach pupils to the best of their ability. They are held accountable for the attainment and welfare of the children. The very least a school would expect from being involved in a study is that no harm is done. And yet the Fresh Start trial was found to have a significant negative effect.
The programme leaders suggest that some of the challenges that the trial faced were the result of the implementation in schools lacking fidelity to the scheme and how it was intended to be delivered.
This may be so. RWI is a complicated programme, with many moving parts: full adoption of each and every aspect, as well as unwavering commitment to follow the programme from senior leadership downwards, is required.
But that, in itself, is an issue. In an effectiveness evaluation (as this was), a product should be readily adoptable by a wide range of schools without ongoing challenges.
Furthermore, if schools and teachers are not involved in the study design from the outset, then it is easy to see how the priorities of others become more important than the priorities of the school - causing the children to get lost in the process.
At the same time, though, our response to a study like this can’t be to simply write it off and to carry on regardless, claiming the trial was flawed and is, therefore, invalid.
The way we learn from studies is as important as the way we conduct them, and, in this case, there are several things schools can learn.
Firstly, leaders can use the study as a way of interrogating how they use this scheme and others like it, and even how they teach phonics in general, considering the impact within the context of their communities and their children.
They can ask whether such schemes are having the impact they hope for across all children, and check that they have a full picture of that impact (bearing in mind that relying on the phonics screening check alone may not be enough). They can then ask whether the scheme offers value for money, or whether they could achieve the same outcomes using alternative methods.
Leaders can also ask whether there are unintended negative outcomes from the programme, reflecting on the commitment to making sure that teaching leads to the best possible outcomes for children.
There is a lesson for policymakers here, too: remember that every data point in an evaluation like this one is an individual child. Every school involved took decisions with those children in mind. They were doing their job. Taking part in a research trial is a powerful experience, but it cannot come at the expense of the children.
A robust research process should start with the question. But in school we start with the children, and we must not lose sight of that.
Megan Dixon is a doctoral student and associate lecturer at Sheffield Hallam University