Why your SEND intervention isn’t working

A new research review and toolkit explores the widening attainment gap for pupils with SEND and why evidence-informed practice isn’t making it to the classroom
16th October 2024, 6:00am

With more eyes than ever before on special educational needs and disabilities (SEND) provision in mainstream schools, headteachers, principals and trust leaders will be looking closely at the targeted interventions claiming to improve outcomes for those who need additional support. What they will likely find is that the interventions they are using may not be the ones actually proven to have an impact.

That’s according to one of the key findings that have come out of the MetaSENse project, a huge new exploration of SEND interventions, funded by the Nuffield Foundation.

Professor Jo Van Herwegen led the team - bringing together researchers from Birkbeck, University of London, the UCL Institute of Education and University College London, as well as experts in child development, neuroscience and education - to explore the effectiveness of interventions in reading, writing, mathematics, science and general attainment.

The first phase included a systematic review and meta-analysis of 467 global studies on the impact of targeted interventions for pupils with SEND aged 4-25.

This was followed by a series of in-depth interviews with 33 education professionals, exploring the targeted intervention practices currently in use, schools’ use of evidence and the possible barriers to implementing effective practices.

Finally, researchers created an online toolkit that allows users to search and find the best-evidenced approaches for particular educational outcomes and SEND groups.

Below, Professor Van Herwegen talks through the research with Zofia Niemtus.

What started the process of creating the report and toolkit?

I was at a conference and heard the statistics about how many children in a typical class will have autism, attention deficit hyperactivity disorder, dyslexia, dyscalculia and so on, and how that means that in every mainstream classroom in the UK, there are roughly six pupils with additional needs.

We know that targeted interventions work for these pupils, but teachers have questions about which interventions work best.

There was a lot of research out there about specific interventions or for specific groups of SEND, but there wasn’t a review that put everything together to say, “This is everything we know about interventions related to SEND”. So that was the starting point.

You found that most of the research studies were taking place in primary schools. Does that suggest there are fewer SEND interventions happening in secondary schools?

From the interviews, we didn’t find that there was less happening in secondary schools, so that isn’t necessarily the case, although it makes sense that there could be more happening in primary schools - thinking about the messaging around the importance of early intervention.

But we can certainly say that most of the research about interventions happens in primary schools, possibly because they are more flexible for research projects, whereas secondary schools, in the mainstream especially, are very structured, so maybe it’s just more difficult to do research.

One of the major challenges you highlight is implementation. What is going wrong?

It’s about fidelity. In the studies, we didn’t see many descriptions of how well interventions had been implemented. But there are usually certain rules to follow, so how do we know that schools are following these rules?

We also talked to teachers and they told us that they don’t always follow the rules and will tailor an intervention to the needs of their own pupils.

However, that brings a kind of watering-down effect, and this is why it’s so important that teachers and leaders think carefully about reviewing the targeted interventions they’re using.

How much are schools doing that, did you find?

From the interviews, we found that some schools are really good at it. They will have a date in the diary and review together as a team, looking at the data and what has been observed.

They will be asking questions: is it necessary that we keep these targeted interventions? Are there particular gaps in our provision where we need to go and look for more interventions?

But in most schools, that is not happening.

There is no systematic review or annual review; it’s really haphazard. Someone’s gone to a conference or a talk, or heard about something that someone says is amazing, and has spoken to the Sendco and said the school should do this.

That means most schools have a huge list of interventions that they’re using, but very little reflection on how interventions are selected for particular pupils. It’s mainly trial and error.

You’ve also said that schools are overlooking their own valuable data around what works. What would you like them to be doing differently?

A lot of schools collect data in terms of where pupils start and then how much they improve, but they don’t look at the data to evaluate what targeted interventions are working or have stopped working.

And there might be a number of reasons why they have stopped working: the pupil profiles have changed, or staff are not following the intervention accurately so it has been watered down, or maybe it’s an intervention that we thought was good 20 years ago but nowadays we know is not so good.

Some schools collect this data but then just look at it for individuals. They’ll say: “Jake improved, that’s good.”

But they don’t say: “Let’s look at all the children and all of the data we’ve got for all of our targeted interventions to see which should stay and which should be replaced”.

There isn’t that systematic approach.

One of your findings from the meta-analysis is that interventions in maths tend to have more of an impact than in English. What is happening there?

The evidence shows that when you intervene in maths, you get bigger effects - so larger improvements - than if you intervene in reading. And that makes sense because if you look at mathematical development, it’s multiple components: you get a counting component, an arithmetic component, an algebra component and so on.

And very often, these interventions are very focused on particular mathematical components, intervening in a very narrow aspect of it.

Reading interventions are often broader and more encompassing, and will include vocabulary as well. If you’re intervening in a large area over a short term, how much can you change someone’s reading abilities?

The targeted interventions do work for both, but you will find a much bigger effect for mathematics.

You also found that group size and the person who delivered the intervention didn’t have much of an effect?

Indeed, the intervention effects didn’t vary according to whether they were delivered in a small group or one-to-one, or according to who implemented them. We found that really interesting.

It was picked up by a colleague who works in Northern Ireland, where pupils with SEND who need targeted interventions only receive them one-to-one, so the waiting lists are huge.

Now, of course, because this is a meta-analysis, it’s messy, with everything in one heap. It doesn’t mean that all interventions would definitely work in a group setting; some might still require a one-to-one setting.

But what we find is that, on average, it doesn’t make a difference.

I think that’s a positive finding because it means that we can explore whether more of these interventions can be delivered in small groups or to a whole classroom. With some adaptation, some of these good practices could apply to a whole group of children.

The challenge of staff training each other in how to deliver interventions comes up as a key issue - could you explain more?

It’s a common practice that one person will go for training and have to train others, so it becomes like Chinese whispers.

Very often these people haven’t implemented the intervention themselves for very long; they’ve just received the training. So the question is: how much are we really implementing the interventions as they were intended and as they were evaluated?

If you’re in the third generation of an intervention, and the original person who received the training has already left the school so you can’t even ask them questions, are you really still delivering the targeted intervention?

What do you think would help stop ineffective interventions from being kept in classrooms?

If teachers and school leaders received more training in research evidence, it would give them more power to make decisions about which targeted interventions to use with which populations.

Some interventions and approaches that are out there are fun, they’re great in their design, but they’re not evidence-informed, even though they’re being sold as evidence-informed.

The research evidence is often not as strong as it may appear: a company will have surveyed 20 teachers who said it worked in their classroom, and that’s it in terms of their evidence.

One of our interviewees told us that they were very interested in the project because their staff kept asking about a particular working memory training programme that’s been around since the early 2000s. It’s been evaluated by something like seven studies and it is quite clear now that this approach doesn’t improve pupils’ abilities whatsoever, but it is still very popular and well-known in schools.

Something that also came up in the interviews a lot was uncertainty about what counts as a trustworthy source. Some named the Education Endowment Foundation as a very trustworthy source, for example, but others would say that because they found a resource on an online platform, or it was recommended in a local discussion group, it must be trustworthy.

I think having a little bit more understanding of research will empower teachers and school leaders to make more informed decisions. And that’s what the database is all about, to help with those more informed decisions.

What do you hope staff will do with the database?

Our main argument is that targeted interventions work and there is evidence out there. We are providing a link to the evidence with the database and toolkit, and we have a box explaining the theory of change for each intervention, to help teachers explore why things might work and who they might benefit.

We’re not advocating for any targeted intervention specifically, but rather relaying what the research is saying. And the database will never be finished; there are new studies coming out all the time, so it will keep being updated. It might be that for some intervention we currently have only one study, which showed it was ineffective, but in the future there could be two or three more studies with different populations that find it works for them. Nothing is set in stone; it is an iterative process.

I hope it will become a trusted resource, and help leaders and teachers to make evidence-informed decisions, think about which targeted interventions they use in their schools and why, and hopefully start thinking about when to review them and when to update them.
