Education research is great but never forget teaching is a complex art form
Research, research, research. Everyone is talking about education research. The movement for "evidence-based practice" has become something of a phenomenon in recent times, embraced by teachers, bloggers, education media, politicians and even the schools inspectorate. If you want to get a teaching job in the next year or two, bandying around terms like "retrieval practice", "metacognition" and "spaced learning" won't be a bad start.
The increased accessibility and visibility of research is a good thing. ResearchEd, the organisation set up in 2013 by Tom Bennett and Hélène Galdin-O'Shea, deserves enormous credit for bringing research findings to teachers' doors. An awareness of what academic researchers say about different teaching practices is always going to be beneficial, and I think this is absolutely fantastic.
Nevertheless, I do have some concerns. First, when we elevate educational research or "evidence-based practice" to sacrosanct status. Second, when individuals or institutions use it for their own ends. And third, when it is used or applied in a way that restricts teacher development. Innovation and experimentation are key to a teacher discovering and building their teacher self.
In truth, educational research can only ever be a record of what happened in a number of classrooms with particular students at particular times, with particular teachers teaching them with particular levels of quality, in particular countries, in particular seasons, testing particular ideas. The opportunities for divergence are endless. The celebrated academic Dylan Wiliam's talk "Why Teaching Will Never be a Research-based Profession" perfectly captures the fact that even the best research studies can't account for the context of each teacher or school in the trial. They presume a lot. He uses the Education Endowment Foundation's teaching and learning toolkit as an example: a widely shared piece of research that ranks methods of intervention in schools by cost and effectiveness.
Teaching assistants, it said, had no impact and cost the most money. This is misleading. Wiliam goes on to explain how in some schools TAs are excluded from staff training sessions, how they don't have time to sit down with their respective classroom teachers to plan, and how they are often limited by direction as to who they can and can't work with. I have seen these environmental factors myself, having worked with TAs throughout my career. I never found it easy and always felt I was either underusing them or wasting them as a resource, mainly because I didn't have the time before or after the lesson to sit down and plan with them.
Yet would a research study take this contextual information into account? Wiliam shows that "feedback" was the EEF's most effective teaching strategy, but uses it as a cautionary tale: without knowing the exact types of feedback or exactly how to deliver it, teachers could easily be led down the garden path. Further to this, it worries me that the credentials and motivations of the institutions and businesses that fund particular research studies are often overlooked or not properly examined for bias. As Sian Ephgrave aptly said on Twitter: "If you're going to talk about 'evidence-based practice' you also need to discuss how research has been framed and whose agenda it is promoting."
The problem is, of course, that time is limited, and if particular research is being delivered by someone or something with an agenda, teachers simply won't have the time or the knowledge to effectively question or interrogate it. Do the media or politicians citing and promoting particular research know in detail how it was put together and its potential flaws? Or do they know but choose to ignore them?
The research movement has started to mould the shape of pedagogy in many schools. Based on a cluster of research that highlights "direct instruction" as having the highest impact on student attainment, many have adopted this as the principle underpinning their teaching and learning offer. The regular use of quizzing to consolidate knowledge also scores highly in the educational research world, spawning buzzwords like "retrieval practice" and "spaced learning". This has led to an overarching philosophy - "teach according to what the evidence says is the best way almost all of the time". In practice, this has meant teachers being told to ignore "less effective" activities, for example more enquiry-based or student-led activities, with the clear and firm reasoning that a mixture of direct instruction, questioning and quizzing will achieve the best results.
In an exam system that asks students to learn and regurgitate more and more "facts" in subjects like history, this is probably a reasonable approach. However, in my experience the benefits of different activities can be subtle and discreet. They can be, in essence, non-measurable. For example, a discovery or project-based approach may build students' confidence in working collaboratively, it may inspire them to think more creatively or it might give them the opportunity to think in different ways. But a researcher would struggle to find evidence for this in lesson observations or in test results. So would Ofsted. In response, the inspectorate has made clear it favours a progression model based on "knowledge". Chief inspector Amanda Spielman said last week that "it appeared harder for schools to model progression in terms of skills. Leaders who said they had attempted to map student progress in developing skills…had no secure way of knowing whether students had acquired the defined skills". The inspectorate, with its new-found obsession with all things curriculum, has realised it needs something to measure schools on within that parameter. The simplicity of an inspector asking a child: "Do you know when the First World War started?" and them replying "1914" makes any assessment of curriculum impact as easy as a grandmother sweeping sugar off a cake.
So, it's my argument that there is a risk of confining teaching to a set of routines, patterns or mechanised systems - of putting teaching into a box.
There are components that need to be considered in any imposition of a "rule" based on educational research. First, there is the teacher. Every teacher has a different personality and professional identity. Due to natural character traits, some teachers might feel more comfortable teaching in a particular way, whether that be more from the front, the "sage on the stage", or more as a facilitator, the "guide on the side". There has been some suggestion that adopting practices that fall outside the realm of educational research could be "unprofessional". If a more didactic and direct approach is "better" (which it may well be), then for it to be "better", surely the teacher has to feel comfortable teaching in that way, and much of that comfort may come down to personality traits and preferences, something that shouldn't and perhaps couldn't be "trained out of them". If teachers teach in the way they feel works for them and their students in their context, this is surely better than blindly following "the research". My concern is that in focusing on meta-analyses, we forget that teaching is one of the most complex art forms in existence. Prescribing pedagogy doesn't allow teachers to develop their own style, to understand the role of pragmatic flexibility within the classroom, or to experiment and innovate - which, ironically, is often what provides the basis for much educational research in the first place.
The second component is the learning. In my view, engagement is vital to learning. Behind the most advanced learning surely lies absolute engagement? Of course, "some" learning can occur in almost any situation, but when engagement is at its peak, the potential for memorable learning increases substantially. The problem is that engagement is a proxy, often separate from learning. An engaging "prop", whether that be a video clip, an amusing anecdote or a costume, can pique a student's interest in what is to come but have minimal "impact on learning" in that given moment. A teacher who twists and turns a lesson, perhaps creating a brief "diversion" to re-establish peak engagement, is one who intrinsically understands classroom dynamics. The analogy might be the football team with a patient build-up before playing an incisive pass: learning isn't just robotic motions but a human experience attached very much to "feeling" in the moment. Again, this is something education research will always struggle to appreciate, by its very nature.
In summary, I’m arguing for a much more pragmatic and critical approach to academic research as well as a much higher profile and status for experiential research - in other words, what a teacher says is working for them and their students in their classroom.
Finally, a message to those teachers who might be feeling "research inferior". Thousands of the best teachers in the world have never read a word of educational research in their lives. Don't worry if you haven't had time to read 50 papers - in the end, it's you, your skill, your passion, your experience and your relationships that matter. Just do it.
Thomas Rogers is a teacher who runs rogershistory.com and tweets @RogersHistory