Leadership - Before you act, test the evidence
How many education debates on Twitter or fevered TES Connect forum discussions turn on a participant triumphantly unveiling a piece of “evidence” and behaving as if it answered everything? And how many school leaders roll out an initiative on the strength of one piece of “evidence” alone?
In both cases, seemingly almost all of them. This is often followed by everyone else wilting in the face of the “facts” and the debate fizzling out. This is disappointing since that evidence may in reality be completely worthless.
Evidence comes in many shapes and sizes. It can be quantitative (numbers, percentages, charts and significance levels) or qualitative, where the reader has to be aware that there is an interpretative and subjective element in the analysis. Ultimately, research answers the questions the researcher wishes to find an answer to and sometimes those questions are a “fit” with practice and sometimes they are not.
So what do school leaders and teachers need to look for in evaluating research?
Beware of generalisations
Increasingly, we see policy initiatives that are drawn from surveys where the sample sizes are in the thousands. Often it is presumed that the data must be representative because of the size of the sample. But this is not necessarily the case. Such surveys may be biased towards a particular socio-demographic group. As a result, the findings will reflect only what has happened to those participants and not the experiences of the general population. Indeed, any attempt to extrapolate the findings to the wider school community will be flawed.
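To see why sheer sample size does not guarantee representativeness, here is a minimal sketch (the groups, rates and response probabilities are all hypothetical, invented purely for illustration). It simulates a population in which one socio-demographic group is five times more likely to respond to a self-report survey, so a survey of many thousands still overstates the true rate.

```python
import random

random.seed(42)

# Hypothetical population: 20% of pupils in group A, 80% in group B,
# with different underlying rates of some self-reported outcome.
population = [("A", random.random() < 0.6) for _ in range(20_000)] + \
             [("B", random.random() < 0.2) for _ in range(80_000)]

true_rate = sum(flag for _, flag in population) / len(population)

# A self-selecting survey that over-represents group A:
# A members are five times as likely to respond as B members.
survey = [flag for group, flag in population
          if random.random() < (0.5 if group == "A" else 0.1)]

survey_rate = sum(survey) / len(survey)

print(f"Survey size:          {len(survey)}")       # many thousands of responses
print(f"True population rate: {true_rate:.2f}")     # roughly 0.28
print(f"Biased survey rate:   {survey_rate:.2f}")   # noticeably inflated
```

Despite a sample in the tens of thousands, the survey estimate is pulled towards the over-represented group; extrapolating it to the wider school community would be flawed, exactly as the paragraph above warns.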
Similarly, participants in self-report surveys often respond from personal interest, so without corroborating and moderating evidence from teachers and parents their experiences may not be representative.
Many such surveys also require an element of retrospection from participants, which can cause problems in establishing credibility. Studies of bullying used to ask students to report incidents that had occurred “in the past term”, or in the past 30 or seven days. But just how well do we remember events of the past 24 hours, let alone those of the past week, month or year?
Retrospective studies have been the subject of much debate in terms of the accuracy of memory. Some argue that we embellish negative memories, others that we begin to forget the detail. In my research, I have found that recollections of past events tend to remain stable over time, but that does not mean that they have not been embellished or that only key points are stored in long-term memory.
The gold standard of research is purported to be the randomised controlled trial (RCT). Here, potential participants are assessed for eligibility (a particular demographic feature or, in the case of education, attainment profile) and are randomly allocated to either an experimental group (where an intervention will be introduced) or a control (where the intervention will be withheld). At a given point, both groups will be followed up or tested and the data analysed to determine whether the intervention has had an impact. In some cases, the studies are “blind”, so that the investigators, participants and analysts do not know which group experienced the intervention.
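The allocation step described above can be sketched in a few lines. This is a simplified illustration, not a trial protocol: the function name, the pupil identifiers and the 50:50 split are all assumptions for the example.

```python
import random

def allocate_rct(participants, seed=0):
    """Randomly split eligible participants into intervention and control arms.

    Random allocation (rather than letting teachers or researchers choose)
    is what removes selection bias from the comparison.
    """
    rng = random.Random(seed)       # fixed seed so the allocation is auditable
    shuffled = participants[:]      # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (intervention arm, control arm)

# Hypothetical eligible cohort of 20 pupils.
pupils = [f"pupil_{i}" for i in range(20)]
intervention, control = allocate_rct(pupils)
```

In a blinded trial, the mapping from pupil to arm would additionally be withheld from investigators, participants and analysts until the data had been collected.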
Although RCTs eliminate selection bias, they can present some ethical dilemmas. For example, in an RCT introducing an intervention to improve maths attainment, at what point should a control group also receive the intervention if the signs are that it is having a positive effect on test scores? Some researchers argue that adaptive trials, where efficacy is assessed early, are helpful so that a control group can also benefit from the intervention, albeit after a short delay.
Context and nuance
Finally, qualitative studies provide much-needed clarity. Small, well-structured, interview-based or ethnographic studies can provide a wealth of information about context and nuances that is often missing from bare percentages or significance levels. Here, through the analysis of transcripts, the reader is given an opportunity to understand the experiences of participants or the observations of the researcher and also to question the interpretation of data. Although representativeness is not the goal of qualitative research, saturation (where the same issues reappear across interviews) provides an element of surety that the researcher has explored the issue in sufficient depth to identify issues that may be transferable.
So, the next time you read a piece of research thrown at you as justification for an initiative or to end an argument, ask yourself the following questions:
- What question is this research seeking to answer?
- Who participated?
- How was the sample identified and is it representative?
- Is the data current?
- Were participants randomly allocated to groups?
- What are participants telling me?
- Do I agree with the researcher’s interpretation?
- How can I use the results in my own practice, and can they be used in the practice of my staff?
It is all too easy to accept “evidence” as gospel when, in reality, it can be far from it.
Ian Rivers is professor of human development and head of the School of Sport and Education at Brunel University, London. He is also a visiting professor in the Faculty of Health, Social Care and Education at Anglia Ruskin University and in the School of Psychological Sciences and Health at the University of Strathclyde
What else?
A research director puts his view on gathering evidence for interventions.
bit.ly/DissentingVoices
From brain scan to lesson plan: TES Connect investigates public funding for education research.