What makes good evidence? The EEF explains

As education research filters down into classroom practice, there are four key questions every leader and teacher needs to ask when looking at a new ‘evidence-based’ approach, says Becky Francis
28th September 2022, 12:00pm

The phrase “evidence-informed” is becoming increasingly prevalent in education circles. This is undoubtedly a good thing, as we know that robust evidence helps teachers and school leaders to deepen their understanding of the practices that are most effective for pupils’ learning.

Research also helps us to identify, modify or put a stop to approaches that are shown to have no impact, or even a negative effect, on children’s progress.

However, it’s crucial that teachers, school leaders and policymakers maintain a healthy scepticism of all that claims to be “evidence-informed”. More often than not, the research base shows that new initiatives prove to be no more effective than what schools were already doing.

So, how can we interrogate the evidence underlying a given approach? And what are the important things to look for in order to establish whether the claims being made around its effectiveness are accurate?

Education research: what makes a good piece of evidence?

“Evidence” is a broad term, and means different things to different people. Therefore, identifying credible sources of evidence that meet certain quality standards is essential. There are several factors that we can look for when judging the relevance and quality of evidence:

Methodology: do the methods used support the claims made?

Research methodologies should be designed to answer the questions that the researchers set out to address.

For example, qualitative research and case studies can tell us much about how things are experienced and processed, but should not be used to judge the effectiveness of an approach.

In contrast, experimental research such as randomised controlled trials (RCTs) or quasi-experimental designs can assess the impact and effectiveness of certain teaching practices or programmes. But by themselves, they tell us less about the “how” or “why”.

It is for these reasons that the Education Endowment Foundation prioritises experimental studies (usually RCTs) but routinely commissions accompanying implementation and process evaluations (IPEs) to delve into the potential explanations for the outcomes of an approach.

In addition, single experimental study results may sometimes be outliers, so looking at reviews that include many studies can give a more balanced view of the evidence base.

High-quality reviews use transparent processes for collating and rating studies, often applying objective criteria to avoid cherry-picking evidence. Systematic reviews, in particular, aim to provide a comprehensive and transparent overview of the evidence.

Study quality: how reliable is the study?

The quality of experimental research that purports to tell us about the impact of a particular approach can be assessed in a number of ways.

The extent to which the researchers have an interest in the results can influence the findings; independent evaluations (where the evaluator has no stake in the outcome) should therefore be seen as more reliable than those that are not independent.

Checking that the methods and research questions were pre-specified (that is, written down before the research was carried out) is another useful way of assessing rigour.

Finally, only studies with large sample sizes can justifiably generalise their findings to the wider population, as big samples provide the “statistical power” needed to estimate both the impact and the certainty of the result.
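As a rough illustration of why sample size matters, the standard normal-approximation formula for a simple two-arm trial shows how many pupils per group are needed to detect an effect of a given size reliably. This is a simplified sketch, not the EEF’s own power calculation (which also accounts for clustering, covariates and other design features), and the function name is purely illustrative:

    from scipy.stats import norm

    def pupils_per_arm(effect_size, alpha=0.05, power=0.80):
        # Approximate sample size per group for a simple two-arm trial
        # comparing mean outcomes, using the normal approximation:
        # n ~ 2 * (z_alpha/2 + z_beta)^2 / d^2
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
        z_beta = norm.ppf(power)           # desired statistical power
        return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

    # Detecting a "small" effect of 0.2 standard deviations (common in
    # education research) needs roughly 390 pupils in each group.
    print(round(pupils_per_arm(0.2)))  # ~392

In other words, a study of a few dozen pupils simply cannot detect the modest effects that most classroom interventions produce, which is why small studies should not be used to generalise.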

At the EEF, independent peer reviewers rate study quality using padlock icons, which indicate how secure the findings are. The methodology behind the assignment of padlocks takes into account important considerations of study quality, including design, balance between treatment and control groups, attrition (participant dropout), statistical power and other threats to validity.

Results: what effects did the study find?

When reading the results of a piece of research, it’s important to consider whether the conclusions made are supported by the data collected.

If the data collected was qualitative (eg, interview data or teacher observations), it would not be appropriate for the researchers to claim this as evidence that one approach is more effective than another, or that it had an effect on learning, although the study may tell us much about respondents’ perceptions and experiences of a particular approach.

If the data collected was quantitative (numerical data from surveys, exam results and so on), it is important to check that sample sizes are adequate, that statistics are not “spun” (with key trends ignored in order to “tell a story”), and that conclusions do not unjustifiably infer causality.

If the quantitative data was collected to measure outcomes in an experimental study, it’s important to check that the impact measure (or “effect size”) actually points in the direction claimed, that confidence intervals are reported, and that the magnitude and uncertainty of the results are fairly communicated in the conclusions.
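To make those two quantities concrete, here is a minimal sketch of how an effect size (Cohen’s d) and an approximate 95 per cent confidence interval can be calculated from treatment and control group scores. It is illustrative only: EEF evaluations use more refined estimates (for example, adjusting for clustering and baseline attainment), and the function name here is hypothetical:

    import numpy as np

    def effect_size_with_ci(treatment_scores, control_scores):
        # Cohen's d: difference in group means divided by the pooled
        # standard deviation, plus an approximate 95% confidence interval.
        t = np.asarray(treatment_scores, dtype=float)
        c = np.asarray(control_scores, dtype=float)
        n1, n2 = len(t), len(c)
        pooled_var = ((n1 - 1) * t.var(ddof=1) + (n2 - 1) * c.var(ddof=1)) / (n1 + n2 - 2)
        d = (t.mean() - c.mean()) / np.sqrt(pooled_var)
        # Standard approximation to the standard error of d
        se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
        return d, (d - 1.96 * se, d + 1.96 * se)

A positive effect size whose interval excludes zero suggests a genuine benefit; a wide interval that spans zero means the result is too uncertain to support strong claims.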

Context: how relevant is the evidence to your context?

While robust research evidence can tell us what has worked in the past in certain contexts, consideration should be given to how similar the environment in which the research was conducted is to our own classrooms, schools and systems.

The EEF’s Teaching and Learning Toolkit does not make definitive claims as to what will work everywhere, and communicates the phases, subjects and countries underpinning the evidence for each approach to help users assess relevance to their contexts.

You might consider:

  • Where did the research take place, and how similar is that to my own context?
  • Was the outcome used to measure impact the same outcome that I’m looking to improve and affect?

Applying professional judgement

These criteria can help us to establish standards of evidence quality when evaluating classroom practices and programmes. It is nevertheless essential to exercise professional judgement when assessing the rigour with which research evidence has been produced, and its relevance to your context. Thinking about the feasibility of implementing evidenced approaches in your environment is one way of doing this. It may be helpful to ask yourself:

  • Will the approach need to be adapted to fit my local context?
  • How much organisational capacity might it require to embed the practice? Can we afford to make this commitment?
  • Are teachers and others likely to want to adopt the practice?

Educational research is not a silver bullet, but when combined with professional judgement and the appropriate processes for bringing about change in schools - as laid out in the EEF’s Implementation Guidance - high-quality evidence can act as an important catalyst for expertise, confidence and autonomy within the teaching profession.

For research evidence to have this positive and empowering effect, though, it’s essential that teachers, school leaders and policymakers question, critique and discriminate between the many “evidence-informed” claims that we encounter.

Professor Becky Francis is chief executive of the Education Endowment Foundation
