Ofsted is engaged in a fool’s errand, one that could end in disaster for the organisation and for the art and craft of inspection.
Its efforts to apply social-research methods to investigating the quality of education reveal the severe limitations of those methods, and the questionable assumptions underlying its conception of education.
Despite what Ofsted’s director of research claims, the quality of education cannot be “measured”, only appraised; it cannot be measured reliably or validly, only assessed on the basis of expertise born of extensive past experience and of partly tacit knowledge - a complexity that Ofsted’s so-called indicators cannot capture.
Misconceived
The slew of research reports summarised in Research commentary: assessing the quality of education (29 June 2019) is well meant and, by the standards of applied social science, reasonably well documented. But the reports are misconceived. The quality of education cannot be captured or characterised in such terms. It is as simple and as complex as that.
Because the quality of education involves value judgements, and because those inspecting have differing experiences, it is no wonder that the levels of reliability reported in the studies are not high. Ofsted does its best to talk up its findings: “The picture for work scrutiny showed good but not substantial levels of reliability.” Yet the reports reveal considerable differences between the HMIs involved, which, though inevitable given the nature of the enterprise, are being seized on by critics as fatal flaws.
Ofsted claims to be developing subject-specific guidance across all subjects to improve reliability, but such guidance and its accompanying training can constitute only a very small part of the extensive experience that inspectors need to bring to bear on their judgements. It is bound to be very limited in its effects. Here, as elsewhere, Ofsted offers more than it can deliver.
Scarcely mind-shattering
The research reports make much of validity: “The most important thing to get right is that we are looking at the right things” - scarcely a mind-shattering observation. The reports do not so much establish validity as proclaim it on the basis of literature on lesson-observation research and conversations with inspectors.
As with reliability, the findings are talked up: “The findings are positive, especially in schools.” To no one’s surprise, “observers clearly distinguished behaviour from teaching and curriculum,” but, more surprisingly and worryingly, “observers do not clearly distinguish” between teaching quality and curriculum quality.
What price validity then? Will “what is probably the most extensive programme of inspector training we have ever done” result in validity that is high, measurable and incontestable? Many critics would answer: “Doubtful”. I would answer: “Impossible”.
A dangerous game
By publishing these reports, Ofsted is rendering itself vulnerable to philosophers critiquing the assumptions and concepts underlying this purported scientific approach, to social scientists critiquing its methodology and its conclusions, and to the many inspectors and teachers who view teaching as an art, not a form of applied social science.
In putting educational research on the same footing as inspectorial expertise, Ofsted is playing a dangerous game. The inevitable limitations of social science in dealing with the issue of quality will come back to haunt it, and will undermine the essence of inspection - best viewed as a tentative, rigorous, professional but subjective appraisal by those who, through wide experience and collective judgement, have demonstrated what can best be described as “educational connoisseurship”.
Rather than strengthening the art and craft of inspection, the kind of scientism embodied in these reports may end up destroying its credibility, along with Ofsted itself.
Colin Richards is a former senior HMI, described by one of his critics as an “old-fashioned” inspector