The number of ‘good’ and ‘outstanding’ schools has certainly jumped. But can the government really claim credit?

It’s probably true that the number of pupils in ‘good’ or ‘outstanding’ schools has rocketed, but there’s no evidence that government policy is responsible. Quite the opposite, argues one veteran education journalist
16th May 2017, 5:12pm


https://www.tes.com/magazine/archive/number-good-and-outstanding-schools-has-certainly-jumped-can-government-really-claim

It is the Conservatives’ central claim about their education record. It was the first statement about public services Theresa May made in an interview with the BBC’s Andrew Marr last month.

But does what the prime minister told Marr - that “it’s because of the decisions that [the] government took that we now see...1.8 million more children in ‘good’ or ‘outstanding’ schools” - bear scrutiny?

I have been wondering about this for a while, struck by how a set of statistics generated for one purpose is now being appropriated for another - with remarkably little challenge - by politicians.

The proportion of schools adjudged “good” or “outstanding” has indeed climbed dramatically since 2010.

Back then, 67 per cent of primary schools and 64 per cent of secondaries were rated in Ofsted’s top two categories. By 2016, the figures had jumped to 90 per cent and 78 per cent respectively.

The figure of 1.8 million more pupils in “good” or “outstanding” schools, within a total pupil population of 8 million last year, sounds plausible.

But then things get more complicated.

Measures of satisfaction

Last year, I attended a conference where Ofsted’s director of schools, Sean Harford, gave a run-down of recent inspection data. The graphs he showed were very interesting.

The percentage of schools judged “good” or “outstanding”, in both the primary and secondary sectors, stayed broadly flat for the first two years of the coalition government. Then, in 2013, the graphs spiked - especially in the primary sector, where the proportion of schools in Ofsted’s top two categories at their last inspection jumped from 69 to 78 per cent. In secondaries, the figure rose five percentage points, from 66 to 71 per cent.

Since then the figures have continued to rise, but more steadily.

So what happened? Well, there is a maxim in the assessment world: “If you want to measure change, don’t change the measure.” But Ofsted’s measure of success did indeed change in 2012-13.

That was when a new inspection framework was introduced which replaced the old “satisfactory” grade with “requires improvement”.

Did this, rather than any government policy, cause this seemingly quirky jump in the statistics? It seems likely. Sir Michael Wilshaw, the then-chief inspector, said at the time that school leaders had taken the new grade as a signal that they would “not put up with second best,” driving improvements from staff.

Another anecdote I’ve heard is that inspectors had proved more reluctant to hand out the more damning “requires improvement” grade than the previous, seemingly less critical, “satisfactory” verdict. Either of these explanations would suggest a rise in the figures that had nothing to do with government policy.

Struggling schools receive more of the spotlight

The kinds of schools on which Ofsted inspections focus also need closer scrutiny. Understandably, given that the inspectorate has limited resources, it has directed most of its attention at schools with relatively poor past inspection grades, with the aim of spurring improvement. As a result, far more schools with previously poor grades are inspected each year.

For example, of those schools inspected in 2015-16, only a quarter went into the inspection with a previous rating of “good” or “outstanding”, with two-thirds rated “RI” and 8 per cent “inadequate”.

This means that while the proportion of schools previously rated "good" or "outstanding" is unlikely to change much in any given year, there is far greater scope for previously "RI" schools to nudge up into the "good" category, simply because more of these formerly struggling schools are being inspected.

To put it another way, only 524 schools had the chance to lose their Ofsted rating of “good” or better in inspections in the last academic year, while 1,529 institutions had the potential to gain it. No wonder the stock of “good” or “outstanding” schools keeps rising.

By contrast, some “good” or “outstanding” schools are going many years without being inspected. As of August last year, some 1,473 schools - all rated “good” or better - had not been subject to any full Ofsted inspection since summer 2010 or before.

So while struggling schools get inspections that give them the chance to improve, some previously successful institutions are not subjected to re-inspections where the ratings could fall. (Also, previously struggling maintained schools can convert to academy status and see their negative Ofsted ratings taken out of the statistics completely.)

Improvements preceded policies

Moving beyond statistical quirks, Ms May’s claim that government policy has driven Ofsted improvements must, of course, be treated sceptically. Obviously, professionals would point out that their hard work, rather than policy initiatives - which must seem to have, at best, served as energy-sapping distractions to many of them - should be seen as the principal reason for any improvements.

Beyond that, it is worth noting that the spike in inspection grades in 2012-13 came before most of the Conservatives' big policy changes - such as the new national curriculum in primaries and the assessment reforms in secondaries - had come into force.

Meanwhile, as Henry Stewart, of the Local Schools Network, has argued, Ofsted’s stats have improved much more sharply in the primary sector, where there are fewer academies, than in secondaries.

The broader point is that Ofsted's systems were not designed as a mechanism for holding politicians to account. There must be questions, also, over their reliability as a gauge of the quality of the system as a whole: few statisticians would regard pronouncements about overall standards based on such samples as good social science, given that the schools inspected in any one year are clearly unrepresentative of the wider population.

But it seems to be what we get from politicians, especially during an election campaign. Why do we put up with it?

Warwick Mansell is a freelance education journalist and author of Education by Numbers. You can read his back catalogue here. He tweets @warwickmansell

