‘Cumbersome performance tables need a rethink’
The post-16 performance tables were published last week without much fanfare. They no longer seem to be the stuff of national debate or broadsheet supplements, probably because they have got so complicated. Performance tables are an important part of our accountability system and they were established to help inform students, parents and other stakeholders and to be used alongside wider contextual information. But the measures need to be reviewed if they are to fulfil their original purpose.
There is a large and rich data set on student and institutional performance, and it is right that it should be widely available. Good education policymaking can only benefit from openness and transparency. However, the sheer quantity of data can be a barrier to public understanding. The rapid growth in the number of measures also seems to run counter to Ofsted’s direction of travel and its concern that the emphasis on these can be at the expense of teaching and learning.
Performance tables ‘inaccessible’
Performance tables are supposed to be clear and accessible and to provide a reliable and comprehensive picture of the performance of the education system. Despite the mass of useful and interesting data they provide, our current post-16 performance tables fall short in several ways.
The performance tables have become a complex web of big data as more measures have been added over the years. The summary printout for a single college or sixth form runs to 11 pages; hardly a user-friendly checklist. There are now at least 90 measures to report for every 16-18 provider. Twenty-seven of these are described as headline measures grouped under 10 main headings, with 36 additional measures and 27 cohort measures. Some of these, such as those relating to “facilitating” A levels and AAB grades at A level, are of questionable value. Others, such as the “progress measure” for English and maths, are opaque and very distant from what they are measuring.
This means there is a real danger of not seeing the wood for the trees. The information is not easy for a non-specialist readership to interpret and this can create confusion. If the data is presented selectively, there is also the risk of oversimplistic interpretations or half-truths gaining currency.
Learners missing from the tables
The performance tables do not include the achievements of all 16- to 18-year-old students. Those on level 1 and entry-level courses are simply invisible and, at level 3, the reform of applied general qualifications and the decision to exclude the “unreformed” qualifications from the performance tables mean that most students on this type of qualification are also not included. This is a major omission: the tables totally discount the achievements of over 40,000 students on well-established advanced courses which are funded by government and lead to good progression, including to higher education and employment.
Too much emphasis on institutional performance
The way the data is presented tends to focus attention on the performance of individual institutions, regardless of their size. When making comparisons, it is harder to see the relative contribution of larger colleges or those with inclusive intakes. For instance, a small, selective sixth form may have higher average point scores than its larger, more inclusive neighbour, but the larger provider may actually have more high-achieving students and be making a greater contribution to the performance of the local system.
This means that performance tables can focus our attention on the performance of “outlier” institutions rather than helping us to see important evidence about the performance of the system as a whole. They also make it hard to see gaps in provision or lack of course choice in an area.
Towards better tables
Despite all these shortcomings, the data in the tables can be used to help us better understand institutional and system performance. They support local and national benchmarking and allow institutions to track changes over time.
We do need to measure educational performance, but we need to be sure that what we are measuring represents what we value. The data collected should be shared widely, but not obsessed over. Data can inform judgements about the effectiveness of the complex process of education, but it only tells part of the story. In terms of Ofsted’s new inspection framework, data helps to inform “curriculum impact”.
Our current performance tables contain much interesting data, but they have grown cumbersome and are ripe for review. The Association of Colleges has been calling for changes to the measures to make them more contextual and easier to administer and we are keen to support any process which helps to streamline them. Once we’ve settled on the right measures, we need to resist the temptation to keep changing them; stable indicators allow us to track the impact of policy and get a longer-term view of the system.
England’s performance tables are an impressive display of what is possible in data collection and sharing on a large scale, but they have yet to meet the key tests of coherence, inclusiveness and simplicity. Her Majesty’s Chief Inspector, Amanda Spielman, has said she wants to “actively discourage unnecessary data collection” and that is an aspiration we can surely all agree with.
David Corke is director of education and skills policy at the Association of Colleges, and Eddie Playfair is its senior policy manager