Teachers should be wary of taking reading age as read
I am 35 years old. No wait, maybe I’m 14 years and three months old. Or am I really 15? When calculating my reading age, it appears to be difficult to get a conclusive answer. Different tests tell me different things, sometimes with more variation than is comfortable. And what does “reading age” even mean?
Reading ages, of course, are a lot more complicated than actual ages. For starters, there is a huge number of tests available for determining a reading age, from simple free tests that involve getting a student to read out a list of words of increasing difficulty - the number they can read correlating with a reading age - to paid-for tests that calculate reading ages, among other metrics, by measuring various “reading data points”.
That last part might sound impressive, but digging into those data points can mean grappling with terms such as “standard deviations” and “confidence intervals”, along with swathes of graphs (data heaven or data hell, depending on which way you lean).
That’s just the start of the complexity. So how can teachers make sense of it all? And how useful are reading ages in reality?
What we need is a reading expert. Step forward Dr Jessie Ricketts, director of the Language and Reading Acquisition research laboratory at Royal Holloway, University of London, who studies reading development in childhood and adolescence.
Ricketts is quick to point out that the complexity of reading ages is only one of the problems that teachers are likely to encounter when referring to them: there are plenty of other issues with the information that reading ages provide, she says.
“Using reading ages (or other age equivalents) is highly problematic because it rests on the idea that there’s a clear expectation for how well a child should be reading at any given age. While this is broadly true, it doesn’t boil down to one level of reading but rather to an expected range,” she explains.
To illustrate this, she gives the example of 100 nine-year-olds reading 30 words.
“All of them can read at least one word (the easiest is ‘bed’) and some can read all the words (the hardest is ‘chassis’),” she says.
“On average, they read 20, and most (around 68) read between 17 and 23. In this example, a score of 20 would give you a reading age of 9, but that doesn’t acknowledge that it’s perfectly typical for nine-year-olds to get anything between 17 and 23.”
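For those who want to see the arithmetic, here is a minimal sketch of the statistics behind that example. Ricketts’s figures - a mean of 20, with around 68 of the 100 children scoring between 17 and 23 - are consistent with a roughly normal distribution with a standard deviation of 3 (about 68 per cent of values fall within one standard deviation of the mean). The normality assumption is ours, for illustration only.

```python
# A minimal sketch of the statistics behind Ricketts's example, assuming
# the scores are roughly normally distributed (our assumption): mean 20,
# standard deviation 3, so about 68% of nine-year-olds score 17-23.
from statistics import NormalDist

scores = NormalDist(mu=20, sigma=3)  # words read correctly, out of 30

# Proportion of nine-year-olds expected to score between 17 and 23
within_one_sd = scores.cdf(23) - scores.cdf(17)
print(f"Share scoring 17-23: {within_one_sd:.0%}")  # ~68%

# A score of 17 sits at roughly the 16th percentile - still well inside
# the typical range, yet a reading-age conversion would flag the child
# as reading "below" their age.
print(f"Percentile for a score of 17: {scores.cdf(17):.0%}")
```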
The fact that reading ages operate on a kind of sliding scale within an expected range makes it problematic to say things like “this student is three months behind” or “six months ahead”, Ricketts adds, because things are simply not that clear cut.
“Imagine the same reading test was administered to five-year-olds in Year 1,” she says. “For this age group, most children can read at least one word, some can read as many as 25. The average is eight and most read between three and 13.
“Because word reading abilities are changing so rapidly at this age, there tends to be a particularly wide variation, so a child who is ‘six months behind’ will look very different at 5 compared with 9.”
Not only is it not strictly accurate, says Ricketts, to suggest that a child is “six months behind”, but sharing this information with a child or their parents can create stigma. It could also unfairly skew the teacher’s view.
The trouble with reading ages
Interestingly, the people behind some of the most popular reading tests agree that there needs to be caution around the use of their reading age metrics as a definitive measurement.
Crispin Chatterton is director of education at GL Assessment, which offers popular reading tests, including the New Group Reading Test (NGRT) and the York Assessment of Reading for Comprehension (YARC). Both of these tests generate reading ages as one of their metrics, yet Chatterton’s advice to schools is not to use these ages in isolation.
“When working with schools, we say, ‘Don’t use the reading age if you can avoid it,’” he says.
Margaret Allen, curriculum and education specialist at Renaissance, the company behind the popular Star Reading assessment, takes a similar stance. “I would always encourage people to look at much more than just the reading age, as the reading age in isolation does not paint the whole picture,” she explains.
So, while such tests do generate reading ages, these are not the metrics that their creators really want teachers to focus on.
Star Reading, for example, gives students a “scaled” score, calculated from the number of questions they answered correctly and how difficult those questions were. The scale runs from 0 to 1,400.
“This scaled score is a much stronger metric - one reason being that it is very sensitive to movement,” says Allen.
For instance, a scaled score anywhere between 105 and 114 on Star Reading translates to a reading age of 6.6. So a pupil might be making progress - climbing from 105 to 110 to 113 - but their reading age wouldn’t reflect this. This, Allen notes, makes the reading age less useful for tracking progress and seeing whether an intervention is working.
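To make that concrete, here is a hypothetical sketch of how a banded conversion hides progress. The 105-114 band mapping to a reading age of 6.6 is the one Allen cites; the neighbouring bands are invented for illustration and are not Renaissance’s actual conversion table.

```python
# Hypothetical banded conversion from scaled score to reading age.
# Only the 105-114 -> 6.6 band comes from the article; the others are
# invented for illustration.
BANDS = [
    (95, 104, 6.5),   # invented
    (105, 114, 6.6),  # cited by Allen
    (115, 124, 6.7),  # invented
]

def reading_age(scaled_score: int) -> float:
    """Map a scaled score to a reading age via a banded lookup."""
    for low, high, age in BANDS:
        if low <= scaled_score <= high:
            return age
    raise ValueError("score outside these illustrative bands")

# A pupil climbing from 105 to 110 to 113 shows clear movement on the
# scaled score, but the reading age reports 6.6 every time.
for score in (105, 110, 113):
    print(score, "->", reading_age(score))
```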
That’s not to say, however, that age-based comparisons don’t come into these assessments. Most of the more sophisticated reading tests, such as Star Reading, NGRT, and YARC, provide multiple metrics including a standardised score (or age-standardised score), which allows you to compare a child’s results to other students of exactly the same age - but again, these metrics are more sensitive to movement than a reading age.
Many will also provide metrics such as percentile scores - for example, if a student is in the 85th percentile for their age group in the country, they’re performing better than 85 per cent of that population. (Ricketts encourages teachers to use percentiles as they are “intuitive, like a reading age” but avoid the issues she sets out above.)
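As a rough sketch of where a percentile comes from: age-standardised scores are conventionally scaled to a mean of 100 and a standard deviation of 15 (a widespread convention, though any individual test’s manual should be checked). Assuming a normal distribution, converting a standardised score to a percentile takes one line.

```python
# Converting an age-standardised score to a percentile, assuming the
# common mean-100, SD-15 scaling and a normal distribution.
from statistics import NormalDist

standardised = NormalDist(mu=100, sigma=15)

def percentile(score: float) -> float:
    """Share of same-age peers expected to score below this score."""
    return standardised.cdf(score) * 100

print(f"Score 100 -> {percentile(100):.0f}th percentile")  # average
print(f"Score 115 -> {percentile(115):.0f}th percentile")  # ~84th
print(f"Score 85  -> {percentile(85):.0f}th percentile")   # ~16th
```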
Arguably, another advantage of a numerical scoring system is that telling a 15-year-old that they have a score of 283 may feel less loaded than telling them they read like an eight-year-old.
Basically, metrics other than a reading age can provide a more nuanced picture of who the child is as a reader - and this picture is of much more use to a teacher, says Nicola Mansfield-Niemi, assistant head and reading lead at a primary school in Milton Keynes.
“Reading is such a complex subject that having an age or level isn’t of any interest to me. I need to know the exact skills a child does or doesn’t have,” says Mansfield-Niemi.
So, if these tools provide the type of nuanced feedback that teachers actually want, why bother including reading ages at all? Why offer the temptation to simplify the assessment, with potentially damaging consequences?
Allen and Chatterton both explain that their companies use them partly because people expect them and also because they’re handy for an at-a-glance sense of someone’s reading ability.
“A scaled score can be more difficult for people to comprehend. What does 672 mean? It’s better than 653, but what does 653 mean? If you’re looking for a gut reaction to where this child is operating, you can use reading age. But we encourage people to look at the bigger picture,” says Allen.
A useful analogy: you can work out your general direction of travel by looking at the position of the sun (in this case, the reading age). But, to really understand your trajectory, you need to whip out a compass (the other reading-assessment metrics).
But what should that “compass” consist of and when should it be used? Turns out that it depends on the test.
The way forward
In the case of assessments such as Star Reading, NGRT and YARC, the skills being tested - and when they should be acquired - have been based on, or mapped against, the learning objectives of the national curriculum.
For example, the NGRT assesses skills such as phonic knowledge, inference skills, grammatical knowledge, and ability to deal with figurative and idiomatic language, depending on what age the test is targeted at.
Another popular tool - the Lexile Framework for Reading - uses a different “yardstick” for its assessments.
“We built an algorithm that analyses text looking at hundreds of different variables, such as difficulty of vocabulary (eg, the abstractness vs concreteness of words; the frequency with which that word appears in our language), the complexity of syntactic structure and the structure of the book,” explains Malbert Smith, chief executive and co-founder of MetaMetrics, the company behind Lexile measures.
“This generates a Lexile level for the text.”
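MetaMetrics’ actual algorithm is proprietary, but a toy sketch can show the general shape of such tools: combine a word-level signal (here, rarity against a tiny frequency list) with a sentence-level one (mean sentence length). The word list and weights below are invented for illustration and bear no relation to real Lexile calculations.

```python
# A toy text-complexity scorer - NOT the Lexile algorithm, just the
# general shape: rarer vocabulary and longer sentences push the score up.
import re

# A tiny stand-in for a real word-frequency corpus (invented)
COMMON_WORDS = {"the", "a", "and", "to", "of", "in", "it", "was", "he",
                "she", "that", "is", "on", "for", "with", "as", "at"}

def toy_complexity(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    if not sentences or not words:
        return 0.0
    rare_share = sum(w not in COMMON_WORDS for w in words) / len(words)
    mean_sentence_len = len(words) / len(sentences)
    # Invented weights for illustration only
    return round(100 * rare_share + 10 * mean_sentence_len, 1)

print(toy_complexity("The cat sat on the mat. It was happy."))
print(toy_complexity("Residual socioeconomic stratification complicates "
                     "longitudinal measurement of adolescent literacy."))
```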
Is it possible to say one way is better than another when it comes to testing reading? Not really, suggests Ricketts, particularly when you are talking about how well pupils understand what they have read. “How you measure a child’s reading ability in terms of comprehension is not obvious because it’s so multifaceted,” she says.
Instead, she suggests, any test should be seen in the context of what is actually being assessed, and judgements based on it should be caveated accordingly.
And it’s important to remember that no single assessment gives a complete picture, says Chatterton. “Any reading assessment is only a tool in a teacher’s hand,” he says.
“If a teacher has their students doing a test, they will know that Ethan actually drifted off halfway through because there’s stuff going on at home that means he’s not getting enough sleep. No test can tell you the context around that child. That’s down to the teacher’s expertise.”
So, let’s assume you’ve chosen a reading assessment that you think will suit your class. They’ve completed the test, you’ve got their results, including those “at-a-glance” reading ages…now what?
One of the attractions of reading ages is the theory that if you know a child’s level, you can match them up with books that have the right degree of challenge for them to progress with their reading. In addition, it can be seen as a way of pitching classroom materials at the right level.
“In the past, we have shared students’ reading ages with all the teachers, so even if they’re teaching history or geography, they’ve got an idea of students’ reading ability,” says David Bunker, an English teacher and literacy lead at a secondary school near Bristol.
But if books can have reading ages, too, how are those calculated?
Florentyna Martin, children’s buyer at Waterstones, explains that book publishers give a recommended age range for children’s books, which the buyers sense check before categorising the books in store.
“A lot of this will be based on curriculum levels and teacher guidance on reading ages. This can be impacted by complexity of words and sentence structures, and length of book,” she says.
Structured reading schemes, which are used in many primary schools, are another obvious example of matching students to books. However, these tend to give books “levels” or “bands” rather than reading ages (although there’s often an approximate age equivalent).
Schools and publishers have different systems for categorising books, but some will use tools such as the Lexile Analyzer or ATOS (a tool from Renaissance that works in a similar way) to assess text complexity.
Smith argues that Lexile levels - which have been calculated for all kinds of text - can be helpful in many ways.
“If a student is struggling with reading but is interested in becoming, say, a nurse, I have seen a lot of school counsellors say, ‘Here’s the material you’re going to have to read to be a nurse or to get your driver’s licence. It’s a Lexile level 1400 and you’re at 700, so let’s work on getting your level up,’” he says.
Teachers can also run their materials through tools such as the Lexile Analyzer or ATOS Analyzer to sense check their complexity. Or, get this: for a rough gauge, you can check the “readability statistics” on any Microsoft Word doc, which gives a reading ease score and a (US) grade equivalent. Who knew?
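Those Word statistics are the Flesch Reading Ease score and the Flesch-Kincaid (US) grade level, both long-published formulas, so they can be reproduced in a few lines. The syllable counter below is a crude vowel-group heuristic, so expect its output to drift slightly from Word’s own counts.

```python
# Flesch Reading Ease and Flesch-Kincaid grade level - the two public
# formulas behind Word's readability statistics. The syllable counter is
# a rough heuristic, so results will differ slightly from Word's.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)  # words per sentence
    spw = syllables / len(words)       # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return round(reading_ease, 1), round(grade_level, 1)

ease, grade = flesch_scores(
    "Reading ages are harder to pin down than many teachers expect, "
    "because every test measures something slightly different."
)
print(f"Reading ease: {ease}, US grade level: {grade}")
```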
Content is key
Of course, judging the reading age of a text isn’t just down to its complexity, as shown by the unexpected results algorithms can throw up. “In an online tool we use to measure book levels, The Road by Cormac McCarthy comes out at a lower level than some Diary of a Wimpy Kid books,” notes Bunker. The Road, let’s remember, is a post-apocalyptic novel that features cannibalism.
Clearly, the content of a book also needs to be considered. Martin notes that this is part of a publisher’s and a buyer’s considerations.
“For example, we look at character ages and if the book is aspirational,” she says. So a 13-year-old character may sit in the 9-12 range “if we believe the character is suitable for students to look up to”.
However, judging the age-appropriateness of content isn’t an exact science, according to Sally Dring, a librarian at a grammar school in North Yorkshire.
“Every student is different,” she says. “That’s where librarians come in. It’s knowing the stock, knowing the pupils, knowing what’s right for an individual.”
Mansfield-Niemi agrees that staff expertise is key when it comes to helping children to pick their next read - and that reading ability is only one factor that will influence this choice. “Teachers not only know their children’s ability but also what their children want to read about. A lot of the desire to read comes from what the children are interested in,” she says.
However, finding the right book can be especially tricky when a student’s reading ability is way out of line with the books dubbed appropriate for their age.
“There’s been a history of giving, say, a 14-year-old who’s a poor reader books designed for four-year-olds,” says Ricketts.
This, unsurprisingly, is not the best way to encourage a reluctant teenage reader to read. But the picture is improving, suggests Martin, with some book publishers now actively developing “titles that feature teen and young adult topics written in clear, approachable writing”.
The key message here, then, is that reading ages alone should not be changing what you do in the classroom. Well-contextualised reading assessments, used as part of a holistic picture of a child, can be useful - but they should not limit you either. Arguably, there are times when it’s good to let everything go - reading ages and all.
For Dring, her school’s library is broadly a reading-age-free zone. “I don’t really like to put an age range on a book,” she says. “We have junior fiction and senior fiction sections. However, if a Year 7 student comes with a book from senior fiction and I don’t think it has inappropriate content, I would let them take it.”
Similarly, Dring has no qualms about an older student choosing a simple book. “Librarians get mad when parents send students back and tell them to choose a ‘proper book’. We want them to be reading for pleasure, not just set texts. And what’s a ‘proper’ book anyway?”
Mansfield-Niemi agrees. “Reading isn’t just being able to decode. It’s not just a skill, it can be an escape,” she says. And there’s no age limit on that.
Jessica Powell is a freelance journalist
This article originally appeared in the 8 January 2021 issue under the headline “Don’t take reading age as read”