On one hand, it’s a tool to aid accessibility; on the other, it can stand in the way of learning and comprehension. Jared Cooney Horvath explains how to use captions correctly
If you’ve ever had several excited students try to simultaneously shout out responses to a question, then you’re well aware that human beings can only meaningfully comprehend one person speaking at a time. The reason for this concerns a particular neurological bottleneck.
To understand spoken language, the brain relies on the Wernicke-Broca network: a small chain of cells that processes the meaning of auditory words. Unfortunately, the brain has only one of these networks. This means we can funnel only one voice through it at a time and so comprehend only one speaker at a time: a neurological bottleneck.
Surprisingly, when we silently read, the Wernicke-Broca network activates to the same extent as when we listen to someone speak. This means our brain processes the silent reading voice in exactly the same manner it does a voice speaking out loud. Accordingly, just as human beings can’t listen to two people speaking simultaneously, neither can we read while listening to someone speak.
This is the basis for the oft-discussed "redundancy effect", "cognitive load" and other theories, which research has long shown to mean that learning and memory decrease when students are presented with text and speech simultaneously.
This issue is highly relevant to onscreen captioning, which many of you may have been using to make your online learning more accessible. When captions are present during a video narration, students tend to understand and remember less than students who watch the same video without captions. Even when captioning is identical to spoken narration, the bottleneck is activated.
That said, there are several circumstances when combined captions and narration will not clash and can improve learning.
The first concerns students learning a new language. For the bottleneck to activate, both reading and listening comprehension must be fluent. When students are new to a language and not yet fluent in both (or either), captions can help them make better sense of narration they might otherwise miss.
The second circumstance concerns degraded or hard-to-understand speech. In some documentaries and video lessons, the audio quality can be poor. This means viewers must expend a lot of cognitive energy simply deciphering the words being said, leaving little for deep comprehension or thought. In these instances, captions can ease the decoding of narration and boost learning.
The third circumstance concerns heavy accents. When a narrator or teacher has a heavy accent, this again forces viewers to expend a lot of cognitive energy deciphering speech. Again, in this case, captions can ease decoding and boost memory and transfer.
In the end, once we recognise the underlying mechanism driving many cognitive and learning theories, seemingly discrepant findings work themselves out. Very few research studies are truly at odds; they simply tap into different aspects of the same basic mechanisms.
Jared Cooney Horvath is a neuroscientist, educator and author. To ask our resident learning scientist a question, please email: AskALearningScientist@gmail.com
This article originally appeared in the 12 June 2020 issue