Keeping up with authentic speech


The role of subtitles for ELT learners is still being debated. Learners have different requirements of such a tool, depending on their proficiency. Here, Elena Deleyto La Cruz summarises how subtitles influence learners' ability to comprehend and engage with authentic video.

Although research on the benefits of L2 subtitles for language learners is still in its infancy, most studies point to clear benefits for both vocabulary acquisition and listening comprehension [1]. Additionally, because more and more language learners are already highly skilled both in media literacy generally and in processing combined audio and visual input, subtitles do not interfere with their existing listening comprehension skills.

Generation Z have high media literacy skills, so let's make the most of them!

Of course, subtitles are not equally effective for all target ages and proficiency levels. Intermediate learners get the most out of combined input, whereas research is still unclear about the benefits for beginners, for whom much depends on teacher guidance and carefully designed activities to direct their focus.

At DLA we provide closed captions for all our videos, although we recommend limited use at beginner levels. Our goal is to produce accurately timed verbatim subtitles with the following maximum speeds as a guideline:

[Table: DLA maximum subtitle speeds]

Careful selection of authentic clips and controlled pacing of voice-over narration help keep subtitle speed appropriate, but a lot can also be achieved through carefully considered sentence splitting and the use of pauses.

These are our guidelines; what do you think? Do you find subtitles beneficial for beginners? Or do you think students need to develop their reading skills to a certain level first?

We will be attending IH Barcelona at the end of this week, so come and meet us there, or get in touch by email or on Twitter!

Update: Elena's also written a piece for IATEFL's blog interrogating the use of captions in video for ELT. You can read it here.

[1] See Maribel Montero et al. (2013) and Mila Vulchanova et al. (2015) for examples.