A young man stands on the beach, calling for his dog: "Tippet! Tippet! Come on, Tippet." The stick the dog was fetching floats alone in the water. Under the surface, legs flail. The famous two-note theme of "Jaws" begins. The music quickens, deepening the sense of foreboding as the view approaches the underside of an inflatable raft, atop which a young boy paddles with his hands and feet. It comes closer and closer, focusing on the boy's leg. Then, suddenly above the water again, we see a great dark gray mass with a fin break the surface; the boy flails, then disappears, his blood spurting and staining the water.
It's a well-known scene in a well-known movie, but perhaps the most well-known part is the music: that distinctive two-note "dun-dun" that signals the approaching shark, the approaching doom. It's so ingrained in our minds, it's nearly impossible to imagine "Jaws" without it.
And yet that's how millions of people experience the movie. In closed captioning, those famous two notes are reduced to "♪♪."
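To see how little of the music survives on screen, consider a hypothetical caption file in the common SubRip (.srt) format. The cue timings and wording below are invented for illustration; they are not taken from any official "Jaws" caption track:

```
1
00:03:12,400 --> 00:03:15,900
♪♪

2
00:03:16,200 --> 00:03:19,000
[ominous two-note theme quickens]
```

The first cue is the bare notation Zdenek describes; the second shows the kind of fuller, interpretive description a captioner could choose instead.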
"Captioning provides access to audiovisual content for people who are deaf and hard-of-hearing," said Sean Zdenek, an associate professor of technical communication and rhetoric at Texas Tech University. "People have complained for many years about poor-quality captioning and the lack of Internet captions on some content."
Zdenek studies captioning as a text to be interpreted – a resource of complex meanings and effects. This perspective on captioning is new to the humanities.
"Typically, captioning has been treated as an add-on, afterthought, technical problem or legal requirement," he said. "It's been overly simplified or ignored altogether. My research – and my book 'Reading Sounds' in particular – attempts to do for captioning what researchers in the humanities have done for other texts such as speeches, TV shows, movies, etc.: to make sense of captioning as a significant resource of meaning and creativity. I want to show how captions create meanings, effects and experiences that are specific to captions and the mode of writing."
Captioning is a personal matter for Zdenek, whose 19-year-old son was born with profound hearing loss in both ears.
"Long before I began to write about captioning, it was just a part of our home life, albeit a very important part that provided crucial access to information for our son," he said. "My first important lesson related to captioning was about access and how everyone needs access to content regardless of hearing ability."
So he began studying how to make captions more accessible to all audience members.
"After watching closed captioning at home for many years, I took note of some really interesting things about how captions function to shape the meaning of the program," Zdenek said. "We talk a lot about equal access for all, but accessing content through reading is not the same as accessing it through listening, and doing both at the same time, as hearing and hard-of-hearing viewers do, is also different than accessing content through either one alone."
Zdenek specializes in non-speech sounds.
"I'm particularly interested in non-speech sounds that are unique and can't be captioned easily or can be captioned in multiple ways, such as the droning sound that the Hypnotoad makes on 'Futurama,'" he explains. "The Hypnotoad sound is actually made by playing a turbine engine sound backwards, but the meaning of the engine sound only becomes clear in context. When the sound is made by an animated toad character, it can't be captioned as '(TURBINE ENGINE SOUND)' because that doesn't make sense in context. It needs to be captioned in terms of the animated toad character that makes that sound.
"The captioner's agency and creativity are on full display with unusual or unique non-speech sounds."
Why is a captioner's creativity so important?
"Captioners don't simply copy down sounds in written form; they produce meaning in the act of interpreting the soundscape," he said. "While captioners have access to scripts, cast lists and other production notes, these documents don't tell them how to choose which sounds are significant and how to caption them."
Speech sounds are usually significant, Zdenek said, and as such should be captioned verbatim, but things can get messy quickly. "For example, if people are talking faintly or indistinctly in the background, should those faint speech sounds be captioned verbatim? What if two or more people are speaking at the same time? What if people are speaking too quickly and their speech needs to be edited to meet reading speed guidelines?"
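The reading-speed guidelines he mentions can be made concrete. As a minimal sketch: captioning guidelines are often stated as a maximum presentation rate in characters per second, and the 17-cps threshold below is one commonly cited figure, assumed here for illustration rather than drawn from the article:

```python
# Hypothetical sketch: checking whether a caption cue meets a reading-speed
# guideline. The 17 characters-per-second limit is an assumed example value;
# actual guidelines vary by broadcaster and audience.

def chars_per_second(text: str, start: float, end: float) -> float:
    """Presentation rate of a caption cue, in characters per second."""
    return len(text) / (end - start)

def needs_editing(text: str, start: float, end: float,
                  limit: float = 17.0) -> bool:
    """True if the cue reads faster than the limit and should be shortened."""
    return chars_per_second(text, start, end) > limit

# A 50-character cue shown for 2 seconds reads at 25 cps -- too fast.
print(needs_editing("x" * 50, 0.0, 2.0))  # True
# A 30-character cue over the same 2 seconds reads at 15 cps -- acceptable.
print(needs_editing("x" * 30, 0.0, 2.0))  # False
```

This is the arithmetic behind the editorial dilemma Zdenek describes: when a cue exceeds the limit, the captioner must condense the speech, trading verbatim fidelity for readability.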
For non-speech sounds, captioners have almost total control over the interpretation of the soundscape, Zdenek said. Multiple layers of sound fill the soundtrack: foreground speech, background speech, paralinguistic sounds such as yelling, instrumental music, lyrics and sound effects such as explosions. Not all of these sounds can be captioned, nor should they be, in his opinion.
"Something happens to sound when it is captioned," Zdenek said. "It comes forward – I say that captions 'equalize' sounds – and may intrude on the reader's attention. Repeatedly captioning the (DOG BARKING) in the background may lead readers to assume the dog is more significant than it really is. Maybe the dog's barking is just part of the stock soundscape, but when captioned, the dog sound becomes prominent and can lead to misunderstanding."
One part of Zdenek's non-speech sounds research focuses on sonic allusions: well-known sounds that reappear in other contexts.
"I analyze a number of allusions to famous sounds from the past, such as the five-note motif in 'Close Encounters of the Third Kind,' which recurs in later movies and TV shows such as 'Supernatural.' In 'Close Encounters' itself, a small snippet of the 'Jaws' theme can be heard in one of the final scenes," Zdenek said. "'Close Encounters' is an old example, but that's the point about some non-speech sounds: they originate in the past, as song titles and lyrics do. Captioners need to recognize these sonic allusions and caption them appropriately. It might surprise you how many sonic allusions captioners miss."
Of course, deaf and hard-of-hearing viewers are not the only ones who can benefit from captioning. In his book, Zdenek references many times when captions can help hearing viewers: children learning to read; adults or children learning a second language; a late-night viewer who doesn't want to wake a sleeping partner or child; college students reviewing a recorded lecture; and more.
"Having access to both sound and writing can help hearing viewers make sense of what they're hearing," he explained. "This may sound strange, but hearing people don't always know what they're hearing, or they may think they know but are wrong. Music lyrics are a great example, because hearing people are famous for misinterpreting lyrics; we think we know what words are being sung and can be hilariously wrong. When captions print lyrics on the screen, those words become more easily understood.
"Other examples include names of people or unusual and made-up nouns. The 'Harry Potter' movies are full of examples. When hearing people can read a word like 'expelliarmus' as it is spoken, they understand it more quickly and efficiently. This is one of the effects of writing – it provides more efficient access to information than listening alone."
While captioning is helpful to many people, Zdenek said it can be difficult to do because it's a highly interpretative practice rather than an objective science.
"Captions have the potential to create a new text and a different experience of the movie or TV show for viewers," he said. "The soundscape must be interpreted and channeled into an accessible form of writing for time-based reading. I like to refer to the '3 Cs,' which I think is a helpful way to point to a different view of captioning, one not predicated on simple transcription or copying: captioning is complex, contextual and involves creative solutions to sometimes difficult problems."