Open and closed captions have been a broadcasting standard for decades. Captioning was first experimented with in the early 1970s before becoming more common through the 1980s and onwards, thanks to a series of technological innovations and policy decisions at a governmental level in the United States. Captions have become normalised, much like seeing an interpreter translating audio into sign language. Captioning was mostly used for pre-recorded content, since it was far easier to transcribe and time-code material that wasn’t live with accuracy. When a broadcast was live, such as the 1982 Academy Awards, the first time live captioning was achieved, courtroom reporters or others with significant transcription experience were hired. Nowadays, AI can do the job. This matters because far more content is broadcast now, across a far wider variety of platforms, yet captioning remains as important as ever.
Accuracy and Reliability
As mentioned, humans used to transcribe closed captions for live content, and they are good at their job. However, there is always a chance that something goes awry, whether in accuracy or in speed. This is where AI comes in. It is appearing in far more fields than transcription: everything from medical scans and diagnosis to liking posts on Instagram. Its performance still relies on a human hand before it begins a task, via programming, and afterwards, via quality control. AI and humans work together to achieve high-quality work.
Verbit’s live captioning service makes use of AI. It boasts ninety-nine-percent accuracy, a performance it can repeat because AI learns more and more as information is fed into it. Accuracy matters for closed captions: without it, a speaker’s intended meaning can be marred and warped.
Accessibility
Captioning was developed early on by institutes and organisations that wanted to enable people who are d/Deaf or hard of hearing. This still stands as ample justification for using live captioning during a broadcast. Closed captions detail what is being said, who is saying it, and non-speech sounds (like music or atmospheric sounds such as knocks at the door), working under the assumption that the audience cannot hear the audio at all, which is where they differ from subtitles. They are an essential tool for people to access content. Beyond that, there is a large and growing pool of users of streaming services and social media who utilise them.
Closed captions keep things clear. They ensure that everything is accounted for, and that can benefit even those who do not have any difference in hearing ability. People watch videos without sound on social media because they might be in a public place, and the last thing they want is to be caught out by embarrassing audio. Captions can also aid comprehension: a speaker’s accent might make their speech harder to understand, the sound mixing might be poor, or the user’s environment might be noisy. There are ample reasons closed captions can be of use.
The expectations for closed captions haven’t necessarily changed in the decades since they were first used. All that’s changed is how often they are used and for what kinds of broadcast. All else is the same: they help people communicate.