Closed-Captioning, Subtitling and Video Transcription (Part 1 of 2)

From: Wikipedia, Closed-Captioning

The “CC in a TV” symbol was created at WGBH.

The “Slashed ear” symbol is the International Symbol for Deafness used by TVNZ and other New Zealand broadcasters. The symbol was used on road signs to identify TTY access.

Closed captioning (CC) and subtitling are both processes of displaying text on a television, video screen, or other visual display to provide additional or interpretive information. Both are typically used as a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements. Another use is to provide a textual translation of a presentation’s primary audio language, usually burned-in (or “open”) to the video and unselectable. HTML5 defines subtitles as a “transcription or translation of the dialogue … when sound is available but not understood” by the viewer (for example, dialogue in a foreign language) and captions as a “transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information … when sound is unavailable or not clearly audible” (for example, when audio is muted or the viewer is deaf or hard of hearing).[1]

Closed-Captioning Terminology

The term “closed” (versus “open”) indicates that the captions are not visible until activated by the viewer, usually via the remote control or menu option. On the other hand, “open”, “burned-in”, “baked on”, or “hard-coded” captions are visible to all viewers.
Most of the world does not distinguish captions from subtitles.[citation needed]
In the United States and Canada, however, subtitles and captions have different meanings: subtitles assume the viewer can hear but cannot understand the language, whereas captions aim to describe for deaf and hard-of-hearing viewers all significant audio content, including non-speech information such as the identity of speakers and any significant music or sound effects, using words or symbols. The term closed caption has also come to refer to the North American EIA-608 encoding used with NTSC-compatible video.

The United Kingdom, Ireland, and most other countries do not distinguish between subtitles and closed captions and use “subtitles” as the general term. The equivalent of “captioning” is usually referred to as “subtitles for the hard of hearing”. Their presence is indicated on screen by the notation “Subtitles”, or previously “Subtitles 888” or just “888” (the latter two referring to the conventional teletext channel for captions), which is why the term subtitle is also used for the Ceefax-based Teletext encoding used with PAL-compatible video. The term subtitle has been replaced with caption in a number of PAL markets that still use Teletext, such as Australia and New Zealand, which purchase large amounts of imported US material, much of it with the US CC logo already superimposed over its start. In New Zealand, broadcasters superimpose an ear logo with a line through it to represent subtitles for the hard of hearing, even though these are now referred to as captions. In the UK, modern digital television services provide subtitles for the majority of programs, so it is no longer necessary to highlight which programs have them and which do not.
Remote control handsets for TVs, DVDs, and similar devices in most European markets often use “SUB” or “SUBTITLE” on the button used to control the display of subtitles/captions.

Closed Captioning History

Open-Captioning

Regular open-captioned broadcasts began on PBS’s The French Chef in 1972.[2] WGBH began open captioning of the programs Zoom, ABC World News Tonight, and Once Upon a Classic shortly thereafter.

Technical Development of Closed-Captioning

Closed captioning was first demonstrated at the First National Conference on Television for the Hearing Impaired in Nashville, Tennessee in 1971.[2] A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of The Mod Squad.
The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA.[2]
As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.
Real-time captioning, a process for captioning live broadcasts, was developed by the National Captioning Institute in 1982.[2] In real-time captioning, court reporters trained to write at speeds of over 225 words per minute give viewers instantaneous access to live news, sports, and entertainment. As a result, the viewer sees the captions within two to three seconds of the words being spoken, albeit sometimes with spelling and grammatical errors or garbled characters, since the text is produced on the fly.
Major US producers of captions are WGBH-TV, VITAC, CaptionMax and the National Captioning Institute. In the UK and Australasia, Red Bee Media, itfc, and Independent Media Support are the major vendors.

Full-Scale Closed-Captioning

The National Captioning Institute was created in 1979 in order to get the cooperation of the commercial television networks.[3]
The first use of regularly scheduled closed captioning on American television occurred on March 16, 1980.[4] Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were a Disney’s Wonderful World presentation of the film Son of Flubber on NBC, an ABC Sunday Night Movie airing of Semi-Tough, and Masterpiece Theatre on PBS.[5]

Legislative Development in The US

Until the passage of the Television Decoder Circuitry Act of 1990, television captioning was performed via a set-top box manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). (At that time a set-top decoder cost about as much as a TV set itself, approximately $200.) Through discussions with the manufacturer it was established that the appropriate circuitry integrated into the television set would be less expensive than the stand-alone box, and Ronald May, then a Sanyo employee, provided expert witness testimony on behalf of Sanyo and Gallaudet University in support of the bill’s passage. On January 23, 1991, the Television Decoder Circuitry Act of 1990 was passed by Congress.[2] This Act gave the Federal Communications Commission (FCC) the power to enact rules on the implementation of closed captioning, and required all analog television receivers with screens 13 inches or larger, whether sold or manufactured, to be able to display closed captioning by July 1, 1993.[6]

Also in 1990, the Americans with Disabilities Act (ADA) was passed to ensure equal opportunity for persons with disabilities.[3] The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities, such as hospitals, bars, shopping centers and museums (but not movie theaters), provide access to verbal information on televisions, films or slide shows.

The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002.[7] All TV programming distributors in the U.S. have been required to provide closed captions for Spanish-language video programming since January 1, 2010.[8]

A bill, H.R. 3101, the Twenty-First Century Communications and Video Accessibility Act of 2010, was passed by the United States House of Representatives in July 2010.[9] A similar bill with the same name, S. 3304, was passed by the United States Senate on August 5, 2010, and by the House of Representatives on September 28, 2010, and was signed by President Barack Obama on October 8, 2010. The Act requires, in part, that remotes for ATSC-decoding set-top boxes have a button to turn the closed captioning in the output signal on or off. It also requires broadcasters to provide captioning for television programs redistributed on the Internet.[10]

On February 20, 2014, the FCC unanimously approved the implementation of quality standards for closed captioning,[11] addressing accuracy, timing, completeness, and placement. This was the first time the FCC had addressed quality issues in captions.

Legislative Development in Australia

The government of Australia provided seed funding in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982 and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as Media Access Australia, sold its commercial captioning division to Red Bee Media in December 2005. Red Bee Media continues to provide captioning services in Australia today.[12][13][14]

Funding Development in New Zealand

In 1981, TVNZ held a telethon to raise funds for the Teletext-encoding equipment used to create and edit text-based broadcast services for the deaf. The service came into use in 1984, with caption creation and importing paid for as part of the public broadcasting fee until the creation of the taxpayer-funded NZ On Air, which now pays for the captioning of NZ On Air content and TVNZ news shows, and for the conversion of EIA-608 US captions to the preferred EBU STL format, for TVNZ 1, TV 2 and TV 3 only, with archived captions available to FOUR and select Sky programming. During the second half of 2012, TV3 and FOUR began providing non-Teletext DVB image-based captions on their HD service and used the same format on the satellite service; this has since caused major timing issues related to server load, as well as the loss of captions from most SD DVB-S receivers, such as those Sky Television provides its customers. As of April 2, 2013, only the Teletext page 801 caption service remains in use, with the informational non-caption Teletext content discontinued.

Closed-Captioning Application

Closed captions were created for deaf or hard of hearing individuals to assist in comprehension. They can also be used as a tool by those learning to read, learning to speak a non-native language, or in an environment where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio. In the United States, the National Captioning Institute noted that English as a foreign or second language (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.[15]
Closed captions are also used in public environments, such as bars and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs. In addition, online videos may be captioned automatically through speech-recognition processing of their audio; such machine-generated transcripts often contain chains of errors. When a video is accurately transcribed, the closed-captioning publication serves a useful purpose, and the content is available for search engines to index and make available to users on the internet.[16][17][18] Some television sets can be set to automatically turn captioning on when the volume is muted.

Closed-Captioning for Television and Video

For live programs, spoken words comprising the television program’s soundtrack are transcribed by a human operator (a speech-to-text reporter) using stenotype or stenomask machines, whose phonetic output is instantly translated into text by a computer and displayed on the screen. This technique was developed in the 1970s as an initiative of the BBC’s Ceefax teletext service.[19] In collaboration with the BBC, a university student took on the research project of writing the first phonetics-to-text conversion program for this purpose. Sometimes, the captions of live broadcasts, like news bulletins, sports events, and live entertainment shows, fall behind by a few seconds. This delay occurs because the machine does not know what the person is going to say next; only after the person on the show says the sentence can the captions appear.[20] Automatic computer speech recognition now works well when trained to recognize a single voice, so since 2003 the BBC has done live subtitling by having someone re-speak what is being broadcast. Live captioning is also a form of real-time text. Meanwhile, sports events on channels like ESPN use court reporters with special (steno) keyboards and individually constructed “dictionaries.”

In some cases, the transcript is available beforehand, and captions are simply displayed during the program after being edited. For programs that have a mix of pre-prepared and live content, such as news bulletins, a combination of the above techniques is used.

For prerecorded programs, commercials, and home videos, audio is transcribed and captions are prepared, positioned, and timed in advance.
For all types of NTSC programming, captions are “encoded” into line 21 of the vertical blanking interval – a part of the TV picture that sits just above the visible portion and is usually unseen. For ATSC (digital television) programming, three streams are encoded in the video: two are backward-compatible “line 21” captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format.[21]
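Each EIA-608 byte carries a 7-bit character with an odd-parity bit in the most significant bit, two bytes per field per frame. The following is a minimal sketch of how a decoder might check parity and recover printable characters from one byte pair; the function name is illustrative, not from any real decoder library, and the full EIA-608 character set has a few non-ASCII substitutions this sketch ignores.

```python
def decode_eia608_pair(byte1, byte2):
    """Decode one EIA-608 byte pair (two bytes per video field).

    Each byte is a 7-bit value with an odd-parity bit in the MSB.
    A byte that fails the parity check is discarded here (real
    decoders typically substitute a blank space instead).
    """
    chars = []
    for b in (byte1, byte2):
        if bin(b).count("1") % 2 != 1:   # odd parity: set-bit count must be odd
            continue                      # parity error -> drop this byte
        value = b & 0x7F                  # strip the parity bit
        if value >= 0x20:                 # values below 0x20 are control codes
            chars.append(chr(value))      # basic charset largely matches ASCII
    return "".join(chars)

# 'H' (0x48) and 'i' (0x69) with odd-parity bits set become 0xC8 and 0xE9
print(decode_eia608_pair(0xC8, 0xE9))  # -> Hi
```

The parity check is why single corrupted bytes tend to surface as blank spaces rather than wrong letters, a behavior discussed later in this section.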
Captioning is modulated and stored differently in PAL and SECAM 625-line, 25-frame countries, where teletext is used rather than EIA-608, but the methods of preparation and the line 21 field used are similar. For home Betamax and VHS videotapes, this line 21 field must be shifted down due to the greater number of VBI lines used in 625-line PAL countries, though only a small minority of European PAL VHS machines support this (or any) format for closed-caption recording. Like all teletext fields, teletext captions can’t be stored by a standard 625-line VHS recorder (due to the lack of field-shifting support); they are available on all professional S-VHS recordings because all fields are recorded. Recorded teletext caption fields also suffer from a higher number of caption errors, due to the increased number of bits and a low SNR, especially on low-bandwidth VHS. This is why teletext captions used to be stored on floppy disk separately from the analogue master tape. DVDs have their own system for subtitles and/or captions, which are digitally inserted in the data stream and decoded on playback into video field lines.
For older televisions, a set-top box or other decoder is usually required. In the US, since the passage of the Television Decoder Circuitry Act, manufacturers of most television receivers sold have been required to include closed-captioning display capability. High-definition TV sets, receivers, and tuner cards are also covered, though the technical specifications are different (high-definition display screens, as opposed to high-definition TVs, may lack captioning). Canada has no similar law but in most cases receives the same sets as the US.
During transmission, single-byte errors can be replaced by a white space, which can appear at the beginning of the program. Larger byte errors during EIA-608 transmission can affect the screen momentarily: the decoder may default to a real-time mode such as the “roll up” style, type random letters on screen, and then revert to normal. Uncorrectable byte errors within the teletext page header will cause whole captions to be dropped. EIA-608, because it uses only two characters per video frame, sends captions ahead of time, storing them in a second buffer awaiting a command to display them; Teletext sends them in real time.

The use of capitalization varies among caption providers. Most caption providers capitalize all words, while others, such as WGBH and non-US providers, prefer mixed-case letters.
There are two main styles of line 21 closed captioning:

  • Roll-up or scroll-up or paint-on or scrolling:
    Real-time words sent in paint-on or scrolling mode appear from left to right, up to one line at a time; when a line is filled in roll-up mode, the whole line scrolls up to make way for a new line, and the line on top is erased. The lines usually appear at the bottom of the screen, but can actually be placed on any of the 14 screen rows to avoid covering graphics or action. This method is used when captioning video in real-time such as for live events, where a sequential word-by-word captioning process is needed or a pre-made intermediary file isn’t available. This method is signaled on EIA-608 by a two-byte caption command or in Teletext by replacing rows for a roll-up effect and duplicating rows for a paint-on effect. This allows for real-time caption line editing.

A still frame showing simulated closed captioning in the pop-on style

  • Pop-on or pop-up or block: A caption appears on any of the 14 screen rows as a complete sentence, which can be followed by additional captions. This method is used when captions come from an intermediary file (such as the Scenarist or EBU STL file formats) for pre-taped television and film programming, commonly produced at captioning facilities. This method of captioning can be aided by digital scripts or voice recognition software, and if used for live events, would require a video delay to avoid a large delay in the captions’ appearance on-screen, which occurs with Teletext-encoded live subtitles.
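The roll-up behavior described above can be sketched in a few lines of Python. This is a simplified model only: it ignores EIA-608 control commands, row placement, and timing, and the function name and parameters are invented for illustration.

```python
from collections import deque

def rollup_display(words, cols=32, rows=3):
    """Simulate roll-up captioning: words paint on left to right;
    when a line fills, the visible window scrolls up and the top
    line is erased. Returns the rows visible at the end."""
    window = deque(maxlen=rows)  # the visible caption rows
    line = ""
    for word in words:
        candidate = (line + " " + word).strip()
        if len(candidate) > cols:
            window.append(line)  # line full: scroll the window up
            line = word          # start a new bottom line
        else:
            line = candidate
    window.append(line)          # flush the final partial line
    return list(window)

print(rollup_display("the quick brown fox jumps over the lazy dog".split(),
                     cols=10, rows=2))
```

In real EIA-608 streams the scrolling is triggered by dedicated roll-up control commands rather than by the decoder measuring line length, but the visible effect is the same.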

Closed-Caption Formatting

TVNZ Access Services and Red Bee Media for BBC and Australia example:

I got the machine ready. 
      ENGINE STARTING       (speeding away) 

UK IMS for ITV and Sky example:

(man) I got the machine ready. (engine starting) 

US WGBH Access Services example:

MAN: I got the machine ready.       (engine starting) 

US National Captioning Institute example:

      I GOT THE MACHINE READY. 

US other provider example:

I GOT THE MACHINE READY.       [engine starting] 

US in-house real-time roll-up example:

>> Man: I GOT THE MACHINE READY. [engine starting] 

Non-US in-house real-time roll-up example:

  MAN: I got the machine ready.       (ENGINE STARTING) 
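The house styles above differ mainly in speaker identification, casing, and sound-effect bracketing, which a small formatting sketch can capture. The style rules below simply restate the examples shown; the function and style names are hypothetical, not part of any real captioning tool.

```python
def format_caption(speaker, dialogue, sfx, style):
    """Render one caption line in a given house style.
    Each branch paraphrases one of the provider examples above."""
    if style == "wgbh":        # US WGBH: "MAN:" prefix, mixed case, (sfx)
        return f"{speaker.upper()}: {dialogue} ({sfx})"
    if style == "us_other":    # US other provider: all caps, [sfx]
        return f"{dialogue.upper()} [{sfx}]"
    if style == "uk_ims":      # UK IMS: "(man)" prefix, mixed case, (sfx)
        return f"({speaker.lower()}) {dialogue} ({sfx})"
    raise ValueError(f"unknown style: {style}")

print(format_caption("Man", "I got the machine ready.",
                     "engine starting", "wgbh"))
# -> MAN: I got the machine ready. (engine starting)
```

Captioning facilities keep such rules in house style guides; the point here is only that the visible differences between providers are systematic, not ad hoc.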
