The software-defined ENCO enCaption4 platform enables broadcasters and content producers to quickly, easily and cost-effectively add closed or open captions to live and prerecorded content. Its latest enhancement helps customers overcome an industry-wide challenge in the captioning of live programming. While captions could be aligned with corresponding speech during post production of file-based content, the nature of live captioning has inherently precluded such precise synchronization. Speech-to-text processing of a word or phrase cannot begin until after it has been spoken, and taking the context of surrounding words into account for greater transcription accuracy adds to this latency.
enCaption4’s newest capability shatters this limitation, effectively synchronizing the live captions with the spoken words. Already highly regarded for minimizing the latency between speech and its resultant captions, enCaption4 can now delay the associated video and audio by a user-configurable duration to provide lip-sync-like alignment. Two to four seconds of video delay is generally sufficient to provide the desired temporal precision, but by setting a longer delay, customers can choose to expand the audio analysis window to further enhance enCaption4’s renowned speech-to-text accuracy.
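The delay mechanism described above can be illustrated with a minimal sketch. This is not ENCO's implementation; it is a hypothetical frame buffer showing the general idea: hold incoming video/audio frames for a configurable duration so that captions, which arrive late because of speech-to-text latency, can be embedded in sync with the delayed output. The `DelayBuffer` class, its parameters, and the frame-count arithmetic are all assumptions for illustration only.

```python
from collections import deque

class DelayBuffer:
    """Illustrative only: hold frames for a fixed delay so captions
    produced with speech-to-text latency can be embedded in sync."""

    def __init__(self, delay_seconds=3.0, fps=30):
        # A 3-second delay at 30 fps means holding 90 frames.
        self.delay_frames = int(delay_seconds * fps)
        self.buffer = deque()

    def push(self, frame):
        """Add a frame; return the oldest frame once the delay window
        is full, otherwise None while the buffer is still filling."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None

# Usage sketch: nothing is emitted until the delay window fills,
# after which each new frame releases one delayed frame.
buf = DelayBuffer(delay_seconds=3.0, fps=30)
outputs = [buf.push(i) for i in range(100)]
```

A longer `delay_seconds` trades added latency for a wider audio-analysis window, mirroring the accuracy/delay trade-off the article describes.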
The integrated video delay functionality is a key element of ENCO’s automated captioning patent and can be applied to a wide array of enCaption4 output options. enCaption4 systems incorporating the optional DoCaption-powered internal closed caption encoder card automatically output SDI signals with synchronized captions embedded, while other enCaption4 units equipped with SDI outputs can deliver delayed video to external caption encoders. The video delay can also be used to align open captions that are overlaid atop web-destined and NDI output streams.
The full specifications are listed below.