
    Sub-Sync: automatic synchronization of subtitles in the broadcasting of true live programs in Spanish

    Individuals with sensory impairment (hearing or visual) encounter serious communication barriers within society and the world around them. These barriers hinder the communication process and make access to information an obstacle they must overcome on a daily basis. In this context, one of the most common complaints made by television (TV) users with sensory impairment is the lack of synchronism between audio and subtitles in some types of programs. In addition, synchronization remains one of the most significant factors in audience perception of quality in live-originated TV subtitles for the deaf and hard of hearing. This paper introduces the Sub-Sync framework, intended for the automatic synchronization of audio-visual contents and subtitles, taking advantage of current, well-known techniques for symbol-sequence alignment. In this particular case, the symbol sequences are the subtitles produced by the broadcaster's subtitling system and the word flow generated by an automatic speech recognition procedure. The goal of Sub-Sync is to address the lack of synchronism that occurs in subtitles produced during the broadcast of live TV programs, or of other programs that have some improvised parts. Furthermore, it also aims to resolve the problematic interface between the synchronized and unsynchronized parts of mixed-type programs. In addition, the framework is able to synchronize the subtitles even when they do not correspond literally to the original audio and/or the audio cannot be completely transcribed by an automatic process. Sub-Sync has been successfully tested in different live broadcasts, including mixed programs in which the synchronized parts (recorded, scripted) are interspersed with desynchronized (improvised) ones.
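The core idea described in the abstract, aligning the broadcaster's subtitle word stream against the timestamped word flow from a speech recognizer, can be illustrated with a standard sequence-alignment utility. This is a minimal sketch, not the authors' implementation; the function name and data layout are assumptions for illustration, and it tolerates subtitles that are not literal transcriptions (unmatched words simply receive no timestamp).

```python
# Illustrative sketch of aligning subtitle words to an ASR word stream,
# in the spirit of Sub-Sync (NOT the authors' actual system). Assumes the
# subtitles and the ASR output are word lists, with one timestamp per
# ASR word.
from difflib import SequenceMatcher

def align_subtitles(subtitle_words, asr_words, asr_times):
    """Map subtitle word indices to timestamps of matched ASR words.

    Returns {subtitle_index: timestamp_seconds} for matched words only,
    so non-literal subtitle words and untranscribed audio leave gaps
    rather than breaking the alignment.
    """
    matcher = SequenceMatcher(a=subtitle_words, b=asr_words, autojunk=False)
    mapping = {}
    for block in matcher.get_matching_blocks():
        # Each block is a run of identical words in both sequences.
        for k in range(block.size):
            mapping[block.a + k] = asr_times[block.b + k]
    return mapping

# Toy example: "and" has no ASR counterpart, so it stays unsynchronized.
subs = "good evening and welcome to the news".split()
asr = "good evening welcome to the evening news".split()
times = [10.0, 10.4, 11.0, 11.2, 11.3, 11.6, 12.0]
print(align_subtitles(subs, asr, times))
```

A real system would interpolate timestamps for the unmatched words from their matched neighbors before re-emitting the subtitle blocks.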

    Comparative analysis between a respeaking captioning system and a captioning system without human intervention

    People living with deafness or hearing impairment have limited access to information broadcast live on television. Live closed captioning is a currently active area of study; to our knowledge, no system developed thus far produces high-quality captions without using scripts or human interaction. This paper presents a comparative analysis of the quality of captions generated for four Spanish news programs by two captioning systems: a semiautomatic system based on respeaking (the system currently used by a Spanish TV station) and an automatic system without human interaction, proposed and developed by the authors. The analysis is conducted by measuring and comparing the accuracy, latency and speed of the captions generated by both systems. The captions generated by the proposed system showed higher quality in terms of accuracy (Word Error Rate between 3.76% and 7.29%) and caption latency (approximately 4 s), at a speed that allows acceptable access to the information. We contribute a first study focused on the development and analysis of an automatic captioning system without human intervention, with promising quality results. These results reinforce the importance of continuing to study such automatic systems.
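Word Error Rate, the accuracy metric used in the comparison above, is the edit distance between the reference transcript and the caption text, computed over words and normalized by the reference length. A generic sketch (not the authors' evaluation tooling):

```python
# Minimal Word Error Rate (WER) computation: (substitutions + deletions
# + insertions) / number of reference words, via word-level edit distance.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A WER of 3.76% thus means roughly 4 word-level errors per 100 reference words.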

    Voice-over: practice, research and future prospects


    Barrier-free communication: methods and products: proceedings of the 1st Swiss conference on barrier-free communication
