    Investigating the collaborative process of subtitles creation and sharing for videos on the Web

    In this paper we concentrate on the collaborative practices of enthusiasts who create and share subtitles for third-party videos. Based on preliminary results from interviews with volunteers, we formalize the subtitle creation and sharing process using a business process management model and compare it with other collaborative and crowdsourcing models. We expect that our initial observations can bring a new understanding of the process and thus help in the design of next-generation video-enriching tools.

    An automatic caption alignment mechanism for off-the-shelf speech recognition technologies

    With a growing number of online videos, many producers feel the need to add video captions in order to expand content accessibility, and in doing so face two main issues: production and alignment of the textual transcript. Both activities are expensive, requiring either intensive human labor or dedicated software. In this paper we focus on caption alignment and propose a novel, automatic, simple, and low-cost mechanism that requires neither human transcriptions nor special dedicated software. Our mechanism uses a unique audio markup and intelligently inserts copies of it into the audio stream before passing it to an off-the-shelf automatic speech recognition (ASR) application; it then transforms the plain transcript produced by the ASR application into a timecoded transcript, which allows video players to know when to display each caption while playing the video. The experimental evaluation shows that our proposal is effective in producing timecoded transcripts and can therefore help expand video content accessibility.
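    The final step described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): it assumes each copy of the audio markup surfaces in the ASR output as a known token, and that the i-th marker was inserted at a known uniform interval, so splitting the transcript at the markers yields timecoded caption segments that can be rendered in a standard format such as SRT. The token name, interval, and function names are all assumptions for illustration.

    ```python
    def timecode_transcript(asr_text, marker="MARKERWORD", interval=5.0):
        """Split an ASR transcript at marker tokens and attach times.

        Assumes (hypothetically) that the i-th marker was inserted into
        the audio at time i * interval seconds. Returns a list of
        (start_seconds, end_seconds, caption_text) tuples.
        """
        segments = asr_text.split(marker)
        captions = []
        for i, segment in enumerate(segments):
            text = segment.strip()
            if not text:
                continue  # skip empty segments (e.g. adjacent markers)
            captions.append((i * interval, (i + 1) * interval, text))
        return captions

    def to_srt(captions):
        """Render (start, end, text) tuples as a minimal SRT document."""
        def fmt(t):
            hours, rem = divmod(int(t), 3600)
            minutes, seconds = divmod(rem, 60)
            return f"{hours:02d}:{minutes:02d}:{seconds:02d},000"
        blocks = []
        for n, (start, end, text) in enumerate(captions, 1):
            blocks.append(f"{n}\n{fmt(start)} --> {fmt(end)}\n{text}\n")
        return "\n".join(blocks)
    ```

    For example, the transcript `"hello world MARKERWORD second caption"` would yield two captions, the first spanning 0-5 seconds and the second 5-10 seconds. In practice the marker intervals need not be uniform; the key idea from the paper is only that each detected marker maps back to a known insertion time in the audio.
    
    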