22,608 research outputs found

    Speech and hand transcribed retrieval

    This paper describes the issues and preliminary work involved in creating an information retrieval system that manages retrieval from collections composed of both speech-recognised and ordinary text documents. Previous work has shown that, because of recognition errors, ordinary documents are generally retrieved in preference to recognised ones. Means of correcting or eliminating this observed bias are the subject of this paper. Initial ideas and some preliminary results are presented.

    A Framework for Speech Transcription by Many Participants Using a Speech Decomposition Front-End

    A wide variety of digital content with speech data is appearing on the internet, such as podcasts and videos. Text information transcribed from this content is not only essential to people with hearing impairments but is also expected to add new service value to information retrieval and data mining technologies. On the other hand, transcribing speech is intrinsically labor-intensive, and its automation by speech recognition is still limited in the variety of content it can handle. In this paper, a framework for speech transcription that combines speech processing by computer with human labor aggregated from a large number of participants is presented. The feasibility of the framework is demonstrated through experiments on its core part, the speech decomposition front-end, using various types of digital content.
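    The abstract does not specify how the speech decomposition front-end works; as a rough illustration of the kind of preprocessing such a front-end might perform, the sketch below splits a sequence of audio frame energies into speech regions by simple thresholding (the function, threshold, and data are illustrative assumptions, not the paper's method):

```python
def segment(energies, threshold=0.1, min_len=2):
    """Split a frame-energy sequence into (start, end) speech regions.

    Regions shorter than min_len frames are discarded; in a crowdsourced
    transcription setting, each region would become one unit of work.
    """
    regions, start = [], None
    for i, e in enumerate(energies):
        if e >= threshold and start is None:
            start = i                      # speech onset
        elif e < threshold and start is not None:
            if i - start >= min_len:
                regions.append((start, i)) # speech offset
            start = None
    if start is not None and len(energies) - start >= min_len:
        regions.append((start, len(energies)))
    return regions

frames = [0.0, 0.5, 0.6, 0.0, 0.0, 0.4, 0.5, 0.7, 0.0]
print(segment(frames))  # [(1, 3), (5, 8)]
```

    Real front-ends would operate on spectral features and handle noise, but the output contract is the same: short, independently transcribable units.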

    Search of spoken documents retrieves well recognized transcripts

    This paper presents a series of analyses and experiments on spoken document retrieval systems: search engines that retrieve transcripts produced by speech recognizers. Results show that transcripts that match queries well tend to be recognized more accurately than transcripts that match a query less well. This result was described in past literature; however, no study or explanation of the effect had been provided until now. This paper provides such an analysis, showing a relationship between word error rate and query length. The paper expands on past research by increasing the number of recognition systems tested and by demonstrating the effect in an operational speech retrieval system. Potential future lines of enquiry are also described.
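    The analysis above hinges on word error rate. The standard WER computation (word-level edit distance divided by reference length; a textbook definition, not code from the paper) can be sketched as:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1/6
```

    A per-transcript WER like this, plotted against how well each transcript matches a query, is the kind of measurement the reported analysis rests on.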

    A Cross-media Retrieval System for Lecture Videos

    We propose a cross-media lecture-on-demand system in which users can selectively view specific segments of lecture videos by submitting text queries. Users can easily formulate queries by using the textbook associated with a target lecture, even if they cannot come up with effective keywords. Our system extracts the audio track from a target lecture video, generates a transcription by large-vocabulary continuous speech recognition, and produces a text index. Experimental results showed that by adapting speech recognition to the topic of the lecture, recognition accuracy increased and retrieval accuracy was comparable with that obtained from human transcription.
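    The retrieval step over transcribed segments can be sketched with a minimal TF-IDF index: the segments, scoring, and query below are illustrative assumptions, since the abstract does not describe the system's actual index:

```python
import math
from collections import Counter

# Hypothetical ASR transcript segments of one lecture (made-up data).
segments = [
    "the derivative measures the rate of change of a function",
    "we apply the chain rule to composite functions",
    "integration is the inverse operation of differentiation",
]

def tf_idf_index(docs):
    """Build a per-document map: term -> TF-IDF weight."""
    n = len(docs)
    tokenized = [doc.split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    index = []
    for toks in tokenized:
        tf = Counter(toks)
        index.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return index

def search(query, index):
    """Return the id of the segment best matching the text query."""
    scores = [sum(weights.get(t, 0.0) for t in query.split())
              for weights in index]
    return max(range(len(scores)), key=scores.__getitem__)

index = tf_idf_index(segments)
print(search("chain rule", index))  # -> 1, the chain-rule segment
```

    In the described system, the query terms would come from the associated textbook, which is what lets users retrieve segments without inventing keywords.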

    Language Modeling for Multi-Domain Speech-Driven Text Retrieval

    We report experimental results on speech-driven text retrieval, which facilitates retrieving information in multiple domains with spoken queries. Since users speak content related to a target collection, we build the language models used for speech recognition from the target collection itself, so as to improve both recognition and retrieval accuracy. Experiments using existing test collections combined with dictated queries showed the effectiveness of our method.
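    The idea of building a recognition language model from the target collection can be sketched with an add-one-smoothed bigram model; the toy collection and smoothing choice below are assumptions for illustration, not the paper's actual models:

```python
import math
from collections import Counter

# Toy stand-in for the retrieval target collection (made-up sentences).
collection = [
    "speech recognition converts audio to text",
    "text retrieval finds relevant documents",
    "speech driven retrieval uses spoken queries",
]

# Count unigrams and bigrams over sentence-bounded token streams.
unigrams, bigrams = Counter(), Counter()
for sent in collection:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

vocab = len(unigrams)

def logprob(prev, word):
    """Add-one smoothed bigram log-probability P(word | prev)."""
    return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))

def sentence_logprob(sentence):
    toks = ["<s>"] + sentence.split() + ["</s>"]
    return sum(logprob(p, w) for p, w in zip(toks, toks[1:]))

# Word sequences typical of the collection score higher, which is what
# biases the recognizer toward in-domain query wordings.
print(sentence_logprob("speech recognition") >
      sentence_logprob("recognition speech"))  # True
```

    Production recognizers use far larger n-gram or neural models with better smoothing, but the adaptation principle is the same: estimate the model from the documents users will query.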

    Access to recorded interviews: A research agenda

    Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state of the art for key component technologies. A large number of important research issues are identified, and from that set of issues, a coherent research agenda is proposed.

    Symbolic inductive bias for visually grounded learning of spoken language

    A widespread approach to processing spoken language is to first automatically transcribe it into text. An alternative is an end-to-end approach: recent works have proposed to learn semantic embeddings of spoken language from images with spoken captions, without an intermediate transcription step. We propose to use multitask learning to exploit existing transcribed speech within the end-to-end setting. We describe a three-task architecture that combines the objectives of matching spoken captions with corresponding images, speech with text, and text with images. We show that adding the speech/text task leads to substantial performance improvements on image retrieval compared to training the speech/image task in isolation. We conjecture that this is due to the strong inductive bias that transcribed speech provides to the model, and offer supporting evidence for this.