
    Adapting End-to-End Speech Recognition for Readable Subtitles

    Automatic speech recognition (ASR) systems are primarily evaluated on transcription accuracy. However, in some use cases such as subtitling, verbatim transcription would reduce output readability given limited screen size and reading time. This work therefore focuses on ASR with output compression, a task that is challenging for supervised approaches due to the scarcity of training data. We first investigate a cascaded system, where an unsupervised compression model is used to post-edit the transcribed speech. We then compare several methods of end-to-end speech recognition under output length constraints. The experiments show that with limited data, far less than is needed for training a model from scratch, we can adapt a Transformer-based ASR model to incorporate both transcription and compression capabilities. Furthermore, the best performance in terms of WER and ROUGE scores is achieved by explicitly modeling the length constraints within the end-to-end ASR system.
    Comment: IWSLT 202
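    For reference, the WER the paper reports is the word-level edit distance normalized by reference length. A minimal pure-Python sketch (the function name is ours, not from the paper):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat", "the cat sat down"))  # 1 insertion / 3 words ≈ 0.33
```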

    Access to recorded interviews: A research agenda

    Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state of the art for key component technologies. A large number of important research issues are identified, and from that set of issues a coherent research agenda is proposed.

    A MACHINE LEARNING FRAMEWORK FOR AUTOMATIC SPEECH RECOGNITION IN AIR TRAFFIC CONTROL USING WORD LEVEL BINARY CLASSIFICATION AND TRANSCRIPTION

    Advances in Artificial Intelligence and Machine Learning have enabled a variety of new technologies. One such technology is Automatic Speech Recognition (ASR), in which a machine is given audio and transcribes the words that were spoken. ASR can be applied in a variety of domains to improve general usability and safety. One such domain is Air Traffic Control (ATC), where ASR promises to improve safety in a mission-critical environment. ASR models have historically required large amounts of clean training data, but ATC environments are noisy, and acquiring labeled data is a difficult, expertise-dependent task. This thesis addresses these problems by presenting a machine learning framework that uses word-by-word audio samples to transcribe ATC speech. Instead of transcribing an entire speech sample, the framework transcribes every word individually; the overall transcription is then pieced together from the word sequence. Each stage of the framework is trained and tested independently, and the overall performance is gauged. The overall framework was judged to be a feasible approach to ASR in ATC.
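    A minimal sketch of the word-by-word pipeline the thesis describes, assuming a word segmenter and a per-word classifier; all names and the stub components below are hypothetical stand-ins for the trained stages:

```python
from typing import Callable, List

def transcribe_utterance(audio: List[float],
                         segment_words: Callable[[List[float]], List[List[float]]],
                         classify_word: Callable[[List[float]], str]) -> str:
    """Piece an ATC transcription together from per-word predictions."""
    word_clips = segment_words(audio)                      # stage 1: word-level segmentation
    labels = [classify_word(clip) for clip in word_clips]  # stage 2: per-word classification
    return " ".join(labels)                                # stage 3: reassemble in sequence

# Toy demo with stub components; the real framework trains each stage separately.
fake_audio = [0.1, 0.2, 0.3]
stub_segment = lambda a: [a[i:i + 1] for i in range(len(a))]
stub_classify = lambda clip: {0.1: "cleared", 0.2: "for", 0.3: "takeoff"}[clip[0]]
print(transcribe_utterance(fake_audio, stub_segment, stub_classify))
# -> "cleared for takeoff"
```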

    Overview of VideoCLEF 2008: Automatic generation of topic-based feeds for dual language audio-visual content

    The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to the analysis of and access to multilingual multimedia content. In its first year, VideoCLEF piloted the Vid2RSS task, whose main subtask was the classification of dual-language video (Dutch-language television content featuring English-speaking experts and studio guests). The task offered two additional discretionary subtasks: feed translation and automatic keyframe extraction. Task participants were supplied with Dutch archival metadata, Dutch speech transcripts, English speech transcripts and 10 thematic category labels, which they were required to assign to the test-set videos. The videos were grouped by class label into topic-based RSS feeds, displaying title, description and keyframe for each video. Five groups participated in the 2008 VideoCLEF track. Participants were required to collect their own training data; both Wikipedia and general web content were used. Groups deployed various classifiers (SVM, Naive Bayes and k-NN) or treated the problem as an information retrieval task. Both the Dutch speech transcripts and the archival metadata performed well as sources of indexing features, but no group succeeded in exploiting combinations of feature sources to significantly enhance performance. A small-scale fluency/adequacy evaluation of the translation task output revealed the translations to be of sufficient quality to be valuable to a non-Dutch-speaking English speaker. For keyframe extraction, the chosen strategy was to select the keyframe from the shot with the most representative speech transcript content. A small user study showed the automatically selected shots to be competitive with manually selected ones. Future years of VideoCLEF will aim to expand the corpus and the class label list, as well as to extend the track to additional tasks.
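    As a concrete illustration of one classifier configuration mentioned (Naive Bayes over transcript text), a minimal scikit-learn sketch follows; the training documents and labels are invented stand-ins for the Wikipedia/web data that participants collected:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative training documents and two invented thematic labels;
# VideoCLEF participants built comparable sets from Wikipedia and web text.
train_texts = ["the orchestra performed the symphony in the concert hall",
               "the exhibition shows paintings and sculpture by young artists"]
train_labels = ["music", "visual arts"]

# TF-IDF features feeding a multinomial Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

# Assign a topic label to a (speech-transcript) test document.
print(clf.predict(["a new recording of the symphony was released"]))
```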

    Deep learning for speech-to-text transcription for the Portuguese language

    Automatic speech recognition (ASR) is the process of transcribing audio recordings into text, i.e. transforming speech into the corresponding sequence of words; the process is also commonly known as speech-to-text. Machine learning (ML), the ability of machines to learn from examples, is one of the most relevant areas of artificial intelligence today. Deep learning is a subset of ML that makes use of Deep Neural Networks: Artificial Neural Networks (ANNs), which are intended to mimic human neurons, with a large number of layers. This dissertation reviews the state of the art in automatic speech recognition over time, from early systems that used Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs) to the most recent end-to-end (E2E) deep neural models. Given the context of the present work, some deep learning algorithms used in state-of-the-art approaches are explained in additional detail. The work aims to develop an ASR system for European Portuguese using deep learning. This is achieved by implementing a pipeline with stages responsible for data acquisition, data analysis, data pre-processing, model creation and evaluation of results. With the NVIDIA NeMo framework it was possible to implement the QuartzNet15x5 architecture, based on 1D time-channel separable convolutions. Following a data-centric methodology, the model developed yielded a state-of-the-art Word Error Rate (WER) of 0.0503.
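    For context, loading a QuartzNet15x5 checkpoint in NeMo and transcribing audio takes only a few lines. The sketch below uses NeMo's public English checkpoint name, since the dissertation's European Portuguese model is not publicly named here; note that the transcribe() argument style varies across NeMo versions:

```python
# Requires: pip install nemo_toolkit[asr]
import nemo.collections.asr as nemo_asr

# QuartzNet15x5 is a CTC model built from 1D time-channel separable convolutions.
# "QuartzNet15x5Base-En" is a public English checkpoint; the dissertation's own
# Portuguese model would instead be loaded from a local .nemo file, e.g.
# EncDecCTCModel.restore_from("quartznet_pt.nemo") (hypothetical path).
model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")

# Transcribe a list of audio files (16 kHz mono WAV is the usual input format).
print(model.transcribe(["sample.wav"]))
```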

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
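    At its simplest, the SCR pipeline the survey describes indexes ASR output with standard IR machinery. A toy inverted index over transcripts (all identifiers invented for illustration) shows the basic idea:

```python
from collections import defaultdict

# Toy spoken "documents": ASR transcripts keyed by recording id.
transcripts = {
    "ep01": "welcome to the show today we discuss speech retrieval",
    "ep02": "spontaneous conversational speech is hard to index",
}

# Build an inverted index: term -> set of recordings containing it.
index = defaultdict(set)
for doc_id, text in transcripts.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query: str) -> set:
    """Return recordings containing every query term (boolean AND)."""
    results = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("speech"))                 # {'ep01', 'ep02'}
print(search("conversational speech"))  # {'ep02'}
```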