230 research outputs found

    Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments

    Eliminating the negative effect of non-stationary environmental noise is a long-standing research topic for automatic speech recognition that still remains an important challenge. Data-driven supervised approaches, including ones based on deep neural networks, have recently emerged as potential alternatives to traditional unsupervised approaches and, with sufficient training, can alleviate the shortcomings of the unsupervised methods in various real-life acoustic environments. In this light, we review recently developed, representative deep learning approaches for tackling non-stationary additive and convolutional degradation of speech with the aim of providing guidelines for those involved in the development of environmentally robust speech recognition systems. We separately discuss single- and multi-channel techniques developed for the front-end and back-end of speech recognition systems, as well as joint front-end and back-end training frameworks.
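    As a rough illustration of the single-channel front-end techniques such overviews survey, the following is a minimal sketch (not taken from the paper) of mask-based spectral denoising: a small DNN predicts a time-frequency mask from noisy log-magnitude features, and the mask is applied to the noisy STFT before resynthesis. The network size, feature choice, and STFT settings are illustrative assumptions.

    ```python
    # Minimal sketch of a mask-based denoising front end (illustrative
    # assumptions, not the paper's method).
    import torch
    import torch.nn as nn

    N_FFT, HOP = 512, 128
    N_BINS = N_FFT // 2 + 1

    class MaskEstimator(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_BINS, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, N_BINS), nn.Sigmoid(),  # mask values in [0, 1]
            )

        def forward(self, log_mag):              # (frames, bins)
            return self.net(log_mag)

    def enhance(noisy, model):
        """Apply the estimated mask to the noisy STFT and resynthesize."""
        window = torch.hann_window(N_FFT)
        spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
        log_mag = torch.log1p(spec.abs()).T      # (frames, bins)
        mask = model(log_mag).T                  # (bins, frames)
        return torch.istft(spec * mask, N_FFT, HOP, window=window)

    if __name__ == "__main__":
        model = MaskEstimator()                  # untrained, for shape checking only
        noisy = torch.randn(16000)               # one second of placeholder audio
        enhanced = enhance(noisy, model)
        print(enhanced.shape)
    ```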

    Direction-Aware Adaptive Online Neural Speech Enhancement with an Augmented Reality Headset in Real Noisy Conversational Environments

    This paper describes the practical response- and performance-aware development of online speech enhancement for an augmented reality (AR) headset that helps a user understand conversations held in real noisy echoic environments (e.g., a cocktail party). One may use a state-of-the-art blind source separation method called fast multichannel nonnegative matrix factorization (FastMNMF) that works well in various environments thanks to its unsupervised nature. Its heavy computational cost, however, prevents its application to real-time processing. In contrast, a supervised beamforming method that uses a deep neural network (DNN) for estimating spatial information of speech and noise readily fits real-time processing, but suffers from drastic performance degradation in mismatched conditions. Given such complementary characteristics, we propose a dual-process robust online speech enhancement method based on DNN-based beamforming with FastMNMF-guided adaptation. FastMNMF (back end) is performed in a mini-batch style, and the noisy and enhanced speech pairs are used together with the original parallel training data for updating the direction-aware DNN (front end) with backpropagation at a computationally allowable interval. This method is combined with a blind dereverberation method called weighted prediction error (WPE) for transcribing, in a streaming manner, the noisy reverberant speech of a speaker who can be detected from video or selected by a user's hand gesture or eye gaze, and for spatially displaying the transcriptions with an AR technique. Our experiment showed that the word error rate was improved by more than 10 points with the run-time adaptation using only twelve minutes of observation.
    Comment: IEEE/RSJ IROS 202
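    DNN-based beamforming front ends of the kind mentioned above typically estimate speech and noise masks, accumulate spatial covariance matrices from them, and derive per-frequency beamforming filters. Below is a minimal numpy sketch of mask-driven MVDR beamforming under that standard recipe; the masks and STFT are random placeholders standing in for DNN outputs and real recordings, and the paper's direction-aware DNN, FastMNMF-guided adaptation, and WPE dereverberation are not reproduced here.

    ```python
    # Minimal sketch of mask-driven MVDR beamforming (illustrative assumptions,
    # not the paper's implementation).
    import numpy as np

    def spatial_covariance(stft, mask):
        """stft: (mics, freq, frames), mask: (freq, frames) -> (freq, mics, mics)."""
        weighted = stft * mask[None]                       # weight observations by the mask
        cov = np.einsum('mft,nft->fmn', weighted, stft.conj())
        return cov / np.maximum(mask.sum(axis=-1)[:, None, None], 1e-8)

    def mvdr_weights(cov_speech, cov_noise, ref_mic=0):
        """Souden-style MVDR: w_f = (R_n^-1 R_s / trace(R_n^-1 R_s)) e_ref."""
        n_freq, n_mics, _ = cov_speech.shape
        weights = np.zeros((n_freq, n_mics), dtype=complex)
        for f in range(n_freq):
            num = np.linalg.solve(cov_noise[f] + 1e-6 * np.eye(n_mics), cov_speech[f])
            weights[f] = (num / np.trace(num))[:, ref_mic]
        return weights

    # Toy usage with random data standing in for a multichannel STFT and DNN masks.
    mics, freq, frames = 4, 257, 100
    stft = np.random.randn(mics, freq, frames) + 1j * np.random.randn(mics, freq, frames)
    speech_mask = np.random.rand(freq, frames)
    noise_mask = 1.0 - speech_mask

    R_s = spatial_covariance(stft, speech_mask)
    R_n = spatial_covariance(stft, noise_mask)
    w = mvdr_weights(R_s, R_n)
    enhanced = np.einsum('fm,mft->ft', w.conj(), stft)     # beamformed STFT
    print(enhanced.shape)
    ```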

    SD-TEAM: Interactive Learning, Self-Evaluation and Multimodal Technologies for Multidomain Spoken Dialog Systems

    Speech technology currently supports the development of dialogue systems that function in the limited domains for which they were trained and in the conditions for which they were designed, that is, specific acoustic conditions, speakers, etc. The international scientific community has made significant efforts in exploring methods for adaptation to different acoustic contexts, tasks and types of user. However, further work is needed to produce multimodal spoken dialogue systems capable of exploiting interactivity to learn online in order to improve their performance. The goal is to produce flexible and dynamic multimodal, interactive systems based on spoken communication, capable of detecting their operating conditions automatically and, especially, of learning from user interactions and experience by evaluating their own performance. Such "living" systems will evolve continuously and without supervision until user satisfaction is achieved. Special attention will be paid to those groups of users for which adaptation and personalisation are essential: amongst others, people with disabilities that lead to communication difficulties (hearing loss, dysfluent speech, ...), mobility problems, and non-native users. In this context, the SD-TEAM Project aims to advance the development of technologies for interactive learning and evaluation. In addition, it will develop flexible distributed architectures that allow synergistic interaction between processing modules from a variety of dialogue systems designed for distinct tasks, user groups, acoustic conditions, etc. These technologies will be demonstrated via multimodal dialogue systems for accessing services from home and for accessing unstructured information, based on the multi-domain systems developed in the previous project TIN2005-08660-C04.

    Studies on noise robust automatic speech recognition

    Noise in everyday acoustic environments such as cars, traffic environments, and cafeterias remains one of the main challenges in automatic speech recognition (ASR). As a research theme, it has received wide attention in conferences and scientific journals focused on speech technology. This article collection reviews both classic and novel approaches suggested for noise robust ASR. The articles are literature reviews written for the spring 2009 seminar course on noise robust automatic speech recognition (course code T-61.6060) held at TKK.
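    One of the classic techniques that reviews of noise robust ASR commonly cover is cepstral mean and variance normalization (CMVN), which removes per-utterance channel and noise offsets from cepstral features. The following is a minimal numpy sketch under that assumption; the feature matrix is a random placeholder rather than real MFCCs.

    ```python
    # Minimal sketch of cepstral mean and variance normalization (CMVN),
    # a classic noise/channel compensation step (illustrative assumptions).
    import numpy as np

    def cmvn(features, eps=1e-8):
        """Normalize each feature dimension to zero mean and unit variance.

        features: (frames, coeffs) array of cepstral features.
        """
        mean = features.mean(axis=0, keepdims=True)
        std = features.std(axis=0, keepdims=True)
        return (features - mean) / (std + eps)

    utterance = np.random.randn(300, 13) * 2.0 + 5.0   # placeholder 13-dim features
    normalized = cmvn(utterance)
    print(normalized.mean(axis=0).round(3), normalized.std(axis=0).round(3))
    ```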