
    Exploring auditory-inspired acoustic features for room acoustic parameter estimation from monaural speech

    Room acoustic parameters that characterize acoustic environments can help to improve signal enhancement algorithms, such as dereverberation, or automatic speech recognition (ASR) by adapting models to the current parameter set. The reverberation time (RT) and the early-to-late reverberation ratio (ELR) are two key parameters. In this paper, we propose a blind ROom Parameter Estimator (ROPE) based on an artificial neural network that learns the mapping from single-microphone speech signals to discrete ranges of the RT and the ELR. Auditory-inspired acoustic features serve as the neural network input; they are generated by a temporal modulation filter bank applied to the speech time-frequency representation. ROPE performance is analyzed in various reverberant environments, in both clean and noisy conditions, for both fullband and subband RT and ELR estimation. The importance of specific temporal modulation frequencies is analyzed by evaluating the contribution of individual filters to ROPE performance. Experimental results show that ROPE is robust against variations caused by room impulse responses (measured versus simulated), mismatched noise levels, and speech variability across different corpora. Compared to state-of-the-art algorithms tested in the Acoustic Characterization of Environments (ACE) challenge, the ROPE model is the only one among the best performers for all individual tasks (RT and ELR estimation from fullband and subband signals). ROPE even improves its fullband estimates when integrating speech-related frequency subbands. Furthermore, the model requires the least computational resources, with a real-time factor at least two times faster than competing algorithms. Results are achieved with an average observation window of 3 s, which is important for real-time applications.
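
    As a rough sketch of the feature pipeline described above, the code below band-pass filters a log-magnitude spectrogram along the time axis with a small temporal modulation filter bank and feeds pooled per-band energies to a neural network classifier over discrete RT classes. All filter bands, pooling choices, and class counts are illustrative assumptions rather than the authors' configuration.

```python
# Hedged sketch of auditory-inspired temporal modulation features for blind
# RT classification, loosely following the ROPE idea. All names and
# parameter values here are illustrative assumptions.
import numpy as np
from scipy.signal import stft, butter, filtfilt
from sklearn.neural_network import MLPClassifier

def modulation_features(x, fs, mod_bands=((2, 4), (4, 8), (8, 16))):
    """Log-magnitude STFT filtered by temporal modulation band-pass filters."""
    f, t, X = stft(x, fs=fs, nperseg=512, noverlap=384)
    logmag = np.log(np.abs(X) + 1e-8)          # time-frequency representation
    frame_rate = fs / (512 - 384)              # frames per second along time axis
    feats = []
    for lo, hi in mod_bands:                   # one band-pass filter per modulation band
        b, a = butter(2, [lo, hi], btype="band", fs=frame_rate)
        band = filtfilt(b, a, logmag, axis=1)  # filter along the time axis
        feats.append(band.var(axis=1))         # per-frequency modulation energy
    return np.concatenate(feats)               # fixed-length feature vector

# Toy usage: classify discrete RT ranges from synthetic 3 s signals.
rng = np.random.default_rng(0)
fs = 16000
X_train = np.stack([modulation_features(rng.standard_normal(3 * fs), fs)
                    for _ in range(20)])
y_train = rng.integers(0, 4, size=20)          # 4 hypothetical RT classes
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_train, y_train)
```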

    Speech enhancement with frequency domain auto-regressive modeling

    Speech applications in far-field real-world settings often deal with signals that are corrupted by reverberation. Dereverberation is therefore an important step to improve the audible quality and to reduce error rates in applications like automatic speech recognition (ASR). We propose a unified framework of speech dereverberation for improving speech quality and ASR performance using the envelope-carrier decomposition provided by an autoregressive (AR) model. The AR model is applied in the frequency domain of the sub-band speech signals to separate the envelope and carrier parts. A novel neural architecture based on a dual-path long short-term memory (DPLSTM) model is proposed, which jointly enhances the sub-band envelope and carrier components. The dereverberated envelope-carrier signals are re-modulated and the sub-band signals are synthesized to reconstruct the audio signal. The DPLSTM model for dereverberation of envelope and carrier components also allows joint learning of the network weights for the downstream ASR task. In ASR tasks on the REVERB challenge dataset as well as on the VOiCES dataset, we show that joint learning of the speech dereverberation network and the E2E ASR model yields significant performance improvements over the baseline ASR system trained on log-mel spectrograms, as well as over other dereverberation benchmarks (average relative improvements of 10-24% over the baseline system). Speech quality improvements, evaluated using subjective listening tests, further highlight the improved quality of the reconstructed audio.
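
    The envelope-carrier decomposition referenced above can be sketched with frequency-domain linear prediction (FDLP): fitting an AR model to the DCT of a sub-band signal yields an approximation of its temporal (Hilbert) envelope, and dividing the signal by that envelope leaves a carrier residual. The sketch below is a minimal illustration under that reading; the function name, model order, and normalization are assumptions, not the paper's implementation.

```python
# Hedged sketch of envelope-carrier decomposition via frequency-domain
# linear prediction (FDLP), the autoregressive idea the paper builds on.
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_toeplitz

def fdlp_envelope(x, order=40):
    """AR model fit in the DCT (frequency) domain yields a temporal envelope."""
    c = dct(x, norm="ortho")                          # real frequency-domain representation
    r = np.correlate(c, c, mode="full")[len(c) - 1:]  # autocorrelation of DCT coeffs
    a = solve_toeplitz(r[:order], -r[1:order + 1])    # Yule-Walker AR coefficients
    a = np.concatenate(([1.0], a))                    # prediction polynomial A(z)
    # The AR "spectrum" over the DCT axis approximates a temporal energy
    # envelope (unnormalized here for brevity).
    H = 1.0 / np.abs(np.fft.rfft(a, 2 * len(x)))[:len(x)]
    return H ** 2

x = np.random.randn(4000) * np.hanning(4000)   # toy sub-band signal
env = fdlp_envelope(x)
carrier = x / np.sqrt(env + 1e-8)              # crude carrier residual after removing envelope
```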

    Joint estimation of reverberation time and early-to-late reverberation ratio from single-channel speech signals

    The reverberation time (RT) and the early-to-late reverberation ratio (ELR) are two key parameters commonly used to characterize acoustic room environments. In contrast to conventional blind estimation methods that process the two parameters separately, we propose a model for joint estimation that predicts the RT and the ELR simultaneously from single-channel speech signals, using either full-band or sub-band frequency data; it is referred to as the joint room parameter estimator (jROPE). An artificial neural network is employed to learn the mapping from acoustic observations to RT and ELR classes. Auditory-inspired acoustic features, obtained by temporal modulation filtering of the speech time-frequency representation, are used as input for the neural network. Based on an in-depth analysis of the dependency between the RT and the ELR, a two-dimensional (RT, ELR) distribution with constrained boundaries is derived, which is then exploited to evaluate four different configurations for jROPE. Experimental results show that, in comparison to the single-task ROPE system which estimates the RT or the ELR individually, jROPE provides improved results for both tasks in various reverberant and (diffuse) noisy environments. Among the four proposed joint types, the one incorporating multi-task learning with shared input and hidden layers yields the best estimation accuracies on average. When encountering extreme reverberant conditions with RTs and ELRs lying beyond the derived (RT, ELR) distribution, the type that treats RT and ELR as a single joint parameter performs particularly robustly. Among state-of-the-art algorithms tested in the Acoustic Characterization of Environments (ACE) challenge, jROPE achieves results comparable to the best for all individual tasks (RT and ELR estimation from full-band and sub-band signals).
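
    The best-performing configuration, multi-task learning with shared input and hidden layers, can be pictured as one trunk with two classification heads. The PyTorch sketch below shows that shape; the layer sizes, class counts, and unweighted loss sum are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a multi-task network with shared input and hidden layers
# and separate RT and ELR classification heads.
import torch
import torch.nn as nn

class JointRTELRNet(nn.Module):
    def __init__(self, n_feats=96, n_rt_classes=8, n_elr_classes=8):
        super().__init__()
        self.shared = nn.Sequential(               # shared input + hidden layers
            nn.Linear(n_feats, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.rt_head = nn.Linear(128, n_rt_classes)    # task-specific heads
        self.elr_head = nn.Linear(128, n_elr_classes)

    def forward(self, x):
        h = self.shared(x)
        return self.rt_head(h), self.elr_head(h)

# Toy usage: both tasks backpropagate through the shared trunk.
model = JointRTELRNet()
feats = torch.randn(4, 96)                         # batch of feature vectors
rt_logits, elr_logits = model(feats)
loss = (nn.functional.cross_entropy(rt_logits, torch.tensor([0, 1, 2, 3]))
        + nn.functional.cross_entropy(elr_logits, torch.tensor([1, 0, 3, 2])))
loss.backward()
```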

    Estimation of room acoustic parameters: the ACE challenge

    Reverberation Time (T60) and Direct-to-Reverberant Ratio (DRR) are important parameters which together can characterize sound captured by microphones in non-anechoic rooms. These parameters matter in speech processing applications such as speech recognition and dereverberation. The values of T60 and DRR can be estimated directly from the Acoustic Impulse Response (AIR) of the room. In practice, the AIR is not normally available, in which case these parameters must be estimated blindly from the observed speech in the microphone signal. The Acoustic Characterization of Environments (ACE) Challenge aimed to determine the state of the art in blind acoustic parameter estimation and to stimulate research in this area. A summary of the ACE Challenge and the corpus used in it are presented, together with an analysis of the results. Existing algorithms were submitted alongside novel contributions, and their comparative results are presented in this paper. The challenge showed that T60 estimation is a mature field where analytical approaches dominate, whilst DRR estimation is a less mature field where machine learning approaches are currently more successful.
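
    When the AIR is available, both parameters follow from standard definitions: T60 from a linear fit to the Schroeder energy decay curve, and DRR from the energy ratio around the direct-path arrival. A minimal sketch, assuming common conventions (a -5 to -25 dB fit range and a 2.5 ms direct-path window) that the challenge specification may define differently:

```python
# Hedged sketch: T60 and DRR computed directly from an acoustic impulse
# response (AIR), using common textbook conventions.
import numpy as np

def t60_from_air(h, fs):
    edc = np.cumsum(h[::-1] ** 2)[::-1]            # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
    i1 = np.argmax(edc_db <= -5)                   # fit a line between -5 dB
    i2 = np.argmax(edc_db <= -25)                  # and -25 dB, extrapolate to -60 dB
    slope = (edc_db[i2] - edc_db[i1]) / ((i2 - i1) / fs)  # dB per second
    return -60.0 / slope

def drr_from_air(h, fs, direct_ms=2.5):
    k = np.argmax(np.abs(h))                       # direct-path arrival
    w = int(direct_ms * 1e-3 * fs)                 # assumed direct-path window
    direct = np.sum(h[max(0, k - w):k + w] ** 2)
    late = np.sum(h[k + w:] ** 2)
    return 10 * np.log10(direct / (late + 1e-12))

fs = 16000
t = np.arange(fs) / fs
h = np.random.randn(fs) * np.exp(-6.91 * t)        # toy AIR with T60 near 1 s
print(t60_from_air(h, fs), drr_from_air(h, fs))
```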

    Studies on noise robust automatic speech recognition

    Noise in everyday acoustic environments such as cars, traffic, and cafeterias remains one of the main challenges in automatic speech recognition (ASR). As a research theme, it has received wide attention in conferences and scientific journals focused on speech technology. This article collection reviews both classic and novel approaches suggested for noise-robust ASR. The articles are literature reviews written for the spring 2009 seminar course on noise-robust automatic speech recognition (course code T-61.6060) held at TKK.

    Robuste Spracherkennung unter raumakustischen Umgebungsbedingungen (Robust speech recognition under room acoustic conditions)

    When a scientific laboratory system for automatic speech recognition (ASR) is transferred into a real-world application, several practical problems arise, one of which is the loss of recognition performance due to surrounding acoustic disturbances. In contrast to additive distortions such as fan noise, the disturbance caused by room reverberation has so far been largely ignored in speech recognition research, even though, as this thesis clearly shows, even mildly reverberant rooms severely degrade recognizer performance. This thesis addresses the problem of room reverberation as a cause of distortion in ASR systems, with the goal of restoring recognition performance to a practically usable level.
    The background of this research is the design of practical command-and-control applications, such as a voice-controlled light switch or similar devices for home and office automation. The design therefore incorporates several restrictive working conditions for the recognizer while still aiming for a high level of robustness; one of these restrictions is minimizing computational complexity to allow implementation on an embedded processor. One chapter comprehensively describes the room acoustic environment, including the behavior of sound fields in rooms. It introduces the speaker-room-microphone (SRM) system, which is expressed in the time domain as the room impulse response (RIR): convolving the RIR with the clean speech signal yields the reverberant signal at the microphone. A theoretical analysis shows that the degree of distortion caused by reverberation depends on two parameters, the reverberation time T60 and the speaker-to-microphone distance (SMD). To evaluate the dependency of the recognition rate on the degree of distortion, a series of recognition experiments was conducted, confirming the influence of both T60 and SMD. Further experiments showed that ASR is barely affected by high-frequency reverberation, whereas low-frequency reverberation has a detrimental effect on the recognition rate. A literature survey concludes that, although several approaches claim significant improvements, none of them fulfils the practical implementation criteria stated above. This thesis proposes a new method called harmonicity-based feature analysis (HFA), built on three ideas derived in the preceding chapters. Experimental results prove that HFA improves the recognition rate in reverberant environments; practically usable recognition rates are even achieved when HFA is combined with reverberant training. The method is further evaluated against three approaches from the literature that also meet the practical implementation criteria, and combinations of HFA with some of these approaches are tested. A final chapter comprehensively evaluates the two base technologies HFA depends on, fundamental frequency (F0) estimation and the voiced/unvoiced decision (VUD), under reverberant conditions. The results show that no currently available method for either technology works robustly under reverberation. Nevertheless, HFA is shown to cope with the uncertainties of these base technologies and still achieves significant improvements in recognition performance.
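
    A minimal sketch of the SRM relation described above, with random stand-ins for the clean speech and the RIR: the microphone signal is the convolution of the two, and the relative strength of the direct path (governed by the SMD) sets the direct-to-reverberant balance.

```python
# Hedged sketch of the speaker-room-microphone (SRM) relation: the
# microphone signal is the clean speech convolved with the room impulse
# response. All signals here are synthetic stand-ins.
import numpy as np
from scipy.signal import fftconvolve

fs = 16000
rng = np.random.default_rng(0)
clean = rng.standard_normal(3 * fs)                       # stand-in for clean speech
t = np.arange(fs) / fs
rir = rng.standard_normal(fs) * np.exp(-6.91 * t / 0.5)   # toy RIR with T60 near 0.5 s
rir[0] = 1.0                    # direct path; a larger SMD would weaken it relative to the tail
reverberant = fftconvolve(clean, rir)[:len(clean)]        # signal observed at the microphone
```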