    On the Combination of Auditory and Modulation Frequency Channels for ASR applications

    This paper investigates the combination of evidence coming from different frequency channels obtained by filtering the speech signal at different auditory and modulation frequencies. In our previous work (ICASSP 2008), we showed that the combination of classifiers trained on different ranges of modulation frequencies is more effective when performed in a sequential (hierarchical) fashion. In this work we verify that the combination of classifiers trained on different ranges of auditory frequencies is more effective when performed in a parallel fashion. Furthermore, we propose an architecture based on neural networks for combining evidence coming from different auditory-modulation frequency sub-bands that takes advantage of these findings. This reduces the final WER by 6.2% absolute (from 45.8% to 39.6%) w.r.t. the single-classifier approach on an LVCSR task.
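    A minimal sketch of the two fusion schemes contrasted above, using scikit-learn MLPs on toy data; the band features, network sizes, and class count are illustrative placeholders rather than the paper's actual setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for per-band features: two frequency channels,
# 200 frames, 10 features each, 5 phone classes.
n_frames, n_classes = 200, 5
bands = [rng.normal(size=(n_frames, 10)) for _ in range(2)]
labels = rng.integers(0, n_classes, size=n_frames)

# One classifier per frequency channel.
clfs = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(b, labels)
        for b in bands]

# Parallel fusion (auditory channels): average the per-band posteriors.
parallel = np.mean([c.predict_proba(b) for c, b in zip(clfs, bands)], axis=0)

# Hierarchical fusion (modulation channels): the second stage sees its
# band's features together with the first stage's posteriors.
post1 = clfs[0].predict_proba(bands[0])
stage2_in = np.hstack([bands[1], post1])
stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(stage2_in, labels)
hierarchical = stage2.predict_proba(stage2_in)
```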

    Speech and crosstalk detection in multichannel audio

    The analysis of scenarios in which a number of microphones record the activity of speakers, such as in a round-table meeting, presents a number of computational challenges. For example, if each participant wears a microphone, speech from both the microphone's wearer (local speech) and from other participants (crosstalk) is received. The recorded audio can be broadly classified in four ways: local speech, crosstalk plus local speech, crosstalk alone, and silence. We describe two experiments related to the automatic classification of audio into these four classes. The first experiment attempted to optimize a set of acoustic features for use with a Gaussian mixture model (GMM) classifier. A large set of potential acoustic features was considered, some of which have been employed in previous studies. The best-performing features were found to be kurtosis, "fundamentalness," and cross-correlation metrics. The second experiment used these features to train an ergodic hidden Markov model classifier. Tests performed on a large corpus of recorded meetings show classification accuracies of up to 96%, and automatic speech recognition performance close to that obtained using ground-truth segmentation.
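    A reduced sketch of the feature-plus-GMM stage, assuming SciPy and scikit-learn; the frame length, the two-feature subset (kurtosis and peak normalized cross-correlation), and the toy signals are illustrative, and the paper's "fundamentalness" feature is omitted:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.mixture import GaussianMixture

def frame_features(local, cross, frame=4000, hop=2000):
    """Per-frame kurtosis of the local channel plus its peak normalized
    cross-correlation with another channel."""
    feats = []
    for start in range(0, len(local) - frame, hop):
        a = local[start:start + frame]
        b = cross[start:start + frame]
        xc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
        peak = np.max(np.abs(xc)) / (np.std(a) * np.std(b) * len(a) + 1e-12)
        feats.append([kurtosis(a), peak])
    return np.array(feats)

# Toy two-channel "meeting" audio; real use would load microphone signals.
rng = np.random.default_rng(0)
local = rng.normal(size=80_000)
cross = 0.5 * local + rng.normal(size=80_000)

X = frame_features(local, cross)
# A four-component mixture standing in for the four broad classes.
gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(X)
print(gmm.predict(X[:5]))
```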

    Acoustic Simulations of Cochlear Implants in Human and Machine Hearing Research


    Band-pass filtering of the time sequences of spectral parameters for robust wireless speech recognition

    In this paper we address the problem of automatic speech recognition when wireless speech communication systems are involved. In this context, three main sources of distortion should be considered: the acoustic environment, speech coding, and transmission errors. While the first has already received a lot of attention, the last two deserve further investigation in our opinion. We have found that band-pass filtering of the recognition features improves ASR performance when distortions due to these particular communication systems are present. Furthermore, we have evaluated two alternative configurations at different bit error rates (BER) typical of these channels: band-pass filtering of the LP-MFCC parameters, and a modification of RASTA-PLP using a sharper low-pass section; these perform consistently better than LP-MFCC and RASTA-PLP, respectively.
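    A minimal sketch of band-pass filtering feature trajectories along time, in the spirit described above and assuming SciPy; the filter order and the 1-12 Hz pass band are assumed values, not the paper's tuned configuration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_trajectories(feats, frame_rate=100.0, lo=1.0, hi=12.0, order=2):
    """Band-pass filter each coefficient's time trajectory.
    feats: (n_frames, n_coeffs) array, e.g. MFCCs at 100 frames/s."""
    b, a = butter(order, [lo, hi], btype="band", fs=frame_rate)
    return filtfilt(b, a, feats, axis=0)  # zero-phase filtering along time

# Toy feature matrix: 300 frames x 13 cepstral coefficients.
mfcc = np.random.default_rng(0).normal(size=(300, 13))
smoothed = bandpass_trajectories(mfcc)
```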

    Implications of modulation filterbank processing for automatic speech recognition

    In this thesis, two different models of the auditory system were used as feature-extraction front-ends for speech recognition software. Feature extraction was performed using two perception models originally implemented to simulate the results of various psychoacoustic tests (Dau et al. 1996a, 1997a). The main interest lies in the temporal modulation stage of the models, since several studies have found evidence supporting the importance of this stage (e.g., Drullman et al., 1994a,b; Drullman, 1995). Gathering more information about the importance of temporal modulations within a speech recognition framework could lead to a better understanding of the complex speech analysis/recognition mechanism performed by the human auditory system. The models were tested under several conditions derived from sound recordings of standard material for studying the performance of speech recognition algorithms. In addition, an attempt was made to replicate the results obtained by Kanedera et al. (1999) in order to validate the findings on the perceptual importance of the different bands in the modulation-frequency domain.
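    A very reduced sketch of the temporal-modulation stage discussed here, assuming SciPy; the envelope rate and the 2-8 Hz modulation band are illustrative, and the full Dau et al. model adds adaptation loops and a complete modulation filterbank:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def modulation_channel(band_signal, fs, mod_lo, mod_hi, env_rate=100):
    """Envelope of one auditory-filter output, downsampled and then
    band-passed in the modulation-frequency domain."""
    env = np.abs(hilbert(band_signal))     # Hilbert envelope
    env = env[::int(fs / env_rate)]        # crude downsampling to env_rate
    sos = butter(2, [mod_lo, mod_hi], btype="band", fs=env_rate, output="sos")
    return sosfilt(sos, env)

fs = 16_000
t = np.arange(fs) / fs
# 1 kHz carrier with 4 Hz amplitude modulation, a classic probe signal.
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
y = modulation_channel(x, fs, mod_lo=2.0, mod_hi=8.0)
```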

    Features of hearing: applications of machine learning to uncover the building blocks of hearing

    Recent advances in machine learning have instigated a renewed interest in using machine learning approaches to better understand human sensory processing. This line of research is particularly interesting for speech research, since speech comprehension is uniquely human, which complicates obtaining detailed neural recordings. In this thesis, I explore how machine learning can be used to uncover new knowledge about the auditory system, with a focus on discovering robust auditory features. The resulting increased understanding of the noise robustness of human hearing may help to better assist those with hearing loss and improve Automatic Speech Recognition (ASR) systems. First, I show how computational neuroscience and machine learning can be combined to generate hypotheses about auditory features. I introduce a neural feature detection model with a modest number of parameters that is compatible with auditory physiology. By testing feature detector variants in a speech classification task, I confirm the importance of both well-studied and lesser-known auditory features. Second, I investigate whether ASR software is a good candidate model of the human auditory system. By comparing several state-of-the-art ASR systems to the results from humans on a range of psychometric experiments, I show that these ASR systems diverge markedly from humans in at least some psychometric tests. This implies that none of these systems act as a strong proxy for human speech recognition, although some may be useful when asking more narrowly defined questions. For neuroscientists, this thesis exemplifies how machine learning can be used to generate new hypotheses about human hearing, while also highlighting the caveats of investigating systems that may work fundamentally differently from the human brain. For machine learning engineers, I point to tangible directions for improving ASR systems. To motivate the continued cross-fertilization between these fields, a toolbox that allows researchers to assess new ASR systems has been released.

    Hierarchical and Parallel Processing of Modulation Spectrum for ASR applications

    The modulation spectrum is an efficient representation for describing dynamic information in signals. In this work we investigate how to exploit different elements of the modulation spectrum for the extraction of information in automatic speech recognition (ASR). Parallel and hierarchical (sequential) approaches are investigated. Parallel processing combines the outputs of independent classifiers applied to different modulation frequency channels. Hierarchical processing uses different modulation frequency channels sequentially. Experiments are run on an LVCSR task for meeting transcription, and results are reported on the RT05 evaluation data. Processing modulation frequency channels with different classifiers provides a consistent reduction in WER (2% absolute w.r.t. the PLP baseline). Hierarchical processing outperforms parallel processing. The largest WER reduction is obtained through sequential processing moving from high to low modulation frequencies. This model is consistent with several perceptual and physiological studies on auditory processing.
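    As a rough illustration of the representation itself, the modulation spectrum of one channel can be computed as the magnitude spectrum of its energy-envelope trajectory; the frame rate, FFT size, and toy 4 Hz envelope below are assumptions for the example, not the paper's front-end:

```python
import numpy as np

def modulation_spectrum(envelope, frame_rate=100.0, n_fft=256):
    """Magnitude spectrum of a per-frame energy trajectory, e.g. one
    critical-band energy sampled at 100 frames/s. Modulation-frequency
    resolution is frame_rate / n_fft."""
    env = envelope - envelope.mean()              # remove DC
    spec = np.abs(np.fft.rfft(env, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / frame_rate)
    return freqs, spec

# Toy trajectory with a dominant 4 Hz modulation, near syllabic rates.
t = np.arange(400) / 100.0
env = 1.0 + 0.3 * np.sin(2 * np.pi * 4 * t)
freqs, spec = modulation_spectrum(env)
print(freqs[np.argmax(spec)])  # ~4 Hz
```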

    Pitch perception and cochlear implants
