201 research outputs found

    Modified SPLICE and its Extension to Non-Stereo Data for Noise Robust Speech Recognition

    In this paper, a modification to the training process of the popular SPLICE algorithm is proposed for noise-robust speech recognition. The modification is based on feature correlations and enables this stereo-based algorithm to improve performance in all noise conditions, especially in unseen cases. The modified framework is further extended to non-stereo datasets, where clean and noisy training utterances, but not stereo counterparts, are required. Finally, an MLLR-based, computationally efficient run-time noise adaptation method in the SPLICE framework is proposed. The modified SPLICE shows 8.6% absolute improvement over SPLICE on Test C of the Aurora-2 database, and 2.93% overall. The non-stereo method shows 10.37% and 6.93% absolute improvements over the Aurora-2 and Aurora-4 baseline models, respectively. Run-time adaptation shows 9.89% absolute improvement in the modified framework compared to SPLICE for Test C, and 4.96% overall w.r.t. standard MLLR adaptation on HMMs.
    Comment: Submitted to the Automatic Speech Recognition and Understanding (ASRU) 2013 Workshop
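The core operation of the original SPLICE algorithm (the baseline being modified here, not the modifications themselves) is a posterior-weighted bias correction of each noisy feature vector under a GMM trained on noisy features. A minimal sketch, with all parameter names illustrative:

```python
import numpy as np

def splice_enhance(y, means, variances, priors, biases):
    """SPLICE-style compensation of one noisy feature vector y.

    means, variances, priors: parameters of a diagonal-covariance GMM
    trained on noisy features; biases: per-component correction vectors
    learned from stereo (clean, noisy) training pairs.
    """
    # log N(y; mu_k, diag(var_k)) for each mixture component k
    log_lik = -0.5 * np.sum((y - means) ** 2 / variances
                            + np.log(2 * np.pi * variances), axis=1)
    log_post = np.log(priors) + log_lik
    log_post -= log_post.max()      # numerical stability before exponentiating
    post = np.exp(log_post)
    post /= post.sum()              # component posteriors p(k | y)
    # x_hat = y + sum_k p(k|y) * b_k
    return y + post @ biases
```

With a component far from y, its posterior vanishes and the estimate reduces to y plus the bias of the dominant component.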

    A Bayesian Network View on Acoustic Model-Based Techniques for Robust Speech Recognition

    This article provides a unifying Bayesian network view of various approaches to acoustic model adaptation, missing-feature techniques, and uncertainty decoding that are well known in the literature on robust automatic speech recognition. Representatives of these classes can often be deduced from a Bayesian network that extends the conventional hidden Markov models used in speech recognition. These extensions, in turn, can in many cases be motivated by an underlying observation model that relates clean and distorted feature vectors. By converting the observation models into a Bayesian network representation, we formulate the corresponding compensation rules, leading to a unified view of known derivations as well as to new formulations for certain approaches. The generic Bayesian perspective provided in this contribution thus highlights structural differences and similarities between the analyzed approaches.
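One compensation rule covered by this kind of Bayesian network view, Gaussian uncertainty decoding, has a particularly compact form: marginalizing a Gaussian observation model over the clean feature simply inflates the state variance. A minimal diagonal-covariance sketch, with illustrative names:

```python
import math

def uncertainty_decoding_loglik(y, mu, var, obs_var):
    """Log-likelihood of a noisy observation y under one HMM state with
    Gaussian emission N(mu, var), marginalizing over the clean feature
    with a Gaussian observation model of per-dimension variance obs_var.

    Integrating N(y; x, obs_var) * N(x; mu, var) over x yields
    N(y; mu, var + obs_var): the feature uncertainty adds to the
    state variance, dimension by dimension.
    """
    total = 0.0
    for yi, mi, vi, oi in zip(y, mu, var, obs_var):
        v = vi + oi
        total += -0.5 * (math.log(2 * math.pi * v) + (yi - mi) ** 2 / v)
    return total
```

Setting obs_var to zero recovers conventional decoding; larger uncertainty flattens the likelihood, so outlying observations are penalized less.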

    Speech Recognition in Unknown Noisy Conditions


    Robust speech recognition under band-limited channels and other channel distortions

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, June 200

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals, methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and other applications able to operate in real-world environments, such as mobile communication services and smart homes.

    Cultural Context-Aware Models and IT Applications for the Exploitation of Musical Heritage

    Information engineering has always expanded its scope by inspiring innovation in different scientific disciplines. In particular, over the last sixty years, music and engineering have forged a strong connection in the discipline known as “Sound and Music Computing”. Musical heritage is a paradigmatic case that includes several multi-faceted cultural artefacts and traditions. Several issues arise from the analog-digital transfer of cultural objects, concerning their creation, preservation, access, analysis and experience. The keystone is the relationship of these digitized cultural objects with their carrier and cultural context. The terms “cultural context” and “cultural context awareness” are delineated, alongside the concepts of contextual information and metadata; their role is critical, since they maintain the integrity of the object, its meaning and its cultural context. This thesis explores three main case studies concerning historical audio recordings and ancient musical instruments, aiming to delineate models to preserve, analyze, access and experience the digital versions of these three prominent examples of musical heritage.
    The first case study concerns analog magnetic tapes and, in particular, tape music, an experimental music genre born in the second half of the twentieth century. This case study has relevant implications from the musicological, philological and archival points of view, since the carrier plays a paramount role and its tight connection with the content can easily break during the digitization process or the access phase. To help musicologists and audio technicians in their work, several tools based on artificial intelligence are evaluated on tasks such as discontinuity detection and equalization recognition. Considering the peculiarities of tape music, the philological problem of stemmatics in digitized audio documents is tackled: an algorithm based on phylogenetic techniques is proposed and assessed, confirming the suitability of these techniques for this task. A methodology for historically faithful access to digitized tape music recordings is then introduced, taking into account contextual information and its relationship with the carrier and the replay device. Based on this methodology, an Android app that virtualizes a tape recorder is presented, together with its assessment. Furthermore, two web applications are proposed to faithfully experience digitized 78 rpm discs and magnetic tape recordings, respectively. Finally, a prototype web application for musicological analysis is presented, which aims to concentrate a relevant part of the knowledge acquired in this work into a single interface.
    The second case study is a corpus of Arab-Andalusian music suitable for computational research, which opens new opportunities for musicological studies through data-driven analysis. The description of the corpus is based on the five criteria formalized in the CompMusic project of the University Pompeu Fabra of Barcelona: purpose, coverage, completeness, quality and re-usability. Four Jupyter notebooks were developed to provide computational musicologists with a useful tool for analyzing and using the data and metadata of the corpus.
    The third case study concerns an exceptional historical musical instrument: an ancient pan flute exhibited at the Museum of Archaeological Sciences and Art of the University of Padova. The final objective was the creation of a multimedia installation to valorize this precious artifact and to allow visitors to interact with the archaeological find and learn its history. This case study provided the opportunity to develop a methodology suitable for the valorization of this ancient musical instrument, but also extensible to other artifacts or museum collections. Both the methodology and the resulting multimedia installation are presented, followed by an assessment carried out by a multidisciplinary group of experts.
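The discontinuity-detection task mentioned for the tape-music case study can be illustrated with a deliberately naive energy-based detector. The thesis evaluates AI-based tools for this, so the following is only a toy baseline with invented parameters:

```python
import numpy as np

def detect_discontinuities(x, frame=256, ratio=8.0):
    """Flag frames whose short-time energy jumps by more than `ratio`
    relative to the previous frame -- a naive stand-in for learned
    discontinuity detectors (illustrative only).

    Returns the indices of frames where an abrupt energy change begins.
    """
    n = len(x) // frame
    # mean energy per non-overlapping frame (small floor avoids log(0))
    e = (x[:n * frame].reshape(n, frame) ** 2).mean(1) + 1e-12
    jumps = np.abs(np.log(e[1:] / e[:-1])) > np.log(ratio)
    return [i + 1 for i in np.flatnonzero(jumps)]
```

A splice or dropout in a digitized tape transfer typically shows up as such an energy jump; a learned detector would replace the fixed threshold with a trained classifier over richer features.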

    Soft margin estimation for automatic speech recognition

    In this study, a new discriminative learning framework called soft margin estimation (SME) is proposed for estimating the parameters of continuous-density hidden Markov models (HMMs). The proposed method makes direct use of the successful idea of margins in support vector machines to improve generalization capability, and of decision feedback learning in discriminative training to enhance model separation in classifier design. SME directly maximizes the separation of competing models so that testing samples approach a correct decision whenever their deviation from the training samples lies within a safe margin. Frame and utterance selection are integrated into a unified framework to select the training utterances and frames critical for discriminating competing models. SME offers a flexible and rigorous framework that facilitates the incorporation of new margin-based optimization criteria into HMM training. The choice of various loss functions is illustrated, and different kinds of separation measures are defined under a unified SME framework. SME is also shown to be able to jointly optimize feature extraction and HMMs. Both the generalized probabilistic descent algorithm and the extended Baum-Welch algorithm are applied to solve SME. SME has demonstrated a clear advantage over other discriminative training methods in several speech recognition tasks. Tested on the TIDIGITS digit recognition task, the proposed SME approach achieves a string accuracy of 99.61%, the best result ever reported in the literature. On the 5k-word Wall Street Journal task, SME reduced the word error rate (WER) from 5.06% with MLE models to 3.81%, a 25% relative WER reduction. This is the first attempt to show the effectiveness of margin-based acoustic modeling for large-vocabulary continuous speech recognition in an HMM framework.
    The generalization capability of SME was also demonstrated on the Aurora 2 robust speech recognition task, with around 30% relative WER reduction from the clean-trained baseline.
    Ph.D. Committee Chair: Dr. Chin-Hui Lee; Committee Member: Dr. Anthony Joseph Yezzi; Committee Member: Dr. Biing-Hwang (Fred) Juang; Committee Member: Dr. Mark Clements; Committee Member: Dr. Ming Yua
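The margin idea at the heart of SME can be sketched as a hinge-style loss on a separation measure, with token selection keeping only samples inside the margin. This is a conceptual illustration of margin-based discriminative training, not the exact SME objective optimized in the thesis:

```python
def soft_margin_loss(d, margin=1.0):
    """Hinge-style loss for one training token.

    d is a separation measure, e.g. the log-likelihood of the correct
    model minus that of the best competing model. Tokens whose
    separation already exceeds the margin contribute no loss.
    """
    return max(0.0, margin - d)

def select_and_sum(separations, margin=1.0):
    """Frame/utterance selection: only tokens inside the margin
    (d < margin) drive the parameter update, mirroring SME's idea of
    concentrating training on samples critical for discrimination."""
    return sum(soft_margin_loss(d, margin) for d in separations if d < margin)
```

Minimizing this quantity over the HMM parameters pushes the separation of every selected token beyond the margin, which is the generalization mechanism borrowed from support vector machines.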

    Synergy of Acoustic-Phonetics and Auditory Modeling Towards Robust Speech Recognition

    The problem addressed in this work is that of enhancing speech signals corrupted by additive noise and improving the performance of automatic speech recognizers in noisy conditions. The enhanced speech signals can also improve the intelligibility of speech in noisy conditions for human listeners with hearing impairment as well as for normal-hearing listeners. The original Phase Opponency (PO) model, proposed to detect tones in noise, simulates the processing of information in neural discharge times and exploits the frequency-dependent phase properties of the tuned filters in the auditory periphery, along with coincidence detection across auditory-nerve fibers, to extract temporal cues. The Modified Phase Opponency (MPO) model proposed here alters the components of the PO model in such a way that its basic functionality is maintained, while the various properties of the model can be analyzed and modified independently of each other. This work presents a detailed mathematical formulation of the MPO model and of the relation between the properties of the narrowband signal to be detected and the properties of the MPO model. The MPO speech enhancement scheme is based on the premise that speech signals are composed of combinations of narrowband signals (i.e., harmonics) with varying amplitudes. The MPO enhancement scheme outperforms many other speech enhancement techniques when evaluated with different objective quality measures. Automatic speech recognition experiments show that replacing noisy speech signals with the corresponding MPO-enhanced signals improves recognition accuracy at low SNRs, with the amount of improvement varying with the type of corrupting noise.
    Perceptual experiments indicate that (a) there is little perceptual difference between MPO-processed clean speech signals and the corresponding original clean signals, and (b) the MPO-enhanced speech signals are preferred over the output of the other enhancement methods when the speech is corrupted by subway noise, whereas the outputs of the other enhancement schemes are preferred under car noise.
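The phase-opponency principle of detecting a tone through phase differences between two channels can be caricatured with two channels that differ by a pure delay of half a period of the target frequency: a tone at that frequency arrives in anti-phase across the channels, while broadband noise decorrelates. This toy detector only illustrates the principle and is not the PO or MPO model itself:

```python
import numpy as np

def phase_opponency_detect(x, f0, fs, threshold=-0.5):
    """Toy phase-opponency-style tone detector (illustrative only).

    The two "channels" are x(t) and x(t - d), where d is half a period
    of the target frequency f0, so their phase difference at f0 is pi.
    Strong anti-correlation between the channels (below `threshold`)
    signals a tone near f0; noise yields near-zero correlation.
    """
    d = int(round(fs / (2 * f0)))          # half a period, in samples
    a, b = x[d:], x[:-d]                   # channel pair
    r = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return r < threshold
```

In the real PO/MPO models the delay is replaced by tuned auditory filters with frequency-dependent phase responses, and the correlation by coincidence detection across nerve fibers.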

    Computational Multimedia for Video Self Modeling

    Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of himself or herself performing it. This is the idea behind the psychological theory of self-efficacy: you can learn to perform certain tasks because you see yourself doing them, which provides the most ideal form of behavior modeling. The effectiveness of VSM has been demonstrated for many different types of disabilities and behavioral problems, ranging from stuttering, inappropriate social behaviors, autism and selective mutism to sports training. However, there is an inherent difficulty associated with the production of VSM material: prolonged and persistent video recording is required to capture the rare, if not altogether absent, snippets that can be strung together to form novel video sequences of the target skill. To solve this problem, in this dissertation we use computational multimedia techniques to facilitate the creation of synthetic visual content for self-modeling that can be used by a learner and his or her therapist with a minimum amount of training data. There are three major technical contributions in my research. First, I developed an Adaptive Video Re-sampling algorithm to synthesize realistic lip-synchronized video with minimal motion jitter. Second, to denoise and complete the depth maps captured by structured-light sensing systems, I introduced a layer-based probabilistic model to account for various types of uncertainty in the depth measurement. Third, I developed a simple and robust bundle-adjustment-based framework for calibrating a network of multiple wide-baseline RGB and depth cameras.
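The jitter-minimization idea behind re-sampling frames into a smooth novel sequence can be sketched as a Viterbi-style dynamic program: pick one candidate frame per time step so that the total feature change between consecutive chosen frames is minimal. This is a toy sketch of the concept, not the Adaptive Video Re-sampling algorithm from the dissertation:

```python
import numpy as np

def min_jitter_path(candidates):
    """Select one candidate frame per time step minimizing the sum of
    squared feature differences between consecutive chosen frames.

    candidates: list of (k_t, d) arrays, the feature vectors of the
    candidate frames available at each time step t.
    Returns the chosen candidate index for each step.
    """
    T = len(candidates)
    cost = np.zeros(len(candidates[0]))    # best cost ending at each frame
    back = []                              # backpointers per transition
    for t in range(1, T):
        # pairwise squared distances between step t-1 and step t frames
        diff = candidates[t - 1][:, None, :] - candidates[t][None, :, :]
        total = cost[:, None] + (diff ** 2).sum(-1)
        back.append(total.argmin(0))       # best predecessor per frame
        cost = total.min(0)
    # backtrack the cheapest path
    path = [int(cost.argmin())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

With per-frame features such as lip-region descriptors, this kind of path search strings together the visually smoothest sequence from the available snippets.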