10 research outputs found

    Effects of Noise on a Speaker-Adaptive Statistical Speech Synthesis System

    In this project we study the effects of noise on a speaker-adaptive HMM-based synthesis system built on the GlottHMM vocoder. The average voice model is trained on clean data, but it is adapted to the target speaker using speech samples that have been corrupted by artificially adding background noise to simulate low-quality recordings. The synthesized speech, played without background noise, should not be compromised in intelligibility or naturalness. A comparison is made to a system based on the STRAIGHT vocoder when the background noise is babble noise. Both objective and subjective evaluation methods were conducted. GlottHMM is found to be less robust against severe noise. When the noise is less intrusive, the objective measures gave contradictory results and no preference for either vocoder was shown in the listening tests. At moderate noise levels, GlottHMM performs as well as the STRAIGHT vocoder.
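The corruption step described above (adding background noise at a controlled level to clean adaptation data) can be sketched as follows. This is a generic illustration, not code from the cited system; the function name and interface are illustrative.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that mixing it with `speech` yields the target SNR in dB.

    A minimal sketch of artificially corrupting clean speech with
    background noise; real pipelines would also handle silence and
    per-utterance level normalization.
    """
    # Match lengths by tiling and truncating the noise signal.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain g such that 10*log10(p_speech / (g**2 * p_noise)) == snr_db.
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise
```

Lower `snr_db` values simulate the more severe recording conditions under which the two vocoders were compared.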

    Robust Speaker-Adaptive HMM-Based Text-to-Speech Synthesis


    Secure Speech Biometric Templates


    Intelligibility enhancement of synthetic speech in noise

    EC Seventh Framework Programme (FP7/2007-2013)

    Speech technology can facilitate human-machine interaction and create new communication interfaces. Text-to-Speech (TTS) systems provide speech output for dialogue, notification and reading applications, as well as personalized voices for people who have lost the use of their own. TTS systems are built to produce synthetic voices that should sound as natural, expressive and intelligible as possible and, if necessary, be similar to a particular speaker. Although naturalness is an important requirement, providing the correct information in adverse conditions can be crucial to certain applications. Speech that adapts or reacts to different listening conditions can in turn be more expressive and natural.

    In this work we focus on enhancing the intelligibility of TTS voices in additive noise. For that we adopt the statistical parametric paradigm for TTS in the shape of a hidden Markov model (HMM-) based speech synthesis system that allows for flexible enhancement strategies. Little is known about which human speech production mechanisms actually increase intelligibility in noise and how the choice of mechanism relates to noise type, so we approached the problem from another perspective: using mathematical models of hearing speech in noise. To find which models are better at predicting the intelligibility of TTS in noise, we performed listening evaluations to collect subjective intelligibility scores, which we then compared to the models' predictions. In these evaluations we observed that modifications performed on the spectral envelope of speech can increase intelligibility significantly, particularly if the strength of the modification depends on the noise and its level. We used these findings to inform the decision of which model to use when automatically modifying the spectral envelope of the speech according to the noise.

    We devised two methods, both involving cepstral coefficient modifications. The first was applied during extraction while training the acoustic models, and the other when generating a voice using pre-trained TTS models. The latter has the advantage of being able to address fluctuating noise. To increase the intelligibility of synthetic speech at generation time, we proposed a method for Mel cepstral coefficient modification based on the glimpse proportion measure, the most promising of the models of speech intelligibility that we evaluated. An extensive series of listening experiments demonstrated that this method brings significant intelligibility gains to TTS voices while not requiring additional recordings of clear or Lombard speech. To further improve intelligibility, we combined our method with noise-independent enhancement approaches based on the acoustics of highly intelligible speech. This combined solution was as effective for stationary noise as for the challenging competing-speaker scenario, obtaining up to 4 dB of equivalent intensity gain. Finally, we proposed an extension to the speech enhancement paradigm to account not only for energetic masking of signals but also for linguistic confusability of words in sentences. We found that word-level confusability, a challenging value to predict, can be used as an additional prior to increase intelligibility even for simple enhancement methods like energy reallocation between words. These findings motivate further research into solutions that can tackle the effect of energetic masking on the auditory system as well as on higher levels of processing.
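The core idea behind the glimpse proportion measure is to count the time-frequency regions where speech remains audible above the noise. The sketch below is a deliberately simplified illustration operating on spectrogram bins; the actual measure (Cooke's glimpse proportion, as used in the thesis) operates on auditory gammatone excitation patterns, and the 3 dB threshold here is the commonly cited default, not a value taken from this work.

```python
import numpy as np

def glimpse_proportion(speech_spec_db, noise_spec_db, threshold_db=3.0):
    """Fraction of time-frequency cells where the speech level exceeds
    the noise level by at least `threshold_db`.

    Simplified sketch of the glimpse-counting idea on dB spectrograms;
    higher values predict better intelligibility in that noise.
    """
    glimpses = speech_spec_db > noise_spec_db + threshold_db
    return glimpses.mean()
```

An enhancement method driven by this measure would modify the speech spectral envelope so as to raise the proportion of such "glimpsed" cells for the current noise.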

    Recognizing Human Faces: Physical Modeling and Pattern Classification

    Although significant work has been done in the field of face recognition, the performance of the state-of-the-art face recognition algorithms is not good enough to be effective in operational systems. Most algorithms work well for controlled images but are quite susceptible to changes in illumination, pose, etc. In this dissertation, we propose methods which address these issues, to recognize faces in more realistic scenarios. The developed approaches show the importance of physical modeling, contextual constraints and pattern classification for this task. For still image-based face recognition, we develop an algorithm to recognize faces illuminated by arbitrarily placed, multiple light sources, given just a single image. Though the problem is ill-posed in its generality, linear approximations to the subspace of Lambertian images in combination with rank constraints on unknown facial shape and albedo are used to make it tractable. In addition, we develop a purely geometric illumination-invariant matching algorithm that makes use of the bilateral symmetry of human faces. In particular, we prove that the set of images of bilaterally symmetric objects can be partitioned into equivalence classes such that it is always possible to distinguish between two objects belonging to different equivalence classes using just one image per object. For recognizing faces in videos, the challenge lies in suitable characterization of faces using the information available in the video. We propose a method that models a face as a linear dynamical system whose appearance changes with pose. Though the proposed method performs very well on the available datasets, it does not explicitly take the 3D structure or illumination conditions into account. To address these issues, we propose an algorithm to perform 3D facial pose tracking in videos. 
The approach combines the structural advantages of geometric modeling with the statistical advantages of particle-filter-based inference to recover the 3D configuration of facial features in each frame of the video. The recovered 3D configuration parameters are further used to recognize faces in videos. From a pattern classification point of view, automatic face recognition presents a unique challenge due to the presence of just one (or a few) sample(s) per identity. To address this, we develop a cohort-based framework that makes use of the large number of non-match samples present in the database to improve verification and identification performance.
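Using non-match samples to calibrate a raw match score is commonly done with cohort score normalization. The sketch below shows generic z-norm-style normalization as one concrete instance of the idea; the dissertation's exact cohort framework may differ, and the function name is illustrative.

```python
import numpy as np

def cohort_normalized_score(raw_score, cohort_scores):
    """Normalize a verification score against a cohort of non-match scores.

    A generic z-norm sketch: scores are expressed in standard deviations
    above the mean of the non-match (impostor) cohort, making a single
    decision threshold more comparable across identities.
    """
    cohort_scores = np.asarray(cohort_scores, dtype=float)
    return (raw_score - cohort_scores.mean()) / cohort_scores.std()
```

A genuine match should then stand out as a large positive normalized score even when the absolute score scale varies from identity to identity.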

    Transformation de l'intonation : application à la synthèse de la parole et à la transformation de voix (Transformation of intonation: application to speech synthesis and voice transformation)

    The work presented in this thesis lies within the scope of prosody conversion, and more particularly the conversion of the fundamental frequency (F0), which is considered a prominent factor in prosody processing. This document deals with the different steps necessary to build such a conversion system: stylization, clustering and conversion of melodic contours. For each step, we propose a methodology that takes into account the issues and difficulties encountered in the previous one. A B-spline based approach is first proposed to model the melodic contours. Then, to represent the melodic space of a speaker, an HMM-based approach is introduced. Finally, a prosody transformation methodology using non-parallel corpora, based on a speaker adaptation technique, is derived. The results we obtain tend to show that it is necessary to model the evolution of the melody and to drive the transformation system using morpho-syntactic information.
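The first step above, B-spline stylization of melodic (F0) contours, amounts to fitting a smooth spline to the voiced F0 samples. The sketch below uses SciPy's smoothing-spline routines to illustrate the idea; the cubic order and smoothing factor are illustrative choices, not parameters taken from the thesis.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def stylize_f0(times, f0, smoothing=1.0):
    """Fit a smoothing cubic B-spline to an F0 contour and return the
    stylized contour sampled at the same time instants.

    A minimal sketch of B-spline contour stylization: `smoothing`
    trades fidelity to the raw F0 track against contour smoothness.
    """
    tck = splrep(times, f0, k=3, s=smoothing)  # knots + coefficients
    return splev(times, tck)
```

With `smoothing=0` the spline interpolates the raw track; larger values yield the coarser, stylized contours that are then clustered and converted.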

    Communicative prosody generation using impression attributes of lexicons

    Degree framework: new; report number: 甲3173; degree type: Doctor (Global Information and Telecommunication Studies); date conferred: 2010/9/29; Waseda University degree record number: 新546