173 research outputs found

    Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

    This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our goal for this workshop was to bring technologies currently used in speech recognition and synthesis to a new level, i.e. to make them the core of a new HMM-based mapping system. We investigated the idea of statistical mapping, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for real-time, reactive generation of new trajectories from input labels and for real-time regression in a continuous-to-continuous use case. As a result, we developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in real time, a reactive audiovisual laughter synthesiser, and a prototype demonstrating the real-time reconstruction of lower-body gait motion strictly from upper-body motion, with conservation of its stylistic properties. The project was also an opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library, and explore the development of a real-time gesture recognition tool.
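    The continuous-to-continuous mapping mentioned above is typically realised as Gaussian mixture regression: a GMM is trained on joint input-output vectors and the output is predicted as the conditional expectation given the input. The following is a minimal sketch of that idea using NumPy and scikit-learn; the function names are illustrative and this is not the Mage library's API.

```python
# Minimal sketch of GMM-based regression (GMR): predict output features y
# from input features x using a GMM trained on joint vectors [x, y].
# Illustrative only -- not the Mage library API.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X, Y, n_components=8, seed=0):
    """Fit a GMM on stacked [input, output] feature vectors."""
    Z = np.hstack([X, Y])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(Z)
    return gmm, X.shape[1]

def gmr_predict(gmm, dim_x, x):
    """Conditional expectation E[y | x] under the joint GMM."""
    mu_x = gmm.means_[:, :dim_x]
    mu_y = gmm.means_[:, dim_x:]
    Sxx = gmm.covariances_[:, :dim_x, :dim_x]
    Syx = gmm.covariances_[:, dim_x:, :dim_x]
    diffs = x - mu_x                       # (n_components, dim_x)
    # Responsibility of each mixture component for the input x.
    log_w = np.log(gmm.weights_)
    log_r = np.array([
        log_w[k] - 0.5 * (diffs[k] @ np.linalg.solve(Sxx[k], diffs[k])
                          + np.linalg.slogdet(2 * np.pi * Sxx[k])[1])
        for k in range(gmm.n_components)])
    r = np.exp(log_r - log_r.max())
    r /= r.sum()
    # Per-component conditional means, blended by responsibility.
    y_k = np.array([mu_y[k] + Syx[k] @ np.linalg.solve(Sxx[k], diffs[k])
                    for k in range(gmm.n_components)])
    return r @ y_k

# Usage (illustrative): X, Y are (n_frames, dim_x) and (n_frames, dim_y) arrays.
# gmm, dim_x = fit_joint_gmm(X, Y)
# y_hat = gmr_predict(gmm, dim_x, x_new)
```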

    Cross-Lingual Voice Conversion with Non-Parallel Data

    In this project, a Phonetic Posteriorgram (PPG)-based voice conversion system is implemented. The main goal is to perform and evaluate conversions of singing voice, considering both cross-gender and cross-lingual scenarios. Additionally, the use of spectral-envelope-based MFCCs and of a pseudo-singing dataset for ASR training is proposed in order to improve the performance of the system in the singing context.
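    As a rough illustration of the feature side of such a pipeline, the sketch below computes standard MFCCs from an audio file with librosa. It is a generic example with assumed parameter values, not the project's spectral-envelope-based MFCC front end.

```python
# Minimal sketch: MFCC extraction for an ASR front end, here with librosa.
# Generic illustration only -- not the project's actual feature pipeline.
import librosa

def extract_mfcc(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)               # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=1024, hop_length=160)  # 10 ms hop
    return mfcc.T                                       # (frames, n_mfcc)
```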

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals, methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other applications able to operate in real-world environments, such as mobile communication services and smart homes.

    Real-Time Subtitle Generator for Sinhala Speech

    In today's digital era, the significance of speech recognition technology cannot be overstated, as it plays a pivotal role in enabling human-computer interaction and supporting various applications. This paper focuses on the development of a real-time subtitle generator for Sinhala speech using speech recognition techniques. The CMUSphinx toolkit, an open-source toolkit based on the Hidden Markov Model (HMM), is employed for the implementation of the application. Mel-frequency cepstral coefficients (MFCC) are utilized for feature extraction from the given 'wav'-format recordings. The paper places significant emphasis on the importance of a real-time subtitle generator for Sinhala speech and explores the existing literature in the field. It outlines the objectives of the research and discusses the achieved outcomes. By fine-tuning hyperparameters to enhance the recognition accuracy of the system, impressive results of 88.28% training accuracy and an 11.72% Word Error Rate (WER) are attained. The significance of this research is underscored by its methodological advancements, robust performance metrics, and its potential impact on facilitating seamless interactions and applications in the Sinhala speech domain. Keywords: Speech recognition, Real-time, Subtitle, CMUSphinx, Open source, Hidden Markov Model, Mel-frequency cepstral coefficients, 'wav', Accuracy, Word Error Rate.
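    For reference, the Word Error Rate reported above is the word-level Levenshtein (edit) distance between the recognised hypothesis and the reference transcript, divided by the number of reference words. A minimal, toolkit-independent sketch:

```python
# Minimal sketch: word error rate (WER) as word-level Levenshtein distance
# divided by the number of reference words. Generic; not CMUSphinx-specific.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one deleted word out of six reference words -> WER = 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```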

    Subspace Gaussian Mixture Models for Language Identification and Dysarthric Speech Intelligibility Assessment

    In this thesis, we investigated how to efficiently apply subspace Gaussian mixture modeling techniques to two speech technology problems: automatic spoken language identification (LID) and automatic intelligibility assessment of dysarthric speech. One of the most important of these techniques is joint factor analysis (JFA). JFA is essentially a Gaussian mixture model in which the mean of each component is expressed as a sum of low-dimension factors, each representing a different contribution to the speech signal. This factorization makes it possible to compensate for undesired sources of variability, such as the channel. JFA was investigated both as a final classifier and as a feature extractor. In the latter approach, a single subspace including all sources of variability is trained, and points in this subspace are known as i-Vectors. An i-Vector is thus a low-dimension representation of a single utterance, and i-Vectors are a very powerful feature for many machine learning problems. We investigated two different LID systems according to the type of features extracted from speech. First, we extracted acoustic features representing short-time spectral information. In this case, we observed relative improvements with i-Vectors of up to 50% with respect to JFA. We found that the channel subspace of a JFA model also contains language information, whereas i-Vectors discard no language information and, moreover, help to reduce mismatches between training and testing data. For classification, we modeled the i-Vectors of each language with a Gaussian distribution whose covariance matrix is shared among languages. This method is simple and fast, and it worked well without any post-processing. Second, we introduced prosodic and formant information into the i-Vector system. Its performance was below that of the acoustic system, but the two systems were found to be complementary, and their fusion yielded up to a 20% relative improvement over the acoustic system alone. Given the success in LID, and the fact that i-Vectors capture all the information present in the data, we decided to use i-Vectors for another task: the assessment of speech intelligibility in speakers with different types of dysarthria. Speech therapists are very interested in this technology because it would allow them to rate the intelligibility of their patients objectively and consistently. In this case, the input features were extracted from short-term spectral information, and intelligibility was assessed from the i-Vectors calculated for a set of words uttered by the tested speaker. We found that performance was clearly much better when training data from the person who would use the application were available. This limitation could be relaxed with larger training databases; however, the recording process is not easy for people with disabilities, and it is difficult to obtain large datasets of dysarthric speech open to the research community. Finally, the same i-Vector-based architecture used for intelligibility assessment was used to predict the accuracy that an automatic speech recognition (ASR) system would obtain with dysarthric speakers; the only difference between the two was the ground-truth label set used for training. Predicting the performance of an ASR system would increase the confidence of speech therapists in these systems and would reduce health-related costs. The results were not as satisfactory as in the previous case, probably because an ASR system is complex and its accuracy is very difficult to predict from acoustic information alone. Nonetheless, we believe this work opens the door to an interesting research direction for both problems.
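    The language classifier described above, one Gaussian per language with a covariance matrix shared across languages, is equivalent to linear discriminant analysis applied to the i-Vectors. Below is a minimal sketch with scikit-learn, assuming the i-Vectors have already been extracted (dimensions and label counts are illustrative).

```python
# Minimal sketch: classify i-Vectors with one Gaussian per language and a
# covariance matrix shared across languages, which is equivalent to LDA.
# Assumes i-Vectors are already extracted; shapes are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 400))   # 1000 training i-Vectors, dim 400
y_train = rng.integers(0, 6, size=1000)  # 6 language labels
X_test = rng.normal(size=(10, 400))

clf = LinearDiscriminantAnalysis(solver="svd")  # shared within-class covariance
clf.fit(X_train, y_train)
print(clf.predict(X_test))        # predicted language indices
print(clf.predict_proba(X_test))  # posterior probability per language
```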

    Voice source characterization for prosodic and spectral manipulation

    The objective of this dissertation is to study and develop techniques to decompose the speech signal into its two main components: the voice source and the vocal tract. Our main efforts are on glottal pulse analysis and characterization, and we explore the utility of this model in different areas of speech processing: speech synthesis, voice conversion and emotion detection, among others. To this end, we study different techniques for prosodic and spectral manipulation. One of our requirements is that the methods should be robust enough to work with the large databases typical of speech synthesis. We use a speech production model in which the glottal flow produced by the vibrating vocal folds passes through the vocal (and nasal) tract cavities and is radiated by the lips. Removing the effect of the vocal tract from the speech signal to obtain the glottal pulse is known as inverse filtering. We use a parametric model of the glottal pulse directly in the source-filter decomposition phase. In order to validate the accuracy of the parametrization algorithm, we designed a synthetic corpus using LF glottal parameters reported in the literature, complemented with our own results from the vowel database. The results show that our method performs satisfactorily over a wide range of glottal configurations and at different levels of SNR. Our method using the whitened residual compared favorably to the reference method, achieving high quality ratings (Good-Excellent). Our fully parametrized system scored lower than the other two, ranking in third place, but still above the acceptance threshold (Fair-Good). Next, we proposed two methods for prosody modification, one for each of the residual representations described above. The first used our full parametrization system and frame interpolation to perform the desired changes in pitch and duration. The second used resampling of the residual waveform and a frame selection technique to generate a new sequence of frames to be synthesized. The results showed that both methods are rated similarly (Fair-Good) and that more work is needed to reach quality levels similar to the reference methods. As part of this dissertation, we have studied the application of our models in three different areas: voice conversion, voice quality analysis and emotion recognition. We included our speech production model in a reference voice conversion system to evaluate the impact of our parametrization on this task; the evaluators preferred our method over the original one, rating it higher on the MOS scale. To study voice quality, we recorded a small database of isolated, sustained Spanish vowels in four different phonations (modal, rough, creaky and falsetto). Comparing the results with those reported in the literature, we found them to generally agree with previous findings; some differences existed, but they could be attributed to the difficulty of comparing voice qualities produced by different speakers. At the same time, we conducted experiments on voice quality identification, with very good results. We have also evaluated the performance of an automatic emotion classifier based on GMMs using glottal measures. For each emotion, we trained a specific model using different features, comparing our parametrization to a baseline system using spectral and prosodic characteristics. The results of the test were very satisfactory, showing a relative error reduction of more than 20% with respect to the baseline system. The detection accuracy for the individual emotions was also high, improving on previously reported results using the same database. Overall, we can conclude that the glottal source parameters extracted using our algorithm have a positive impact on the field of automatic emotion classification.
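    Inverse filtering, as described above, removes the vocal-tract contribution from the speech signal to recover an estimate of the glottal excitation. A common approximation uses linear prediction: fit an all-pole vocal-tract filter to a frame and filter the frame with its inverse. The sketch below illustrates that generic approach with NumPy/SciPy; it is not the dissertation's parametric LF-based method.

```python
# Minimal sketch: LPC-based inverse filtering to approximate the glottal
# residual. Generic illustration only -- not the dissertation's LF-model
# parametrization.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order=18):
    """Autocorrelation-method LPC; assumes a non-silent, windowed frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))     # A(z) = 1 - sum(a_k z^-k)

def glottal_residual(frame, order=18):
    """Inverse-filter a frame with its own LPC filter A(z)."""
    A = lpc_coefficients(frame, order)
    return lfilter(A, [1.0], frame)        # residual approximates the voice source
```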

    Efficient speaker recognition for mobile devices


    Articulatory-WaveNet: Deep Autoregressive Model for Acoustic-to-Articulatory Inversion

    Acoustic-to-Articulatory Inversion, the estimation of articulatory kinematics from speech, is an important problem that has received significant attention in recent years. Estimated articulatory movements from such models can be used in many applications, including speech synthesis, automatic speech recognition, and facial kinematics for talking-head animation. Knowledge of the positions of the articulators can also be extremely useful in speech therapy systems and in Computer-Aided Language Learning (CALL) and Computer-Aided Pronunciation Training (CAPT) systems for second-language learners. Acoustic-to-Articulatory Inversion is a challenging problem due to the complexity of articulation patterns and significant inter-speaker differences, and even more so when applied to non-native speakers without any kinematic training data. This dissertation addresses these problems through the development of upgraded architectures for Articulatory Inversion. The proposed Articulatory-WaveNet architecture is based on a dilated causal convolutional layer structure that improves the inversion estimates in both speaker-dependent and speaker-independent scenarios. The system has been evaluated on the ElectroMagnetic Articulography Mandarin-Accented English (EMA-MAE) corpus, consisting of 39 speakers, including both native English speakers and Mandarin-accented English speakers. Results show that Articulatory-WaveNet significantly improves the performance of speaker-dependent and speaker-independent Acoustic-to-Articulatory Inversion systems compared to previously reported results.
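    The dilated causal convolutional structure mentioned above can be sketched as a WaveNet-style stack that maps acoustic feature frames to articulator trajectories. The PyTorch code below is a generic illustration with assumed dimensions, not the authors' exact Articulatory-WaveNet configuration.

```python
# Minimal sketch: a WaveNet-style stack of dilated causal 1-D convolutions
# mapping acoustic feature frames to articulatory trajectories.
# Generic illustration -- not the exact Articulatory-WaveNet configuration.
import torch
import torch.nn as nn

class DilatedCausalBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = (2 - 1) * dilation           # left-pad so no future leakage
        self.conv = nn.Conv1d(channels, channels, kernel_size=2,
                              dilation=dilation)
        self.res = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                        # x: (batch, channels, frames)
        h = nn.functional.pad(x, (self.pad, 0))  # causal (left-only) padding
        h = torch.tanh(self.conv(h))
        return x + self.res(h)                   # residual connection

class ArticulatoryStack(nn.Module):
    def __init__(self, in_dim=40, out_dim=12, channels=64, n_layers=8):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, channels, kernel_size=1)
        self.blocks = nn.ModuleList(
            [DilatedCausalBlock(channels, 2 ** i) for i in range(n_layers)])
        self.out = nn.Conv1d(channels, out_dim, kernel_size=1)

    def forward(self, acoustic):                 # (batch, in_dim, frames)
        h = self.inp(acoustic)
        for block in self.blocks:
            h = block(h)
        return self.out(h)                       # (batch, out_dim, frames)

# Example: 40-dim acoustic frames -> 12-dim articulator coordinates
model = ArticulatoryStack()
traj = model(torch.randn(1, 40, 200))
print(traj.shape)  # torch.Size([1, 12, 200])
```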
