64 research outputs found

    Speech Recognition

    Chapters in the first part of the book cover the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals and methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other applications that can use the information from automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and speech processing applications that operate in real-world environments, like mobile communication services and smart homes.
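
    The feature-extraction step mentioned above is typically realised with mel-frequency cepstral coefficients (MFCCs). Below is a minimal sketch using librosa; the file name, sampling rate, and frame parameters are illustrative assumptions rather than anything prescribed by the book.

```python
# Minimal MFCC extraction sketch (file name and parameters are placeholders).
import librosa

# Load an utterance at 16 kHz, a common rate for ASR front ends.
signal, sr = librosa.load("utterance.wav", sr=16000)

# 13 MFCCs per 25 ms frame with a 10 ms hop, a typical ASR configuration.
mfccs = librosa.feature.mfcc(
    y=signal, sr=sr, n_mfcc=13,
    n_fft=int(0.025 * sr), hop_length=int(0.010 * sr),
)
print(mfccs.shape)  # (13, number_of_frames)
```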

    Methods for pronunciation assessment in computer aided language learning

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Includes bibliographical references (p. 149-176). Learning a foreign language is a challenging endeavor that entails acquiring a wide range of new knowledge, including words, grammar, gestures, sounds, etc. Mastering these skills requires extensive practice by the learner, and opportunities to practice may not always be available. Computer Aided Language Learning (CALL) systems provide non-threatening environments where foreign language skills can be practiced wherever and whenever a student desires. These systems often include several technologies to identify the different types of errors made by a student. This thesis focuses on the problem of identifying mispronunciations made by a foreign language student using a CALL system. We make several assumptions about the nature of the learning activity: it takes place using a dialogue system, it is a task- or game-oriented activity, the student should not be interrupted by the pronunciation feedback system, and the goal of the feedback system is to identify severe mispronunciations with high reliability. Detecting mispronunciations requires a corpus of speech with human judgements of pronunciation quality. Typical approaches to collecting such a corpus use an expert phonetician to both phonetically transcribe and assign judgements of quality to each phone in a corpus; this is time consuming and expensive, and it places an extra burden on the transcriber. We describe a novel method for obtaining phone-level judgements of pronunciation quality by utilizing non-expert, crowd-sourced, word-level judgements of pronunciation. Foreign language learners typically exhibit high variation and pronunciation patterns distinct from those of native speakers, which makes analysis for mispronunciation difficult. We detail a simple but effective method for transforming the vowel space of non-native speakers to make mispronunciation detection more robust and accurate. We show that this transformation not only enhances performance on a simple classification task, but also results in distributions that can be better exploited for mispronunciation detection. This transformation of the vowel space is exploited to train a mispronunciation detector using a variety of features derived from acoustic model scores and vowel class distributions. We confirm that the transformation technique results in more robust and accurate identification of mispronunciations than traditional acoustic models. By Mitchell A. Peabody. Ph.D.
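
    The abstract does not spell out the vowel-space transformation itself, so the sketch below shows one generic way such a normalisation could look: per-vowel shifting and scaling of a learner's formant measurements towards native reference statistics. It is an illustrative assumption, not the thesis's actual method.

```python
# Hypothetical vowel-space normalisation (illustration only, not the thesis's
# exact method): map each learner vowel class onto the mean and spread of a
# native reference distribution.
import numpy as np

def normalise_vowel_space(learner_formants, native_stats):
    """learner_formants: dict vowel -> (N, 2) array of (F1, F2) values in Hz.
    native_stats: dict vowel -> (mean, std), each an array of shape (2,)."""
    transformed = {}
    for vowel, formants in learner_formants.items():
        mu_l = formants.mean(axis=0)
        sd_l = formants.std(axis=0) + 1e-8   # avoid division by zero
        mu_n, sd_n = native_stats[vowel]
        # Shift and scale learner tokens into the native vowel's distribution.
        transformed[vowel] = (formants - mu_l) / sd_l * sd_n + mu_n
    return transformed
```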

    Enrichment of Oesophageal Speech: Voice Conversion with Duration-Matched Synthetic Speech as Target

    Pathological speech such as Oesophageal Speech (OS) is difficult to understand due to the presence of undesired artefacts and the lack of normal healthy speech characteristics. Modern speech technologies and machine learning enable us to transform pathological speech to improve intelligibility and quality. We have used a neural-network-based voice conversion method with the aim of improving the intelligibility and reducing the listening effort (LE) of four OS speakers of varying speaking proficiency. The novelty of this method is the use of synthetic speech matched in duration with the source OS as the target, instead of parallel aligned healthy speech. We evaluated the converted samples from this system using a collection of Automatic Speech Recognition (ASR) systems, an objective intelligibility metric (STOI), and a subjective test. ASR evaluation shows that the proposed system had significantly better word recognition accuracy than unprocessed OS and than baseline systems that used aligned healthy speech as the target. There was an improvement of at least 15% in STOI scores, indicating higher intelligibility for the proposed system compared to unprocessed OS, and higher target similarity in the proposed system compared to the baseline systems. The subjective test reveals a significant preference for the proposed system over unprocessed OS for all OS speakers except one, who was the least proficient OS speaker in the data set. This project was supported by funding from the European Union’s H2020 research and innovation programme under the MSCA GA 675324 (the ENRICH network: www.enrich-etn.eu (accessed on 25 June 2021)), and the Basque Government (PIBA_2018_1_0035 and IT355-19).
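
    For the objective intelligibility evaluation, STOI scores can be computed with the pystoi package. The sketch below is a minimal example; the file names are placeholders, and the paper's exact choice of reference signal (e.g. the duration-matched synthetic target) and evaluation protocol may differ.

```python
# Sketch of an objective intelligibility check with pystoi (file names are
# placeholders; the paper's exact evaluation setup may differ).
import soundfile as sf
from pystoi import stoi

reference, fs = sf.read("reference_target.wav")        # clean reference signal
converted, fs2 = sf.read("converted_oesophageal.wav")  # voice-converted output
assert fs == fs2, "both signals must share the same sampling rate"

# Trim to equal length before scoring.
n = min(len(reference), len(converted))
score = stoi(reference[:n], converted[:n], fs, extended=False)
print(f"STOI = {score:.3f}")  # higher values indicate higher intelligibility
```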

    Feature extraction and event detection for automatic speech recognition


    Application of automatic speech recognition technologies to singing

    The research field of Music Information Retrieval is concerned with the automatic analysis of musical characteristics. One aspect that has not received much attention so far is the automatic analysis of sung lyrics. On the other hand, the field of Automatic Speech Recognition has produced many methods for the automatic analysis of speech, but those have rarely been employed for singing. This thesis analyzes the feasibility of applying various speech recognition methods to singing, and suggests adaptations. In addition, the routes to practical applications for these systems are described. Five tasks are considered: Phoneme recognition, language identification, keyword spotting, lyrics-to-audio alignment, and retrieval of lyrics from sung queries. The main bottleneck in almost all of these tasks lies in the recognition of phonemes from sung audio. Conventional models trained on speech do not perform well when applied to singing. Training models on singing is difficult due to a lack of annotated data. This thesis offers two approaches for generating such data sets. For the first one, speech recordings are made more “song-like”. In the second approach, textual lyrics are automatically aligned to an existing singing data set. In both cases, these new data sets are then used for training new acoustic models, offering considerable improvements over models trained on speech. Building on these improved acoustic models, speech recognition algorithms for the individual tasks were adapted to singing by either improving their robustness to the differing characteristics of singing, or by exploiting the specific features of singing performances. Examples of improving robustness include the use of keyword-filler HMMs for keyword spotting, an i-vector approach for language identification, and a method for alignment and lyrics retrieval that allows highly varying durations. Features of singing are utilized in various ways: In an approach for language identification that is well-suited for long recordings; in a method for keyword spotting based on phoneme durations in singing; and in an algorithm for alignment and retrieval that exploits known phoneme confusions in singing.
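
    One of the two data-generation approaches described above makes speech recordings more "song-like". The thesis's exact augmentation pipeline is not given in the abstract; the sketch below illustrates the general idea with simple time stretching and pitch shifting in librosa, with all parameter values chosen arbitrarily.

```python
# Illustrative "songification" of a speech recording (parameters are arbitrary
# placeholders, not the thesis's actual pipeline).
import librosa
import soundfile as sf

speech, sr = librosa.load("spoken_utterance.wav", sr=None)

# Lengthen durations towards sung phone durations ...
stretched = librosa.effects.time_stretch(speech, rate=0.6)
# ... and shift the pitch a few semitones towards a sung register.
song_like = librosa.effects.pitch_shift(stretched, sr=sr, n_steps=4)

sf.write("song_like_utterance.wav", song_like, sr)
```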

    Robust speech recognition with spectrogram factorisation

    Communication by speech is intrinsic for humans. Since the breakthrough of mobile devices and wireless communication, digital transmission of speech has become ubiquitous. Similarly, the distribution and storage of audio and video data have increased rapidly. However, despite being technically capable of recording and processing audio signals, only a fraction of digital systems and services are actually able to work with spoken input, that is, to operate on the lexical content of speech. One persistent obstacle to practical deployment of automatic speech recognition systems is inadequate robustness against noise and other interferences, which regularly corrupt signals recorded in real-world environments. Speech and diverse noises are both complex signals, which are not trivially separable. Despite decades of research and a multitude of different approaches, the problem has not been solved to a sufficient extent. In particular, the mathematically ill-posed problem of separating multiple sources from a single-channel input requires advanced models and algorithms to be solvable. One promising path is using a composite model of long-context atoms to represent a mixture of non-stationary sources based on their spectro-temporal behaviour. Algorithms derived from the family of non-negative matrix factorisations have been applied to such problems to separate and recognise individual sources like speech. This thesis describes a set of tools developed for non-negative modelling of audio spectrograms, especially involving speech and real-world noise sources. An overview is provided of the complete framework, starting from model and feature definitions, advancing to factorisation algorithms, and finally describing different routes for separation, enhancement, and recognition tasks. Current issues and their potential solutions are discussed both theoretically and from a practical point of view. The included publications describe factorisation-based recognition systems, which have been evaluated on publicly available speech corpora in order to determine the efficiency of various separation and recognition algorithms. Several variants and system combinations that have been proposed in the literature are also discussed. The work covers a broad span of factorisation-based system components, which together aim at providing a practically viable solution to robust processing and recognition of speech in everyday situations.
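
    As a rough illustration of the factorisation idea, the sketch below applies a plain NMF to a magnitude spectrogram and reconstructs a speech estimate by soft masking. It is a simplified stand-in for the thesis's long-context atom models; the file name, component counts, and the atom labelling are assumptions.

```python
# Simplified non-negative spectrogram factorisation sketch (the thesis uses
# long-context atoms and richer models; values here are placeholders).
import numpy as np
import librosa
from sklearn.decomposition import NMF

y, sr = librosa.load("noisy_speech.wav", sr=16000)
S = np.abs(librosa.stft(y, n_fft=512, hop_length=160))  # magnitude spectrogram

# Factorise S ~ W @ H into spectral atoms (W) and their activations (H).
model = NMF(n_components=40, init="random", beta_loss="kullback-leibler",
            solver="mu", max_iter=300, random_state=0)
W = model.fit_transform(S)
H = model.components_

# Assuming the first 20 atoms have been associated with speech (e.g. learnt
# from clean training data), reconstruct a speech estimate by soft masking.
speech_atoms = slice(0, 20)
S_speech = W[:, speech_atoms] @ H[speech_atoms]
mask = S_speech / (W @ H + 1e-8)
S_enhanced = mask * S
```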

    Brains in dialogue: investigating accommodation in live conversational speech for both speech and EEG data.

    One of the phenomena to emerge from the study of human spoken interaction is accommodation, the tendency of an individual’s speech patterning to shift relative to their interlocutor. Whilst the experimental approach to the detection of accommodation has a solid background in the literature, it tends to treat the process of accommodation as a black box. The general approach for detecting accommodation in speech has been to record the speech of a given speaker prior to an interaction and then again afterwards. These two measures are then compared to the speech of the interlocutor to test for similarity: if the speech sample following interaction is more similar, then we can say that accommodation has taken place. Part of the goal of this thesis is to evaluate whether it is possible to look into the black box of speech accommodation and measure it in situ. Given that speech accommodation appears to take place as a result of interaction, it would be reasonable to assume that a similar effect might be observable in other areas contributing to a communicative interaction. The notion of an interacting dyad developing an increased degree of alignment over the course of an interaction has been proposed by psychologists, with theories positing that alignment occurs at multiple levels of engagement, from broad syntactic alignment down to phonetic alignment. The use of speech accommodation as an anchor with which to track the evolution of change in the brain signal may prove to be one approach to investigating the claims made by these theories. The second part of this thesis aims to evaluate whether the phenomenon of accommodation is also observable in the electrical signals generated by the brain, measured using electroencephalography (EEG). However, evaluating the change in the EEG signal over a continuous stretch of time is a hurdle that needs to be tackled: traditional EEG methodologies involve averaging the signal over many repetitions of the same task, which is not a viable option when investigating communicative interaction. Clearly, the evaluation of accommodation in both speech and brain activity, especially for continuously unfolding phenomena such as accommodation, is a non-trivial task. To tackle this, an approach from speech recognition and computer science has been employed. Hidden Markov Models (HMMs) have been used to develop speech recognition systems and have also been used to detect fraudulent attempts to imitate the voice of others. Given that HMMs have successfully been employed to detect the imitation of another person’s speech, they are a good candidate for detecting movement towards or away from an interlocutor during the course of an interaction. In addition, the HMM approach is not domain-specific; it can be used to evaluate any time-varying signal, which allows it to be applied to EEG signals in conjunction with the speech signal. Two experiments are presented here. The behavioural experiment evaluates the ability of an HMM-based approach to detect accommodation by engaging pairs of female Glaswegian speakers in the collaborative DiapixUK task. The results of their interactions are then evaluated both from a traditional phonetic standpoint, by assessing changes in Voice Onset Time (VOT) of stop consonants, vowel formant values, and speech rate over the course of an interaction, and using the HMM-based approach.
    The neural experiment evaluates the ability of the HMM-based approach to detect accommodation in both the speech signal and in brain activity. The same procedure as in the behavioural experiment was repeated, with the addition of EEG caps for both participants, and the data were then evaluated using the HMM-based approach. This thesis presents findings that suggest a function for speech accommodation that has not been explored in the past. This is done through the use of a novel, HMM-based, holistic acoustic-phonetic measurement tool, which produced consistent measures across both experiments. Further to this, the measurement tool is shown to have possible extended uses for EEG data. The presented HMM-based, holistic acoustic-phonetic measurement tool is a novel contribution to the field for the measurement and evaluation of accommodation.
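
    The sketch below illustrates the general HMM idea: fit one model per speaker on pre-interaction material, then track how each speaker's turns score under the partner's model as the conversation unfolds. It is a generic sketch with hmmlearn, not the thesis's exact configuration; the state count and feature choice are assumptions.

```python
# Generic HMM-based accommodation tracking sketch (not the thesis's exact
# configuration; state count and features are assumptions).
import numpy as np
from hmmlearn import hmm

def train_speaker_model(features, n_states=5):
    """features: (n_frames, n_dims) acoustic (or EEG) feature matrix."""
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(features)
    return model

def accommodation_trace(turns, partner_model):
    """Average per-frame log-likelihood of one speaker's turns under the
    partner's model; an upward trend over turns suggests convergence."""
    return np.array([partner_model.score(t) / len(t) for t in turns])
```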

    Scoring heterogeneous speaker vectors using nonlinear transformations and tied PLDA models

    Most current state-of-the-art text-independent speaker recognition systems are based on i-vectors and on probabilistic linear discriminant analysis (PLDA). PLDA assumes that the i-vectors of a trial are homogeneous, i.e., that they have been extracted by the same system; in other words, the enrollment and test i-vectors belong to the same class. However, it is sometimes important to score trials including “heterogeneous” i-vectors, for instance, enrollment i-vectors extracted by an old system and test i-vectors extracted by a newer, more accurate system. In this paper, we introduce a PLDA model that is able to score heterogeneous i-vectors independently of their extraction approach, dimensions, and any other characteristics that make a set of i-vectors of the same speaker belong to different classes. The new model, referred to as nonlinear tied-PLDA (NL-Tied-PLDA), is obtained by a generalization of our recently proposed nonlinear PLDA approach, which jointly estimates the PLDA parameters and the parameters of a nonlinear transformation of the i-vectors. The generalization consists of estimating a class-dependent nonlinear transformation of the i-vectors, with the constraint that the transformed i-vectors of the same speaker share the same speaker factor. The resulting model is flexible and accurate, as assessed by the results of a set of experiments performed on the extended core NIST SRE 2012 evaluation. In particular, NL-Tied-PLDA provides better results on heterogeneous trials than the corresponding homogeneous trials scored by the old system, and, in some configurations, it also reaches the accuracy of the new system. Similar results were obtained on the female extended core NIST SRE 2010 telephone condition.
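
    For reference, the sketch below scores a homogeneous i-vector trial with a standard two-covariance PLDA log-likelihood ratio; NL-Tied-PLDA extends this kind of model with class-dependent nonlinear transformations for heterogeneous vectors, which is not shown here.

```python
# Standard two-covariance PLDA trial scoring for homogeneous i-vectors
# (baseline illustration; NL-Tied-PLDA itself is not implemented here).
import numpy as np
from scipy.stats import multivariate_normal

def plda_llr(phi_enroll, phi_test, mu, B, W):
    """Log-likelihood ratio of same-speaker vs different-speaker hypotheses.
    mu: global i-vector mean; B: between-speaker covariance;
    W: within-speaker covariance (all estimated on training i-vectors)."""
    d = len(mu)
    x = np.concatenate([phi_enroll, phi_test])
    m = np.concatenate([mu, mu])
    T = B + W
    # Same speaker: both i-vectors share one latent speaker factor.
    cov_same = np.block([[T, B], [B, T]])
    # Different speakers: independent speaker factors, no cross-covariance.
    cov_diff = np.block([[T, np.zeros((d, d))], [np.zeros((d, d)), T]])
    return (multivariate_normal.logpdf(x, m, cov_same)
            - multivariate_normal.logpdf(x, m, cov_diff))
```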