
    Holistic Vocabulary Independent Spoken Term Detection

    Within this thesis, we aim to design a loosely coupled, holistic system for Spoken Term Detection (STD) on heterogeneous German broadcast data in selected application scenarios. Starting from STD on the 1-best output of a word-based speech recognizer, we study the performance of several subword units for vocabulary-independent STD on a linguistically and acoustically challenging German corpus. We explore the typical error sources in subword STD and find that they differ from the error sources in word-based speech search. We select, extend, and combine a set of state-of-the-art methods for error compensation in STD in order to explicitly merge the corresponding STD error spaces through anchor-based approximate lattice retrieval. Novel methods for STD result verification are proposed in order to increase retrieval precision by exploiting external knowledge at search time. Because error-compensating methods for STD typically suffer from high response times on large-scale databases, we also propose scalable approaches suitable for large corpora. All methods are evaluated on an extensive set of German broadcast television data, with a focus on selected, representative application scenarios. The highest STD accuracy, for archives of moderate size, is obtained by combining anchor-based approximate retrieval from both syllable-lattice ASR and syllabified word ASR into a hybrid STD system, and by pruning the result list using external knowledge with hybrid contextual and anti-query verification.
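    To make the retrieval style concrete, the following is a minimal, hypothetical sketch of vocabulary-independent STD over a subword index: utterances are indexed as sequences of syllable-like units, and a query is anchored on its first unit and then verified within a local window using an edit-distance threshold. This toy stand-in illustrates the anchor-then-verify shape of approximate retrieval, not the thesis's actual lattice-based implementation; the class, unit inventory, and example strings are all invented.

    ```python
    # Hypothetical sketch: subword inverted index with anchor-based
    # approximate matching. Real systems search ASR lattices, not
    # 1-best unit strings.
    from collections import defaultdict

    def edit_distance(a, b):
        """Levenshtein distance between two subword-unit sequences."""
        dp = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, y in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                         prev + (x != y))
        return dp[-1]

    class SubwordSTD:
        def __init__(self):
            self.postings = defaultdict(list)  # unit -> [(doc_id, position)]
            self.docs = {}

        def index(self, doc_id, units):
            """Index one utterance given as a sequence of subword units."""
            self.docs[doc_id] = units
            for pos, unit in enumerate(units):
                self.postings[unit].append((doc_id, pos))

        def search(self, query, max_dist=1):
            """Anchor on the first query unit, then verify the local
            window against an edit-distance threshold."""
            hits = []
            for doc_id, pos in self.postings.get(query[0], []):
                window = self.docs[doc_id][pos:pos + len(query)]
                if edit_distance(query, window) <= max_dist:
                    hits.append((doc_id, pos))
            return hits

    std = SubwordSTD()
    std.index("news_0042", ["ge", "richts", "ver", "fah", "ren"])
    print(std.search(["ge", "richts", "fer", "fah", "ren"]))  # one substitution tolerated
    ```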

    A summary of the 2012 JHU CLSP Workshop on Zero Resource Speech Technologies and Models of Early Language Acquisition

    We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies.
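    On the evaluation side, one of the standard measures for lexical discovery and word segmentation, the word-token F-score, fits in a few lines. The sketch below is a generic textbook formulation, offered only to illustrate the kind of metric being unified; the function names and the toy example are invented, and the workshop's actual metric suite is broader.

    ```python
    # Hypothetical sketch of word-token F1 for segmentation output:
    # a token counts as correct only if both its boundaries match.

    def spans(words):
        """Map a segmented utterance (a list of word tokens) to
        (start, end) spans over the underlying symbol string."""
        out, start = set(), 0
        for w in words:
            out.add((start, start + len(w)))
            start += len(w)
        return out

    def token_f1(hyp_words, ref_words):
        """Token precision/recall/F1 against a reference segmentation."""
        hyp, ref = spans(hyp_words), spans(ref_words)
        correct = len(hyp & ref)
        p, r = correct / len(hyp), correct / len(ref)
        return 2 * p * r / (p + r) if p + r else 0.0

    # The same phone string segmented two different ways:
    print(token_f1(["do", "you", "see", "the", "doggy"],
                   ["do", "you", "see", "the", "dog", "gy"]))  # ~0.73
    ```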

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
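    The basic coupling the survey describes, treating recognizer output as text for a standard IR pipeline, can be illustrated with a toy TF-IDF index over transcripts. This is a hypothetical baseline sketch, not a system from the survey; it assumes the transcripts were produced by some upstream recognizer, and it exhibits the classic SCR failure mode in which a query term the recognizer never emitted cannot be retrieved at all.

    ```python
    # Toy sketch: ASR transcripts indexed and ranked with TF-IDF.
    import math
    from collections import Counter, defaultdict

    class TranscriptIndex:
        def __init__(self):
            self.tf = {}         # doc_id -> term counts
            self.df = Counter()  # term -> number of docs containing it

        def add(self, doc_id, transcript):
            terms = transcript.lower().split()
            self.tf[doc_id] = Counter(terms)
            for term in set(terms):
                self.df[term] += 1

        def search(self, query, k=5):
            scores = defaultdict(float)
            for term in query.lower().split():
                if self.df[term] == 0:
                    continue  # never recognized: invisible to text IR
                idf = math.log(len(self.tf) / self.df[term])
                for doc_id, counts in self.tf.items():
                    if counts[term]:
                        scores[doc_id] += counts[term] * idf
            return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

    idx = TranscriptIndex()
    idx.add("ep1", "welcome to the show today we discuss speech retrieval")
    idx.add("ep2", "today's weather forecast rain in the north")
    print(idx.search("speech retrieval"))  # [('ep1', ...)]
    ```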

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust, automatic speech recognition systems: the representation of speech signals, methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that use the output of automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and applications that operate in real-world environments, such as mobile communication services and smart homes.
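    As a small illustration of the front-end step those first chapters cover, the sketch below applies pre-emphasis and slices a signal into overlapping windowed frames with per-frame log energy. The frame and hop sizes are conventional defaults assumed here, not values taken from the book.

    ```python
    # Minimal numpy sketch of two classic front-end steps:
    # pre-emphasis and framed log energy.
    import numpy as np

    def preemphasize(signal, alpha=0.97):
        """High-frequency boost: y[n] = x[n] - alpha * x[n-1]."""
        return np.append(signal[0], signal[1:] - alpha * signal[:-1])

    def frame_log_energy(signal, rate, frame_ms=25, hop_ms=10):
        """Overlapping Hamming-windowed frames, log energy per frame."""
        flen, hop = int(rate * frame_ms / 1000), int(rate * hop_ms / 1000)
        n_frames = 1 + max(0, (len(signal) - flen) // hop)
        frames = np.stack([signal[i * hop:i * hop + flen] * np.hamming(flen)
                           for i in range(n_frames)])
        return np.log(np.sum(frames ** 2, axis=1) + 1e-10)

    rate = 16000
    t = np.arange(rate) / rate
    tone = np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone
    print(frame_log_energy(preemphasize(tone), rate).shape)  # (98,)
    ```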

    Towards multi-domain speech understanding with flexible and dynamic vocabulary

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 201-208). By Grace Chung.

    In developing telephone-based conversational systems, we foresee future systems capable of supporting multiple domains and a flexible vocabulary. Users can pursue several topics of interest within a single telephone call, and the system switches transparently among domains within a single dialog. The system detects the presence of any out-of-vocabulary (OOV) words and automatically hypothesizes the pronunciation, spelling, and meaning of each. These hypotheses can be confirmed with the user, and the new words are subsequently incorporated into the recognizer lexicon for future use. This thesis describes our work towards realizing such a vision, using a multi-stage architecture. Our work focuses on organizing the application of linguistic constraints in order to accommodate multiple domain topics and a dynamic vocabulary at the spoken input. The philosophy is to apply exclusively below-word-level linguistic knowledge at the initial stage. Such knowledge is domain-independent and general to all of the English language, and hence broad enough to support any unknown words that may appear at the input, as well as input from several topic domains. At the same time, the initial pass narrows the search space for the next stage, where domain-specific knowledge at the word level or above is applied. In the second stage, we envision several parallel recognizers, each with higher-order language models tailored specifically to its domain. A final decision algorithm selects a final hypothesis from the set of parallel recognizers.

    Part of our contribution is the development of a novel first stage which attempts to maximize linguistic constraints using only below-word-level information. The goals are to prevent sequences of unknown words from being pruned away prematurely while maintaining performance on in-vocabulary items, and to reduce the search space for later stages. Our solution coordinates the application of various subword-level knowledge sources. The recognizer lexicon is implemented with an inventory of linguistically motivated units called morphs, which are syllables augmented with spelling and word position. The first stage is designed to output a phonetic network so that we are not committed to the initial hypotheses; this adds robustness, as later stages can propose words directly from phones. To maximize performance of the first stage, much of our focus has centered on integrating a set of hierarchical sublexical models into this first pass. To do this, we utilize the ANGIE framework, which supports a trainable context-free grammar and is designed to acquire subword-level and phonological information statistically. Its models can generalize knowledge about word structure, learned from in-vocabulary data, to previously unseen words. We explore methods for collapsing the ANGIE models into a finite-state transducer (FST) representation, which enables these complex models to be efficiently integrated into recognition. The ANGIE-FST needs to encapsulate the hierarchical knowledge of ANGIE and replicate ANGIE's ability to support previously unobserved phonetic sequences ...
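    A toy sketch of the first-stage idea, proposing subword units directly from phones so that unknown words are not pruned away, is given below. A plain recursive lexicon match stands in for the much richer ANGIE/FST machinery the thesis actually develops; the morph inventory and phone strings are invented for illustration.

    ```python
    # Hypothetical sketch: cover a phone sequence with "morph" units
    # from a subword lexicon, so even an OOV word gets hypotheses.

    def morph_segmentations(phones, lexicon):
        """Return every way to cover the phone sequence with morphs."""
        results = []

        def extend(pos, path):
            if pos == len(phones):
                results.append(path)
                return
            for morph, pron in lexicon.items():
                if phones[pos:pos + len(pron)] == pron:
                    extend(pos + len(pron), path + [morph])

        extend(0, [])
        return results

    lexicon = {
        "com+":  ["k", "ah", "m"],
        "-pute": ["p", "y", "uw", "t"],
        "-er":   ["er"],
        "cue+":  ["k", "y", "uw"],
    }
    # Phones for "computer": the word may be OOV, but morphs cover it.
    print(morph_segmentations(
        ["k", "ah", "m", "p", "y", "uw", "t", "er"], lexicon))
    ```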

    Subword lexical modelling for speech recognition

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 155-160). By Raymond Lau.

    Multi-transputer based isolated word speech recognition system.

    By Francis Cho-yiu Chik. Thesis (M.Phil.), Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 129-135).

    Contents:
    Chapter 1: Introduction
      1.1 Automatic speech recognition and its applications
        1.1.1 Artificial Neural Network (ANN) approach
      1.2 Motivation
      1.3 Background
        1.3.1 Speech recognition
        1.3.2 Parallel processing
        1.3.3 Parallel architectures
        1.3.4 Transputer
      1.4 Thesis outline
    Chapter 2: Speech Signal Pre-processing
      2.1 Determine useful signal
        2.1.1 End point detection using energy
        2.1.2 End point detection enhancement using zero crossing rate
      2.2 Pre-emphasis filter
      2.3 Feature extraction
        2.3.1 Filter-bank spectrum analysis model
        2.3.2 Linear Predictive Coding (LPC) coefficients
        2.3.3 Cepstral coefficients
        2.3.4 Zero crossing rate and energy
        2.3.5 Pitch (fundamental frequency) detection
      2.4 Discussions
    Chapter 3: Speech Recognition Methods
      3.1 Template matching using Dynamic Time Warping (DTW)
      3.2 Hidden Markov Model (HMM)
        3.2.1 Vector Quantization (VQ)
        3.2.2 Description of a discrete HMM
        3.2.3 Probability evaluation
        3.2.4 Estimation technique for model parameters
        3.2.5 State sequence for the observation sequence
      3.3 2-dimensional Hidden Markov Model (2dHMM)
        3.3.1 Calculation for a 2dHMM
      3.4 Discussions
    Chapter 4: Implementation
      4.1 Transputer based multiprocessor system
        4.1.1 Transputer Development System (TDS)
        4.1.2 System architecture
        4.1.3 Transtech TMB16 mother board
        4.1.4 Farming technique
      4.2 Farming technique on extracting spectral amplitude feature
      4.3 Feature extraction for LPC
      4.4 DTW based recognition
        4.4.1 Feature extraction
        4.4.2 Training and matching
      4.5 HMM based recognition
        4.5.1 Feature extraction
        4.5.2 Model training and matching
      4.6 2dHMM based recognition
        4.6.1 Feature extraction
        4.6.2 Training
        4.6.3 Recognition
      4.7 Training convergence in HMM and 2dHMM
      4.8 Discussions
    Chapter 5: Experimental Results
      5.1 Comparison of DTW, HMM and 2dHMM
      5.2 Comparison between HMM and 2dHMM
        5.2.1 Recognition test on 20 English words
        5.2.2 Recognition test on 10 Cantonese syllables
      5.3 Recognition test on 80 Cantonese syllables
      5.4 Speed matching
      5.5 Computational performance
        5.5.1 Training performance
        5.5.2 Recognition performance
    Chapter 6: Discussions and Conclusions
    Bibliography
    Appendix A: An ANN Model for Speech Recognition
    Appendix B: A Speech Signal Represented in Frequency Domain (Spectrogram)
    Appendix C: Dynamic Programming
    Appendix D: Markov Process
    Appendix E: Maximum Likelihood (ML)
    Appendix F: Multiple Training
      F.1 HMM
      F.2 2dHMM
    Appendix G: IMS T800 Transputer
      G.1 IMS T800 architecture
      G.2 Instruction encoding
      G.3 Floating point instructions
      G.4 Optimizing use of the stack
      G.5 Concurrent operation of FPU and CPU
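    For reference, the DTW template matching listed in Chapter 3.1 reduces to the standard dynamic-programming recurrence sketched below. This is generic textbook DTW over one-dimensional features, not the thesis's parallel transputer implementation.

    ```python
    # Generic DTW: minimum cumulative alignment cost of two sequences.
    import math

    def dtw(seq_a, seq_b):
        n, m = len(seq_a), len(seq_b)
        cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(seq_a[i - 1] - seq_b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                     cost[i][j - 1],      # step in b only
                                     cost[i - 1][j - 1])  # step in both
        return cost[n][m]

    template = [1.0, 2.0, 3.0, 2.0, 1.0]
    utterance = [1.1, 1.9, 2.0, 3.1, 2.2, 0.9]
    print(round(dtw(template, utterance), 2))
    ```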