
    Whole Word Phonetic Displays for Speech Articulation Training

    The main objective of this dissertation is to investigate and develop speech recognition technologies for speech training for people with hearing impairments. During the course of this work, a computer-aided speech training system for articulation training was also designed and implemented. The system emphasizes displays that improve children's pronunciation of isolated Consonant-Vowel-Consonant (CVC) words, with feedback at both the phonetic level and the whole-word level. This dissertation presents two hybrid methods for combining Hidden Markov Models (HMMs) and Neural Networks (NNs) for speech recognition. The first method uses NN outputs as posterior probability estimators for HMMs. The second method uses NNs to transform the original speech features into normalized features with reduced correlation. In experimental testing, both hybrid methods gave higher accuracy than standard HMM methods, and the second method, using the NN to create normalized features, outperformed the first. Several graphical displays were developed to provide real-time visual feedback that helps users improve and correct their pronunciation.
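
    A minimal sketch of the first hybrid idea, in Python with numpy: NN posteriors p(state | frame) are divided by state priors to obtain the scaled likelihoods an HMM decoder consumes. The network outputs and priors below are random stand-ins, not the dissertation's trained models.

        import numpy as np

        def posteriors_to_scaled_likelihoods(posteriors, state_priors, eps=1e-10):
            """posteriors: (T, S) NN outputs per frame; state_priors: (S,)."""
            # Bayes' rule up to a constant: p(x|s) = p(s|x) p(x) / p(s);
            # p(x) is the same for every state, so it cancels in decoding.
            return posteriors / (state_priors + eps)

        T, S = 100, 40                                    # frames, HMM states
        post = np.random.dirichlet(np.ones(S), size=T)    # each row sums to 1
        priors = post.mean(axis=0)                        # priors from training data
        log_scaled = np.log(posteriors_to_scaled_likelihoods(post, priors))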

    Design of reservoir computing systems for the recognition of noise corrupted speech and handwriting


    Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech

    We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model treats the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
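
    A toy illustration of the HMM view described above, with invented probabilities standing in for the paper's trained models: dialogue acts are the hidden states, the dialogue act bigram supplies the transition scores, per-utterance lexical/prosodic evidence supplies the emission scores, and Viterbi decoding recovers the most likely act sequence.

        import numpy as np

        def viterbi(log_trans, log_like, log_init):
            """log_trans: (S, S) act bigram; log_like: (T, S) per-utterance
            evidence; log_init: (S,). Returns the best state sequence."""
            T, S = log_like.shape
            delta = log_init + log_like[0]
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + log_trans      # (prev, cur)
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + log_like[t]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t][path[-1]]))
            return path[::-1]

        acts = ["Statement", "Question", "Backchannel"]
        S, T = len(acts), 4
        rng = np.random.default_rng(0)
        log_trans = np.log(rng.dirichlet(np.ones(S), size=S))  # DA bigram
        log_like = np.log(rng.dirichlet(np.ones(S), size=T))   # illustrative evidence
        log_init = np.log(np.full(S, 1.0 / S))
        print([acts[i] for i in viterbi(log_trans, log_like, log_init)])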

    Urban Air Pollution Forecasting Using Artificial Intelligence-Based Tools


    ARTICULATORY INFORMATION FOR ROBUST SPEECH RECOGNITION

    Current Automatic Speech Recognition (ASR) systems fail to perform nearly as well as humans because they lack robustness against speech variability and noise contamination. The goal of this dissertation is to investigate these critical robustness issues, put forth different ways to address them, and finally present an ASR architecture based upon these robustness criteria. Acoustic variations adversely affect the performance of current phone-based ASR systems, in which speech is modeled as 'beads on a string', where the beads are the individual phone units. While phone units are distinctive in the cognitive domain, they vary in the physical domain, and their variation arises from a combination of factors including speech style and speaking rate, a phenomenon commonly known as 'coarticulation'. Traditional ASR systems address such coarticulatory variations by using contextualized phone units such as triphones. Articulatory phonology accounts for coarticulatory variations by modeling speech as a constellation of constricting actions known as articulatory gestures. In such a framework, speech variations such as coarticulation and lenition are accounted for by gestural overlap in time and gestural reduction in space. To realize a gesture-based ASR system, articulatory gestures have to be inferred from the acoustic signal. At the initial stage of this research, a study was performed using synthetically generated speech to obtain a proof of concept that articulatory gestures can indeed be recognized from the speech signal. It was observed that having vocal tract constriction trajectories (TVs) as an intermediate representation facilitated the gesture recognition task. Since no natural speech database contains articulatory gesture annotation, an automated iterative time-warping architecture is proposed that can annotate any natural speech database with articulatory gestures and TVs. Two natural speech databases, X-ray microbeam and Aurora-2, were annotated; the former was used to train a TV estimator and the latter to train a Dynamic Bayesian Network (DBN) based ASR architecture. The DBN architecture used two sets of observations: (a) acoustic features in the form of mel-frequency cepstral coefficients (MFCCs) and (b) TVs estimated from the acoustic speech signal. In this setup the articulatory gestures were modeled as hidden random variables, eliminating the need for explicit gesture recognition. Word recognition results using the DBN architecture indicate that articulatory representations not only help account for coarticulatory variations but also significantly improve the noise robustness of the ASR system.
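
    A deliberately simplified sketch of the two-stream observation idea, assuming Gaussian emissions and invented dimensions: each hidden state scores a frame by combining an acoustic (MFCC) term with an articulatory (TV) term, treated as conditionally independent streams. The actual system is a DBN with gestures as hidden variables; this shows only the flavor of a factored emission.

        import numpy as np
        from scipy.stats import multivariate_normal

        def two_stream_log_emission(mfcc, tv, mfcc_models, tv_models, tv_weight=1.0):
            """Log p(mfcc|s) + w * log p(tv|s) for every state s."""
            return np.array([
                m.logpdf(mfcc) + tv_weight * g.logpdf(tv)
                for m, g in zip(mfcc_models, tv_models)
            ])

        S, D_MFCC, D_TV = 3, 13, 8                 # states, feature dims (assumed)
        rng = np.random.default_rng(1)
        mfcc_models = [multivariate_normal(rng.normal(size=D_MFCC), np.eye(D_MFCC))
                       for _ in range(S)]
        tv_models = [multivariate_normal(rng.normal(size=D_TV), np.eye(D_TV))
                     for _ in range(S)]
        frame_mfcc, frame_tv = rng.normal(size=D_MFCC), rng.normal(size=D_TV)
        print(two_stream_log_emission(frame_mfcc, frame_tv, mfcc_models, tv_models))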

    Voice Recognition Systems for The Disabled Electorate: Critical Review on Architectures and Authentication Strategies

    An inevitable factor that makes the concept of electronic voting compelling is that it offers the possibility of exceeding the manual voting process in convenience, widespread participation, and consideration for people living with disabilities. The underlying voting technology and ballot design can determine the credibility of election results, influence how voters feel about their ability to exercise their right to vote, and shape their willingness to accept the legitimacy of electoral results. However, the adoption of e-voting systems has unveiled a new set of problems, such as security threats and the trust and reliability of voting systems and the electoral process itself. This paper presents a critical literature review of concepts, architectures, and existing authentication strategies in voice recognition systems for e-voting by the disabled electorate. Consequently, an intelligent yet secure scheme for electronic voting systems, specifically for people living with disabilities, is presented.

    Speaker Dependent Voice Recognition with Word-Tense Association and Part-of-Speech Tagging

    Extensive research has been conducted on speech recognition and speaker recognition over the past few decades. Speaker recognition deals with identifying a speaker from among multiple speakers and with filtering out the voice of an individual from the background for computational understanding. The more commonly researched method, speech recognition, deals only with computational linguistics. This thesis deals with speaker recognition and natural language processing. The most common speaker recognition systems are text-dependent and identify the speaker after a key word or phrase is uttered. This thesis presents a text-independent speaker recognition system that incorporates noise filtering, speech segmentation, feature extraction, speaker verification, and, finally, partial language modeling. The filtering stage uses 4th-order Butterworth band-pass filters to dampen ambient noise outside the normal speech frequencies of 300 Hz to 3000 Hz. Speech segmentation uses Hamming windows to segment the speech, after which speech detection occurs by calculating the short-time energy and zero-crossing rate over a particular time period and separating voiced from unvoiced frames using a threshold, as sketched below. Audio data collected from different people is run consecutively through a speaker training and recognition algorithm that uses neural networks to create a training group and a target group for the recognition process. The output of the segmentation module is then processed by the neural network to recognize the speaker. Though not implemented here due to database and computational requirements, the last module suggests a new model for the part-of-speech tagging process that combines Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) in a series configuration to achieve higher accuracy. This differs from existing research by diverging from the usual single-model approach and from hybrid ANN-HMM models.
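
    A sketch of the front end described above, assuming a 16 kHz sample rate and illustrative thresholds (the thesis's actual settings are not given here): a 4th-order Butterworth band-pass (300-3000 Hz), Hamming-windowed frames, and a voiced/unvoiced decision from short-time energy and zero-crossing rate.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        FS = 16000                                   # assumed sample rate (Hz)

        def bandpass(x, low=300.0, high=3000.0, order=4):
            # 4th-order Butterworth band-pass over normal speech frequencies.
            sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
            return sosfiltfilt(sos, x)

        def frame_features(x, frame_len=400, hop=160):   # 25 ms / 10 ms at 16 kHz
            win = np.hamming(frame_len)
            feats = []
            for start in range(0, len(x) - frame_len, hop):
                f = x[start:start + frame_len] * win
                energy = np.sum(f ** 2)                          # short-time energy
                zcr = np.mean(np.abs(np.diff(np.sign(f))) > 0)   # zero-crossing rate
                feats.append((energy, zcr))
            return np.array(feats)

        def is_voiced(feats, e_thresh=1e-3, z_thresh=0.25):
            # Voiced frames: high energy, low zero-crossing rate (thresholds assumed).
            return (feats[:, 0] > e_thresh) & (feats[:, 1] < z_thresh)

        x = bandpass(np.random.randn(FS))            # stand-in for 1 s of audio
        print(is_voiced(frame_features(x)).mean())   # fraction of "voiced" frames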

    CONNECTIONIST SPEECH RECOGNITION - A Hybrid Approach

