
    Standard Yorùbá context dependent tone identification using Multi-Class Support Vector Machine (MSVM)

    Most state-of-the-art large-vocabulary continuous speech recognition systems employ context-dependent (CD) phone units; however, CD phone units are not efficient at capturing the long-term spectral dependencies of tone in most tone languages. Standard Yorùbá (SY) is a language composed of syllables with tones and requires a different method for acoustic modeling. In this paper, a context-dependent tone acoustic model was developed. The tone unit is taken to be the syllable. The average magnitude difference function (AMDF) was used to derive the utterance-wide F0 contour, followed by automatic syllabification and tri-syllable forced alignment with the SPPAS (SPeech Phonetization Alignment and Syllabification) tool. For classification of the context-dependent tones, the slope and intercept of the F0 values were extracted from each segmented unit. A supervised clustering scheme was used to partition the CD tri-tones by category, and the partitions were normalized with summary statistics to derive the acoustic feature vectors. A multi-class support vector machine (MSVM) was used for tri-tone training. The experimental results show that the word recognition accuracy obtained from the MSVM tri-tone system, based on dynamic-programming tone-embedded features, was comparable with phone features. The best parameter tuning was obtained with 10-fold cross-validation, giving an overall accuracy of 97.5678%. In terms of word error rate (WER), the MSVM CD tri-tone system outperforms the hidden Markov model tri-phone system, with a WER of 44.47%.
    Keywords: Syllabification, Standard Yorùbá, Context Dependent Tone, Tri-tone Recognition
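    The slope/intercept feature extraction described above reduces each syllable's F0 segment to a two-dimensional vector before MSVM classification. A minimal sketch of that step, assuming per-syllable F0 segments are already available from the forced alignment; the data, labels, and SVM settings here are illustrative, not from the paper:

```python
# Sketch: slope/intercept features from per-syllable F0 segments, fed to a
# multi-class SVM. Segment boundaries and tone labels are assumed to come
# from an external syllabifier/aligner (e.g. SPPAS alignments).
import numpy as np
from sklearn.svm import SVC

def f0_slope_intercept(f0_segment):
    """Fit a line to one syllable's F0 values; return (slope, intercept)."""
    t = np.arange(len(f0_segment), dtype=float)
    voiced = f0_segment > 0                 # treat 0 Hz frames as unvoiced
    slope, intercept = np.polyfit(t[voiced], f0_segment[voiced], deg=1)
    return slope, intercept

def features(segments):
    """Stack (slope, intercept) pairs for a list of F0 segments."""
    return np.array([f0_slope_intercept(s) for s in segments])

# Hypothetical training data: F0 segments and their tone labels (H/M/L).
train_segments = [np.array([120., 125., 131., 140.]),   # rising  -> High
                  np.array([110., 109., 111., 110.]),   # level   -> Mid
                  np.array([115., 108., 100., 96.])]    # falling -> Low
train_labels = ["H", "M", "L"]

clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest MSVM
clf.fit(features(train_segments), train_labels)
# Rising test contour, expected to classify as "H".
print(clf.predict(features([np.array([100., 107., 115., 122.])])))
```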

    Identification of Transient Speech Using Wavelet Transforms

    It is generally believed that abrupt stimulus changes, which in speech may be time-varying frequency edges associated with consonants, transitions between consonants and vowels, and transitions within vowels, are critical to the perception of speech by humans and to speech recognition by machines. Noise affects speech transitions more than it affects quasi-steady-state speech. I believe that identifying and selectively amplifying speech transitions may enhance the intelligibility of speech in noisy conditions. The purpose of this study is to evaluate the use of wavelet transforms to identify speech transitions. Wavelet transforms may be computationally efficient and allow for real-time applications. The discrete wavelet transform (DWT), the stationary wavelet transform (SWT), and wavelet packets (WP) are evaluated. Wavelet analysis is combined with variable frame rate processing to improve the identification process. Variable frame rate processing can identify time segments in which speech feature vectors are changing rapidly and those in which they are relatively stationary. Energy profiles for words, which show the energy in each node of a speech signal decomposed using wavelets, are used to identify nodes that contain predominantly transient information and nodes that contain predominantly quasi-steady-state information, and these are used to synthesize transient and quasi-steady-state speech components. These speech components are estimates of the tonal and nontonal speech components, which Yoo et al. identified using time-varying band-pass filters. Comparison of spectra, a listening test, and mean-squared errors between the transient components synthesized using wavelets and Yoo's nontonal components indicated that wavelet packets gave the best estimates of Yoo's components. An algorithm that incorporates variable frame rate analysis into wavelet packet analysis is proposed. Developing this algorithm involves choosing a wavelet function and a decomposition level. The algorithm itself has four steps: wavelet packet decomposition; classification of terminal nodes; incorporation of variable frame rate processing; and synthesis of speech components. Combining wavelet analysis with variable frame rate analysis provides the best estimates of Yoo's speech components.
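    As a rough illustration of the energy-profile idea, the sketch below decomposes a signal with wavelet packets and computes the energy in each terminal node. PyWavelets is an assumed toolchain, and the wavelet and decomposition level are illustrative choices, not the values selected in the study:

```python
# Sketch: wavelet packet decomposition of a speech frame and the per-node
# energy profile used to separate transient from quasi-steady-state nodes.
import numpy as np
import pywt

def node_energies(signal, wavelet="db4", level=4):
    """Decompose with wavelet packets; return energy per terminal node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")  # terminal nodes, low->high freq
    return {node.path: float(np.sum(node.data ** 2)) for node in nodes}

# Example: a chirp-like test signal stands in for a speech frame.
fs = 8000
t = np.arange(0, 0.128, 1 / fs)
frame = np.sin(2 * np.pi * (200 + 2000 * t) * t)

energies = node_energies(frame)
# Nodes dominated by rapidly changing (transient) energy vs. quasi-steady
# energy can then be selected to synthesize the two speech components.
for path, e in sorted(energies.items(), key=lambda kv: -kv[1])[:3]:
    print(path, round(e, 3))
```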

    Segmentation of Speech and Humming in Vocal Input

    Non-verbal vocal interaction (NVVI) is an interaction method in which a person produces sounds other than speech, such as humming. NVVI complements traditional speech recognition systems with continuous control. In order to combine the two approaches (e.g. "volume up, mmm") it is necessary to segment the input sound signal into speech and NVVI. This paper presents two novel methods of speech and humming segmentation. The first method classifies MFCC and RMS parameters using a neural network (the MFCC method), while the other computes volume changes in the signal (the IAC method). The two methods are compared on a corpus collected from 13 speakers. The results indicate that the MFCC method outperforms IAC in terms of accuracy, precision, and recall.
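    A minimal sketch of the MFCC method's pipeline, per-frame MFCC and RMS features classified as speech vs. humming by a small neural network; librosa and scikit-learn stand in for the paper's unspecified tools, and the file names and labels are placeholders:

```python
# Sketch: frame-level speech/humming classification from MFCC + RMS features.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def frame_features(y, sr):
    """13 MFCCs + RMS per frame, shaped (n_frames, 14)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames)
    rms = librosa.feature.rms(y=y)                      # (1, n_frames)
    return np.vstack([mfcc, rms]).T

# Hypothetical training data: one speech clip and one humming clip.
speech, sr = librosa.load("speech.wav", sr=16000)       # placeholder file
humming, _ = librosa.load("humming.wav", sr=16000)      # placeholder file
fs_speech = frame_features(speech, sr)
fs_hum = frame_features(humming, sr)

X = np.vstack([fs_speech, fs_hum])
y = np.array([0] * len(fs_speech) + [1] * len(fs_hum))  # 0=speech, 1=humming

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
# Per-frame decisions; smoothing the label sequence yields segment boundaries.
print(clf.predict(fs_speech)[:20])
```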

    Incorporating pitch features for tone modeling in automatic recognition of Mandarin Chinese

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 53-56).
    Tone plays a fundamental role in Mandarin Chinese: it plays a lexical role in determining the meanings of words in spoken Mandarin. For example, the sentences ... (I like horses) and ... (I like to scold) differ only in the tone carried by the last syllable. Thus, the inclusion of tone-related information through analysis of pitch data should improve the performance of automatic speech recognition (ASR) systems on Mandarin Chinese. The focus of this thesis is to improve the performance of a non-tonal ASR system on a Mandarin Chinese corpus by modifying the system code to incorporate pitch features. We compile and format a Mandarin Chinese broadcast news corpus for use with the ASR system, and implement a pitch feature extraction algorithm. Additionally, we investigate two algorithms for incorporating pitch features in Mandarin Chinese speech recognition. First, we build and test a baseline tonal ASR system with embedded tone modeling by concatenating the cepstral and pitch feature vectors for use as the input to our phonetic model (a hidden Markov model, or HMM). We find that our embedded tone modeling algorithm does improve performance on Mandarin Chinese, showing that including tonal information is in fact contributive for Mandarin Chinese speech recognition. Second, we implement and test the effectiveness of HMM-based multistream models.
    by Karen Lingyun Chu. M.Eng.
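    A minimal sketch of the embedded tone modeling feature step, concatenating cepstral and pitch features into one observation vector per frame for the HMM; librosa's pYIN tracker is an assumed stand-in for the thesis's pitch extractor, and the file name is a placeholder:

```python
# Sketch: build per-frame observation vectors by concatenating 13 MFCCs
# with an F0 value, as in embedded tone modeling.
import numpy as np
import librosa

y, sr = librosa.load("mandarin_utterance.wav", sr=16000)  # placeholder file

hop = 160                                                 # 10 ms at 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
f0, voiced, _ = librosa.pyin(y, sr=sr,
                             fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"),
                             hop_length=hop)

f0 = np.nan_to_num(f0)                  # unvoiced frames: pYIN yields NaN
n = min(mfcc.shape[1], len(f0))         # guard against off-by-one frame counts
obs = np.vstack([mfcc[:, :n], f0[None, :n]]).T   # (n_frames, 14) observations
print(obs.shape)                        # these vectors feed the HMM training
```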

    Linguistic constraints for large vocabulary speech recognition.

    by Roger H.Y. Leung.
    Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 79-84). Abstracts in English and Chinese.
    Table of contents (front matter: abstract, keywords, acknowledgements, lists of figures and tables):
    Chapter 1: Introduction
        1.1 Languages in the World
        1.2 Problems of Chinese Speech Recognition
            1.2.1 Unlimited word size
            1.2.2 Too many homophones
            1.2.3 Difference between spoken and written Chinese
            1.2.4 Word segmentation problem
        1.3 Different types of knowledge
        1.4 Chapter conclusion
    Chapter 2: Foundations
        2.1 Chinese Phonology and Language Properties
            2.1.1 Basic syllable structure
        2.2 Acoustic Models
            2.2.1 Acoustic unit
            2.2.2 Hidden Markov Model (HMM)
        2.3 Search Algorithm
        2.4 Statistical Language Models
            2.4.1 Context-independent language model
            2.4.2 Word-pair language model
            2.4.3 N-gram language model
            2.4.4 Backoff n-gram
        2.5 Smoothing for Language Models
    Chapter 3: Lexical Access
        3.1 Introduction
        3.2 Motivation: phonological and lexical constraints
        3.3 Broad classes representation
        3.4 Broad classes statistic measures
        3.5 Broad classes frequency normalization
        3.6 Broad classes analysis
        3.7 Isolated word speech recognizer using broad classes
        3.8 Chapter conclusion
    Chapter 4: Character and Word Language Model
        4.1 Introduction
        4.2 Motivation
            4.2.1 Perplexity
        4.3 CallHome Mandarin corpus
            4.3.1 Acoustic data
            4.3.2 Transcription texts
        4.4 Methodology: building the language model
        4.5 Character-level language model
        4.6 Word-level language model
        4.7 Comparison of character-level and word-level language models
        4.8 Interpolated language model
            4.8.1 Methodology
            4.8.2 Experiment results
        4.9 Chapter conclusion
    Chapter 5: N-gram Smoothing
        5.1 Introduction
        5.2 Motivation
        5.3 Mathematical representation
        5.4 Methodology: smoothing techniques
            5.4.1 Add-one smoothing
            5.4.2 Witten-Bell discounting
            5.4.3 Good-Turing discounting
            5.4.4 Absolute and linear discounting
        5.5 Comparison of different discounting methods
        5.6 Continuous word speech recognizer
            5.6.1 Experiment setup
            5.6.2 Experiment results
        5.7 Chapter conclusion
    Chapter 6: Summary and Conclusions
        6.1 Summary
        6.2 Further work
        6.3 Conclusion
    References
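    Among the smoothing techniques surveyed in Chapter 5, add-one smoothing is the simplest; a toy bigram sketch of it (the corpus and counts here are illustrative, not from the thesis):

```python
# Minimal add-one (Laplace) smoothed bigram model:
#   P(w2 | w1) = (c(w1 w2) + 1) / (c(w1) + V)
# Unseen bigrams get a small nonzero probability instead of zero.
from collections import Counter

corpus = ["we like speech", "we like recognition", "speech we like"]
tokens = [w for line in corpus for w in line.split()]
V = len(set(tokens))                      # vocabulary size

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))  # toy: ignores line boundaries

def p_addone(w1, w2):
    """Add-one smoothed bigram probability P(w2 | w1)."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

print(p_addone("we", "like"))     # seen bigram: relatively high probability
print(p_addone("like", "like"))   # unseen bigram: small but nonzero
```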

    Multi-transputer based isolated word speech recognition system.

    by Francis Cho-yiu Chik.
    Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 129-135).
    Table of contents:
    Chapter 1: Introduction
        1.1 Automatic speech recognition and its applications
            1.1.1 Artificial Neural Network (ANN) approach
        1.2 Motivation
        1.3 Background
            1.3.1 Speech recognition
            1.3.2 Parallel processing
            1.3.3 Parallel architectures
            1.3.4 Transputer
        1.4 Thesis outline
    Chapter 2: Speech Signal Pre-processing
        2.1 Determine useful signal
            2.1.1 End point detection using energy
            2.1.2 End point detection enhancement using zero crossing rate
        2.2 Pre-emphasis filter
        2.3 Feature extraction
            2.3.1 Filter-bank spectrum analysis model
            2.3.2 Linear Predictive Coding (LPC) coefficients
            2.3.3 Cepstral coefficients
            2.3.4 Zero crossing rate and energy
            2.3.5 Pitch (fundamental frequency) detection
        2.4 Discussions
    Chapter 3: Speech Recognition Methods
        3.1 Template matching using Dynamic Time Warping (DTW)
        3.2 Hidden Markov Model (HMM)
            3.2.1 Vector Quantization (VQ)
            3.2.2 Description of a discrete HMM
            3.2.3 Probability evaluation
            3.2.4 Estimation technique for model parameters
            3.2.5 State sequence for the observation sequence
        3.3 2-dimensional Hidden Markov Model (2dHMM)
            3.3.1 Calculation for a 2dHMM
        3.4 Discussions
    Chapter 4: Implementation
        4.1 Transputer based multiprocessor system
            4.1.1 Transputer Development System (TDS)
            4.1.2 System architecture
            4.1.3 Transtech TMB16 mother board
            4.1.4 Farming technique
        4.2 Farming technique on extracting spectral amplitude feature
        4.3 Feature extraction for LPC
        4.4 DTW based recognition
            4.4.1 Feature extraction
            4.4.2 Training and matching
        4.5 HMM based recognition
            4.5.1 Feature extraction
            4.5.2 Model training and matching
        4.6 2dHMM based recognition
            4.6.1 Feature extraction
            4.6.2 Training
            4.6.3 Recognition
        4.7 Training convergence in HMM and 2dHMM
        4.8 Discussions
    Chapter 5: Experimental Results
        5.1 Comparison of DTW, HMM and 2dHMM
        5.2 Comparison between HMM and 2dHMM
            5.2.1 Recognition test on 20 English words
            5.2.2 Recognition test on 10 Cantonese syllables
        5.3 Recognition test on 80 Cantonese syllables
        5.4 Speed matching
        5.5 Computational performance
            5.5.1 Training performance
            5.5.2 Recognition performance
    Chapter 6: Discussions and Conclusions
    Bibliography
    Appendix A: An ANN Model for Speech Recognition
    Appendix B: A Speech Signal Represented in the Frequency Domain (Spectrogram)
    Appendix C: Dynamic Programming
    Appendix D: Markov Process
    Appendix E: Maximum Likelihood (ML)
    Appendix F: Multiple Training
        F.1 HMM
        F.2 2dHMM
    Appendix G: IMS T800 Transputer
        G.1 IMS T800 architecture
        G.2 Instruction encoding
        G.3 Floating point instructions
        G.4 Optimizing use of the stack
        G.5 Concurrent operation of FPU and CPU
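    Chapter 3 compares template matching by dynamic time warping (DTW) with HMM and 2dHMM recognizers. A bare-bones DTW distance, without the path constraints a production recognizer would add, looks like this (an illustrative sketch, not the thesis's transputer code):

```python
# Minimal dynamic time warping (DTW): accumulated alignment cost between
# two feature sequences of shapes (n, d) and (m, d).
import numpy as np

def dtw_distance(a, b):
    """Return the DTW accumulated cost between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

# Toy example: a template and a time-stretched version of it align cheaply.
template = np.array([[0.], [1.], [2.], [3.]])
test = np.array([[0.], [0.], [1.], [2.], [2.], [3.]])
print(dtw_distance(template, test))
```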