136 research outputs found

    An assessment of acoustic contrast between long and short vowels using convex hulls

    This paper introduces an alternative to the spectral overlap assessment metric (SOAM) of Wassink [(2006). J. Acoust. Soc. Am. 119(4), 2334–2350]. The SOAM quantifies the intra- and inter-language differences between long–short vowel pairs by comparing spectral (F1, F2) and temporal properties modeled with best-fit ellipses (F1 × F2 space) and ellipsoids (F1 × F2 × duration). However, the SOAM ellipses and ellipsoids rely on a Gaussian distribution of vowel data and a dense dataset, neither of which can be assumed for endangered languages or languages with limited available data. The method presented in this paper, the Vowel Overlap Assessment with Convex Hulls (VOACH) method, improves upon the earlier metric through the use of best-fit convex shapes, which reduce the incorporation of “empty” data into calculations of vowel space. Both methods are applied to Numu (Oregon Northern Paiute), an endangered language of the western United States. Calculations from the VOACH method suggest that Numu is a primary quantity language, a result well aligned with impressionistic analyses of spectral and durational data from the language and with observations by field researchers.
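    As a minimal sketch of the convex-hull overlap idea, assuming two arrays of (F1, F2) measurements per vowel category: the function names (hull_polygon, overlap_fraction), the overlap normalization, and the synthetic formant values below are illustrative, not taken from the paper.

```python
# Convex-hull vowel overlap sketch: build a hull per vowel category in
# F1 x F2 space and report how much of the smaller hull is shared.
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Polygon

def hull_polygon(formants):
    """Polygon from the convex hull of (F1, F2) points (hypothetical helper)."""
    formants = np.asarray(formants)
    hull = ConvexHull(formants)
    # hull.vertices are ordered counterclockwise in 2D, so they form a valid ring.
    return Polygon(formants[hull.vertices])

def overlap_fraction(short_vowels, long_vowels):
    """Fraction of the smaller hull's area shared with the other hull
    (an assumed normalization, not necessarily the paper's)."""
    p_short = hull_polygon(short_vowels)
    p_long = hull_polygon(long_vowels)
    inter = p_short.intersection(p_long).area
    return inter / min(p_short.area, p_long.area)

# Synthetic formant data in Hz, for illustration only.
rng = np.random.default_rng(0)
short_i = rng.normal([400, 2000], [40, 120], size=(30, 2))
long_i = rng.normal([350, 2200], [40, 120], size=(30, 2))
print(f"overlap: {overlap_fraction(short_i, long_i):.2f}")
```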

    Vowel nasalization in German


    Multiple acoustic cues for Korean stops and automatic speech recognition

    The purpose of this thesis is to analyse the acoustic characteristics of Korean stops by way of multivariate statistical tests, and to apply the results of the analysis to Automatic Speech Recognition (ASR) of Korean. Three acoustic cues that differentiate the three types of Korean oral stops are closure duration, Voice Onset Time (VOT), and the fundamental frequency (F0) of the vowel after a stop. We review the characteristics of these parameters previously reported in various phonetic studies and test their usefulness for differentiating the three types of stops on two databases: one with controlled contexts, as in other phonetic studies, and the other a continuous speech database designed for ASR. Statistical tests on both databases confirm that the three types of stops can be differentiated by the three acoustic parameters. In order to exploit these parameters for ASR, a context-dependent Hidden Markov Model (HMM) based baseline system with a short pause model is built, which yields a large improvement in performance compared to other systems. For modelling of the three acoustic parameters, an automatic segmentation technique for closure and VOT is developed. Samples of each acoustic parameter are modelled with univariate and multivariate probability distribution functions, and stop probability from these models is integrated by a post-processing technique. Our results show that integrating stop probability does not yield much improvement over the baseline system. However, the results suggest that stop probabilities will be useful in determining the correct hypothesis with a larger lexicon containing more minimal pairs of words that differ by the identity of just one stop.
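    A minimal sketch of classifying the three stop types from the three cues, assuming each cue is already measured per token: the Gaussian class-conditional models are an illustrative stand-in for the thesis's univariate and multivariate probability distribution functions, and all names and feature values below are synthetic.

```python
# Cue-based stop classification sketch: fit one multivariate Gaussian per
# stop type over [closure duration (ms), VOT (ms), post-stop F0 (Hz)],
# then pick the type with the highest likelihood for a new token.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_models(features_by_class):
    """Fit one multivariate Gaussian per stop type (hypothetical helper)."""
    models = {}
    for label, x in features_by_class.items():
        x = np.asarray(x)
        models[label] = multivariate_normal(x.mean(axis=0),
                                            np.cov(x, rowvar=False))
    return models

def classify(models, cue_vector):
    """Return the stop type with the highest likelihood for the cue vector."""
    return max(models, key=lambda lbl: models[lbl].pdf(cue_vector))

# Synthetic training cues, loosely shaped like the three-way contrast.
rng = np.random.default_rng(1)
train = {
    "lenis":     rng.normal([80, 60, 180],  [10, 15, 15], size=(50, 3)),
    "fortis":    rng.normal([130, 15, 220], [10, 5, 15],  size=(50, 3)),
    "aspirated": rng.normal([90, 100, 230], [10, 15, 15], size=(50, 3)),
}
models = fit_class_models(train)
print(classify(models, [125, 18, 215]))  # expected: "fortis"
```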

    Articulatory Copy Synthesis Based on the Speech Synthesizer VocalTractLab

    Articulatory copy synthesis (ACS), a subarea of speech inversion, refers to the reproduction of natural utterances in terms of both the physiological articulatory processes and their corresponding acoustic results. This thesis proposes two novel methods for the ACS of human speech using the articulatory speech synthesizer VocalTractLab (VTL) to address or mitigate the existing problems of speech inversion, such as non-unique mapping, acoustic variation among different speakers, and the time-consuming nature of the process. The first method finds appropriate VTL gestural scores for given natural utterances using a genetic algorithm. It consists of two steps: gestural score initialization and optimization. In the first step, gestural scores were initialized from the given acoustic signals using speech recognition, grapheme-to-phoneme (G2P) conversion, and a VTL rule-based method for converting phoneme sequences to gestural scores. In the second step, the initial gestural scores were optimized by a genetic algorithm via an analysis-by-synthesis (ABS) procedure that sought to minimize the cosine distance between the acoustic features of the synthetic and natural utterances. The articulatory parameters were also regularized during the optimization process to restrict them to reasonable values. The second method was based on long short-term memory (LSTM) and convolutional neural networks, which were responsible for capturing the temporal dependence and the spatial structure of the acoustic features, respectively. Neural network regression models were trained that used acoustic features as inputs and produced articulatory trajectories as outputs. In addition, to cover as much of the articulatory and acoustic space as possible, the training samples were augmented by manipulating the phonation type, speaking effort, and vocal tract length of the synthetic utterances. Furthermore, two regularization methods were proposed: one based on the smoothness loss of articulatory trajectories and another based on the acoustic loss between original and predicted acoustic features. The best-performing genetic algorithm and convolutional LSTM systems (evaluated in terms of the difference between the estimated and reference VTL articulatory parameters) obtained average correlation coefficients of 0.985 and 0.983 for speaker-dependent utterances, respectively, and their reproduced speech achieved recognition accuracies of 86.25% and 64.69% for speaker-independent utterances of German words, respectively. When applied to German sentence utterances, as well as English and Mandarin Chinese word utterances, the neural network based ACS systems achieved recognition accuracies of 73.88%, 52.92%, and 52.41%, respectively. The results showed that both methods reproduced not only the articulatory processes but also the acoustic signals of the reference utterances. Moreover, the regularization methods led to more physiologically plausible articulatory processes and to estimated articulatory trajectories better matched to VTL's articulatory preferences, thus reproducing more natural and intelligible speech. This study also found that the convolutional layers, when used in conjunction with batch normalization layers, automatically learned more distinctive features from log power spectrograms. Furthermore, the neural network based ACS systems trained using German data could be generalized to the utterances of other languages.
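    A minimal sketch of the second method's convolutional-LSTM regression with a smoothness penalty, written in PyTorch; the layer sizes, feature dimensions, and names are assumptions rather than the thesis's exact configuration.

```python
# Acoustic-to-articulatory regression sketch: Conv1d + BatchNorm capture
# local spectral structure, a bidirectional LSTM captures temporal
# dependence, and a smoothness loss penalizes frame-to-frame jumps.
import torch
import torch.nn as nn

class ConvLSTMRegressor(nn.Module):
    def __init__(self, n_acoustic=40, n_articulatory=30, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_acoustic, hidden, kernel_size=5, padding=2),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, acoustic):  # acoustic: (batch, frames, n_acoustic)
        h = self.conv(acoustic.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.out(h)        # (batch, frames, n_articulatory)

def smoothness_loss(trajectories):
    """Penalize large frame-to-frame jumps in predicted trajectories."""
    return (trajectories[:, 1:] - trajectories[:, :-1]).pow(2).mean()

# One illustrative training step on random stand-in data.
model = ConvLSTMRegressor()
x = torch.randn(4, 100, 40)            # 4 utterances, 100 feature frames
target = torch.randn(4, 100, 30)       # reference articulatory trajectories
y_hat = model(x)
loss = nn.functional.mse_loss(y_hat, target) + 0.1 * smoothness_loss(y_hat)
loss.backward()
```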