147 research outputs found

    A phonetic concatenative approach of labial coarticulation

    Predicting the effects of labial coarticulation is an important step towards developing an artificial talking head. This paper describes a concatenation approach that uses sigmoids to represent the evolution of four labial parameters: lip aperture, protrusion, stretching, and jaw aperture. A first formal algorithm determines the relevant transitions, i.e. those corresponding to phonemes that impose constraints on one of the labial parameters. The relevant transitions are then either retrieved or interpolated from a set of reference sigmoids trained on a speaker-specific corpus made up of isolated vowels, CV, VCV, and VCCV sequences, and 100 sentences. A final stage improves the overall syntagmatic consistency of the concatenation.
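The sigmoid representation described above can be sketched as follows. The parameter, midpoint, and slope values are illustrative assumptions, not figures taken from the paper:

```python
import math

def sigmoid_transition(t, start_val, end_val, t_mid, slope):
    """Evaluate one labial parameter (e.g. protrusion) along a sigmoid
    transition between two phoneme targets.

    t      : time (s) at which to evaluate
    t_mid  : time (s) of the transition midpoint
    slope  : steepness of the sigmoid (1/s)
    """
    s = 1.0 / (1.0 + math.exp(-slope * (t - t_mid)))
    return start_val + (end_val - start_val) * s

# Example: protrusion rising from 0.2 to 0.9 around t = 0.10 s,
# sampled every 10 ms over 200 ms
trajectory = [sigmoid_transition(t / 100.0, 0.2, 0.9, 0.10, 60.0)
              for t in range(0, 21)]
```

Interpolating between two reference sigmoids would then amount to blending their midpoints and slopes.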

    Comparison between two predicting methods of labial coarticulation

    The construction of a highly intelligible talking head with relevant lip gestures is especially important for hearing-impaired people. This requires realistic rendering of lip and jaw movements and thus relevant modeling of labial coarticulation. This paper compares the Cohen & Massaro prediction algorithm with our concatenation-plus-completion strategy guided by phonetic knowledge. Although the Cohen & Massaro algorithm performs slightly better overall, the concatenation-and-completion strategy approximates consonant clusters markedly better, particularly for the protrusion parameter. The results also suggest that the concatenation-and-completion strategy could easily be improved by recording better reference models for isolated vowels.
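For reference, the Cohen & Massaro approach predicts each articulatory parameter as a dominance-weighted average of per-segment targets, with negative-exponential dominance functions. The constants below (alpha, theta, c) are illustrative assumptions; in the published model they are fitted per segment and per parameter:

```python
import math

def dominance(t, center, alpha=1.0, theta=20.0, c=1.0):
    """Negative-exponential dominance of one segment at time t (s)."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def coarticulated_value(t, segments):
    """Dominance-weighted average of segment targets at time t.

    segments: list of (target_value, center_time) pairs for the
    segments whose dominance overlaps time t."""
    num = sum(dominance(t, c) * v for v, c in segments)
    den = sum(dominance(t, c) for _, c in segments)
    return num / den

# Midway between two equally dominant targets, the prediction is
# their plain average.
```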

    Phoneme Recognition Using Acoustic Events

    This paper presents a new approach to phoneme recognition using non-sequential sub-phoneme units. These units, called acoustic events, are phonologically meaningful as well as recognizable from speech signals. Acoustic events form a phonologically incomplete representation compared to distinctive features; this may be partly overcome by incorporating phonological constraints. Currently, 24 binary events describing manner and place of articulation, vowel quality, and voicing are used to recognize all German phonemes. Phoneme recognition in this paradigm consists of two steps: after the acoustic events have been determined from the speech signal, a phonological parser generates syllable and phoneme hypotheses from the event lattice. Results obtained on a speaker-dependent corpus are presented. (4 pages, to appear at ICSLP'94.)
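A minimal sketch of the event-to-phoneme step, assuming a hypothetical inventory in which each phoneme is specified by the set of binary events it requires (the phonemes and event names below are invented for illustration, not the paper's 24-event set):

```python
# Hypothetical lookup: each phoneme is specified by the acoustic
# events (manner, place, voicing, ...) it requires.
PHONEME_EVENTS = {
    "m": {"nasal", "labial", "voiced"},
    "b": {"plosive", "labial", "voiced"},
    "p": {"plosive", "labial"},
}

def hypothesize_phonemes(detected_events):
    """Return every phoneme whose event specification is contained in
    the set of events detected for one stretch of signal.  Because the
    representation is incomplete, several phonemes may be hypothesized
    for the same events; phonological constraints would prune these."""
    return sorted(p for p, spec in PHONEME_EVENTS.items()
                  if spec <= detected_events)
```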

    HMM-based Automatic Visual Speech Segmentation Using Facial Data

    We describe automatic visual speech segmentation using facial data captured by a stereo-vision technique. The segmentation is performed with an HMM-based forced-alignment mechanism widely used in automatic speech recognition. The idea rests on the assumption that training on visual speech data alone can capture what is unique to the facial component of speech articulation: the asynchrony (time lags) between visual and acoustic speech segments and significant coarticulation effects. This shows the extent to which a phoneme may visually affect surrounding phonemes, which is valuable for labeling visual speech segments according to their dominant coarticulatory contexts.
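Forced alignment itself can be sketched as a small dynamic program: given per-frame log scores for a known left-to-right phone sequence, find the segmentation that maximizes the total score. This is a toy stand-in for the full HMM machinery, not the authors' implementation:

```python
def forced_align(frame_scores, num_phones):
    """Minimal forced-alignment sketch.

    frame_scores[t][p] is the log-likelihood of frame t under phone p
    of a KNOWN left-to-right phone sequence.  At each frame we either
    stay in the current phone or advance to the next one; the returned
    list gives the phone index assigned to each frame."""
    T = len(frame_scores)
    NEG = float("-inf")
    best = [[NEG] * num_phones for _ in range(T)]
    back = [[0] * num_phones for _ in range(T)]
    best[0][0] = frame_scores[0][0]   # must start in the first phone
    for t in range(1, T):
        for p in range(num_phones):
            stay = best[t - 1][p]
            move = best[t - 1][p - 1] if p > 0 else NEG
            if move > stay:
                best[t][p], back[t][p] = move + frame_scores[t][p], p - 1
            else:
                best[t][p], back[t][p] = stay + frame_scores[t][p], p
    # Backtrace from the last phone at the last frame.
    path, p = [num_phones - 1], num_phones - 1
    for t in range(T - 1, 0, -1):
        p = back[t][p]
        path.append(p)
    return path[::-1]
```

Running the same alignment on visual-only scores versus acoustic scores would expose the time lags the paper discusses.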

    Challenges in analysis and processing of spontaneous speech

    Selected and peer-reviewed papers of the workshop entitled Challenges in Analysis and Processing of Spontaneous Speech (Budapest, 2017).

    Segmental and prosodic improvements to speech generation


    Fast Speech in Unit Selection Speech Synthesis

    Moers-Prinz D. Fast Speech in Unit Selection Speech Synthesis. Bielefeld: Universität Bielefeld; 2020.

    Speech synthesis is part of the everyday life of many people with severe visual disabilities. For those who rely on assistive speech technology, the ability to choose a fast speaking rate is reported to be essential. Expressive speech synthesis and other spoken-language interfaces may also require fast speech. Architectures such as formant or diphone synthesis can produce synthetic speech at fast rates, but the generated speech does not sound very natural. Unit selection synthesis systems, however, are capable of delivering more natural output; nevertheless, fast speech has not yet been adequately implemented in such systems. The goal of the work presented here was therefore to determine an optimal strategy for modeling fast speech in unit selection speech synthesis, providing potential users with a more natural-sounding alternative for fast speech output.
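One plausible way to bias unit selection toward a fast speaking rate is to shorten the target durations and penalize candidate units that deviate from them in the target cost. The cost form, weights, and rate factor below are illustrative assumptions, not the strategy the thesis actually evaluates:

```python
def fast_target_dur(normal_dur, rate=1.5):
    """Shorten the target duration (s) by the requested rate factor."""
    return normal_dur / rate

def target_cost(unit_dur, target_dur, unit_f0, target_f0,
                w_dur=1.0, w_f0=0.5):
    """Hypothetical target cost for one candidate unit at a fast rate:
    relative duration deviation plus a relative pitch term, each with
    an assumed weight."""
    return (w_dur * abs(unit_dur - target_dur) / target_dur
            + w_f0 * abs(unit_f0 - target_f0) / target_f0)

# A unit matching the shortened duration and pitch target costs nothing;
# a unit recorded at normal tempo pays a duration penalty.
```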

    Domain-optimized Chinese speech generation.

    Fung Tien Ying. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 119-128). Abstracts in English and Chinese.

    Table of contents:
    - Chapter 1: Introduction (p.14) — General Trends on Speech Generation; Domain-Optimized Speech Generation in Chinese; Thesis Organization
    - Chapter 2: Background (p.19) — Linguistic and Phonological Properties of Chinese (Articulation, Tones); Previous Development in Speech Generation (Articulatory Synthesis, Formant Synthesis, Concatenative Synthesis, Existing Systems); Our Speech Generation Approach
    - Chapter 3: Corpus-based Syllable Concatenation: A Feasibility Test (p.37) — Capturing Syllable Coarticulation with Distinctive Features; Creating a Domain-Optimized Wavebank (Generate-and-Filter, Waveform Segmentation); The Use of Multi-Syllable Units; Unit Selection for Concatenative Speech Output; A Listening Test
    - Chapter 4: Scalability and Portability to the Stocks Domain (p.55) — Complexity of the ISIS Responses; XML for Input Semantic and Grammar Representation; Tree-Based Filtering Algorithm; Energy Normalization
    - Chapter 5: Investigation in Tonal Contexts (p.71) — The Nature of Tones (Human Perception of Tones); Relative Importance of Left and Right Tonal Context (Date-Time and Numeric Subgrammars); Selection Scheme for Tonal Variants (Listening Test for the Tone Backoff Scheme, Error Analysis)
    - Chapter 6: Summary and Future Work (p.95) — Contributions; Future Directions
    - Appendices A-H (p.100-118) — listening-test questionnaires for FOREX response generation and the backoff unit selection scheme; major response types for ISIS; recording corpus for tone investigation in the date-time subgrammar; statistical tests for left and right tonal contexts and for the backoff unit selection scheme
    - Bibliography (p.119)
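The tone backoff scheme investigated in Chapter 5 can be sketched as a context-relaxation search over the wavebank: prefer a unit recorded in the matching left and right tonal context, then relax one context at a time. The relaxation order below is an assumption for illustration; the thesis determines empirically which tonal context matters more:

```python
def select_tonal_variant(wavebank, syllable, left_tone, right_tone):
    """Hypothetical tone-context backoff for unit selection.

    wavebank maps (syllable, left_tone, right_tone) -> unit id, with
    None acting as a wildcard context.  Try the most specific key
    first, then back off (here: drop the left context before the
    right), and finally accept any variant of the syllable."""
    for key in [(syllable, left_tone, right_tone),
                (syllable, None, right_tone),
                (syllable, left_tone, None),
                (syllable, None, None)]:
        if key in wavebank:
            return wavebank[key]
    return None  # syllable not in the wavebank at all
```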