    Statistical identification of articulatory roles in speech production.

    The human speech apparatus is a rich source of information and offers many cues in the speech signal, owing to its biomechanical constraints and physiological interdependencies. Coarticulation, a direct consequence of these speech production factors, is one of the main problems affecting the performance of speech systems. Incorporating production knowledge could therefore benefit speech recognisers and synthesisers. Hand-coded rules and scores derived from the phonological knowledge used by production-oriented models of speech are simple and incomplete representations of the complex speech production process, while statistical models built from measurements of speech articulation fail to identify the cause of constraints. There is a need for explanatory yet descriptive models of articulation for understanding and modelling the effects of coarticulation. This thesis aims to provide compact descriptive models of realistic speech articulation by identifying and capturing the essential characteristics of human articulators using measurements from electromagnetic articulography. The constraints on articulators during speech production are identified in the form of critical, dependent and redundant roles using entirely statistical, data-driven methods. The critical role captures the maximally constrained, target-driven behaviour of an articulator; the dependent role models the partial constraints due to physiological interdependencies; and the redundant role reflects the unconstrained behaviour of an articulator that is maximally prone to coarticulation. Statistical target models are also obtained as a by-product of the identified roles. The algorithm for identifying articulatory roles (and estimating the respective model distributions) for each phone is presented and the results are critically evaluated. The identified data-driven constraints are compared with the well-known and commonly used constraints derived from the IPA (International Phonetic Alphabet). The identified critical roles not only agreed with the place and manner descriptions of each phone but also provided a phoneme-to-phone transformation by capturing language- and speaker-specific behaviour of articulators. Models trained from the identified constraints fitted the phone distributions better (a 40% improvement). An evaluation of the proposed search procedure against an exhaustive search for role identification demonstrated that the proposed approach performs equally well at a much lower computational cost. Articulation models built at the planning stage from sparse yet efficient articulatory representations, using standard trajectory-generation techniques, showed some potential for modelling articulatory behaviour. Plenty of scope exists for further developing models of articulation within the proposed framework.
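
    As a rough illustration of how such roles can be identified statistically, the sketch below scores each EMA channel by how much a phone narrows its positional variance relative to the speaker's global distribution: strongly narrowed channels behave as critical, mildly narrowed ones as dependent, and the rest as redundant. The thresholds and the variance-ratio criterion are illustrative assumptions, not the thesis's actual algorithm or parameter values.

```python
import numpy as np

def classify_roles(phone_samples, grand_var, crit_ratio=0.3, red_ratio=0.9):
    """Classify each articulator channel for one phone.

    phone_samples: (n_frames, n_channels) EMA positions sampled at the
    phone's realizations. grand_var: per-channel variance over all speech.
    The ratio thresholds are illustrative, not values from the thesis.
    """
    phone_var = phone_samples.var(axis=0)
    ratio = phone_var / grand_var          # low ratio = tightly constrained
    roles = np.where(ratio < crit_ratio, "critical",
                     np.where(ratio < red_ratio, "dependent", "redundant"))
    # Per-channel Gaussian target models fall out of the same statistics.
    targets = list(zip(phone_samples.mean(axis=0), phone_var))
    return roles, targets
```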

    Data-Driven Critical Tract Variable Determination for European Portuguese

    Technologies such as real-time magnetic resonance imaging (RT-MRI) can provide valuable information to evolve our understanding of the static and dynamic aspects of speech, contributing to the determination of which articulators are essential (critical) in producing specific sounds and how (gestures). While a visual analysis and comparison of imaging data or vocal tract profiles can already provide relevant findings, the sheer amount of available data demands, and can strongly profit from, unsupervised data-driven approaches. Recent work in this regard has asserted the possibility of determining critical articulators from RT-MRI data by considering a representation of vocal tract configurations based on landmarks placed on the tongue, lips, and velum, yielding meaningful results for European Portuguese (EP). Advancing this previous work to obtain a characterization of EP sounds grounded in Articulatory Phonology, important for exploring critical gestures and advancing, for example, articulatory speech synthesis, entails the consideration of a novel set of tract variables. To this end, this article explores critical variable determination considering a vocal tract representation aligned with Articulatory Phonology and the Task Dynamics framework. The overall results, obtained considering data for three EP speakers, show the applicability of this approach and are consistent with existing descriptions of EP sounds.
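
    A minimal sketch of one such data-driven criterion, assuming phone-conditioned and global samples for each tract variable are available: variables whose phone-conditioned distribution departs most from the global one are the most constrained, hence candidates for critical status. The variable names and the use of a Gaussian KL divergence are assumptions for illustration, not the article's exact statistic.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence between two 1-D Gaussians, KL(p || q)."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def rank_tract_variables(phone_data, global_data):
    """Rank tract variables (e.g. 'lip_aperture', 'tt_constriction_degree')
    by how strongly a phone constrains them relative to the speaker's
    global distribution; higher divergence = more critical."""
    scores = {}
    for name in phone_data:
        p = np.asarray(phone_data[name])
        g = np.asarray(global_data[name])
        scores[name] = gaussian_kl(p.mean(), p.var(), g.mean(), g.var())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```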

    Multi-Level Audio-Visual Interactions in Speech and Language Perception

    That we perceive our environment as a unified scene rather than as individual streams of auditory, visual, and other sensory information has recently provided motivation to move past the long-held tradition of studying these systems separately. Although the senses are each unique in their transduction organs, neural pathways, and cortical primary areas, they are ultimately merged in a meaningful way that allows us to navigate the multisensory world. Investigating how the senses are merged has become an increasingly wide field of research in recent decades, with the introduction and increased availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with special focus on the facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3) and in increased entrainment to multisensory periodic stimuli, reflected by steady-state responses (Chapter 4). When the audio-visual information is incongruent, the two streams can often, but not always, combine to form a third, physically absent percept (known as the McGurk effect). This effect is investigated (Chapter 5) using real-word stimuli. McGurk percepts were not robustly elicited for a majority of stimulus types, but patterns of responses suggest that the physical and lexical properties of the auditory and visual stimuli may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge suggesting that audio-visual interactions occur at multiple stages of processing.
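
    For the steady-state findings, a standard frequency-tagging measure conveys what "increased entrainment" means operationally: the spectral amplitude at the stimulation frequency relative to neighbouring bins. The sketch below is a common analysis of this kind, not necessarily the chapter's exact pipeline.

```python
import numpy as np

def ssr_snr(signal, fs, f_stim, n_neighbors=10):
    """Signal-to-noise ratio of a steady-state response: amplitude at the
    stimulation frequency divided by the mean amplitude of neighbouring
    frequency bins. Assumes a 1-D recording sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))   # bin nearest the tag frequency
    neighbours = np.r_[spectrum[max(0, k - n_neighbors):k],
                       spectrum[k + 1:k + 1 + n_neighbors]]
    return spectrum[k] / neighbours.mean()
```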

    Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages

    This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the seemingly effortless process of language comprehension is the perceiver's knowledge about the rate at which linguistic form and meaning unfold in time, and the ability to adapt to variations in the input. The vast body of work in this area has focused on speech perception, where the goal is to determine how linguistic information is recovered from acoustic signals. Testing some of these theories in the visual processing of American Sign Language (ASL) provides a unique opportunity to better understand how sign languages are processed and which aspects of speech perception models are in fact about language perception across modalities. The first part of the dissertation presents three psychophysical experiments investigating temporal integration windows in sign language perception by testing the intelligibility of locally time-reversed sentences. The findings demonstrate the contribution of modality to the time-scales of these windows, with signing successively integrated over longer durations (~250-300 ms) than speech (~50-60 ms), while also pointing to modality-independent mechanisms, whereby integration occurs over durations that correspond to the size of linguistic units. The second part of the dissertation focuses on production rates in sentences taken from natural conversations in English, Korean, and ASL. Data on word, sign, morpheme, and syllable rates suggest that while the rate of words and signs can vary from language to language, the relationship between the rate of syllables and the rate of morphemes is relatively consistent across these typologically diverse languages. The results on rates in ASL also complement the findings of the perception experiments by confirming that the time-scales at which phonological units fluctuate in production match the temporal integration windows in perception. These results are consistent with the hypothesis that there are modality-independent time pressures on language processing, and the discussion synthesizes converging findings from other domains of research and proposes ideas for future investigations.
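
    The local time-reversal manipulation itself is simple to state: the signal is divided into consecutive fixed-length windows and each window is played backwards in place, leaving the window order intact. A minimal sketch follows; the window size is the experimenter's choice, with the ~50 ms (speech) and ~250-300 ms (sign) figures above being the relevant scales.

```python
import numpy as np

def locally_time_reverse(signal, fs, window_ms):
    """Reverse a signal within consecutive fixed-length windows while
    keeping the windows in their original order. For audio, fs is the
    sample rate; for video, fs would be the frame rate and the first
    axis the frame index."""
    w = max(1, int(fs * window_ms / 1000))
    out = np.asarray(signal).copy()
    for start in range(0, len(out), w):
        out[start:start + w] = out[start:start + w][::-1]
    return out
```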

    The Status of Coronals in Standard American English: An Optimality-Theoretic Account

    Coronals are very special sound segments. There is abundant evidence from various fields of phonetics that clearly establishes coronals as a class of consonants appropriate for phonological analysis. The set of coronals is stable across varieties of English, unlike other consonant types, e.g. labials and dorsals, which are subject to a greater or lesser degree of variation. Coronals exhibit stability in inventories crosslinguistically, yet they simultaneously display flexibility in alternations, i.e. assimilation, deletion, epenthesis, and dissimilation, when required by the contradictory forces of perception and production. The two main, opposing types of alternation that coronals in Standard American English (SAE) participate in are examined: weakening phenomena, i.e. assimilation and deletion, and strengthening phenomena, i.e. epenthesis and dissimilation. Coronals are notorious for their contradictory behavior, especially in alternations. This type of behavior can be accounted for within a phonetically grounded OT framework that unites both the phonetic and phonological aspects of alternations. The various sets of inherently conflicting FAITHFULNESS and MARKEDNESS constraints needed for an OT analysis of SAE alternations are introduced.
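
    The core of an OT analysis, evaluating candidates against strictly ranked violable constraints, can be stated compactly: the winner is the candidate whose violation profile is lexicographically smallest. The toy constraints below illustrate coronal nasal place assimilation and are placeholders, not the dissertation's actual constraint set.

```python
def ot_winner(candidates, constraints):
    """Return the optimal candidate under strictly ranked constraints.

    candidates: list of output strings; constraints: list of functions,
    highest-ranked first, each returning a violation count."""
    return min(candidates, key=lambda c: tuple(con(c) for con in constraints))

# Toy tableau: the coronal nasal of /in+possible/ assimilating in place.
candidates = ["inpossible", "impossible"]
constraints = [
    lambda c: 1 if "np" in c else 0,    # markedness: no nasal+stop place mismatch
    lambda c: 1 if c[1] != "n" else 0,  # faithfulness: IDENT(place) on the nasal
]
print(ot_winner(candidates, constraints))  # -> "impossible"
```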

    Articulatory features for robust visual speech recognition

    Speech Communication

    Contains reports on four research projects. Supported by the C.J. LeBel Fellowship; Kurzweil Applied Intelligence; National Institutes of Health (Grant 5 T32 NS07040); National Institutes of Health (Grant 5 RO1 NS04332); National Science Foundation (Grant BNS84-18733); Systems Development Foundation; and U.S. Navy - Office of Naval Research (Contract N00014-82-K-0727).

    Perception of English and Polish obstruents

    This work focuses on the voiced-voiceless contrast in the perception of English and Polish obstruents. The research methodology is based on acoustic manipulation of the temporal and spectral parameters that take part in the implementation of the voicing contrast in the languages under study. Three groups of subjects were compared: beginning learners of English, advanced users of English, and native speakers of English. The work consists of two theoretical parts, which illustrate the problem area and contrast the strategies of voicing-contrast implementation in the languages studied, and an experimental part, which presents the research methodology employed and an analysis of the results. The first part addresses the role of speech perception in linguistic research. It touches on aspects such as the lack of a direct relation between the acoustic signal and the phonological category and the exceptional plasticity and adaptability of human speech perception, and it reviews proposals for a comprehensive description of how human speech perception operates. In subsequent subsections the work discusses perception in the context of language contact, that is, the discrimination of acoustic contrasts that occur in a foreign language but are absent from the first language. Models describing this process are reviewed, as are hypotheses concerning potential success in mastering effective perception of contrasts occurring in a foreign language. The second part concentrates on the temporal and acoustic differences in the implementation of voicing in English and Polish. The aspects described include Voice Onset Time, vowel duration, closure duration, frication duration, devoicing, and release-burst duration. The third, experimental part presents the material under investigation, the methodology of the material manipulation, and the characteristics of the groups. Hypotheses based on the theoretical assumptions are then verified against the obtained results. The final part discusses the perceptual problems encountered by Polish learners of English and draws pedagogical conclusions.
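
    As an illustration of the kind of temporal manipulation described, the sketch below re-splices a stop-initial token to a target Voice Onset Time. The function and its landmark indices are hypothetical and stand in for the study's actual (unspecified) editing procedure; burst and voicing-onset landmarks would be located beforehand, e.g. in Praat.

```python
import numpy as np

def set_vot(signal, fs, burst_idx, voicing_idx, target_vot_ms):
    """Re-splice a stop-initial token so the interval between the release
    burst and the voicing onset equals target_vot_ms. Shortening excises
    aspiration; lengthening repeats its final stretch. Assumes
    voicing_idx > burst_idx."""
    target = int(fs * target_vot_ms / 1000)
    current = voicing_idx - burst_idx
    aspiration = signal[burst_idx:voicing_idx]
    if target <= current:
        new_asp = aspiration[:target]                     # cut aspiration short
    else:
        chunk = aspiration[-min(current, int(0.01 * fs)):]  # last ~10 ms
        reps = -(-(target - current) // len(chunk))          # ceil division
        new_asp = np.concatenate(
            [aspiration, np.tile(chunk, reps)[: target - current]])
    return np.concatenate([signal[:burst_idx], new_asp, signal[voicing_idx:]])
```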

    Investigating Non-Uniqueness in the Acoustic-Articulatory Inversion Mapping

    The task of inferring articulatory configurations from a given acoustic signal is a problem for which a reliable and accurate solution has been lacking for a number of decades. The changing shape of the vocal tract is responsible for altering the parameters of the sound. Each different configuration of the articulators will regularly lead to a single distinct sound being produced (a unique mapping from articulation to acoustics). It should therefore be possible to take an acoustic signal and invert the process, recovering the exact vocal-tract shape for a given sound. This would have wide-reaching applications in the field of speech and language technology, such as improving facial animation and speech recognition systems, and vocal-tract information inferred from the acoustic signal can facilitate a richer understanding of the actual constraints on articulator movement. However, research on the inversion mapping has revealed that there is often a multi-valued mapping from the acoustic domain to the articulatory domain. Work on identifying and resolving this non-uniqueness has thus far been somewhat successful, with Mixture Density Networks (MDNs) and articulator trajectory systems providing probabilistic methods of finding the most likely articulatory configuration for a given signal. Using a subset of an EMA corpus, along with a combination of an instantaneous inversion mapping and a non-parametric clustering algorithm, I aim to quantify the extent to which vectors acoustically similar to a given phone can exhibit qualitatively different vocal-tract shapes. Categorically identifying acoustically similar sounds that exhibit a multi-valued mapping in the articulatory domain, as well as the articulators for which this occurs, could be key to resolving issues in the reliability and quality of the inversion mapping.
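
    A minimal sketch of the described analysis, assuming paired acoustic and articulatory frame matrices from the EMA subset: for each frame, gather its acoustic nearest neighbours and cluster their articulatory vectors; more than one cluster flags a locally multi-valued mapping. The choice of k and of mean shift as the non-parametric clusterer are assumptions for illustration, not the author's exact procedure.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.neighbors import NearestNeighbors

def nonuniqueness_rate(acoustic, articulatory, k=50):
    """Fraction of frames whose k acoustic nearest neighbours split into
    more than one articulatory cluster, i.e. acoustically similar frames
    produced with qualitatively different vocal-tract shapes.

    acoustic: (n_frames, n_acoustic_dims); articulatory: (n_frames, n_ema_dims).
    """
    nn = NearestNeighbors(n_neighbors=k).fit(acoustic)
    multivalued = 0
    for i in range(len(acoustic)):
        _, idx = nn.kneighbors(acoustic[i:i + 1])
        labels = MeanShift().fit(articulatory[idx[0]]).labels_
        multivalued += len(set(labels)) > 1
    return multivalued / len(acoustic)
```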