
    Emergent consonantal quantity contrast and context-dependence of gestural phasing

    Embodied Task Dynamics is a modeling platform combining a task-dynamical implementation of articulatory phonology with an optimization approach based on adjustable trade-offs between production efficiency and perception efficacy. Within this platform we model a consonantal quantity contrast in bilabial stops as emerging from local adjustment of demands on the relative prominence of the consonantal gesture, conceptualized in terms of closure duration. The contrast is manifested in the form of two distinct, stable inter-gestural coordination patterns characterized by quantitative differences in relative phasing between the consonant and the coproduced vocalic gesture. Furthermore, the model generates a set of qualitative predictions regarding the dependence of kinematic characteristics and inter-gestural coordination on consonant quantity and gestural context. To evaluate these predictions, we collected articulatory data from Finnish speakers uttering singletons and geminates in the same context as explored by the model. Statistical analysis of the data shows strong agreement with the model predictions. This result provides support for the hypothesis that speech articulation is guided by efficiency principles that underlie many other types of embodied skilled action.
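
    As a rough illustration of the optimization idea described above, the sketch below minimizes a toy cost that trades off articulatory effort against perceptual efficacy to choose a relative phasing between a consonantal and a vocalic gesture. The cost terms, the weight lam, and the closure-duration proxy are hypothetical stand-ins, not the Embodied Task Dynamics formulation.

```python
# Illustrative sketch (not the authors' model): choosing the relative phase of a
# consonantal gesture against a vocalic gesture by minimizing a cost that trades
# off articulatory effort against perceptual efficacy. The cost terms, the
# weight `lam`, and the closure-duration proxy are hypothetical.
from scipy.optimize import minimize_scalar

def articulatory_effort(phase):
    # Toy assumption: effort grows as phasing departs from a neutral value.
    return (phase - 0.5) ** 2

def perceptual_cost(phase, target_closure):
    # Toy assumption: perception favors a phasing whose resulting closure
    # duration is close to a target (longer target for a geminate).
    closure = 1.0 - phase        # crude proxy: earlier phasing -> longer closure
    return (closure - target_closure) ** 2

def optimal_phase(lam, target_closure):
    cost = lambda p: articulatory_effort(p) + lam * perceptual_cost(p, target_closure)
    return minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded").x

# Raising the demanded closure duration shifts the optimum to a distinct,
# stable phasing pattern, analogous to the singleton/geminate split.
print("singleton-like phasing:", round(optimal_phase(lam=2.0, target_closure=0.3), 3))
print("geminate-like phasing :", round(optimal_phase(lam=2.0, target_closure=0.6), 3))
```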

    Context-dependent articulation of consonant gemination in Estonian

    The three-way quantity system is a well-known phonological feature of Estonian. A number of studies have shown that quantity is realized in a disyllabic foot by the stressed-to-unstressed syllable rhyme duration ratio, with pitch movement as a secondary cue. The stressed syllable rhyme duration is achieved by combining the length of the vowel and the coda consonant, which enables minimal septets of CVCV-sequences based on segmental duration. In this study we analyze articulatory (EMA) recordings from four native Estonian speakers producing all possible quantity combinations of intervocalic bilabial stops in two vocalic contexts (/ɑ-i/ vs. /i-ɑ/). The analysis shows that kinematic characteristics (gesture duration, spatial extent, and peak velocity) are primarily affected by quantity at the segmental level: phonologically longer segments are produced with longer and larger lip closing gestures and, conversely, with shorter and smaller lip opening movements. The tongue transition gesture is consistently lengthened and slowed down as consonant quantity increases. In general, both kinematic characteristics and intergestural coordination are influenced by non-linear interactions between segmental quantity levels as well as vocalic context.
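
    The kinematic measures named above (gesture duration, spatial extent, peak velocity) are conventionally extracted from EMA trajectories using velocity-based landmarks. The sketch below illustrates one common convention (a 20% peak-velocity threshold) on a synthetic lip-aperture curve; the threshold, sampling rate, and signal are assumptions, not the paper's procedure.

```python
# Minimal sketch of standard kinematic measures (gesture duration, spatial
# extent, peak velocity) computed from a synthetic lip-aperture trajectory.
# The 20% peak-velocity landmark criterion is a common convention, assumed
# here rather than taken from the paper.
import numpy as np

fs = 200.0                                   # EMA sampling rate in Hz (assumed)
t = np.arange(0, 0.4, 1 / fs)
aperture = 10.0 - 8.0 * np.exp(-((t - 0.2) / 0.05) ** 2)   # mm, toy closing-opening curve

velocity = np.gradient(aperture, 1 / fs)     # mm/s
closing = velocity < 0                       # closing phase: aperture decreasing
peak_vel = np.max(np.abs(velocity[closing]))

# Gesture onset/offset at 20% of peak velocity within the closing phase.
thresh = 0.2 * peak_vel
idx = np.where(closing & (np.abs(velocity) >= thresh))[0]
onset, offset = idx[0], idx[-1]

duration = (offset - onset) / fs                         # s
spatial_extent = aperture[onset] - aperture[offset]      # mm
print(f"duration={duration:.3f}s extent={spatial_extent:.2f}mm peak_vel={peak_vel:.1f}mm/s")
```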

    Modeling the development of pronunciation in infant speech acquisition.

    Pronunciation is an important part of speech acquisition, but little attention has been given to the mechanism or mechanisms by which it develops. Speech sound qualities, for example, have just been assumed to develop by simple imitation. In most accounts this is then assumed to be by acoustic matching, with the infant comparing his output to that of his caregiver. There are theoretical and empirical problems with both of these assumptions, and we present a computational model, Elija, that does not learn to pronounce speech sounds this way. Elija starts by exploring the sound-making capabilities of his vocal apparatus. Then he uses the natural responses he gets from a caregiver to learn equivalence relations between his vocal actions and his caregiver's speech. We show that Elija progresses from a babbling stage to learning the names of objects. This demonstrates the viability of a non-imitative mechanism in learning to pronounce.
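
    A minimal sketch of the non-imitative mechanism described above: instead of matching its own acoustics to the caregiver's, the learner stores which caregiver response followed each of its vocal actions and later retrieves an action by that association. The class and data names are illustrative placeholders, not part of the Elija implementation.

```python
# Toy sketch of non-imitative learning: the model never compares its own
# acoustics to the caregiver's; it only remembers which caregiver response
# followed each of its own vocal actions. All names and data are hypothetical.

class NonImitativeLearner:
    def __init__(self):
        self.action_to_response = {}   # motor action -> caregiver token heard after it

    def babble(self, motor_action, caregiver_response):
        # During babbling, the caregiver's reformulation is associated with
        # the action that elicited it (an equivalence relation, not a match).
        self.action_to_response[motor_action] = caregiver_response

    def name_object(self, caregiver_word):
        # Later, hearing a caregiver word retrieves a motor action whose
        # associated response resembles that word (exact match in this toy).
        for action, response in self.action_to_response.items():
            if response == caregiver_word:
                return action
        return None

learner = NonImitativeLearner()
learner.babble(motor_action="close-lips+voice", caregiver_response="ba")
learner.babble(motor_action="tongue-tip+voice", caregiver_response="da")
print(learner.name_object("ba"))   # -> "close-lips+voice"
```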

    ARTICULATORY INFORMATION FOR ROBUST SPEECH RECOGNITION

    Current Automatic Speech Recognition (ASR) systems fail to perform nearly as well as humans due to their lack of robustness against speech variability and noise contamination. The goal of this dissertation is to investigate these critical robustness issues, put forth different ways to address them, and finally present an ASR architecture based upon these robustness criteria. Acoustic variations adversely affect the performance of current phone-based ASR systems, in which speech is modeled as 'beads on a string', where the beads are the individual phone units. While phone units are distinct in the cognitive domain, they vary in the physical domain, and their variation occurs due to a combination of factors including speaking style and speaking rate, among others; this phenomenon is commonly known as 'coarticulation'. Traditional ASR systems address such coarticulatory variations by using contextualized phone units such as triphones. Articulatory phonology accounts for coarticulatory variations by modeling speech as a constellation of constricting actions known as articulatory gestures. In such a framework, speech variations such as coarticulation and lenition are accounted for by gestural overlap in time and gestural reduction in space. To realize a gesture-based ASR system, articulatory gestures have to be inferred from the acoustic signal. At the initial stage of this research, a study was performed using synthetically generated speech to obtain a proof of concept that articulatory gestures can indeed be recognized from the speech signal. It was observed that having vocal tract constriction trajectories (TVs) as an intermediate representation facilitated the task of recognizing gestures from the speech signal. Presently no natural speech database contains articulatory gesture annotation; hence an automated iterative time-warping architecture is proposed that can annotate any natural speech database with articulatory gestures and TVs. Two natural speech databases, X-ray microbeam and Aurora-2, were annotated; the former was used to train a TV estimator and the latter was used to train a Dynamic Bayesian Network (DBN) based ASR architecture. The DBN architecture used two sets of observations: (a) acoustic features in the form of mel-frequency cepstral coefficients (MFCCs) and (b) TVs (estimated from the acoustic speech signal). In this setup the articulatory gestures were modeled as hidden random variables, eliminating the need for explicit gesture recognition. Word recognition results using the DBN architecture indicate that articulatory representations not only can help to account for coarticulatory variations but can also significantly improve the noise robustness of ASR systems.
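
    The sketch below shows, in simplified form, the two observation streams used in the DBN architecture: frame-level MFCCs and tract-variable (TV) trajectories estimated from the acoustics, fused into one observation matrix. The TV estimator and all array shapes are placeholders; the DBN with hidden gesture variables is not reproduced.

```python
# Simplified sketch of the two observation streams described above: per-frame
# MFCCs and estimated tract-variable (TV) trajectories fused into a single
# observation matrix. The TV estimator and the feature dimensions are
# hypothetical placeholders, not the dissertation's trained models.
import numpy as np

n_frames = 300
mfcc = np.random.randn(n_frames, 13)     # stand-in for MFCCs from the acoustic signal

def estimate_tvs(mfcc_frames):
    # Placeholder for a trained TV estimator (e.g., a regression model that
    # maps acoustics to 8 vocal-tract constriction trajectories).
    weights = np.random.randn(mfcc_frames.shape[1], 8)
    return mfcc_frames @ weights

tvs = estimate_tvs(mfcc)                 # (n_frames, 8) constriction trajectories
observations = np.hstack([mfcc, tvs])    # (n_frames, 21) joint observation stream

# In the dissertation's architecture, observations like these condition hidden
# gesture variables in a Dynamic Bayesian Network; only the fusion step is shown.
print(observations.shape)
```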

    The Status of Coronals in Standard American English . An Optimality-Theoretic Account

    Coronals are very special sound segments. There is abundant evidence from various fields of phonetics which clearly establishes coronals as a class of consonants appropriate for phonological analysis. The set of coronals is stable across varieties of English, unlike other consonant types, e.g. labials and dorsals, which are subject to a greater or lesser degree of variation. Coronals exhibit stability in inventories crosslinguistically, but they simultaneously display flexibility in alternations, i.e. assimilation, deletion, epenthesis, and dissimilation, when it is required by the contradictory forces of perception and production. The two main, opposing types of alternation that coronals in Standard American English (SAE) participate in are examined. These are weakening phenomena, i.e. assimilation and deletion, and strengthening phenomena, i.e. epenthesis and dissimilation. Coronals are notorious for their contradictory behavior, especially in alternations. This type of behavior can be accounted for within a phonetically grounded Optimality Theory (OT) framework that unites both phonetic and phonological aspects of alternations. Various sets of inherently conflicting FAITHFULNESS and MARKEDNESS constraints that are needed for an OT analysis of SAE alternations are introduced.
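
    To make the constraint-interaction logic concrete, the toy evaluation below ranks an AGREE(place) markedness constraint above place faithfulness for coronals, so a coronal nasal assimilates to a following labial. The specific constraints, candidates, and violation counts are invented for illustration and are not taken from the dissertation's analysis.

```python
# Toy illustration of Optimality-Theoretic evaluation with ranked constraints,
# following the FAITHFULNESS/MARKEDNESS split mentioned in the abstract.
# Ranking, candidates, and violation counts are invented (nasal place
# assimilation of a coronal in /in+put/).
RANKING = ["AGREE(place)", "IDENT(place)-dorsal", "IDENT(place)-coronal"]

candidates = {
    "input": {"AGREE(place)": 1, "IDENT(place)-dorsal": 0, "IDENT(place)-coronal": 0},
    "imput": {"AGREE(place)": 0, "IDENT(place)-dorsal": 0, "IDENT(place)-coronal": 1},
}

def evaluate(cands, ranking):
    # The optimal candidate has the lexicographically smallest violation
    # profile read in ranking order (higher-ranked constraints decide first).
    return min(cands, key=lambda c: [cands[c][con] for con in ranking])

print(evaluate(candidates, RANKING))   # -> "imput": the coronal assimilates
```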

    Optimization-based modeling of suprasegmental speech timing

    Windmann A. Optimization-based modeling of suprasegmental speech timing. Bielefeld: Universität Bielefeld; 2016.

    Constrained Emergence of Universals and Variation in Syllable Systems

    A computational model of emergent syllable systems is developed based on a set of functional constraints on syllable systems and the assumption that language structure emerges through cumulative change over time. The constraints were derived from general communicative factors as well as from the phonetic principles of perceptual distinctiveness and articulatory ease. Through evolutionary optimization, the model generated mock vocabularies optimized for the given constraints. Several simulations were run to understand how these constraints might define the emergence of universals and variation in complex sound systems. The predictions were that (1) CV syllables would be highly frequent in all vocabularies evolved under the constraints; (2) syllables with consonant clusters, consonant codas, and vowel onsets would occur much less frequently; (3) a relationship would exist between the number of syllable types in a vocabulary and the average word length in the vocabulary; (4) different syllable types would emerge according to what we termed an 'iterative principle of syllable structure', and their frequency would be directly related to their complexity; and (5) categorical differences would emerge between vocabularies evolved under the same constraints. Simulation results confirmed these predictions and provided novel insights into why regularities and differences may occur across languages. Specifically, the model suggested that both language universals and variation are consistent with a set of functional constraints that are fixed relative to one another. Language universals reflect underlying constraints on the system, and language variation represents the many different and equally good solutions to the unique problem defined by these constraints.
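
    A schematic sketch of the kind of evolutionary optimization described above: a mock vocabulary evolves under a fitness that rewards perceptual distinctiveness and penalizes consonant clusters and codas. The segment inventory, constraint weights, and the simple hill-climbing update are assumptions, not the model's actual operators.

```python
# Schematic sketch (not the paper's model) of evolving a mock vocabulary under
# two of the constraints named above: perceptual distinctiveness (words should
# not collide) and articulatory ease (penalize consonant clusters and codas).
import random

CONSONANTS, VOWELS = "ptkbdgmns", "aiu"
TYPES = ["CV", "CVC", "V", "CCV"]

def syllable():
    return "".join(random.choice(CONSONANTS if s == "C" else VOWELS)
                   for s in random.choice(TYPES))

def word():
    return "".join(syllable() for _ in range(random.randint(1, 3)))

def fitness(vocab):
    distinctiveness = len(set(vocab)) / len(vocab)
    clusters = sum(w[i] in CONSONANTS and w[i + 1] in CONSONANTS
                   for w in vocab for i in range(len(w) - 1))
    codas = sum(w[-1] in CONSONANTS for w in vocab)
    ease_penalty = (clusters + codas) / len(vocab)
    return distinctiveness - 0.3 * ease_penalty

def evolve(size=30, generations=200):
    vocab = [word() for _ in range(size)]
    for _ in range(generations):
        mutant = list(vocab)
        mutant[random.randrange(size)] = word()      # mutate one word
        if fitness(mutant) >= fitness(vocab):        # hill-climbing stand-in for selection
            vocab = mutant
    return vocab

print(evolve()[:5])   # CV-heavy words tend to dominate the evolved vocabulary
```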

    Timing in talking: What is it used for, and how is it controlled?

    In the first part of the paper, we summarize the linguistic factors that shape speech timing patterns, including the prosodic structures which govern them, and suggest that speech timing patterns are used to aid utterance recognition. In the spirit of optimal control theory, we propose that recognition requirements are balanced against requirements such as rate of speech and style, as well as movement costs, to yield (near-)optimal planned surface timing patterns; additional factors may influence the implementation of that plan. In the second part of the paper, we discuss theories of timing control in models of speech production and motor control. We present three types of evidence that support models of speech production that involve extrinsic timing. These include (i) increasing variability with increases in interval duration, (ii) evidence that speakers refer to and plan surface durations, and (iii) independent timing of movement onsets and offsets.
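
    Point (i) above, increasing variability with interval duration, is the scalar (Weber-like) property often cited as evidence for extrinsic timing. The short simulation below illustrates that pattern under the textbook assumption that the standard deviation of a produced interval is proportional to its target duration; it is an illustration of the empirical pattern, not the authors' analysis.

```python
# Numeric illustration of point (i): under a clock-like (extrinsic) timing
# account, the SD of produced interval durations grows with the target
# duration. The proportional-noise model and the Weber fraction are assumed.
import numpy as np

rng = np.random.default_rng(0)
targets = np.array([0.10, 0.20, 0.40, 0.80])   # target durations in seconds
weber_fraction = 0.05                          # assumed coefficient of variation

for t in targets:
    produced = rng.normal(loc=t, scale=weber_fraction * t, size=10_000)
    print(f"target={t:.2f}s  mean={produced.mean():.3f}s  sd={produced.std():.4f}s")
# The SD scales roughly linearly with target duration, i.e. variability
# increases with interval duration.
```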

    Biologically inspired methods in speech recognition and synthesis: closing the loop

    Current state-of-the-art approaches to computational speech recognition and synthesis are based on statistical analyses of extremely large data sets. It is currently unknown how these methods relate to the methods that the human brain uses to perceive and produce speech. In this thesis, I present a conceptual model, Sermo, which describes some of the computations that the human brain uses to perceive and produce speech. I then implement three large-scale brain models that accomplish tasks theorized to be required by Sermo, drawing upon techniques in automatic speech recognition, articulatory speech synthesis, and computational neuroscience. The first model extracts features from an audio signal by performing a frequency decomposition with an auditory periphery model, then decorrelating the information in that power spectrum with methods commonly used in audio and image compression. I show that the features produced by this model, implemented with biologically plausible spiking neurons, can be used to classify phones in pre-segmented speech with significantly better accuracy than the features typically used in automatic speech recognition systems. Additionally, I show that this model can be used to compare auditory periphery models in terms of their ability to support phone classification of pre-segmented speech. The second model uses a symbol-like neural representation of a sequence of syllables to generate a trajectory of premotor commands that can be used to control an articulatory synthesizer. I show that the model can produce trajectories up to several seconds in length from a static syllable sequence representation that result in intelligible synthesized speech. The trajectories reflect the high temporal variability of human speech, and smoothly transition between successive syllables, even in rapid utterances. The third model classifies syllables from a trajectory of premotor commands. I show that the model is able to classify syllables online despite high temporal variability, and can produce the same syllable representations used by the second model. These two models can be connected in future work in order to implement a closed-loop sensorimotor speech system. Unlike current computational approaches, all three of these models are implemented with biologically plausible spiking neurons, which can be simulated with neuromorphic hardware, and can interface naturally with artificial cochleas. All models are shown to scale to the level of adult human vocabularies in terms of the neural resources required, though limitations on their performance as a result of scaling are discussed.
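
    The first model's feature pipeline, frequency decomposition followed by decorrelation of the power spectrum, can be approximated in conventional (non-spiking) terms as a log power spectrum followed by a DCT, the transform used in audio and image compression. The sketch below shows that approximation on a synthetic frame; the auditory periphery model and spiking-neuron implementation from the thesis are not reproduced.

```python
# Sketch of a conventional analogue of the first model's feature pipeline:
# frequency decomposition of one audio frame, then decorrelation of the log
# power spectrum with a DCT. The synthetic signal and parameters are assumed.
import numpy as np
from scipy.fft import dct

fs = 16_000
t = np.arange(0, 0.025, 1 / fs)                       # one 25 ms frame
frame = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
log_power = np.log(power + 1e-10)
features = dct(log_power, norm="ortho")[:13]          # keep low-order, decorrelated coefficients

print(features.round(2))
```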