    Tongue Movements in Feeding and Speech

    The position of the tongue relative to the upper and lower jaws is regulated in part by the position of the hyoid bone, which, with the anterior and posterior suprahyoid muscles, controls the angulation and length of the floor of the mouth on which the tongue body 'rides'. The instantaneous shape of the tongue is controlled by the 'extrinsic muscles' acting in concert with the 'intrinsic' muscles. Recent anatomical research in non-human mammals has shown that the intrinsic muscles can best be regarded as a 'laminated segmental system' with tightly packed layers of 'transverse', 'longitudinal', and 'vertical' muscle fibers. Each segment receives separate innervation from branches of the hypoglossal nerve. These new anatomical findings are contributing to the development of functional models of the tongue, many based on increasingly refined finite-element modeling techniques. They also begin to explain the observed behavior of the jaw-hyoid-tongue complex, or the hyomandibular 'kinetic chain', in feeding and consecutive speech. Similarly, major efforts, involving many imaging techniques (cinefluorography, ultrasound, electropalatography, NMRI, and others), have examined the spatial and temporal relationships of the tongue surface in sound production. The feeding literature shows localized tongue-surface change as the process progresses. The speech literature shows extensive change in tongue shape between classes of vowels and consonants. Although there is a fundamental dichotomy between the referential framework and the methodological approach of studies of the orofacial complex in feeding and speech, it is clear that many of the shapes adopted by the tongue in speaking are also seen in feeding. It is suggested that the range of shapes used in feeding is the matrix for both behaviors.
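
    As a rough illustration of the finite-element approach mentioned here, the sketch below (Python) solves a one-dimensional chain of linear elastic elements in which a few "muscle" elements carry an active contractile force, so the activated region shortens. Every parameter, the chain geometry, and the formulation are assumptions made for illustration; actual tongue models are two- or three-dimensional, nonlinear, and far more detailed.

        import numpy as np

        n_el = 10        # elements in the chain; node 0 is fixed (assumed)
        k = 200.0        # element stiffness in N/m (assumed)
        f_active = 2.0   # contractile force in elements 3-5, in N (assumed)

        # Assemble the stiffness matrix for free nodes 1..n_el.
        K = np.zeros((n_el, n_el))
        F = np.zeros(n_el)
        for e in range(n_el):            # element e joins nodes e and e+1
            if e > 0:
                K[e-1, e-1] += k
                K[e-1, e] -= k
                K[e, e-1] -= k
            K[e, e] += k

        # An active element pulls its two end nodes toward each other.
        for e in range(3, 6):
            F[e-1] += f_active           # node e pulled toward node e+1
            F[e] -= f_active             # node e+1 pulled toward node e

        u = np.linalg.solve(K, F)        # static nodal displacements
        print(u)                         # the activated mid-chain region shortens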

    Evolution of the speech‐ready brain: The voice/jaw connection in the human motor cortex

    A prominent model of the origins of speech, known as the "frame/content" theory, posits that oscillatory lowering and raising of the jaw provided an evolutionary scaffold for the development of syllable structure in speech. Because such oscillations are nonvocal in most nonhuman primates, the evolution of speech required the addition of vocalization onto this scaffold in order to turn such jaw oscillations into vocalized syllables. In the present functional MRI study, we demonstrate overlapping somatotopic representations between the larynx and the jaw muscles in the human primary motor cortex. This proximity between the larynx and jaw in the brain might support the coupling between vocalization and jaw oscillations needed to generate syllable structure. This model suggests that humans inherited voluntary control of jaw oscillations from ancestral species, but added voluntary control of vocalization onto it via the evolution of a new brain area that came to be situated near the jaw region in the human motor cortex.

    Influences of tongue biomechanics on speech movements during the production of velar stop consonants: a modeling study

    This study explores the following hypothesis: the forward looping movements of the tongue observed in VCV sequences are due in part to the anatomical arrangement of the tongue muscles and to how they are used to produce a velar closure. The study uses an anatomically based 2D biomechanical tongue model. Tissue elastic properties are accounted for with finite-element modeling, and movement is controlled by constant-rate control parameter shifts. Tongue raising and lowering movements are produced by the model through the combined actions of the genioglossus, styloglossus, and hyoglossus. Simulations of V1CV2 movements were made, where C is a velar consonant and V is [a], [i], or [u]. If V1 is one of the vowels [a] or [u], the resulting trajectories describe movements that begin to loop forward before consonant closure and continue to slide along the palate during the closure. This prediction is in agreement with classical data published in the literature. If V1 is the vowel [i], we observe a small backward movement. This is also in agreement with some measurements on human speakers, but it contradicts the original data published by Houde (1967). These observations support the idea that the biomechanical properties of the tongue could be the main factor responsible for the forward loops when V1 is a back vowel. In the [i] context, it seems that additional factors have to be taken into consideration in order to explain the observations made on some speakers.
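
    The control scheme named here, constant-rate control parameter shifts, can be sketched in a few lines of Python: an equilibrium target ramps linearly between successive V1, C, and V2 configurations while a critically damped second-order system tracks it, which already yields curved transitions. The targets, durations, and time constant below are invented for illustration and stand in for the paper's biomechanical model, which this toy dynamics does not reproduce.

        import numpy as np

        targets = np.array([[0.0, 0.0],    # V1 (hypothetical 2D tongue point)
                            [1.0, 1.5],    # velar closure C
                            [2.0, 0.0]])   # V2
        dt, shift_dur, tau = 0.001, 0.15, 0.04   # seconds (assumed values)

        def ramp(p0, p1, t):
            """Constant-rate shift of the control parameter from p0 to p1."""
            a = min(t / shift_dur, 1.0)
            return p0 + a * (p1 - p0)

        x, v = targets[0].copy(), np.zeros(2)
        path = [x.copy()]
        for seg in range(2):                     # V1 -> C, then C -> V2
            for step in range(int(0.25 / dt)):   # 250 ms per transition
                eq = ramp(targets[seg], targets[seg + 1], step * dt)
                acc = (eq - x) / tau**2 - 2.0 * v / tau   # critically damped
                v += acc * dt
                x += v * dt
                path.append(x.copy())
        path = np.array(path)   # plotting path shows the curved trajectory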

    Biomechanics of the orofacial motor system: Influence of speaker-specific characteristics on speech production

    Orofacial biomechanics has been shown to influence the time signals of speech production and to impose constraints with which the central nervous system has to contend in order to achieve the goals of speech production. After a short explanation of the concept of biomechanics and its link with the variables usually measured in phonetics, two modeling studies are presented that exemplify the influence of speaker-specific vocal tract morphology and muscle anatomy on speech production. First, speaker-specific 2D biomechanical models of the vocal tract were used that accounted for inter-speaker differences in head morphology. In particular, speakers differ in the main fiber orientations of the styloglossus muscle. Focusing on the vowel /i/, it was shown that these differences induce speaker-specific susceptibility to changes in this muscle's activation. Second, the study by Stavness et al. (2013) is summarized. These authors used a 3D biomechanical face model to investigate the role of potential inter-speaker variability in the implementation of the orbicularis oris muscle. A deeper implementation tends to reduce lip aperture; a more peripheral implementation tends to increase lip protrusion. These studies illustrate that speaker-specific orofacial biomechanics influences the patterns of articulatory and acoustic variability and the emergence of speech control strategies.
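
    The role of fiber orientation can be made concrete with a trivial Python sketch: if the net styloglossus pull on a tongue node is approximated as a single force along the mean fiber direction, the same activation yields differently directed forces for speakers with different orientations. The angles, the activation level, and the single-force approximation are all assumptions made for illustration.

        import numpy as np

        def muscle_force(activation, angle_deg, f_max=1.0):
            """Force vector along the mean fiber direction (2D sketch)."""
            theta = np.deg2rad(angle_deg)
            return activation * f_max * np.array([np.cos(theta), np.sin(theta)])

        # Two hypothetical speakers with different mean fiber orientations:
        for angle in (35.0, 50.0):
            print(angle, muscle_force(0.5, angle))
        # Same activation, different force direction, hence speaker-specific
        # sensitivity of the vowel configuration to styloglossus activation.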

    Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses

    Speaking involves the coordination of multiple neuromotor systems, including respiration, phonation, and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), both cortical and subcortical, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips, and velum (i.e., alveolars versus bilabials, and nasals versus orals) and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum, and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region in phonation, we found that a dorsal M1 region linked to respiratory control showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge of the neural mechanisms underlying neuromotor speech control, which holds promise for the non-invasive study of neural dysfunction in motor-speech disorders. This work was supported by the Spanish Ministry of Economy and Competitiveness through the Juan de la Cierva Fellowship (FJCI-2015-26814) and the Ramon y Cajal Fellowship (RYC-2017-21845), the Spanish State Research Agency through the BCBL "Severo Ochoa" excellence accreditation (SEV-2015-490), the Basque Government (BERC 2018-2021), and the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant (No 799554).
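
    The MVPA step can be sketched with scikit-learn in Python: train a linear classifier on trial-wise voxel patterns from a region of interest and estimate decoding accuracy by cross-validation. The data below are random placeholders; in a study like this one the patterns would be GLM beta estimates from M1, cerebellar, and basal-ganglia ROIs, and cross-validation would respect run boundaries.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 200))   # trials x voxels (placeholder betas)
        y = np.repeat([0, 1], 40)        # 0 = alveolar, 1 = bilabial (assumed)

        clf = make_pipeline(StandardScaler(), LinearSVC())
        scores = cross_val_score(clf, X, y, cv=5)
        print(scores.mean())             # near 0.5 here, since X is pure noise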

    How does human motor cortex regulate vocal pitch in singers?

    Vocal pitch is used as an important communicative device by humans, as found in the melodic dimension of both speech and song. Vocal pitch is determined by the degree of tension in the vocal folds of the larynx, which itself is influenced by complex and nonlinear interactions among the laryngeal muscles. The relationship between these muscles and vocal pitch has been described by a mathematical model in the form of a set of 'control rules'. We searched for the biological implementation of these control rules in the larynx motor cortex of the human brain. We scanned choral singers with functional magnetic resonance imaging as they produced discrete pitches at four different levels across their vocal range. While the locations of the larynx motor activations varied across singers, the activation peaks for the four pitch levels were highly consistent within each individual singer. This result was corroborated using multi-voxel pattern analysis, which demonstrated an absence of patterned activations differentiating any pairing of pitch levels. The complex and nonlinear relationships between the multiple laryngeal muscles that control vocal pitch may obscure the neural encoding of vocal pitch in the brain.
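
    The abstract does not reproduce the control rules, but a standard first-order relation (the ideal-string approximation of vocal-fold vibration, not necessarily the model used in this study) conveys why pitch is tied to laryngeal muscle tension:

        \[ F_0 \approx \frac{1}{2L} \sqrt{\frac{\sigma}{\rho}} \]

    Here L is the vibrating length of the folds, σ the effective tissue stress set by the laryngeal muscles, and ρ the tissue density. Because the laryngeal muscles jointly and nonlinearly modulate both σ and L, the mapping from muscle activity to fundamental frequency is itself nonlinear, consistent with the difficulty of reading pitch off cortical activation patterns.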

    Towards a silent speech interface for Portuguese: Surface electromyography and the nasality challenge

    A Silent Speech Interface (SSI) aims to perform Automatic Speech Recognition (ASR) in the absence of an intelligible acoustic signal. It can be used as a human-computer interaction modality in high-background-noise environments, such as living rooms, or to aid speech-impaired individuals, a group whose prevalence increases with ageing. If this interaction modality is made available in users' own native language with adequate performance, then, since it does not rely on acoustic information, it will be less susceptible to problems of environmental noise, privacy, information disclosure, and the exclusion of speech-impaired persons. To contribute to the existence of this promising modality for Portuguese, for which no SSI implementation is known, we are exploring and evaluating the potential of state-of-the-art approaches. One of the major challenges we face in SSI for European Portuguese is the recognition of nasality, a core characteristic of the language's phonetics and phonology. In this paper, a silent speech recognition experiment based on surface electromyography is presented. The results confirmed recognition problems between minimal pairs of words that differ only in the nasality of one phone; these confusions caused 50% of the total error, and the resulting accuracy degradation correlates well with existing knowledge.
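
    A minimal Python sketch of the recognition pipeline implied here: compute standard time-domain sEMG features per trial and train a classifier. The signals and labels are placeholders, and the single-channel setup is an assumption; a real system would use several electrode channels over the face and neck and word-level decoding.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def emg_features(signal):
            """Common time-domain sEMG features for one analysis window."""
            return np.array([
                np.sqrt(np.mean(signal**2)),           # RMS amplitude
                np.mean(np.abs(np.diff(signal))),      # mean absolute slope
                np.mean(np.sign(signal[:-1]) != np.sign(signal[1:])),  # zero-crossing rate
            ])

        rng = np.random.default_rng(1)
        X = np.stack([emg_features(rng.normal(size=600)) for _ in range(40)])
        y = np.repeat([0, 1], 20)   # e.g. oral vs. nasal minimal pair (assumed)

        clf = RandomForestClassifier(n_estimators=100).fit(X[::2], y[::2])
        print(clf.score(X[1::2], y[1::2]))   # held-out accuracy on placeholders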