Articulating: the neural mechanisms of speech production
Speech production is a highly complex sensorimotor task involving tightly coordinated processing across large expanses of the cerebral cortex. Historically, the study of the neural underpinnings of speech suffered from the lack of an animal model. The development of non-invasive structural and functional neuroimaging techniques in the late 20th century has dramatically improved our understanding of the speech network. Techniques for measuring regional cerebral blood flow have illuminated the neural regions involved in various aspects of speech, including feedforward and feedback control mechanisms. In parallel, we have designed, experimentally tested, and refined a neural network model detailing the neural computations performed by specific neuroanatomical regions during speech. Computer simulations of the model account for a wide range of experimental findings, including data on articulatory kinematics and brain activity during normal and perturbed speech. Furthermore, the model is being used to investigate a wide range of communication disorders.
R01 DC002852 - NIDCD NIH HHS; R01 DC007683 - NIDCD NIH HHS; R01 DC016270 - NIDCD NIH HHS
Accepted manuscript
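The combination of feedforward commands with feedback-based corrections described in the abstract can be illustrated with a toy sketch. This is our illustration only, not the authors' model: the one-dimensional articulatory state, the `control_step` function, and the gain value are all assumptions made for demonstration.

```python
# Toy sketch of combined feedforward/feedback motor control for a
# one-dimensional articulatory target (illustrative only, not the
# published model). The motor command is a feedforward term plus a
# gain-scaled correction driven by the sensory error.

def control_step(target, state, u_ff, feedback_gain=0.5):
    """Return the next articulatory state after one control update."""
    error = target - state          # auditory/somatosensory mismatch
    u_fb = feedback_gain * error    # feedback correction command
    return state + u_ff + u_fb      # apply combined command

# Simulate recovery from a perturbation: the state starts displaced
# from the target, and feedback control compensates over time.
state = 0.0
for _ in range(20):
    state = control_step(target=1.0, state=state, u_ff=0.0)
```

With no feedforward command, the error halves on each update, so the state converges geometrically toward the target, mimicking the gradual compensation seen in perturbed-speech experiments.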
The challenges of viewpoint-taking when learning a sign language: Data from the 'frog story' in British Sign Language
Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency with which different viewpoints were used and how long those viewpoints were maintained, and the number of articulators that were used simultaneously. We found that even though learners’ and deaf signers’ narratives did not differ in overall duration, learners’ narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers in using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.
Speaking Rate Effects on Normal Aspects of Articulation: Outcomes and Issues
The articulatory effects of speaking rate have been a point of focus for a substantial literature in speech science. The normal aspects of speaking rate variation have influenced theories and models of speech production and perception in the literature pertaining to both normal and disordered speech. While the body of literature pertaining to the articulatory effects of speaking rate change is reasonably large, few speaker-general outcomes have emerged. The purpose of this paper is to review outcomes of the existing literature and address problems related to the study of speaking rate that may be germane to the recurring theme that speaking rate effects are largely idiosyncratic.
From Holistic to Discrete Speech Sounds: The Blind Snow-Flake Maker Hypothesis
Sound is a medium used by humans to carry information. The existence of such a medium is a prerequisite for language. It is organized into a code, called speech, which provides a repertoire of forms shared within each language community. This code is necessary to support the linguistic interactions that allow humans to communicate. How, then, may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is characterized by several properties: speech is digital and compositional (vocalizations are made of units re-used systematically in other syllables); phoneme inventories show precise regularities as well as great diversity across human languages; all the speakers of a language community categorize sounds in the same manner, but each language has its own system of categorization, possibly very different from every other. How can a speech code with these properties form?
These are the questions we approach in the paper, using the method of the artificial: we build a society of artificial agents and study what mechanisms may provide answers. This does not directly prove which mechanisms were used by humans, but rather suggests what kinds of mechanism may have been used. This allows us to shape the search space of possible answers, in particular by showing what is sufficient and what is not necessary.
The mechanism we present is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non-language-specific neural devices allows a population of agents to build a speech code that has the properties mentioned above. Its originality is that it presupposes neither a functional pressure for communication, nor the ability to have coordinated social interactions (the agents do not play language or imitation games). It relies on the self-organizing properties of a generic coupling between perception and production both within agents, and on the interactions between agents.
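The kind of self-organized convergence described, with no imitation game and no communicative reward, can be sketched in a minimal simulation. This is our own construction, not the paper's model: the one-dimensional sound space, the prototype representation, and all parameter values are assumptions for illustration.

```python
import random

# Minimal illustrative sketch of self-organizing sound categories.
# Each agent holds a few acoustic "prototypes" on a 1-D sound
# dimension. Agents babble sounds near their own prototypes; every
# agent that hears a sound nudges its nearest prototype toward it.
# There is no imitation game and no communicative reward, yet the
# coupled perception-production updates pull the population toward
# a shared repertoire of sound categories.

random.seed(0)
N_AGENTS, N_PROTOS, STEPS, RATE = 10, 3, 5000, 0.1
agents = [[random.random() for _ in range(N_PROTOS)]
          for _ in range(N_AGENTS)]

for _ in range(STEPS):
    speaker = random.choice(agents)
    sound = random.choice(speaker) + random.gauss(0, 0.02)  # noisy vocalization
    for hearer in agents:  # everyone hears, including the speaker
        i = min(range(N_PROTOS), key=lambda k: abs(hearer[k] - sound))
        hearer[i] += RATE * (sound - hearer[i])  # move nearest prototype
```

Because every prototype both generates sounds and is attracted to the sounds it hears, the population's categories co-evolve through the perception-production coupling alone, loosely mirroring the mechanism the abstract describes.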
From Analogue to Digital Vocalizations
The validation of a new articulator system for orthognathic model surgery
A review of the literature showed that the outcome of orthognathic surgery may differ from the planned outcome, that casts mounted on semi-adjustable articulators show systematic errors of orientation and that there may be a causal connection between them.
It was demonstrated that the movements of casts mounted on, and moved relative to, a standard articulator produced movements of different magnitudes relative to the natural head position. A mathematical model was developed to quantify the difference and the predictions of the resulting equations were confirmed in a photographic study using image analysis.
The second stage of the study compared a standard and the orthognathic articulator. Plastic model skulls were mounted at different angulations to represent different natural head positions. Casts of the maxillary teeth of the skulls mounted on the orthognathic articulator accurately reproduced the occlusal plane angles of the skulls, but those mounted on the standard articulator showed systematic errors of up to 28°. Surgical movements of the maxilla were reproduced using perioperative wafers constructed on casts mounted on the standard and orthognathic articulators. The accuracy of the maxillary repositioning was assessed at five anatomical reference points on the skulls. The results indicated that the orthognathic articulator was significantly more accurate than the standard articulator.
Lingual articulation in children with developmental speech disorders
This thesis presents thirteen research papers published between 1987 and 1997, and a summary and discussion of their contribution to the field of developmental speech disorders. The publications collectively constitute a body of work with two overarching themes. The first is methodological: all the publications report articulatory data relating to tongue movements recorded using the instrumental technique of electropalatography (EPG). The second is the clinical orientation of the research: the EPG data are interpreted throughout for the purpose of informing the theory and practice of speech pathology. The majority of the publications are original, experimental studies of lingual articulation in children with developmental speech disorders. At the same time the publications cover a broad range of theoretical and clinical issues relating to lingual articulation including: articulation in normal speakers, the clinical applications of EPG, data analysis procedures, articulation in second language learners, and the effect of oral surgery on articulation.
The contribution of the publications to the field of developmental speech disorders of unknown origin, also known as phonological impairment or functional articulation disorder, is summarised and discussed. In total, EPG data from fourteen children are reported. The collective results from the publications do not support the cognitive/linguistic explanation of developmental speech disorders. Instead, the EPG findings are marshalled to build the case that specific deficits in speech motor control can account for many of the diverse speech error characteristics identified by perceptual analysis in previous studies.
Some of the children studied had speech motor deficits that were relatively discrete, involving, for example, an apparently isolated difficulty with tongue tip/blade groove formation for sibilant targets. Articulatory difficulties of the 'discrete' or specific type are consistent with traditional views of functional articulation disorder. EPG studies of tongue control in normal adults provided insights into a different type of speech motor control deficit observed in the speech of many of the children studied. Unlike the children with discrete articulatory difficulties, others produced abnormal EPG patterns for a wide range of lingual targets. These abnormal gestures were characterised by broad, undifferentiated tongue-palate contact, accompanied by variable approach and release phases. These 'widespread', undifferentiated gestures are interpreted as constituting a previously undescribed form of speech motor deficit, resulting from a difficulty in controlling the tongue tip/blade system independently of the tongue body. Undifferentiated gestures were found to result in variable percepts depending on the target and the timing of the particular gesture, and may manifest as perceptually acceptable productions, phonological substitutions or phonetic distortions.
It is suggested that discrete and widespread speech motor deficits reflect different stages along a developmental or severity continuum, rather than distinct subgroups with different underlying deficits. The children studied all manifested speech motor control deficits of varying degrees along this continuum. It is argued that it is the unique anatomical properties of the tongue, combined with the high level of spatial and temporal accuracy required for tongue tip/blade and tongue body co-ordination, that put lingual control specifically at risk in young children. The EPG findings question the validity of assumptions made about the presence/absence of speech motor control deficits, when such assumptions are based entirely on non-instrumental assessment procedures.
A novel account of the sequence of acquisition of alveolar stop articulation in children with normal speech development is proposed, based on the EPG data from the children with developmental speech disorders. It is suggested that broad, undifferentiated gestures may occur in young normal children, and that adult-like lingual control develops gradually through the processes of differentiation and integration. Finally, the EPG findings are discussed in relation to two recent theoretical frameworks, that of psycholinguistic models and a dynamic systems approach to speech acquisition.