
    Speech Perception under the Tent: A Domain-general Predictive Role for the Cerebellum

    The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206). From this set, we identified all studies involving passive speech and sound perception (n = 72; 64% speech, 12.5% sounds, 12.5% music, and 11% tones) and speech production and articulation (n = 175). Standard and coactivation neuroimaging meta-analyses were used to compare cerebellar and associated cortical activations between passive perception and production. We found distinct regions of perception- and production-related activity in the cerebellum, as well as regions of perception–production overlap. Each of these regions had distinct patterns of cortico-cerebellar connectivity. To test for domain-generality versus specificity, we identified all psychological and task-related terms in the Neurosynth database that predicted activity in cerebellar regions associated with passive perception and production. Regions in the cerebellum activated by speech perception were associated with domain-general terms related to prediction. One hallmark of predictive processing is metabolic savings (i.e., decreases in neural activity when events are predicted). To test the hypothesis that the cerebellum plays a predictive role in speech perception, we compared cortical activation between studies reporting cerebellar activation and those without cerebellar activation during speech perception. When the cerebellum was active during speech perception, there was far less cortical activation than when it was inactive. The results suggest that the cerebellum implements a domain-general mechanism related to prediction during speech perception.
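
    The study-selection step described above amounts to filtering a large database of neuroimaging studies by topic terms before running the meta-analyses. The sketch below illustrates that kind of filtering on a toy Neurosynth-style table; the column names, term lists, and matching rule are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: selecting perception vs. production studies from a
# Neurosynth-style table of studies. Column names ("study_id", "abstract")
# and the term lists are illustrative assumptions, not the paper's pipeline.
import pandas as pd

studies = pd.DataFrame({
    "study_id": [1, 2, 3, 4],
    "abstract": [
        "passive listening to spoken sentences ...",
        "overt speech production and articulation ...",
        "passive perception of musical tones ...",
        "covert articulation during reading ...",
    ],
})

perception_terms = ["passive listening", "speech perception", "tones", "music"]
production_terms = ["speech production", "articulation"]

def matches(text, terms):
    """True if any of the terms occurs in the abstract text."""
    text = text.lower()
    return any(term in text for term in terms)

perception = studies[studies["abstract"].apply(lambda a: matches(a, perception_terms))]
production = studies[studies["abstract"].apply(lambda a: matches(a, production_terms))]

print(f"perception studies: {len(perception)}, production studies: {len(production)}")
```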

    The Role of Sensory Feedback in Developmental Stuttering: A Review

    Developmental stuttering is a neurodevelopmental disorder that severely affects speech fluency. Multiple lines of evidence point to a role of sensory feedback in the disorder; this has led to a number of theories proposing different disruptions to the use of sensory feedback during speech motor control in people who stutter. The purpose of this review was to bring together evidence from studies using altered auditory feedback paradigms with people who stutter, in order to evaluate the predictions of these different theories. This review highlights converging evidence for particular patterns of differences in the responses of people who stutter to feedback perturbations. The implications for hypotheses on the nature of the disruption to sensorimotor control of speech in the disorder are discussed, with reference to neurocomputational models of speech control (predominantly, the DIVA model; Guenther et al., 2006; Tourville et al., 2008). While some consistent patterns are emerging from this evidence, it is clear that more work in this area is needed with developmental samples in particular, in order to tease apart differences related to symptom onset from those related to compensatory strategies that develop with experience of stuttering.
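
    As context for the perturbation studies reviewed, responses to altered auditory feedback are commonly quantified as the fraction of an imposed shift that speakers oppose in their own productions. The sketch below computes such a compensation index from a synthetic formant track; the signals, analysis windows, and formula are illustrative assumptions, not data or analyses from any study reviewed.

```python
# Minimal sketch: quantifying a compensatory response in an altered auditory
# feedback trial as the produced change relative to the imposed perturbation.
# Signals and window choices are illustrative assumptions.
import numpy as np

fs = 100                         # samples per second of the formant track
t = np.arange(0, 2.0, 1 / fs)    # 2-s vowel production
perturbation_hz = 100.0          # upward F1 shift applied to the feedback

# Synthetic produced F1: 500 Hz baseline, drifting downward (opposing the shift).
produced_f1 = (500.0
               - 30.0 * np.clip(t - 0.5, 0, None)
               + np.random.default_rng(5).normal(0, 2, t.size))

baseline = produced_f1[t < 0.5].mean()   # before the response develops
late = produced_f1[t > 1.5].mean()       # steady-state portion of the response

# Compensation: fraction of the imposed shift opposed in production.
compensation = (baseline - late) / perturbation_hz
print(f"compensation: {100 * compensation:.0f}% of the imposed F1 shift")
```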

    Cerebellar tDCS Dissociates the Timing of Perceptual Decisions from Perceptual Change in Speech

    Neuroimaging studies suggest that the cerebellum might play a role in both speech perception and speech perceptual learning. However, it remains unclear what this role is: does the cerebellum directly contribute to the perceptual decision? Or does it contribute to the timing of perceptual decisions? To test this, we applied transcranial direct current stimulation (tDCS) to the right cerebellum during a speech perception task. Participants experienced a series of speech perceptual tests designed to measure and then manipulate their perception of a phonetic contrast. One group received cerebellar tDCS during speech perceptual learning and a different group received "sham" tDCS during the same task. Both groups showed similar learning-related changes in speech perception that transferred to a different phonetic contrast. For both trained and untrained speech perceptual decisions, cerebellar tDCS significantly increased the time it took participants to indicate their decisions with a keyboard press. The results suggest that cerebellar tDCS disrupted the timing of perceptual decisions, while leaving the eventual decision unaltered. In support of this conclusion, we used the drift diffusion model to decompose the data into processes that determine the outcome of perceptual decision-making and those that do not. The modeling suggests that cerebellar tDCS disrupted processes unrelated to decision-making. Taken together, the empirical data and modeling demonstrate that right cerebellar tDCS dissociates the timing of perceptual decisions from perceptual change. The results provide initial evidence in healthy humans that the cerebellum critically contributes to speech timing in the perceptual domain.
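
    The drift diffusion model separates decision-related parameters (drift rate, decision boundary) from a non-decision time that covers stimulus encoding and response execution. The sketch below simulates the model to show how lengthening only the non-decision time slows responses while leaving choice outcomes unchanged, which is the pattern the abstract attributes to cerebellar tDCS; all parameter values are illustrative, not fitted to the study's data.

```python
# Minimal sketch of a drift-diffusion simulation: response time is
# non-decision time plus the time for noisy evidence to reach a boundary.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift=0.3, boundary=1.0, non_decision=0.3,
                 dt=0.001, noise=1.0, n_trials=2000):
    rts, choices = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        choices.append(1 if evidence > 0 else 0)
    return np.array(rts), np.array(choices)

# Same decision parameters, longer non-decision time: accuracy is unchanged
# but responses are uniformly slower -- the pattern attributed to cerebellar tDCS.
rt_sham, choice_sham = simulate_ddm(non_decision=0.30)
rt_tdcs, choice_tdcs = simulate_ddm(non_decision=0.40)
print(f"accuracy: {choice_sham.mean():.2f} vs {choice_tdcs.mean():.2f}")
print(f"mean RT:  {rt_sham.mean():.2f} s vs {rt_tdcs.mean():.2f} s")
```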

    Reorganization of the Neurobiology of Language After Sentence Overlearning

    It is assumed that there is a static set of “language regions” in the brain. Yet, language comprehension engages regions well beyond these, and patients regularly produce familiar “formulaic” expressions when language regions are severely damaged. These observations suggest that the neurobiology of language is not fixed but varies with experience, such as the extent of word sequence learning. We hypothesized that perceiving overlearned sentences is supported by speech production regions and not putative language regions. Participants underwent 2 sessions of behavioral testing and functional magnetic resonance imaging (fMRI). During the intervening 15 days, they repeated 2 sentences 30 times each, twice a day. In both fMRI sessions, they “passively” listened to those sentences and to novel sentences, and produced sentences. Behaviorally, evidence for overlearning included a 2.1-s decrease in reaction times to predict the final word in overlearned sentences. This corresponded to the recruitment of sensorimotor regions involved in sentence production, inactivation of temporal and inferior frontal regions involved in novel sentence listening, and a 45% change in global network organization. Thus, there was a profound whole-brain reorganization following sentence overlearning, out of “language” and into sensorimotor regions. The latter are generally preserved in aphasia and Alzheimer’s disease, perhaps explaining residual abilities with formulaic expressions in both.
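
    The reported "45% change in global network organization" implies comparing a whole-brain graph metric across the two fMRI sessions. The sketch below shows one generic way such a comparison could be set up from region-by-region connectivity matrices; the random data, the threshold, and the choice of global efficiency as the metric are assumptions, not the measures used in the paper.

```python
# Minimal sketch: comparing a global graph metric across two fMRI sessions
# from region-by-region connectivity matrices. Random matrices, threshold,
# and metric are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def global_metric(connectivity, threshold=0.3):
    """Binarize a correlation matrix and return global efficiency."""
    adjacency = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    graph = nx.from_numpy_array(adjacency)
    return nx.global_efficiency(graph)

n_regions = 50
session1 = np.corrcoef(rng.standard_normal((n_regions, 200)))
session2 = np.corrcoef(rng.standard_normal((n_regions, 200)))

e1, e2 = global_metric(session1), global_metric(session2)
print(f"session 1: {e1:.3f}, session 2: {e2:.3f}, change: {100 * (e2 - e1) / e1:.1f}%")
```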

    How is precision regulated in maintaining trunk posture?

    Precision of limb control is associated with increased joint stiffness caused by antagonistic co-activation. The aim of this study was to examine whether this strategy also applies to precision of trunk postural control. To this end, thirteen subjects performed static postural tasks, aiming at a target object with a cursor that responded to 2D trunk angles. By manipulating target dimensions, different levels of precision were imposed in the frontal and sagittal planes. Trunk angle and electromyography (EMG) of abdominal and back muscles were recorded. Repeated measures ANOVAs revealed significant effects of target dimensions on kinematic variability in both movement planes. Specifically, the standard deviation (SD) of trunk angle decreased significantly when target size in the same direction decreased, regardless of the precision demands in the other direction. Thus, precision control of trunk posture was directionally specific. However, no consistent effect of precision demands on trunk muscle activity was found when averaged over the time series. Therefore, it was concluded that stiffness regulation by antagonistic co-activation was not used to meet increased precision demands in trunk postural control. Instead, results from additional analyses suggest that precision of trunk angle was controlled in a feedback mode.
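
    Two quantities are central to this design: kinematic variability (the SD of trunk angle) and the degree of antagonistic co-activation in the EMG. The sketch below computes both for one synthetic trial; the signals and the minimum-based co-contraction index are illustrative assumptions, not necessarily the measures used in the study.

```python
# Minimal sketch: kinematic variability and an antagonist co-contraction
# index for one trial. Synthetic signals; the co-contraction formula is a
# common minimum-based index, assumed here rather than taken from the study.
import numpy as np

rng = np.random.default_rng(2)
trunk_angle = rng.normal(0.0, 0.5, size=2000)          # degrees, sagittal plane
emg_abdominal = np.abs(rng.normal(0.1, 0.02, 2000))    # rectified, normalized EMG
emg_back = np.abs(rng.normal(0.3, 0.05, 2000))

# Precision outcome: standard deviation of the trunk angle over the trial.
angle_sd = trunk_angle.std(ddof=1)

# Co-contraction: shared antagonist activation relative to total activation.
cci = 2 * np.minimum(emg_abdominal, emg_back).sum() / (emg_abdominal + emg_back).sum()

print(f"trunk angle SD: {angle_sd:.2f} deg, co-contraction index: {cci:.2f}")
```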

    Compensation for Changing Motor Uncertainty

    When movement outcome differs consistently from the intended movement, errors are used to correct subsequent movements (e.g., adaptation to displacing prisms or force fields) by updating an internal model of motor and/or sensory systems. Here, we examine changes to an internal model of the motor system under changes in the variance structure of movement errors lacking an overall bias. We introduced a horizontal visuomotor perturbation to change the statistical distribution of movement errors anisotropically, while monetary gains/losses were awarded based on movement outcomes. We derived predictions for simulated movement planners, each differing in its internal model of the motor system. We found that humans responded optimally to the overall change in error magnitude but ignored the anisotropy of the error distribution. Through comparison with the simulated movement planners, we found that aimpoints corresponded quantitatively to those of an ideal movement planner that updates a strictly isotropic (circular) internal model of the error distribution. Aimpoints were planned in a manner that ignored the direction-dependence of error magnitudes, despite the continuous availability of unambiguous information about the anisotropic distribution of actual motor errors.
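
    An ideal movement planner chooses the aimpoint that maximizes expected gain given its internal model of its own motor errors, so an isotropic internal model and the true anisotropic error distribution can prescribe different aimpoints. The sketch below illustrates that comparison by Monte Carlo; the target/penalty layout, payoffs, and error magnitudes are assumptions, not the experiment's actual configuration.

```python
# Minimal sketch: expected-gain aimpoint selection under an anisotropic motor
# error distribution vs. an isotropic internal model of that distribution.
# Layout, payoffs, and error magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

REWARD_RADIUS, PENALTY_RADIUS = 9.0, 9.0   # mm
PENALTY_CENTER = np.array([-9.0, 0.0])     # penalty circle left of the target
GAIN, LOSS = 100, -500

def expected_gain(aim, sigma_x, sigma_y, n=20000):
    """Monte Carlo expected gain for an aimpoint given Gaussian motor error."""
    endpoints = aim + rng.normal(0, [sigma_x, sigma_y], size=(n, 2))
    hit_reward = np.linalg.norm(endpoints, axis=1) < REWARD_RADIUS
    hit_penalty = np.linalg.norm(endpoints - PENALTY_CENTER, axis=1) < PENALTY_RADIUS
    return (GAIN * hit_reward + LOSS * hit_penalty).mean()

aims = np.stack([np.linspace(0, 8, 33), np.zeros(33)], axis=1)  # shift away from penalty

# True (anisotropic) error vs. an isotropic model with matched overall variance.
true_eg = [expected_gain(a, sigma_x=6.0, sigma_y=3.0) for a in aims]
iso_eg = [expected_gain(a, sigma_x=4.7, sigma_y=4.7) for a in aims]

print("best aim under anisotropic model:", aims[int(np.argmax(true_eg)), 0], "mm")
print("best aim under isotropic model:  ", aims[int(np.argmax(iso_eg)), 0], "mm")
```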

    Non-hexagonal neural dynamics in vowel space

    Are the grid cells discovered in rodents relevant to human cognition? Following up on two seminal studies by others, we aimed to check whether an approximate 6-fold, grid-like symmetry shows up in the cortical activity of humans who "navigate" between vowels, given that vowel space can be approximated with a continuous trapezoidal 2D manifold spanned by the first and second formant frequencies. We created 30 vowel trajectories in the assumedly flat central portion of the trapezoid. Each of these trajectories had a duration of 240 milliseconds, with a steady start and end point on the perimeter of a "wheel". We hypothesized that if the neural representation of this "box" is similar to that of rodent grid units, there should be an at least partial hexagonal (6-fold) symmetry in the EEG response of participants who navigate it. However, we did not find any dominant n-fold symmetry; instead, using PCA, we found indications that the vowel representation may reflect phonetic features as positioned on the vowel manifold. The suggestion, therefore, is that vowels are encoded in relation to their salient sensory-perceptual variables, and are not assigned to arbitrary grid-like abstract maps. Finally, we explored the relationship between the first PCA eigenvector and putative vowel attractors for native Italian speakers, who served as the subjects in our study.
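
    The standard test for grid-like coding regresses the neural response for each trajectory on the cosine and sine of n times the trajectory's direction, then compares the resulting modulation amplitude across candidate symmetries. The sketch below applies that generic test to synthetic data; it is not the authors' exact EEG pipeline.

```python
# Minimal sketch of the standard test for n-fold symmetry: regress the neural
# response for each trajectory on cos and sin of n times the trajectory's
# direction in (F1, F2) space. Synthetic data; generic illustration only.
import numpy as np

rng = np.random.default_rng(4)
n_trajectories = 30
theta = rng.uniform(0, 2 * np.pi, n_trajectories)    # trajectory directions
response = rng.standard_normal(n_trajectories)        # e.g., mean EEG amplitude

def symmetry_amplitude(response, theta, n_fold):
    """Amplitude of the n-fold modulation estimated by linear regression."""
    X = np.column_stack([np.ones_like(theta),
                         np.cos(n_fold * theta),
                         np.sin(n_fold * theta)])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    return np.hypot(beta[1], beta[2])

for n_fold in (4, 5, 6, 7, 8):
    print(f"{n_fold}-fold modulation amplitude: "
          f"{symmetry_amplitude(response, theta, n_fold):.3f}")
```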

    Cognitive neuroscience: the neural basis of motor learning by observing

    Somatosensory feedback from the limbs plays an essential role when we learn to make new movements. A recent study shows that motor learning can be accomplished purely through observation, and that motor learning by observing also critically depends on the brain's somatosensory system.

    The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception

    Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires
