14 research outputs found

    Decline of auditory-motor speech processing in older adults with hearing loss

    Older adults often experience difficulties in understanding speech, partly because of age-related hearing loss. In young adults, activity of the left articulatory motor cortex is enhanced and it interacts with the auditory cortex via the left-hemispheric dorsal stream during speech processing. Little is known about the effect of ageing and age-related hearing loss on this auditory-motor interaction and speech processing in the articulatory motor cortex. It has been proposed that up-regulation of the motor system during speech processing could compensate for hearing loss and auditory processing deficits in older adults. Alternatively, age-related auditory deficits could reduce and distort the input from the auditory cortex to the articulatory motor cortex, suppressing recruitment of the motor system during listening to speech. The aim of the present study was to investigate the effects of ageing and age-related hearing loss on the excitability of the tongue motor cortex during listening to spoken sentences using transcranial magnetic stimulation and electromyography. Our results show that the excitability of the tongue motor cortex was facilitated during listening to speech in young and older adults with normal hearing. This facilitation was significantly reduced in older adults with hearing loss. These findings suggest a decline of auditory-motor processing of speech in adults with age-related hearing loss.

    MEG Source Imaging and Group Analysis Using VBMEG

    Variational Bayesian Multimodal EncephaloGraphy (VBMEG) is a MATLAB toolbox that estimates distributed source currents from magnetoencephalography (MEG)/electroencephalography (EEG) data by integrating functional MRI (fMRI) data (https://vbmeg.atr.jp/). VBMEG also estimates whole-brain connectome dynamics using anatomical connectivity derived from diffusion MRI (dMRI). In this paper, we introduce the VBMEG toolbox and demonstrate its usefulness. In conjunction with VBMEG's tutorial page (https://vbmeg.atr.jp/docs/v2/static/vbmeg2_tutorial_neuromag.html), we show its full pipeline using an open dataset recorded by Wakeman and Henson (2015). We import the MEG data and preprocess them to estimate the source currents. From the estimated source currents, we perform a group analysis and examine the differences in current amplitude between conditions while controlling the false discovery rate (FDR), which yields results consistent with previous studies. We highlight VBMEG's characteristics by comparing these results with those obtained by other source imaging methods: the weighted minimum norm estimate (wMNE), dynamic statistical parametric mapping (dSPM), and the linearly constrained minimum variance (LCMV) beamformer. We also estimate source currents from the EEG data and the whole-brain connectome dynamics from the MEG data and dMRI. The observed results indicate the reliability, characteristics, and usefulness of VBMEG.
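
    As a rough illustration of the group-analysis step described above, the sketch below applies Benjamini-Hochberg FDR control to a set of per-vertex p-values in Python. This is not VBMEG code; the function name, variable names, and data are placeholders for illustration only.

```python
import numpy as np

def bh_fdr_mask(p_values, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of p-values that
    survive false discovery rate control at level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                    # ascending p-values
    ranked = p[order]
    critical = q * np.arange(1, m + 1) / m   # BH critical values
    passing = ranked <= critical
    mask = np.zeros(m, dtype=bool)
    if passing.any():
        k = np.nonzero(passing)[0].max()     # largest rank that passes
        mask[order[:k + 1]] = True           # all smaller p-values pass too
    return mask

# Hypothetical per-vertex p-values from a paired test between conditions
p_vals = np.random.default_rng(0).uniform(size=1000)
significant = bh_fdr_mask(p_vals, q=0.05)
print(significant.sum(), "vertices significant at FDR q = 0.05")
```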

    On the context-dependent nature of the contribution of the ventral premotor cortex to speech perception

    What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, and not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, which is predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity, as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g. “palace”), while the complex words contained one to three consonant clusters (e.g. “planet”). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of speech perception, with particular attention to accounts that include an explanatory role for mirror neurons.

    The neurobiology of speech perception decline in aging

    Speech perception difficulties are common among older adults, yet the underlying neural mechanisms are still poorly understood. New empirical evidence suggesting that brain senescence may be an important contributor to these difficulties has challenged the traditional view that peripheral hearing loss was the main factor in the aetiology of these difficulties. Here we investigated the relationship between structural and functional brain senescence and speech perception skills in aging. Following audiometric evaluations, participants underwent MRI while performing a speech perception task at different intelligibility levels. As expected, speech perception declined with age, even after controlling for hearing sensitivity using an audiological measure (pure-tone averages) and a bioacoustical measure (DPOAE recordings). Our results reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally, decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were found in several motor and premotor areas, including the left ventral premotor cortex and the right SMA. Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal affected the relationship of age to speech perception skills, suggesting a role for this region in maintaining speech perception at older ages. These results provide important new insights into the neurobiology of speech perception in aging.

    The frontotemporal organization of the arcuate fasciculus and its relationship with speech perception in young and older amateur singers and non-singers

    The ability to perceive speech in noise (SPiN) declines with age. Although the etiology of SPiN decline is not well understood, accumulating evidence suggests a role for the dorsal speech stream. While age-related decline within the dorsal speech stream would negatively affect SPiN performance, experience-induced neuroplastic changes within the dorsal speech stream could positively affect SPiN performance. Here, we investigated the relationship between SPiN performance and the structure of the arcuate fasciculus (AF), which forms the white matter scaffolding of the dorsal speech stream, in aging singers and non-singers. Forty-three non-singers and 41 singers aged 20 to 87 years completed a hearing evaluation and a magnetic resonance imaging session that included High Angular Resolution Diffusion Imaging. The groups were matched for sex, age, education, handedness, cognitive level, and musical instrument experience. A subgroup of participants completed a syllable discrimination in noise task. The AF was divided into 10 segments to explore potential local specializations for SPiN. The results show that, in carefully matched groups of singers and non-singers, (a) myelin and/or axonal membrane deterioration within the bilateral frontotemporal AF segments is associated with SPiN difficulties in aging singers and non-singers; (b) the structure of the AF is different in singers and non-singers; (c) these differences are not associated with a benefit in SPiN performance for singers. This study clarifies the etiology of SPiN difficulties by supporting the hypothesis that aging of the dorsal speech stream plays a role in these difficulties.

    Beyond the arcuate fasciculus: consensus and controversy in the connectional anatomy of language

    The growing consensus that language is distributed into large-scale cortical and subcortical networks has brought with it an increasing focus on the connectional anatomy of language, or how particular fibre pathways connect regions within the language network. Understanding connectivity of the language network could provide critical insights into function, but recent investigations using a variety of methodologies in both humans and non-human primates have provided conflicting accounts of pathways central to language. Some of the pathways classically considered language pathways, such as the arcuate fasciculus, are now argued to be domain-general rather than specialized, which represents a radical shift in perspective. Other pathways described in the non-human primate remain to be verified in humans. In this review, we examine the consensus and controversy in the study of fibre pathway connectivity for language. We focus on seven fibre pathways—the superior longitudinal fasciculus and arcuate fasciculus, the uncinate fasciculus, extreme capsule, middle longitudinal fasciculus, inferior longitudinal fasciculus and inferior fronto-occipital fasciculus—that have been proposed to support language in the human. We examine the methods used in humans and non-human primates to investigate the connectivity of these pathways, the historical context leading to the most current understanding of their anatomy, and the functional and clinical correlates of each pathway with reference to language. We conclude with a challenge for researchers and clinicians to establish a coherent framework within which fibre pathway connectivity can be systematically incorporated into the study of language.

    Links between perception and production: examining the roles of motor and premotor cortices in understanding speech.

    Speaking requires learning to map the relationships between oral movements and the resulting acoustical signal, which demands a close interaction between perceptual and motor systems. Though historically seen as distinct, the neural mechanisms controlling speech perception and production are now conceptualized as largely interacting and possibly overlapping. This chapter charts the history of theoretical and empirical approaches to the interaction of perception and production, focusing on the Motor Theory of Speech Perception and its later revival within the field of cognitive neuroscience. Including insights from recent advances in neuroscience methods, as well as evidence from aging and patient populations, the chapter offers an up-to-date assessment of the question of how motor and premotor cortices contribute to speech perception.

    Automatic Activation of Phonological Templates for Native but Not Nonnative Phonemes: An Investigation of the Temporal Dynamics of Mu Activation

    Models of speech perception suggest a dorsal stream connecting the temporal and inferior parietal lobes with the inferior frontal gyrus. This stream is thought to involve an auditory-motor loop that translates acoustic information into motor/articulatory commands and is further influenced by decision-making processes that involve maintenance of working memory or attention. Parsing out the dorsal stream's speech-specific mechanisms from memory-related ones in speech perception poses a complex problem. Here I argue that these processes may be disentangled from the viewpoint of the temporal dynamics of sensorimotor neural activation around a speech perception-related event. Methods: Alpha (~10 Hz) and beta (~20 Hz) spectral components of the mu (µ) rhythm, localized to sensorimotor regions, have been shown to index somatosensory and motor activity, respectively. In the present work, event-related spectral perturbations (ERSP) of the EEG µ rhythm were analyzed while manipulating two factors: active/passive listening, and perception of native/nonnative phonemes. Active and passive speech perception tasks were used as indexes of the memory load employed, while native and nonnative perception were used as indexes of automatic top-down coding for sensory analysis. Results: Statistically significant differences were found in the oscillatory patterns of the µ components between active and passive speech perception conditions, with greater alpha and beta event-related desynchronization (ERD) after stimulus offset in active speech perception. When compared to listening to noise, passive speech perception presented significant (pFDR < .05) differences. Conclusion: These findings suggest that neural processes within the dorsal auditory stream are functionally and automatically involved in speech perception mechanisms. While its early activity (shortly after stimulus onset) seems to be importantly involved with the instantiation of predictive motor/articulatory internal models that help constrain speech discrimination, its later activity (post-stimulus offset) seems essential in the maintenance of working memory processes.
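
    Since several of the entries above rest on event-related spectral perturbations of the sensorimotor µ rhythm, the sketch below shows one common way to compute a baseline-normalized ERSP (whose negative deflections correspond to ERD) for a single channel in Python. The sampling rate, epoch layout, and baseline window are illustrative assumptions, not parameters from this study.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp_db(epochs, fs, baseline=(0.0, 0.5)):
    """ERSP in dB for single-channel epochs.

    epochs   : array (n_trials, n_samples), time-locked to an event
    fs       : sampling rate in Hz
    baseline : (start, end) in seconds, window used as the reference power
    Returns (freqs, times, ersp) with ersp in dB change from baseline.
    """
    # Per-trial spectrograms, then average power across trials
    freqs, times, sxx = spectrogram(epochs, fs=fs, nperseg=int(0.5 * fs),
                                    noverlap=int(0.4 * fs), axis=-1)
    power = sxx.mean(axis=0)                      # (n_freqs, n_times)

    # Mean power in the baseline window, per frequency bin
    base_idx = (times >= baseline[0]) & (times <= baseline[1])
    base = power[:, base_idx].mean(axis=1, keepdims=True)

    # Negative values indicate event-related desynchronization (ERD)
    return freqs, times, 10.0 * np.log10(power / base)

# Illustrative use on simulated data: 200 trials of 3 s epochs at 250 Hz
rng = np.random.default_rng(0)
fake_epochs = rng.standard_normal((200, 750))
f, t, ersp = ersp_db(fake_epochs, fs=250, baseline=(0.0, 0.5))
```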

    Independent Component Analysis of Event-Related Electroencephalography During Speech and Non-Speech Discrimination: Implications for the Sensorimotor µ Rhythm in Speech Processing

    Background: The functional significance of sensorimotor integration in acoustic speech processing is unclear despite more than three decades of neuroimaging research. Constructivist theories have long speculated that listeners make predictions about articulatory goals functioning to weight sensory analysis toward expected acoustic features (e.g. analysis-by-synthesis; internal models). Direct-realist accounts posit that sensorimotor integration is achieved via a direct match between incoming acoustic cues and articulatory gestures. A method capable of favoring one account over the other requires an ongoing, high-temporal-resolution measure of sensorimotor cortical activity prior to and following acoustic input. Although scalp-recorded electroencephalography (EEG) provides a measure of cortical activity on a millisecond time scale, it has low spatial resolution due to the blurring or mixing of cortical signals on the scalp surface. Recently proposed solutions to the low spatial resolution of EEG, known as blind source separation (BSS) algorithms, have made the identification of distinct cortical signals possible. The µ rhythm of the EEG is known to briefly suppress (i.e., decrease in spectral power) over the sensorimotor cortex during the performance, imagination, and observation of biological movements, suggesting that it may provide a sensitive index of sensorimotor integration during speech processing. Neuroimaging studies have traditionally investigated speech perception in forced-choice designs in which participants discriminate between pairs of speech and nonspeech control stimuli. As such, this classical design was employed in the current dissertation work to address the following specific aims: 1) isolate independent components with traditional EEG signatures within the dorsal sensorimotor stream network; 2) identify components with features of the sensorimotor µ rhythm; and 3) investigate changes in time-frequency activation of the µ rhythm relative to stimulus type, onset, and discriminability (i.e., perceptual performance). In light of constructivist predictions, it was hypothesized that the µ rhythm would show significant suppression for syllable stimuli prior to and following stimulus onset, with significant differences between correct discrimination trials and those discriminated at chance levels. Methods: The current study employed millisecond temporal resolution EEG to measure ongoing decreases and increases in spectral power (event-related spectral perturbations; ERSPs) prior to, during, and after the onset of acoustic speech and tone-sweep stimuli embedded in white noise. Sixteen participants were asked to passively listen to or actively identify speech and tone signals in a forced-choice same/different discrimination task. To investigate the role of ERSPs in perceptual identification performance, high signal-to-noise ratios (SNRs) in which speech and tone identification was significantly better than chance (+4 dB) and low SNRs in which performance was below chance (-6 dB and -18 dB) were compared to a baseline of passive noise. Independent component analysis (ICA) of the EEG was used to reduce artifact and source mixing due to volume conduction. Independent components were clustered using measure product methods and cortical source modeling, including spectra, scalp distribution, equivalent current dipole (ECD) estimation, and standardized low-resolution brain electromagnetic tomography (sLORETA).
Results: Data analysis revealed six component clusters consistent with a bilateral dorsal-stream sensorimotor network, including component clusters localized to the precentral and postcentral gyri, cingulate cortex, supplementary motor area, and posterior temporal regions. Time-frequency analysis of the left and right lateralized µ component clusters revealed significant (pFDR < .05) suppression in the traditional beta frequency range (13-30 Hz) prior to, during, and following stimulus onset. No significant differences from baseline were found for passive listening conditions. Tone discrimination was different from passive noise in the time period following stimulus onset only. No significant differences were found for correct relative to chance tone stimuli. For both left and right lateralized clusters, early suppression (i.e., prior to stimulus onset) compared to the passive noise baseline was found for the syllable discrimination task only. Significant differences between correct trials and trials identified at chance level were found for the time period following stimulus offset for the syllable discrimination task in the left lateralized cluster. Conclusions: As this is the first study to employ BSS methods to isolate components of the EEG during acoustic speech and non-speech discrimination, the findings have important implications for the functional role of sensorimotor integration in speech processing. Consistent with expectations, the current study revealed component clusters associated with source models within the sensorimotor dorsal stream network. Beta suppression of the µ component clusters in both the left and right hemispheres is consistent with activity in the precentral gyrus prior to and following acoustic input. As early suppression of the µ rhythm was found prior to stimulus onset in the syllable discrimination task, the present findings favor internal model concepts of speech processing over mechanisms proposed by direct realists. Significant differences between correct and chance syllable discrimination trials are also consistent with internal model concepts, suggesting that sensorimotor integration is related to perceptual performance at the point in time when initial articulatory hypotheses are compared with acoustic input. The relatively inexpensive, noninvasive EEG methodology used in this study may have translational value in the future as a brain-computer interface (BCI) approach. As deficits in sensorimotor integration are thought to underlie cognitive-communication impairments in a number of communication disorders, the development of neuromodulatory feedback approaches may provide a novel avenue for augmenting current therapeutic protocols.
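
    A minimal sketch of the ICA unmixing step described above is given below, using scikit-learn's FastICA on a simulated continuous multichannel recording. It stands in for the full pipeline reported here (dipole fitting, sLORETA, cluster analysis); the channel count, component count, and data are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Assume eeg is (n_samples, n_channels), already filtered and cleaned
# upstream; the values here are simulated placeholders.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((30000, 64))

ica = FastICA(n_components=20, whiten="unit-variance", random_state=0)
sources = ica.fit_transform(eeg)   # (n_samples, n_components) activations
mixing = ica.mixing_               # (n_channels, n_components) scalp maps

# Each column of `mixing` is a component's scalp topography; components whose
# topographies and spectra resemble the sensorimotor mu rhythm would then be
# selected for time-frequency (ERSP) analysis and source modeling.
```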

    Sensorimotor Modulations by Cognitive Processes During Accurate Speech Discrimination: An EEG Investigation of Dorsal Stream Processing

    Internal models mediate the transmission of information between anterior and posterior regions of the dorsal stream in support of speech perception, though it remains unclear how this mechanism responds to cognitive processes in service of task demands. The purpose of the current study was to identify the influences of attention and working memory on sensorimotor activity across the dorsal stream during speech discrimination, with set size and signal clarity employed to modulate stimulus predictability and the time course of increased task demands, respectively. Independent Component Analysis of 64-channel EEG data identified bilateral sensorimotor mu and auditory alpha components from a cohort of 42 participants, indexing activity from anterior (mu) and posterior (auditory) aspects of the dorsal stream. Time-frequency (ERSP) analysis evaluated task-related changes in focal activation patterns, with phase coherence measures employed to track patterns of information flow across the dorsal stream. ERSP decomposition of mu clusters revealed event-related desynchronization (ERD) in beta and alpha bands, which were interpreted as evidence of forward (beta) and inverse (alpha) internal modeling across the time course of perception events. Stronger pre-stimulus mu alpha ERD in small set discrimination tasks was interpreted as more efficient attentional allocation due to the reduced sensory search space enabled by predictable stimuli. Mu-alpha and mu-beta ERD in peri- and post-stimulus periods were interpreted within the framework of Analysis by Synthesis as evidence of working memory activity for stimulus processing and maintenance, with weaker activity in degraded conditions suggesting that covert rehearsal mechanisms are sensitive to the quality of the stimulus being retained in working memory. Similar ERSP patterns across conditions, despite the differences in stimulus predictability and clarity, suggest that subjects may have adapted to the tasks. In light of this, future studies of sensorimotor processing should consider the ecological validity of the tasks employed, as well as the larger cognitive environment in which tasks are performed. The absence of interpretable patterns of mu-auditory coherence modulation across the time course of speech discrimination highlights the need for more sensitive analyses to probe dorsal stream connectivity.
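
    The phase coherence measure mentioned above is not specified in detail here; as one common choice, the sketch below computes a phase-locking value between two component time courses in the alpha band via a Hilbert transform. It is a simplified single-trial version computed across time, and the band edges, sampling rate, and data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(8.0, 13.0)):
    """Phase-locking value between two signals within a frequency band.

    x, y : 1-D arrays of equal length (e.g., mu and auditory alpha
           component activations); band edges in Hz are illustrative.
    Returns a value in [0, 1]; 1 means a perfectly consistent phase lag.
    """
    b, a = butter(4, band, btype="band", fs=fs)   # band-pass filter design
    phx = np.angle(hilbert(filtfilt(b, a, x)))    # instantaneous phase of x
    phy = np.angle(hilbert(filtfilt(b, a, y)))    # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

# Illustrative use on simulated component time courses sampled at 250 Hz
rng = np.random.default_rng(1)
mu_ts, aud_ts = rng.standard_normal((2, 5000))
print(phase_locking_value(mu_ts, aud_ts, fs=250))
```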