The Automatic Neuroscientist: automated experimental design with real-time fMRI
A standard approach in functional neuroimaging explores how a particular
cognitive task activates a set of brain regions (one task-to-many regions
mapping). Importantly though, the same neural system can be activated by
inherently different tasks. To date, there is no approach available that
systematically explores whether and how distinct tasks probe the same neural
system (many tasks-to-region mapping). In the work presented here, we propose
an alternative framework, the Automatic Neuroscientist, which turns the typical
fMRI approach on its head. We use real-time fMRI in combination with
state-of-the-art optimisation techniques to automatically design the optimal
experiment to evoke a desired target brain state. Here, we present two
proof-of-principle studies involving visual and auditory stimuli. The data
demonstrate this closed-loop approach to be very powerful, hugely speeding up
fMRI and providing an accurate estimation of the underlying relationship
between stimuli and neural responses across an extensive experimental parameter
space. Finally, we detail four scenarios where our approach can be applied,
suggesting how it provides a novel description of how cognition and the brain
interrelate.Comment: 22 pages, 7 figures, work presented at OHBM 201
On staying grounded and avoiding Quixotic dead ends
The 15 articles in this special issue on The Representation of Concepts illustrate the rich variety of theoretical positions and supporting research that characterize the area. Although much agreement exists among contributors, much disagreement exists as well, especially about the roles of grounding and abstraction in conceptual processing. I first review theoretical approaches raised in these articles that I believe are Quixotic dead ends, namely, approaches that are principled and inspired but likely to fail. In the process, I review various theories of amodal symbols, their distortions of grounded theories, and fallacies in the evidence used to support them. Incorporating further contributions across articles, I then sketch a theoretical approach that I believe is likely to be successful, which includes grounding, abstraction, flexibility, explaining classic conceptual phenomena, and making contact with real-world situations. This account further proposes that (1) a key element of grounding is neural reuse, (2) abstraction takes the forms of multimodal compression, distilled abstraction, and distributed linguistic representation (but not amodal symbols), and (3) flexible context-dependent representations are a hallmark of conceptual processing
Speech Registration in Symptomatic Memory Impairment
Background: An inability to recall recent conversations often indicates impaired episodic memory retrieval. It may also reflect a failure of attentive registration of spoken sentences, which leads to unsuccessful memory encoding. The hypothesis was that patients complaining of impaired memory would demonstrate impaired function of “multiple demand” (MD) brain regions, whose activation profile generalizes across cognitive domains, during speech registration in naturalistic listening conditions.
Methods: Using functional MRI, brain activity was measured in 22 normal participants and 31 patients complaining of memory impairment, 21 of whom had possible or probable Alzheimer’s disease (AD). Participants heard a target speaker, either speaking alone or in the presence of distracting background speech, followed by a question to determine if the target speech had been registered.
Results: Patients performed poorly at registering verbal information, and this performance correlated with their scores on a screening test of cognitive impairment. Speech registration was associated with widely distributed activity in both auditory cortex and MD cortex. Additional regions were most active when the target speech had to be separated from background speech. Activity in midline and lateral frontal MD cortex was reduced in the patients. Administration of a central cholinesterase inhibitor, intended to increase brain acetylcholine levels, to half the patients did not alter brain activity or improve task performance at a second fMRI scan performed 6–11 weeks later. However, individual performances fluctuated spontaneously between the two scanning sessions, and these performance differences correlated with activity within a right-hemisphere fronto-temporal system previously associated with sustained auditory attention.
Conclusions: Midline and lateralized frontal regions that are engaged in task-dependent attention to, and registration of, verbal information are potential targets for transcranial brain stimulation to improve speech registration in neurodegenerative conditions
The neurobiology of speech perception decline in aging
Speech perception difficulties are common among older adults, yet the underlying neural mechanisms are
still poorly understood. New empirical evidence suggesting that brain senescence may be an important
contributor to these difficulties has challenged the traditional view that peripheral hearing loss is the main
factor in the aetiology of these difficulties. Here we investigated the relationship between structural and
functional brain senescence and speech perception skills in aging. Following audiometric evaluations,
participants underwent MRI while performing a speech perception task at different intelligibility levels. As
expected, with age speech perception declined, even after controlling for hearing sensitivity using an
audiological measure (pure tone averages) and a bioacoustical measure (DPOAE recordings). Our results
reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally,
decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are
affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were
found in several motor and premotor areas, including the left ventral premotor cortex and the right SMA. Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal modulated the relationship between age and speech perception skills, suggesting a role for this region in maintaining speech perception in older age.
These results provide important new insights into the neurobiology of speech perception in aging
Eye Movements during Auditory Attention Predict Individual Differences in Dorsal Attention Network Activity
The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however, this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and a non-spatial listening task were used with online eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory tasks elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye movements dissociated between the SPL and FEF. However, these observed eye movements could not account for all of the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least in part, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network’s remaining contribution to auditory attention occurs through covert crossmodal processes, or whether it is directly involved in the manipulation of auditory information
Cerebral response to emotional working memory based on vocal cues: an fNIRS study
Introduction: Humans mainly utilize visual and auditory information as cues to infer others’ emotions. Previous neuroimaging studies have shown the neural basis of memory processing based on facial expression, but few studies have examined it based on vocal cues. Thus, we aimed to investigate brain regions associated with emotional judgment based on vocal cues using an N-back task paradigm.
Methods: Thirty participants performed N-back tasks requiring them to judge emotion or gender from voices that contained both emotion and gender information. During these tasks, the cerebral hemodynamic response was measured using functional near-infrared spectroscopy (fNIRS).
Results: The results revealed that during the Emotion 2-back task there was significant activation in the frontal area, including the right precentral and inferior frontal gyri, possibly reflecting the function of an attentional network with auditory top-down processing. In addition, there was significant activation in the ventrolateral prefrontal cortex, which is known to be a major part of the working memory center.
Discussion: These results suggest that, compared to judging the gender of voice stimuli, judging emotional information directs attention more deeply and places greater demands on higher-order cognition, including working memory. We have revealed for the first time the specific neural basis for emotional judgments based on vocal cues compared with gender judgments based on vocal cues
Beatboxers and Guitarists Engage Sensorimotor Regions Selectively When Listening to the Instruments They can Play.
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians: beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing. This work was supported by the Wellcome Trust (Grant number WT090961MA awarded to S.K.S.)
Comprehending auditory speech: previous and potential contributions of functional MRI
Functional neuroimaging revolutionised the study of human language in the late twentieth century, allowing researchers to investigate its underlying cognitive processes in the intact brain. Here, we review how functional MRI (fMRI) in particular has contributed to our understanding of speech comprehension, with a focus on studies of intelligibility. We highlight the use of carefully controlled acoustic stimuli to reveal the underlying hierarchical organisation of speech processing systems and cortical (a)symmetries, and discuss the contributions of novel design and analysis techniques to the contextualisation of perisylvian regions within wider speech processing networks. Within this, we outline the methodological challenges of fMRI as a technique for investigating speech and describe the innovations that have overcome or mitigated these difficulties. Focussing on multivariate approaches to fMRI, we highlight how these techniques have allowed both local neural representations and broader scale brain systems to be described
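The multivariate approaches mentioned above ask whether experimental conditions can be decoded from distributed voxel patterns rather than from the average activity of a single region. The following toy sketch illustrates the general logic of such an MVPA-style decoding analysis; the simulated data, the two-condition design, and the leave-one-out nearest-centroid decoder are all illustrative assumptions, not a specific published pipeline.

```python
# Toy MVPA-style decoding sketch on simulated voxel patterns.
# Data, voxel counts, and the nearest-centroid decoder are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_voxels = 40, 50

# Simulated voxel patterns for two speech conditions (e.g. intelligible vs.
# unintelligible): each condition has a distinct mean pattern plus trial noise.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
X = np.vstack([pattern_a + rng.normal(0, 1.5, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 1.5, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

def nearest_centroid_cv(X, y):
    # Leave-one-out cross-validated decoding accuracy: hold out one trial,
    # fit condition centroids on the rest, classify by nearest centroid.
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

accuracy = nearest_centroid_cv(X, y)
```

Above-chance cross-validated accuracy is taken as evidence that the voxel pattern carries condition information, which is how multivariate analyses reveal local neural representations that univariate averaging can miss.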