
    Pattern classification of valence in depression

    Neuroimaging biomarkers of depression have the potential to aid diagnosis, identify individuals at risk and predict treatment response or course of illness. Nevertheless, none have been identified so far, potentially because no single brain parameter captures the complexity of the pathophysiology of depression. Multi-voxel pattern analysis (MVPA) may overcome this issue, as it can identify patterns of voxels that are spatially distributed across the brain. Here we present the results of an MVPA investigating the neuronal patterns underlying passive viewing of positive, negative and neutral pictures in depressed patients. A linear support vector machine (SVM) was trained to discriminate the different valence conditions based on the functional magnetic resonance imaging (fMRI) data of nine unipolar depressed patients. A similar dataset obtained from nine healthy individuals was included to conduct a group classification analysis via linear discriminant analysis (LDA). Accuracy scores of 86% or higher were obtained for each valence contrast via patterns that included limbic areas such as the amygdala and frontal areas such as the ventrolateral prefrontal cortex. The LDA identified two areas (the dorsomedial prefrontal cortex and caudate nucleus) that allowed group classification with 72.2% accuracy. Our preliminary findings suggest that MVPA can identify stable valence patterns in depressed participants with greater sensitivity than univariate analysis, and that it may be possible to discriminate between healthy and depressed individuals based on differences in the brain's response to emotional cues. This work was supported by a PhD studentship to I.H. from the National Institute for Social Care and Health Research (NISCHR) HS/10/25 and MRC grant G 1100629.
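    The two-stage analysis described above (within-subject valence decoding with a linear SVM, then between-group classification with LDA) can be sketched with scikit-learn. The following is a minimal sketch, not the authors' pipeline: it assumes trial-wise voxel patterns have already been extracted, and all array shapes, labels and cross-validation choices are illustrative placeholders.

        # Illustrative sketch of the two analyses described above, not the
        # authors' code; shapes, labels and CV schemes are placeholders.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)

        # Within-subject valence decoding (e.g., positive vs. negative) with a
        # linear SVM over trial x voxel patterns.
        n_trials, n_voxels = 60, 500
        X = rng.standard_normal((n_trials, n_voxels))
        y = np.repeat([0, 1], n_trials // 2)          # 0 = positive, 1 = negative
        svm_acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        print(f"valence decoding accuracy: {svm_acc:.2f}")

        # Group classification (patients vs. controls) via LDA on one feature
        # vector per participant, e.g., mean responses in dmPFC and caudate.
        n_subjects = 18
        G = rng.standard_normal((n_subjects, 2))
        labels = np.repeat([0, 1], n_subjects // 2)   # 0 = control, 1 = patient
        lda_acc = cross_val_score(LinearDiscriminantAnalysis(), G, labels,
                                  cv=LeaveOneOut()).mean()
        print(f"group classification accuracy: {lda_acc:.2f}")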

    Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses

    Speaking involves the coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that cortical and subcortical regions beyond the primary motor cortex (M1) help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals) and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region in phonation, we found that a dorsal M1 region linked to respiratory control showed significant differences between voiced and whispered speech despite matched lung volumes. This region was also functionally connected to tongue and lip M1 seed regions, underlining its importance in the coordination of speech. Our study confirms and extends current knowledge of the neural mechanisms underlying neuromotor speech control, and holds promise for the non-invasive study of neural dysfunctions involved in motor-speech disorders. This work was supported by the Spanish Ministry of Economy and Competitiveness through the Juan de la Cierva Fellowship (FJCI-2015-26814) and the Ramon y Cajal Fellowship (RYC-2017-21845), the Spanish State Research Agency through the BCBL “Severo Ochoa” excellence accreditation (SEV-2015-490), the Basque Government (BERC 2018-2021), and the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant (No 799554).
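    The decoding analyses described above follow a generic ROI-based MVPA recipe: extract trial patterns from a region mask (e.g., M1) and cross-validate a linear classifier across runs. A minimal sketch follows, assuming synthetic data shapes and a leave-one-run-out scheme rather than the study's actual pipeline.

        # Generic ROI-based MVPA sketch: decode an articulatory contrast
        # (e.g., alveolar vs. bilabial) from patterns in a motor-region mask,
        # cross-validating across runs. All data are synthetic placeholders.
        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        n_runs, trials_per_run, n_voxels = 8, 20, 300
        X = rng.standard_normal((n_runs * trials_per_run, n_voxels))  # ROI patterns
        y = np.tile(np.repeat([0, 1], trials_per_run // 2), n_runs)   # gesture labels
        runs = np.repeat(np.arange(n_runs), trials_per_run)           # run of each trial

        clf = make_pipeline(StandardScaler(), LinearSVC())
        scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
        print(f"gesture decoding accuracy: {scores.mean():.2f}")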

    Data Mining the Brain to Decode the Mind

    In recent years, neuroscience has begun to transform itself into a “big data” enterprise with the importation of computational and statistical techniques from machine learning and informatics. In addition to translational applications such as brain-computer interfaces and the early diagnosis of neuropathology, these tools promise to advance new solutions to longstanding theoretical quandaries. Here I critically assess whether these promises will pay off, focusing on the application of multivariate pattern analysis (MVPA) to the problem of reverse inference. I argue that MVPA does not inherently provide a new answer to classical worries about reverse inference, and that the method faces pervasive interpretive problems of its own. Further, the epistemic setting of MVPA and other decoding methods contributes to a potentially worrisome shift towards prediction and away from explanation in fundamental neuroscience.

    Multivariate pattern analysis of input and output representations of speech

    Repeating a word or nonword requires a speaker to map auditory representations of incoming sounds onto learned speech items, maintain those items in short-term memory, interface that representation with the motor output system, and articulate the target sounds. This dissertation seeks to clarify the nature and neuroanatomical localization of speech sound representations in perception and production through multivariate analysis of neuroimaging data. The major portion of this dissertation describes two experiments using functional magnetic resonance imaging (fMRI) to measure responses to the perception and overt production of syllables and multivariate pattern analysis to localize brain areas containing associated phonological/phonetic information. The first experiment used a delayed repetition task to permit response estimation for auditory syllable presentation (input) and overt production (output) in individual trials. In input responses, clusters sensitive to vowel identity were found in left inferior frontal sulcus (IFs), while clusters responsive to syllable identity were found in left ventral premotor cortex and left mid superior temporal sulcus (STs). Output-linked responses revealed clusters of vowel information bilaterally in mid/posterior STs. The second experiment was designed to dissociate the phonological content of the auditory stimulus and vocal target. Subjects were visually presented with two (non)word syllables simultaneously, then aurally presented with one of the syllables. A visual cue informed subjects either to repeat the heard syllable (repeat trials) or to produce the unheard, visually presented syllable (change trials). Results suggest both IFs and STs represent heard syllables; on change trials, representations in frontal areas, but not STs, are updated to reflect the vocal target. Vowel identity covaries with formant frequencies, inviting the question of whether lower-level, auditory representations can support vowel classification in fMRI. The final portion of this work describes a simulation study, in which artificial fMRI datasets were constructed to mimic the overall design of Experiment 1 with voxels assumed to contain either discrete (categorical) or analog (frequency-based) vowel representations. The accuracy of classification models was characterized by type of representation and the density and strength of responsive voxels. It was shown that classification is more sensitive to sparse, discrete representations than to dense, analog representations.
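    The simulation study's central contrast, discrete (categorical) versus analog (frequency-based) vowel codes, can be mocked up directly. The sketch below builds synthetic voxel responses under both coding schemes and compares classifier accuracy; the response model, noise level and sizes are invented for illustration and do not reproduce the dissertation's simulations.

        # Toy version of the simulation contrast described above: classify vowel
        # identity from voxels carrying either a discrete (categorical) or an
        # analog (formant-scaled) code. All parameters are assumptions.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)
        n_trials, n_voxels, noise = 120, 200, 2.0
        vowels = rng.integers(0, 3, n_trials)                   # three vowel classes
        f1 = np.array([300.0, 500.0, 700.0])[vowels] / 700.0    # scaled F1 proxy

        def simulate(analog):
            X = noise * rng.standard_normal((n_trials, n_voxels))
            if analog:
                gains = rng.standard_normal(n_voxels)           # voxels scale with F1
                X += np.outer(f1, gains)
            else:
                prefs = rng.integers(0, 3, n_voxels)            # voxels prefer one vowel
                X += (vowels[:, None] == prefs[None, :]).astype(float)
            return X

        for analog in (False, True):
            acc = cross_val_score(LinearSVC(), simulate(analog), vowels, cv=5).mean()
            print(f"{'analog' if analog else 'discrete'} code accuracy: {acc:.2f}")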

    Amodal processing in human prefrontal cortex

    Information enters the cortex via modality-specific sensory regions, whereas actions are produced by modality-specific motor regions. Intervening central stages of information processing map sensation to behavior. Humans perform this central processing in a flexible, abstract manner, such that sensory information in any modality can lead to a response via any motor system. Cognitive theories account for such flexible behavior by positing amodal central information processing (e.g., the "central executive," Baddeley and Hitch, 1974; the "supervisory attentional system," Norman and Shallice, 1986; the "response selection bottleneck," Pashler, 1994). However, the extent to which the brain regions embodying central mechanisms of information processing are amodal remains unclear. Here we apply multivariate pattern analysis to functional magnetic resonance imaging (fMRI) data to compare response selection, a cognitive process widely believed to recruit an amodal central resource, across sensory and motor modalities. We show that most frontal and parietal cortical areas known to activate across a wide variety of tasks code modality, casting doubt on the notion that these regions embody a central processor devoid of modality representation. Importantly, regions of the anterior insula and dorsolateral prefrontal cortex consistently failed to code modality across four experiments. However, these areas code at least one other task dimension, process (instantiated as response selection versus response execution), ensuring that the failure to find coding of modality is not driven by insensitivity of multivariate pattern analysis in these regions. We conclude that abstract encoding of information modality is primarily a property of subregions of the prefrontal cortex.
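    The operational test implied above, whether a region's patterns allow decoding of the modality label at better-than-chance levels, can be framed as classification against a permutation null. The following sketch uses synthetic placeholder data and does not reproduce the study's preprocessing, ROIs or statistics.

        # Sketch of the question "does this region code modality?": decode the
        # modality label from ROI patterns and compare accuracy to a
        # permutation null. The data are synthetic placeholders.
        import numpy as np
        from sklearn.model_selection import permutation_test_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(3)
        X = rng.standard_normal((80, 150))   # trial x voxel patterns from one ROI
        y = np.repeat([0, 1], 40)            # 0 = visual trial, 1 = auditory trial

        acc, perm_scores, p = permutation_test_score(
            LinearSVC(), X, y, cv=5, n_permutations=200, random_state=0)
        print(f"modality decoding: {acc:.2f}, p = {p:.3f}")
        # Near-chance accuracy with a non-significant p, as with this noise-only
        # data, parallels the paper's null result in anterior insula and dlPFC;
        # decoding another dimension (e.g., process) from the same patterns
        # guards against the region merely being invisible to MVPA.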

    Beyond motor scheme: a supramodal distributed representation in the action-observation network

    The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent of a specific sensory modality or sensory experience. In the present study, we wished to determine to what extent this distributed and ‘more abstract’ representation of action is truly supramodal, i.e. shares a common coding across sensory modalities. To this aim, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-made actions. Multivoxel pattern analysis-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as ‘action’ the pattern of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with a multivoxel pattern analysis-based classifier in both sighted and blind individuals, independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
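    The key manoeuvre in this design is cross-modal generalization: a classifier trained on one sensory condition is tested, without retraining, on another. A minimal sketch with synthetic patterns follows; the shared-signal generative model and sizes are assumptions for illustration, not the study's data.

        # Minimal cross-modal decoding sketch: train an action vs. non-action
        # classifier on visual trials, then test it unchanged on auditory trials.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(4)
        n_trials, n_voxels = 60, 400
        y = np.repeat([0, 1], n_trials // 2)      # 0 = non-action, 1 = action
        shared = rng.standard_normal(n_voxels)    # assumed supramodal 'action' pattern

        def modality_trials():
            X = rng.standard_normal((n_trials, n_voxels))
            X[y == 1] += 0.5 * shared             # same code in both modalities
            return X

        X_visual, X_auditory = modality_trials(), modality_trials()
        clf = make_pipeline(StandardScaler(), LinearSVC()).fit(X_visual, y)
        print(f"visual -> auditory transfer: {clf.score(X_auditory, y):.2f}")
        # Above-chance transfer is the operational test of a supramodal code; the
        # same trained classifier could likewise be applied to execution patterns.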

    Atlas-based classification algorithms for identification of informative brain regions in fMRI data

    Multi-voxel pattern analysis (MVPA) has been successfully applied to neuroimaging data due to its greater sensitivity compared with traditional univariate techniques. Although a searchlight strategy that locally sweeps all voxels in the brain is the most widespread approach to assigning functional value to different regions of the brain, this method does not offer information about the directionality of the results and does not allow studying the combined patterns of more distant voxels. In the current study, we examined two alternatives to searchlight. First, an atlas-based local averaging (ABLA; Schrouff et al., 2013a) method, which computes the relevance of each region of an atlas from the weights obtained by a whole-brain analysis. Second, a multiple-kernel learning (MKL; Rakotomamonjy et al., 2008) approach, which combines different brain regions from an atlas to build a classification model. We evaluated their performance in two scenarios in which the differential neural activity between conditions was large versus small, and employed nine different atlases to assess the influence of diverse brain parcellations. Results show that all methods are able to localize informative regions when differences are large, demonstrating stability in the identification of regions across atlases. Moreover, the sign of the weights reported by these methods combines the sensitivity of multivariate approaches with the directionality of univariate methods. However, in the second scenario only ABLA localizes informative regions, which indicates that MKL performs worse when differences between conditions are small. Future studies could improve these results by employing machine learning algorithms to compute individual atlases fit to the brain organization of each participant. This work was supported by the Spanish Ministry of Science and Innovation through grant PSI2016-78236-P and by the Spanish Ministry of Economy and Competitiveness through grant BES-2014-06960.
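    The ABLA idea, training one whole-brain linear model and then summarizing its voxel weights per atlas region, is straightforward to sketch. In the toy example below the atlas labels and data are invented for illustration; an MKL approach would instead learn a separate kernel weight for each region.

        # Toy sketch of the ABLA idea: fit one whole-brain linear model, then
        # summarize its voxel weights per atlas region by signed averaging.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(5)
        n_trials, n_voxels, n_regions = 100, 600, 6
        X = rng.standard_normal((n_trials, n_voxels))       # whole-brain patterns
        y = rng.integers(0, 2, n_trials)                    # condition labels
        atlas = rng.integers(0, n_regions, n_voxels)        # region of each voxel

        w = LinearSVC().fit(X, y).coef_.ravel()             # whole-brain voxel weights

        # Magnitude gives a region's relevance; the sign supplies the
        # directionality that searchlight accuracy maps lack.
        for r in range(n_regions):
            print(f"region {r}: mean weight {w[atlas == r].mean():+.4f}")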