159 research outputs found

    Spatio-temporal dynamics of face perception

    The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating representational dissimilarity matrices (RDMs) derived from multiple pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from primary visual cortex (V1), and later time windows (starting around 190 ms) to match data from lateral occipital cortex, the fusiform face complex, and the temporal-parietal-occipital junction (TPOJ). According to model comparisons, the EEG classification results were based more on low-level visual features than on expression intensities or categories. In fMRI, the model comparisons revealed a change along the processing hierarchy, from low-level visual feature coding in V1 to coding of expression intensity in the right TPOJ. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing. Peer reviewed.
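The RDM-correlation step at the heart of this fusion approach can be sketched on synthetic numbers (a toy illustration, not the study's data; the monotone transform merely guarantees matching rank orders so the example is deterministic):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Six unique pairwise dissimilarities for the four expression conditions
# (neutral, happy, fearful, angry): the upper triangle of a 4x4 RDM.
eeg_rdm = rng.random(6)      # e.g. pairwise decoding accuracies (EEG, ~130 ms)
fmri_rdm = eeg_rdm ** 2      # monotone transform: same rank order by construction

# RSA fusion step: rank-correlate the EEG time-window RDM with the fMRI
# region RDM; a high Spearman rho links that time window to that region.
rho, p = spearmanr(eeg_rdm, fmri_rdm)
```

Because the two toy RDMs share their rank order exactly, the Spearman correlation here is 1; with real data, the rho traced over EEG time windows localizes each fMRI region's representation in time.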

    Atlas-based classification algorithms for identification of informative brain regions in fMRI data

    Multi-voxel pattern analysis (MVPA) has been successfully applied to neuroimaging data due to its greater sensitivity compared to traditional univariate techniques. Although a searchlight strategy that locally sweeps all voxels in the brain is the most widespread approach for assigning functional value to different regions of the brain, this method offers no information about the directionality of the results and does not allow studying the combined patterns of more distant voxels. In the current study, we examined two alternatives to searchlight. First, an atlas-based local averaging (ABLA; Schrouff et al., 2013a) method, which computes the relevance of each region of an atlas from the weights obtained by a whole-brain analysis. Second, a Multiple-Kernel Learning (MKL; Rakotomamonjy et al., 2008) approach, which combines different brain regions from an atlas to build a classification model. We evaluated their performance in two scenarios in which differential neural activity between conditions was large vs. small, and employed nine different atlases to assess the influence of diverse brain parcellations. Results show that all methods are able to localize informative regions when differences are large, demonstrating stability in the identification of regions across atlases. Moreover, the sign of the weights reported by these methods combines the sensitivity of multivariate approaches with the directionality of univariate methods. However, in the second scenario only ABLA localizes informative regions, which indicates that MKL performs worse when differences between conditions are small. Future studies could improve these results by employing machine learning algorithms to compute individual atlases fit to the brain organization of each participant. Funded by the Spanish Ministry of Science and Innovation through grant PSI2016-78236-P and the Spanish Ministry of Economy and Competitiveness through grant BES-2014-06960.
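An ABLA-style summary of whole-brain classifier weights can be sketched as follows (synthetic data and a hypothetical 5-region atlas, not the authors' implementation):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Toy whole-brain data: 40 samples x 100 voxels, two conditions; only the
# first 20 voxels ("region 0" of a hypothetical 5-region atlas) are informative.
X = rng.standard_normal((40, 100))
y = np.repeat([0, 1], 20)
X[y == 1, :20] += 1.5

# Whole-brain linear classifier; its weight vector is then summarized per region.
w = LinearSVC(dual=False).fit(X, y).coef_.ravel()

# ABLA-style step: average the voxel weights within each atlas region; the
# magnitude gives relevance, the sign gives directionality.
atlas = np.repeat(np.arange(5), 20)
relevance = np.array([w[atlas == r].mean() for r in range(5)])
```

The informative region should dominate in magnitude, and its positive sign indicates higher activity for the second condition, mirroring the directionality of univariate contrasts.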

    Predictive decoding of neural data

    In the last five decades the number of techniques available for non-invasive functional imaging has increased dramatically. Researchers today can choose from a variety of imaging modalities that include EEG, MEG, PET, SPECT, MRI, and fMRI. This doctoral dissertation offers a methodology for the reliable analysis of neural data at different levels of investigation. By using statistical learning algorithms, the proposed approach allows single-trial analysis of various neural data by decoding them into variables of interest. Unbiased testing of the decoder on new samples of the data provides a generalization assessment of decoding performance reliability. Subsequent analysis of the constructed decoder's sensitivity makes it possible to identify neural signal components relevant to the task of interest. The proposed methodology accounts for covariance and causality structures present in the signal. This feature makes it more powerful than the conventional univariate methods that currently dominate the neuroscience field. Chapter 2 describes the generic approach toward the analysis of neural data using statistical learning algorithms. Chapter 3 presents an analysis of results from four neural data modalities: extracellular recordings, EEG, MEG, and fMRI. These examples demonstrate the ability of the approach to reveal neural data components that cannot be uncovered with conventional methods. Chapter 4 presents a further extension of the methodology, used to analyze data from multiple neural data modalities simultaneously: EEG and fMRI. The reliable mapping of data from one modality onto the other provides a better understanding of the underlying neural processes. By allowing the spatio-temporal exploration of neural signals under loose modeling assumptions, it removes potential bias in the analysis of neural data due to otherwise possible forward-model misspecification.
The proposed methodology has been formalized into a free and open source Python framework for statistical-learning-based data analysis. This framework, PyMVPA, is described in Chapter 5.
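The decode-then-test loop described above (which PyMVPA packages) can be sketched with scikit-learn on synthetic single-trial data; the data shapes and signal strength are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

# Synthetic single-trial data: 60 trials x 50 features, two conditions;
# only the first 5 features carry the signal.
X = rng.standard_normal((60, 50))
y = np.repeat([0, 1], 30)
X[y == 1, :5] += 1.0

# Unbiased generalization assessment: the decoder is always evaluated on
# trials it never saw during training (5-fold cross-validation).
decoder = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(decoder, X, y, cv=5)
mean_accuracy = scores.mean()
```

Above-chance cross-validated accuracy is the reliability criterion the dissertation describes; inspecting the fitted decoder's weights would then identify the signal components driving it.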

    Reliability and generalizability of similarity-based fusion of MEG and fMRI data in human ventral and dorsal visual streams

    To build a representation of what we see, the human brain recruits regions throughout the visual cortex in a cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of the method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and different visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow the interpretation of these spatiotemporal dynamics in a broader context.
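A minimal version of the fMRI-MEG fusion computation, on synthetic patterns (the condition counts, dimensionalities, and time points are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_cond = 8
iu = np.triu_indices(n_cond, k=1)

def rdm(patterns):
    # Correlation-distance RDM, vectorized as its upper triangle.
    return (1 - np.corrcoef(patterns))[iu]

# Hypothetical data: one fMRI ROI pattern set and MEG patterns at 10
# time points, all for the same 8 stimulus conditions.
fmri_roi = rdm(rng.standard_normal((n_cond, 30)))
meg = rng.standard_normal((10, n_cond, 20))

# Fusion curve: Spearman correlation between the ROI RDM and the MEG RDM
# at each time point; its peak localizes the ROI's representation in time.
fusion = np.array([spearmanr(fmri_roi, rdm(m))[0] for m in meg])
```

Repeating this for ROIs along the ventral and dorsal streams yields the hierarchical temporal progression whose replicability the study assesses.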

    The Unconscious Formation of Motor and Abstract Intentions

    Three separate fMRI studies were conducted to study the neural dynamics of free decision formation. In Study 1, we first searched across the brain for spatiotemporal patterns that could predict the specific outcome and timing of free motor decisions to make a left or right button press (Soon et al., 2008). In Study 2, we replicated Study 1 using ultra-high field fMRI for improved temporal and spatial resolution to more accurately characterize the evolution of decision-predictive information in prefrontal cortex (Bode et al., 2011). In Study 3, to unequivocally dissociate high-level intentions from motor preparation and execution, we investigated the neural precursors of abstract intentions as participants spontaneously decided to perform either of two mental arithmetic tasks: addition or subtraction (Soon et al., 2013). Across the three studies, we consistently found that upcoming decisions could be predicted with ~60% accuracy from fine-grained spatial activation patterns occurring a few seconds before the decisions reached awareness, with very similar profiles for both motor and abstract intentions. The content and timing of the decisions appeared to be encoded in two functionally dissociable sets of regions: frontopolar and posterior cingulate/precuneus cortex encoded the content but not the timing of the decisions, while the pre-supplementary motor area encoded the timing but not the content of the decisions. The choice-predictive regions in both motor and abstract decision tasks overlapped partially with the default mode network. High-resolution imaging in Study 2 further revealed that as the time-point of conscious decision approached, activity patterns in frontopolar cortex became increasingly stable with respect to the final choice. Contents: Abstract; 1. General Introduction; 2. Study 1: Decoding the Unconscious Formation of Motor Intentions; 3. Study 2: Temporal Stability of Neural Patterns Involved in Intention Formation; 4. Study 3: Decoding the Unconscious Formation of Abstract Intentions; 5. General Discussion; References.
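The time-resolved decoding logic of these studies can be sketched on synthetic data (the growing signal is injected by construction, purely to illustrate how choice-predictive information is traced over time):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_times, n_trials, n_voxels = 6, 80, 30
y = np.repeat([0, 1], 40)                      # e.g. left vs right button press

# Hypothetical activation patterns at successive time points before the
# reported decision; predictive information grows as the decision nears.
X = rng.standard_normal((n_times, n_trials, n_voxels))
for t in range(n_times):
    X[t, y == 1, :5] += 0.3 * t

# Time-resolved decoding: cross-validated accuracy at each time point.
accuracy = np.array([
    cross_val_score(LinearSVC(dual=False), X[t], y, cv=5).mean()
    for t in range(n_times)
])
```

The earliest time point at which accuracy reliably exceeds chance bounds how long before awareness the decision outcome was already encoded.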

    Generative Embedding for Model-Based Classification of fMRI Data

    Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in ‘hidden’ physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. 
Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into more physiologically well-defined subgroups.
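A toy version of generative embedding, substituting a simple AR(1) coefficient for the DCM connectivity parameters (an assumption made purely to keep the sketch self-contained; the paper's pipeline fits full dynamic causal models):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def ar1_coef(ts):
    # Stand-in generative parameter: lag-1 autoregression coefficient.
    return np.dot(ts[:-1], ts[1:]) / np.dot(ts[:-1], ts[:-1])

# 40 hypothetical subjects, 3 regional time series each; "patients"
# (label 1) get stronger temporal coupling, mimicking altered dynamics.
labels = np.repeat([0, 1], 20)
features = []
for lab in labels:
    phi = 0.2 + 0.4 * lab
    ts = np.zeros((3, 200))
    for t in range(1, 200):
        ts[:, t] = phi * ts[:, t - 1] + rng.standard_normal(3)
    features.append([ar1_coef(region) for region in ts])

# Generative embedding: classify subjects in model-parameter space,
# where each feature has a mechanistic interpretation.
X = np.array(features)
acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
```

The point of the embedding is that the discriminative features are parameters of a fitted model of the data, so a successful classification can be read back as a statement about altered dynamics rather than raw activation differences.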

    Nonparametric statistical inference for functional brain information mapping

    An ever-increasing number of functional magnetic resonance imaging (fMRI) studies now use information-based multi-voxel pattern analysis (MVPA) techniques to decode mental states. In doing so, they achieve significantly greater sensitivity than univariate analysis frameworks. The two most prominent MVPA methods for information mapping are searchlight decoding and classifier weight mapping. The new MVPA brain mapping methods, however, have also posed new challenges for analysis and statistical inference at the group level. In this thesis, I discuss why the usual procedure of performing t-tests on MVPA-derived information maps across subjects to produce a group statistic is inappropriate. I propose a fully nonparametric solution to this problem, which achieves higher sensitivity than the most commonly used t-based procedure. The proposed method is based on resampling and preserves the spatial dependencies in the MVPA-derived information maps, which makes it possible to incorporate cluster-size control for the multiple testing problem. Using a volumetric searchlight decoding procedure and classifier weight maps, I demonstrate the validity and sensitivity of the new approach on both simulated and real fMRI data sets. In comparison to the standard t-test procedure implemented in SPM8, the new results showed higher sensitivity and spatial specificity. The second goal of this thesis is the comparison of the two widely used information mapping approaches, the searchlight technique and classifier weight mapping. Both methods take into account spatially distributed patterns of activation in order to predict stimulus conditions; however, the searchlight method operates solely on the local scale. The searchlight decoding technique has furthermore been found to be prone to spatial inaccuracies. For instance, the spatial extent of informative areas is generally exaggerated, and their spatial configuration is distorted.
In this thesis, I compare searchlight decoding with linear classifier weight mapping, both within the previously proposed nonparametric statistical framework, using a simulation and ultra-high-field 7T experimental data. The searchlight method led to spatial inaccuracies that are especially noticeable in high-resolution fMRI data. In contrast, the weight mapping method was more spatially precise, revealing both informative anatomical structures and the direction in which voxels contribute to the classification. By maximizing the spatial accuracy of ultra-high-field fMRI results, such global multivariate methods provide a substantial improvement for characterizing structure-function relationships.
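The core nonparametric idea can be sketched as a sign-flip permutation test with a max-statistic threshold on synthetic chance-centred accuracy maps (a simplified stand-in for the thesis's cluster-based procedure; map sizes and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n_subj, n_vox = 12, 200

# Hypothetical subject-wise accuracy maps, chance-centred (accuracy - 0.5);
# only voxels 0-9 carry real information.
maps = 0.02 * rng.standard_normal((n_subj, n_vox))
maps[:, :10] += 0.10

observed = maps.mean(axis=0)

# Sign-flip permutation: under H0 the chance-centred maps are symmetric
# around zero, so randomly flipping subject signs builds the null
# distribution. Taking the maximum across voxels in each permutation
# controls the family-wise error over the whole map.
n_perm = 1000
null_max = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=n_subj)[:, None]
    null_max[i] = (signs * maps).mean(axis=0).max()

threshold = np.quantile(null_max, 0.95)
significant = observed > threshold
```

Because the permutations operate on whole maps, spatial dependence between voxels is preserved in the null distribution, which is what a voxel-wise t-test against a symmetric null gets wrong for MVPA-derived maps.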

    Aligning computer and human visual representations

    Both computer vision and the human visual system target the same goal: to accomplish visual tasks easily via a set of representations. In this thesis, we study to what extent representations from computer vision models align with human visual representations. To address this question we used an interdisciplinary approach, integrating methods from psychology, neuroscience, and computer vision, aimed at providing new insight into the understanding of human visual representations. In the four chapters of the thesis, we tested computer vision models against brain data obtained with electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). The main findings can be summarized as follows: 1) computer vision models with one or two computational stages correlate with visual representations of intermediate complexity in the human brain, 2) models with multiple computational stages correlate best with the hierarchy of representations in the human visual system, 3) computer vision models do not align one-to-one with the temporal hierarchy of representations in the visual cortex, and 4) not only visual but also semantic representations correlate with representations in the human visual system.
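The model-to-brain comparison underlying these findings can be sketched as stage-wise RSA on synthetic features (the stage names and the shared structure in "stage2" are constructed purely for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_imgs = 12
iu = np.triu_indices(n_imgs, k=1)

def rdm(feats):
    # Correlation-distance RDM, vectorized as its upper triangle.
    return (1 - np.corrcoef(feats))[iu]

# Hypothetical brain responses and three model "computational stages";
# stage2 is built to share structure with the brain, the others are not.
brain_feats = rng.standard_normal((n_imgs, 40))
stages = {
    "stage1": rng.standard_normal((n_imgs, 40)),
    "stage2": brain_feats + 0.3 * rng.standard_normal((n_imgs, 40)),
    "stage3": rng.standard_normal((n_imgs, 40)),
}

# Align each stage to the brain by correlating RDMs; the best-aligned
# stage identifies where in the model the brain-like representation sits.
brain_rdm = rdm(brain_feats)
alignment = {name: spearmanr(brain_rdm, rdm(f))[0] for name, f in stages.items()}
best = max(alignment, key=alignment.get)
```

Running this per brain region (fMRI) or per time window (EEG) produces the stage-to-hierarchy mapping that the thesis evaluates.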

    Abstract neural representations of language during sentence comprehension: Evidence from MEG and Behaviour
