
    Machine Learning for Neuroimaging with Scikit-Learn

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g. multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g. resting-state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
    Comment: Frontiers in Neuroscience, Frontiers Research Foundation, 2013, pp. 1
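    As a concrete illustration, a minimal decoding analysis in scikit-learn might look like the sketch below. The data arrays here are random placeholders standing in for masked fMRI volumes and condition labels; a real pipeline would load and mask images first (e.g. with nilearn).

```python
# Minimal decoding sketch with scikit-learn (placeholder data, not a real
# analysis): X stands in for masked fMRI volumes (samples x voxels) and
# y for the condition label of each volume.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))  # 80 volumes, 500 voxels (random stand-in)
y = rng.integers(0, 2, size=80)     # two experimental conditions

# Standardize each voxel, then fit a linear SVM decoder with cross-validation.
decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```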

    Inter-subject neural code converter for visual image representation.

    Brain activity patterns differ from person to person, even for an identical stimulus. In functional brain mapping studies, it is important to align brain activity patterns between subjects for group statistical analyses. While anatomical templates are widely used for inter-subject alignment in functional magnetic resonance imaging (fMRI) studies, they are not sufficient to identify the mapping between voxel-level functional responses representing specific mental contents. Recent work has suggested that statistical learning methods could be used to transform individual brain activity patterns into a common space while preserving representational contents. Here, we propose a flexible method for functional alignment, the "neural code converter," which converts one subject's brain activity pattern into another's representing the same content. The neural code converter was designed to learn statistical relationships between fMRI activity patterns of paired subjects obtained while they saw an identical series of stimuli. It predicts the signal intensity of individual voxels of one subject from a pattern of multiple voxels of the other subject. To test this method, we used fMRI activity patterns measured while subjects observed visual images consisting of random and structured patches. We show that fMRI activity patterns for visual images not used to train the converter could be predicted from those of another subject whose brain activity was recorded for the same stimuli, and that visual images could be accurately reconstructed from the predicted activity patterns alone. Furthermore, we show that a classifier trained only on predicted fMRI activity patterns could accurately classify measured fMRI activity patterns. These results demonstrate that the neural code converter can translate neural codes between subjects while preserving contents related to visual images. While this method is useful for functional alignment and decoding, it may also provide a basis for brain-to-brain communication, using the converted pattern to design brain stimulation.
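    The core of the converter is a set of regressions from one subject's multi-voxel patterns to each voxel of the other subject. The sketch below illustrates that idea under explicit assumptions: random placeholder data, and ridge regression as the statistical learning method (the paper's exact regression model may differ).

```python
# Hedged sketch of the converter idea: predict each of subject B's voxels
# from subject A's multi-voxel pattern with regularized linear regression.
# Shapes and the choice of Ridge are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_vox_a, n_vox_b = 200, 300, 250
A = rng.standard_normal((n_trials, n_vox_a))  # subject A patterns (stand-in)
B = rng.standard_normal((n_trials, n_vox_b))  # subject B patterns (stand-in)

train, test = slice(0, 150), slice(150, None)

# Ridge natively supports multi-output regression: one converter per B voxel.
converter = Ridge(alpha=10.0)
converter.fit(A[train], B[train])
B_pred = converter.predict(A[test])  # converted patterns for held-out stimuli

# Evaluate with per-voxel correlation between predicted and measured patterns.
corr = [np.corrcoef(B_pred[:, v], B[test][:, v])[0, 1] for v in range(n_vox_b)]
print("median voxel-wise correlation:", np.median(corr))
```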

    Brain Decoding of the Human Connectome Project Tasks in a Dense Individual fMRI Dataset

    Brain decoding studies aim to train models that predict the cognitive state of a participant from patterns of brain activity. Substantial inter-individual variation in functional brain organization is a challenge for accurate brain decoding. In this thesis, we assess whether accurate brain decoding models can be trained entirely at the individual level. We used a dense individual functional magnetic resonance imaging (fMRI) dataset in which six participants each completed the entire Human Connectome Project (HCP) task battery more than 13 times across ten separate fMRI sessions. We assessed several decoding methods, from simple support vector machines to complex graph convolutional neural networks. All individual-specific decoders were trained to classify single fMRI volumes (TR = 1.49 s) among 21 experimental conditions simultaneously, using around seven hours of fMRI data per participant. The best prediction accuracy was achieved with our support vector machine model, with test accuracy ranging from 64 to 79% (chance level about 7%). Multilayer perceptrons and graph convolutional networks also performed very well (63-78% and 63-77%, respectively). Feature importance maps derived from the best model (SVM) revealed that the classification relies on regions relevant to particular cognitive domains, consistent with neuroanatomical priors. Applying an individual model to another subject's data (across-subject classification) yielded markedly lower accuracy than subject-specific models, indicating that individual brain decoders learned features specific to each individual. Our results indicate that dense neuroimaging datasets can be used to train accurate brain decoding models at the individual level. The data from this study are shared freely with the community (https://cneuromod.ca) and can serve as a reference benchmark for training individual brain decoding models or for transfer-learning studies based on the sample collected by the Human Connectome Project.
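    For intuition, the best-performing decoder reduces to a standard multi-class linear SVM over single-volume features. The sketch below uses random placeholder data and illustrative shapes; it mirrors the setup, not the thesis pipeline.

```python
# Hedged sketch of the best-performing decoder: a linear SVM classifying
# single fMRI volumes into 21 conditions. Shapes and features are
# placeholders, not the thesis's preprocessing.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((2100, 400))  # single TRs x features (random stand-in)
y = rng.integers(0, 21, size=2100)    # 21 experimental conditions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_tr, y_tr)
# With 21 classes, naive chance is 1/21 (~4.8%); the thesis reports about 7%.
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```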

    Neural Encoding and Decoding with Deep Learning for Natural Vision

    The overarching objective of this work is to bridge neuroscience and artificial intelligence to ultimately build machines that learn, act, and think like humans. In the context of vision, the brain enables humans to readily make sense of the visual world, e.g. recognizing visual objects. Developing human-like machines requires understanding the working principles underlying human vision. In this dissertation, I ask how the brain encodes and represents dynamic visual information from the outside world, whether brain activity can be directly decoded to reconstruct and categorize what a person is seeing, and whether neuroscience theory can be applied to artificial models to advance computer vision. To address these questions, I used deep neural networks (DNN) to establish encoding and decoding models describing the relationships between brain activity and visual stimuli. Using the DNN, the encoding models were able to predict the functional magnetic resonance imaging (fMRI) responses throughout the visual cortex given video stimuli; the decoding models were able to reconstruct and categorize the visual stimuli based on fMRI activity. To further advance the DNN model, I implemented a new bidirectional and recurrent neural network based on predictive coding theory. As a theory in neuroscience, predictive coding explains the interaction among feedforward, feedback, and recurrent connections. The results showed that this brain-inspired model significantly outperforms feedforward-only DNNs in object recognition. These studies have a positive impact on understanding the neural computations underlying human vision and on improving computer vision with knowledge from neuroscience.
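    The encoding half of this approach is commonly implemented as regularized linear regression from DNN features of the stimuli to voxel responses. The following sketch stubs the feature extraction with random arrays and uses ridge regression with cross-validated penalties as an assumed, typical choice.

```python
# Minimal encoding-model sketch: regularized linear regression from DNN
# features of the stimuli to voxel responses. Feature extraction is stubbed
# out with random arrays; in practice the features would come from a
# pretrained CNN applied to the video frames.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 500, 1000, 200
F = rng.standard_normal((n_samples, n_features))  # DNN features per time point
Y = rng.standard_normal((n_samples, n_voxels))    # fMRI responses (stand-in)

# One ridge model per voxel, with the penalty chosen by cross-validation.
enc = RidgeCV(alphas=np.logspace(0, 4, 9))
enc.fit(F[:400], Y[:400])
Y_pred = enc.predict(F[400:])

# Encoding models are typically scored by predicted-vs-measured correlation.
r = [np.corrcoef(Y_pred[:, v], Y[400:, v])[0, 1] for v in range(n_voxels)]
print("median voxel prediction accuracy (r):", np.median(r))
```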

    Constraint-free Natural Image Reconstruction from fMRI Signals Based on Convolutional Neural Network

    In recent years, research on decoding brain activity based on functional magnetic resonance imaging (fMRI) has made remarkable progress. However, constraint-free natural image reconstruction from brain activity remains a challenge. Existing methods simplify the problem by using semantic prior information or by reconstructing only simple images such as letters and digits. Without semantic prior information, we present a novel method to reconstruct natural images from fMRI signals of the human visual cortex based on the computational model of the convolutional neural network (CNN). First, we extracted the unit outputs of each layer of a pre-trained CNN for the viewed natural images as CNN features. Second, we transformed image reconstruction from fMRI signals into a CNN feature visualization problem by training a sparse linear regression to map fMRI patterns to CNN features. By iterative optimization to find the matching image, whose CNN unit features are most similar to those predicted from the brain activity, we achieved promising results for the challenging task of constraint-free natural image reconstruction. Because no semantic prior information about the stimuli was used when training the decoding model, any category of image (not constrained by the training set) could in theory be reconstructed. We found that the reconstructed images resembled the natural stimuli, especially in position and shape. The experimental results suggest that hierarchical visual features can effectively express the visual perception process of the human brain.
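    The reconstruction step amounts to CNN feature visualization: optimize an image until its CNN features match those predicted from fMRI. The sketch below illustrates that inner loop in PyTorch; the VGG16 backbone, layer cut-off, and plain L2 feature loss are assumptions for illustration, and the fMRI-to-feature regression is replaced by a fixed random target.

```python
# Hedged sketch of the reconstruction loop: starting from noise, optimize the
# image so its CNN features match a target. Here target_feat stands in for
# features predicted from brain activity by the sparse linear regression
# described above; backbone and loss are illustrative choices.
import torch
import torchvision.models as models

cnn = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in cnn.parameters():
    p.requires_grad_(False)  # only the image is optimized, not the network

with torch.no_grad():
    target_feat = cnn(torch.randn(1, 3, 224, 224))  # stand-in for predicted features

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(cnn(img), target_feat)
    loss.backward()
    opt.step()
print("final feature loss:", loss.item())
```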

    Interpreting Encoding and Decoding Models

    Encoding and decoding models are widely used in systems, cognitive, and computational neuroscience to make sense of brain-activity data. However, the interpretation of their results requires care. Decoding models can help reveal whether particular information is present in a brain region in a format the decoder can exploit. Encoding models make comprehensive predictions about representational spaces. In the context of sensory systems, encoding models enable us to test and compare brain-computational models, and thus directly constrain computational theory. Encoding and decoding models typically include fitted linear-model components. Sometimes the weights of the fitted linear combinations are interpreted as reflecting, in an encoding model, the contribution of different sensory features to the representation or, in a decoding model, the contribution of different measured brain responses to a decoded feature. Such interpretations can be problematic when the predictor variables or their noise components are correlated and when priors (or penalties) are used to regularize the fit. Encoding and decoding models are evaluated in terms of their generalization performance. The correct interpretation depends on the level of generalization a model achieves (e.g. to new response measurements for the same stimuli, to new stimuli from the same population, or to stimuli from a different population). Significant decoding or encoding performance of a single model (at whatever level of generality) does not provide strong constraints for theory. Many models must be tested and inferentially compared for analyses to drive theoretical progress.
    Comment: 19 pages, 2 figures, author preprint
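    The caveat about correlated predictors and penalized fits is easy to demonstrate numerically. In the synthetic example below, only the first of two highly correlated predictors drives the response, yet the ridge-penalized weights spread across both.

```python
# Synthetic illustration of the interpretation caveat: with correlated
# predictors and a ridge penalty, fitted weights need not reflect the true
# generative contributions, even when predictions are good.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 1000
z = rng.standard_normal(n)
x1 = z + 0.1 * rng.standard_normal(n)  # two highly correlated predictors
x2 = z + 0.1 * rng.standard_normal(n)
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 0.0 * x2 + 0.1 * rng.standard_normal(n)  # only x1 contributes

for alpha in (0.001, 100.0):
    w = Ridge(alpha=alpha).fit(X, y).coef_
    print(f"alpha={alpha}: weights={w.round(2)}")
# The near-unpenalized fit recovers roughly (1, 0); the penalized fit spreads
# weight across both predictors, so reading the weights as feature
# "contributions" would be misleading here.
```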

    Generating Text from Functional Brain Images

    Recent work has shown that it is possible to take brain images acquired during viewing of a scene and reconstruct an approximation of the scene from those images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read names of concrete items (e.g., "Apartment") while also seeing line drawings of the item named. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of this representation to patterns of activation in the corresponding brain image. To validate this mapping, we were able to generate, from each left-out individual brain image and without accessing information about the item viewed, a collection of semantically pertinent words (e.g., "door," "window" for "Apartment"). Furthermore, we show that the ability to generate such words allows us to perform a classification task and thus validate our method quantitatively.
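    Schematically, the generation step can be framed as a forward map from a semantic space to voxel space, with candidate words ranked by how well their predicted patterns match a held-out image. The sketch below uses random placeholder vectors, ridge regression, and a hypothetical four-word vocabulary; it mirrors the logic, not the paper's exact models.

```python
# Hedged sketch of word generation from brain images: learn a linear map
# from semantic word vectors to activation patterns, then rank vocabulary
# words by how well their predicted patterns match a held-out image.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, dim_sem, n_vox = 58, 50, 500
S = rng.standard_normal((n_train, dim_sem))  # semantic vectors of training items
B = rng.standard_normal((n_train, n_vox))    # matching brain patterns (stand-in)

fwd = Ridge(alpha=1.0).fit(S, B)             # semantic space -> voxel space

vocab = ["door", "window", "engine", "leaf"]    # hypothetical candidate words
V = rng.standard_normal((len(vocab), dim_sem))  # their semantic vectors

b_new = rng.standard_normal(n_vox)  # left-out brain image (stand-in)
pred = fwd.predict(V)               # predicted activation pattern per word
scores = [np.corrcoef(p, b_new)[0, 1] for p in pred]
for w, s in sorted(zip(vocab, scores), key=lambda t: -t[1]):
    print(w, round(s, 3))           # words ranked by pattern match
```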