Predictive decoding of neural data
In the last five decades the number of techniques available for non-invasive functional imaging has increased dramatically. Researchers today can choose from a variety of imaging modalities that include EEG, MEG, PET, SPECT, MRI, and fMRI.
This doctoral dissertation offers a methodology for the reliable analysis of neural data at different levels of investigation. Using statistical learning algorithms, the proposed approach allows single-trial analysis of various kinds of neural data by decoding them into variables of interest. Unbiased testing of the decoder on new samples of the data provides a generalization assessment of decoding reliability. Subsequent analysis of the constructed decoder's sensitivity makes it possible to identify the neural signal components relevant to the task of interest. The proposed methodology accounts for covariance and causality structures present in the signal, which makes it more powerful than the conventional univariate methods that currently dominate the neuroscience field.
Chapter 2 describes the generic approach toward the analysis of neural data using statistical learning algorithms. Chapter 3 presents an analysis of results from four neural data modalities: extracellular recordings, EEG, MEG, and fMRI. These examples demonstrate the ability of the approach to reveal neural data components which cannot be uncovered with conventional methods.
Chapter 4 extends the methodology to the joint analysis of multiple neural data modalities: EEG and fMRI. Reliable mapping of data from one modality into the other provides a better understanding of the underlying neural processes. By allowing spatial-temporal exploration of neural signals under loose modeling assumptions, the approach removes potential bias that forward-model misspecification would otherwise introduce into the analysis.
The proposed methodology has been formalized into a free and open-source Python framework for statistical-learning-based data analysis. This framework, PyMVPA, is described in Chapter 5.
Machine Learning Methods for Fusion and Inference of Simultaneous EEG and fMRI
Simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) have gained increasing popularity in studying human cognition due to their potential to map the brain dynamics with high spatial and temporal fidelity. Such detailed mapping of the brain is crucial for understanding the neural mechanisms by which humans make perceptual decisions. Despite recent advances in data acquisition and analysis of simultaneous EEG-fMRI, the lack of effective computational tools for optimal fusion of the two modalities remains a major challenge. The goal of this dissertation is to provide a recipe of machine learning methods for fusion of simultaneous EEG-fMRI data. Specifically, we investigate three types of fusion approaches and apply them to study the whole-brain spatiotemporal dynamics during a rapid object recognition task where subjects discriminate face, car, and house images under ambiguity. We first use an asymmetric fusion approach capitalizing on temporal single-trial EEG variability to identify early and late neural subsystems selective to categorical choice of faces versus nonfaces. We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. Based on a computational modeling of behavioral measures, we further dissociate separate neural correlates of the face decision bias modulated by varying levels of stimulus evidence. Secondly, we develop a state-space model based symmetric fusion approach to integrate EEG and fMRI in a probabilistic generative framework. We use a variational Bayesian method to infer the network connectivity among latent neural states shared by EEG and fMRI. Finally, we use a data-driven symmetric fusion approach to compare representations of the EEG and fMRI against those of a deep convolutional neural network (CNN) in a common similarity space. We show a spatiotemporal hierarchical correspondence in visual processing stages between the human brain and the CNN. 
Collectively, our results show that the spatiotemporal properties of neural circuits revealed by the analysis of simultaneous EEG-fMRI data can reflect the choice behavior of subjects during rapid perceptual decision making.
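The asymmetric fusion idea, using single-trial EEG variability to explain trial-wise fMRI amplitudes, can be illustrated with a one-regressor least-squares fit. The numbers below are invented, and the actual analyses use full general linear models across many voxels; this sketch shows only the core regression step.

```python
# Hedged sketch of EEG-informed fMRI analysis: trial-to-trial variability
# of an EEG discriminant component serves as a parametric regressor for
# trial-wise fMRI amplitudes at one voxel. A closed-form one-regressor
# least-squares fit stands in for a full GLM; all numbers are invented.

def ols_fit(x, y):
    """Return (slope, intercept) minimizing sum((y - slope*x - intercept)^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy single-trial EEG discriminant amplitudes and fMRI responses:
eeg = [0.2, 0.5, 0.9, 1.1, 1.5, 2.0]
bold = [1.1, 1.6, 2.4, 2.7, 3.6, 4.5]   # roughly 2*eeg + 0.7 plus noise
slope, intercept = ols_fit(eeg, bold)
print(round(slope, 2), round(intercept, 2))  # -> 1.91 0.67
```

A nonzero slope indicates that the voxel's trial-wise fMRI response covaries with the EEG component, which is the statistical footing for localizing the networks described above.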
Modern Views of Machine Learning for Precision Psychiatry
In light of the NIMH's Research Domain Criteria (RDoC) and the advent of functional neuroimaging, novel technologies and methods provide new opportunities to develop precise and personalized prognosis and diagnosis of mental disorders. Machine learning (ML) and artificial intelligence (AI) technologies are playing an increasingly critical role in the new era of precision psychiatry. Combining ML/AI with neuromodulation technologies can potentially provide explainable solutions in clinical practice and effective therapeutic treatment. Advanced wearable and mobile technologies also call for a new role for ML/AI in digital phenotyping for mobile mental health. Here, we provide a comprehensive review of ML methodologies and applications that combine neuroimaging, neuromodulation, and advanced mobile technologies in psychiatric practice. Additionally, we review the role of ML in molecular phenotyping and cross-species biomarker identification in precision psychiatry. We further discuss explainable AI (XAI) and causality testing in a closed human-in-the-loop manner, and highlight the potential of ML in multimedia information extraction and multimodal data fusion. Finally, we discuss conceptual and practical challenges in precision psychiatry and highlight ML opportunities for future research.
a methodological approach
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research.
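Representational similarity analysis, the integration tool named in pillar (iii), can be sketched compactly: each measurement or model is summarized by a representational dissimilarity matrix (RDM) over conditions, and two systems are compared by correlating their RDMs. The condition patterns below are toy values, not real recordings.

```python
# Minimal sketch of representational similarity analysis (RSA): compare
# two systems (e.g. an fMRI region and a deep-network layer) by the
# correlation of their representational dissimilarity matrices (RDMs).

def rdm(patterns):
    """Upper-triangle pairwise Euclidean dissimilarities between condition patterns."""
    out = []
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            out.append(sum((a - b) ** 2
                           for a, b in zip(patterns[i], patterns[j])) ** 0.5)
    return out

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy condition-wise response patterns for two systems; the second has
# the same representational geometry up to scaling, so the RDMs correlate
# perfectly even though the raw feature spaces differ.
brain_like = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
model      = [[0.0, 0.0], [2.0, 0.0], [0.0, 4.0]]
print(round(pearson(rdm(brain_like), rdm(model)), 3))  # -> 1.0
```

This abstraction away from raw feature spaces is what lets RSA bridge MEG/EEG, fMRI, and computational models in a common similarity space.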
Computational Mechanisms of Face Perception
The intertwined history of artificial intelligence and neuroscience has significantly impacted their development, with AI arising from and evolving alongside neuroscience. The remarkable performance of deep learning has inspired neuroscientists to investigate and utilize artificial neural networks as computational models to address biological issues. Studying the brain and its operational mechanisms can greatly enhance our understanding of neural networks, which has crucial implications for developing efficient AI algorithms. Many of the advanced perceptual and cognitive skills of biological systems are now possible to achieve through artificial intelligence systems, which is transforming our knowledge of brain function. Thus, the need for collaboration between the two disciplines demands emphasis. It's both intriguing and challenging to study the brain using computer science approaches, and this dissertation centers on exploring computational mechanisms related to face perception.
Face recognition, being the most active artificial intelligence research area, offers a wealth of data resources as well as a mature algorithm framework. From the perspective of neuroscience, face recognition is an important indicator of social cognitive formation and neural development. The ability to recognize faces is one of the most important cognitive functions. We first discuss the problem of how the brain encodes different face identities. By using DNNs to extract features from complex natural face images and project them into the feature space constructed by dimension reduction, we reveal a new face code in the human medial temporal lobe (MTL), where neurons encode visually similar identities. On this basis, we discover a subset of DNN units that are selective for facial identity. These identity-selective units exhibit a general ability to discriminate novel faces. By establishing coding similarities with real primate neurons, our study provides an important approach to understanding primate facial coding. Lastly, we discuss the impact of face learning during the critical period. We identify a critical period during DNN training and systematically discuss the use of facial information by the neural network both inside and outside the critical period. We further provide a computational explanation for the critical period influencing face learning through learning rate changes. In addition, we show an alternative method to partially recover the model outside the critical period by knowledge refinement and attention shifting.
Our current research not only highlights the importance of training orientation and visual experience in shaping neural responses to face features, revealing potential mechanisms for face recognition, but also provides a practical set of ideas for testing hypotheses and reconciling previous findings in neuroscience using computational methods.
Machine Learning Methods with Noisy, Incomplete or Small Datasets
In many machine learning applications, available datasets are incomplete, noisy, or affected by artifacts. In supervised scenarios, label information may be of low quality, owing to unbalanced training sets, noisy labels, and other problems. Moreover, in practice it is very common that the available data samples are not sufficient to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to contribute to the dissemination of new ideas to solve this challenging problem and to provide clear examples of application in real scenarios.
Aligning computer and human visual representations
Both computer vision and the human visual system target the same goal: to accomplish visual tasks easily via a set of representations. In this thesis, we study to what extent representations from computer vision models align with human visual representations. To study this question we used an interdisciplinary approach, integrating methods from psychology, neuroscience, and computer vision, aimed at providing new insight into human visual representations. In the four chapters of the thesis, we tested computer vision models against brain data obtained with electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). The main findings can be summarized as follows: 1) computer vision models with one or two computational stages correlate with visual representations of intermediate complexity in the human brain; 2) models with multiple computational stages correlate best with the hierarchy of representations in the human visual system; 3) computer vision models do not align one-to-one with the temporal hierarchy of representations in the visual cortex; and 4) not only visual but also semantic representations correlate with representations in the human visual system.
Early brain activity: Translations between bedside and laboratory
Neural activity is both a driver of brain development and a readout of developmental processes. Changes in neuronal activity are therefore both the cause and consequence of neurodevelopmental compromises. Here, we review the assessment of neuronal activities in both preclinical models and clinical situations. We focus on issues that require urgent translational research, the challenges and bottlenecks preventing translation of biomedical research into new clinical diagnostics or treatments, and possibilities to overcome these barriers. The key questions are (i) what can be measured in clinical settings versus animal experiments, (ii) how do measurements relate to particular stages of development, and (iii) how can we balance practical and ethical realities with methodological compromises in measurements and treatments.
Neural processing of semantic content in movies
Naturalistic stimuli, such as movies, contain interacting, multimodal and semantic features and allow for free exploration through eye movements. The full extent of neural responses to features such as motion, film cuts and eye movement behavior has not been established. The main hypothesis of this thesis is that complex multimodal and semantic stimuli in naturalistic movies engage a widespread ensemble of locations across the entire brain. To address this question I analyzed simultaneous intracranial and eyetracking data from over 6,000 electrodes across 23 patients with intractable epilepsy. Responses to fast eye movements – saccades – and film cuts are widespread across the entire brain, while responses to motion are restricted to visual brain areas. Higher-order brain areas respond differentially to semantic and low-level changes across film cuts and saccades. Movies have also recently been used in combination with resting state scans to investigate the utility of functional connectivity as a potential biomarker for psychiatric disorders. Functional connectivity in fMRI data measured during resting state and movie conditions is reliable, subject-specific and related to phenotype. However, it is unclear whether functional connectivity of EEG also possesses these qualities, which are required for the clinical use of neural biomarkers. I hypothesize that functional connectivity networks measured in EEG data recorded during movie watching are a predictor of psychiatric phenotypes similar to functional connectivity of fMRI. I demonstrate that functional connectivity of EEG is reliable, subject-specific and related to phenotypes. However, the patterns of functional connectivity differ in EEG and fMRI, suggesting the measures capture complementary information. In summary, these results demonstrate that the semantic content in movies allows one to study neural processing in naturalistic settings. 
In addition, EEG functional connectivity recorded during the resting-state and movie conditions is reliable, subject-specific, and related to phenotype.
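The functional-connectivity measure underlying these analyses can be sketched simply: connectivity between two channels (or regions) is the correlation of their time series, collected into a symmetric matrix. The signals below are toy values, not EEG or fMRI recordings.

```python
# Minimal sketch of a functional-connectivity matrix: pairwise Pearson
# correlation between channel time series. Toy signals only.

def pearson(x, y):
    """Pearson correlation of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def connectivity(channels):
    """Channel-by-channel correlation matrix from a list of time series."""
    k = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(k)]
            for i in range(k)]

# Three toy channels: ch0 and ch1 move together, ch2 is anti-correlated.
ch0 = [1.0, 2.0, 3.0, 4.0]
ch1 = [2.0, 4.0, 6.0, 8.0]
ch2 = [4.0, 3.0, 2.0, 1.0]
fc = connectivity([ch0, ch1, ch2])
print(round(fc[0][1], 3), round(fc[0][2], 3))  # -> 1.0 -1.0
```

Reliability and subject-specificity are then assessed by comparing such matrices across sessions and subjects, e.g. by correlating their off-diagonal entries.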
Relating Spontaneous Activity and Cognitive States via NeuroDynamic Modeling
Stimulus-free brain dynamics form the basis of current knowledge concerning functional integration and segregation within the human brain. These relationships are typically described in terms of resting-state brain networks: regions which spontaneously coactivate. However, despite the interest in the anatomical mechanisms and biobehavioral correlates of stimulus-free brain dynamics, little is known regarding the relation between spontaneous brain dynamics and task-evoked activity. In particular, no computational framework has previously been proposed to unite spontaneous and task dynamics under a single, data-driven model. Model development in this domain will provide new insight regarding the mechanisms by which exogenous stimuli and intrinsic neural circuitry interact to shape human cognition. The current work bridges this gap by deriving and validating a new technique, termed Mesoscale Individualized NeuroDynamic (MINDy) modeling, to estimate large-scale neural population models for individual human subjects using resting-state fMRI. A combination of ground-truth simulations and test-retest data is used to demonstrate that the approach is robust to various forms of noise, motion, and data-processing choices. The MINDy formalism is then extended to simultaneously estimate neural population models and the neurovascular coupling which gives rise to BOLD fMRI. In doing so, I develop and validate a new optimization framework for simultaneously estimating system states and parameters. Lastly, MINDy models derived from resting-state data are used to predict task-based activity and remove the effects of intrinsic dynamics. Removing the MINDy model predictions from task fMRI enables separation of exogenously driven components of activity from their indirect consequences (the model predictions). Results demonstrate that removing the predicted intrinsic dynamics improves detection of event-triggered and sustained responses across four cognitive tasks.
Together, these findings validate the MINDy framework and demonstrate that MINDy models predict brain dynamics across contexts. These dynamics contribute to the between-subject variance of task-evoked brain activity, and removing the influence of intrinsic dynamics improves the estimation of task effects.
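The "remove the predicted intrinsic dynamics" step can be sketched with a generic nonlinear population model of the form x_{t+1} = x_t + dt * (W tanh(x_t) - d x_t): the fitted model's one-step prediction is subtracted from the observed next state, and the residual isolates the exogenously driven component. The weights, decay, step size, and input below are illustrative values, not MINDy parameter estimates.

```python
# Hedged sketch of separating task-driven activity from intrinsic
# dynamics: subtract a fitted model's one-step prediction from the
# observed next state. W, d, dt, and the input u are invented values.

import math

def step(x, W, d, dt):
    """One-step prediction of intrinsic (stimulus-free) dynamics."""
    drive = [sum(W[i][j] * math.tanh(x[j]) for j in range(len(x)))
             for i in range(len(x))]
    return [xi + dt * (dr - d * xi) for xi, dr in zip(x, drive)]

W, d, dt = [[0.0, 0.5], [0.5, 0.0]], 0.9, 0.1
x_t = [0.3, -0.2]
u = [0.25, 0.0]   # exogenous (task) input, delivered only to node 0

# Observed next state = intrinsic step + input; the residual after
# subtracting the model prediction recovers the exogenous input.
observed = [p + ui for p, ui in zip(step(x_t, W, d, dt), u)]
residual = [o - p for o, p in zip(observed, step(x_t, W, d, dt))]
print([round(r, 3) for r in residual])  # -> [0.25, 0.0]
```

In the noiseless toy case the residual equals the input exactly; with real fMRI the residual is a denoised estimate whose improvement over raw activity is what the detection results above quantify.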