
    A supervised clustering approach for fMRI-based inference of brain states

    We propose a method that combines signals from many brain regions observed in functional Magnetic Resonance Imaging (fMRI) to predict the subject's behavior during a scanning session. Such predictions suffer from the huge number of brain regions sampled on the voxel grid of standard fMRI data sets: the curse of dimensionality. Dimensionality reduction is thus needed, but it is often performed using a univariate feature selection procedure that handles neither the spatial structure of the images nor the multivariate nature of the signal. By introducing a hierarchical clustering of the brain volume that incorporates connectivity constraints, we reduce the span of possible spatial configurations to a single tree of nested regions tailored to the signal. We then prune the tree in a supervised setting, hence the name supervised clustering, in order to extract a parcellation (division of the volume) such that parcel-based signal averages best predict the target information. Dimensionality reduction is thus achieved by feature agglomeration, and the constructed features provide a multi-scale representation of the signal. Comparisons with reference methods on both simulated and real data show that our approach yields higher prediction accuracy than standard voxel-based approaches. Moreover, the method infers an explicit weighting of the regions involved in the regression or classification task.
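    The building blocks described here, connectivity-constrained Ward clustering and feature agglomeration, are available in scikit-learn. The following is a minimal sketch of the unsupervised part of such a pipeline on synthetic data, without the paper's supervised tree pruning; all dimensions and parameter values are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.cluster import FeatureAgglomeration
from sklearn.linear_model import LogisticRegression

# Toy volume: an 8x8x8 voxel grid, 100 "scans", binary target.
rng = np.random.default_rng(0)
n_x = n_y = n_z = 8
X = rng.standard_normal((100, n_x * n_y * n_z))
y = rng.integers(0, 2, size=100)

# The connectivity graph restricts merges to spatially adjacent voxels,
# so the hierarchy is a tree of nested contiguous regions.
connectivity = grid_to_graph(n_x, n_y, n_z)

# Ward feature agglomeration: voxels -> parcel-averaged features.
agglo = FeatureAgglomeration(n_clusters=50, connectivity=connectivity,
                             linkage="ward")
X_parcels = agglo.fit_transform(X)  # shape (100, 50)

# A classifier on the parcel averages gives an explicit per-parcel
# weighting, which can be mapped back to voxel space for inspection.
clf = LogisticRegression(max_iter=1000).fit(X_parcels, y)
voxel_weights = agglo.inverse_transform(clf.coef_)
```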

    Sharing deep generative representation for perceived image reconstruction from human brain activity

    Decoding human brain activity via functional magnetic resonance imaging (fMRI) has gained increasing attention in recent years. While encouraging results have been reported in brain state classification tasks, reconstructing the details of human visual experience still remains difficult. Two main challenges that hinder the development of effective models are the perplexing fMRI measurement noise and the high dimensionality of the limited data instances. Existing methods generally suffer from one or both of these issues and yield unsatisfactory results. In this paper, we tackle this problem by casting the reconstruction of visual stimuli as Bayesian inference of the missing view in a multi-view latent variable model. Sharing a common latent representation, our joint generative model of external stimulus and brain response is not only "deep" in extracting nonlinear features from visual images, but also powerful in capturing correlations among voxel activities of fMRI recordings. The nonlinearity and deep structure endow our model with strong representation ability, while the correlations of voxel activities are critical for suppressing noise and improving prediction. We devise an efficient variational Bayesian method to infer the latent variables and the model parameters. To further improve reconstruction accuracy, the latent representations of test instances are enforced to be close to those of their neighbours from the training set via posterior regularization. Experiments on three fMRI recording datasets demonstrate that our approach can more accurately reconstruct visual stimuli.
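    As a rough illustration of the shared-latent, two-view idea (not the paper's variational Bayesian inference or posterior regularization), one can wire a single latent code to two decoders, one per view, train on paired data, and at test time decode the image as the "missing view" from the fMRI view. A hedged PyTorch sketch with illustrative names and dimensions:

```python
import torch
import torch.nn as nn

class SharedLatentModel(nn.Module):
    """Toy two-view model: one latent z decodes into both an image view
    and an fMRI view. A strong simplification of the paper's deep
    multi-view latent variable model; all names are illustrative."""

    def __init__(self, image_dim, voxel_dim, latent_dim=32):
        super().__init__()
        # Encoder from the fMRI view to the latent; used at test time,
        # when the image is the missing view to be inferred.
        self.encode_fmri = nn.Sequential(
            nn.Linear(voxel_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        # Decoders: shared latent -> each view.
        self.decode_image = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, image_dim))
        self.decode_voxels = nn.Linear(latent_dim, voxel_dim)

    def forward(self, fmri):
        z = self.encode_fmri(fmri)
        return self.decode_image(z), self.decode_voxels(z)

# Training: fit both decoders so the shared latent explains both views.
model = SharedLatentModel(image_dim=784, voxel_dim=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(64, 784)  # placeholder paired data
fmri = torch.randn(64, 500)
for _ in range(200):
    image_hat, fmri_hat = model(fmri)
    loss = ((image_hat - images) ** 2).mean() + ((fmri_hat - fmri) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Reconstruction: decode the missing image view from new fMRI data.
with torch.no_grad():
    reconstructed = model(torch.randn(1, 500))[0]
```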

    Inter-subject neural code converter for visual image representation

    Brain activity patterns differ from person to person, even for an identical stimulus. In functional brain mapping studies, it is important to align brain activity patterns between subjects for group statistical analyses. While anatomical templates are widely used for inter-subject alignment in functional magnetic resonance imaging (fMRI) studies, they are not sufficient to identify the mapping between voxel-level functional responses representing specific mental contents. Recent work has suggested that statistical learning methods could be used to transform individual brain activity patterns into a common space while preserving representational content. Here, we propose a flexible method for functional alignment, the "neural code converter," which converts one subject's brain activity pattern into another's representing the same content. The neural code converter is designed to learn the statistical relationships between the fMRI activity patterns of paired subjects obtained while they saw an identical series of stimuli. It predicts the signal intensity of individual voxels of one subject from a pattern of multiple voxels of the other subject. To test this method, we used fMRI activity patterns measured while subjects observed visual images consisting of random and structured patches. We show that fMRI activity patterns for visual images not used for training the converter could be predicted from those of another subject whose brain activity was recorded for the same stimuli, and that visual images could be accurately reconstructed from the predicted activity patterns alone. Furthermore, a classifier trained only on predicted fMRI activity patterns could accurately classify measured fMRI activity patterns. These results demonstrate that the neural code converter can translate neural codes between subjects while preserving content related to visual images. While this method is useful for functional alignment and decoding, it may also provide a basis for brain-to-brain communication, using the converted patterns to design brain stimulation.
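    At its core, the converter is a multi-output regression from one subject's multi-voxel pattern to each voxel of the other subject. Below is a hedged sketch on synthetic paired data, with ridge regression standing in for the paper's estimator; all names and dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

# Synthetic stand-in for paired recordings of two subjects who saw
# the same stimuli (the true mapping W is unknown in practice).
rng = np.random.default_rng(0)
n_train, n_test, n_vox_a, n_vox_b = 200, 50, 300, 250
W = rng.standard_normal((n_vox_a, n_vox_b)) / np.sqrt(n_vox_a)

def paired_trials(n):
    a = rng.standard_normal((n, n_vox_a))                # subject A patterns
    b = a @ W + 0.1 * rng.standard_normal((n, n_vox_b))  # subject B patterns
    return a, b

A_train, B_train = paired_trials(n_train)
A_test, B_test = paired_trials(n_test)

# The converter: one regularized linear model per target voxel
# (ridge regression here is an assumption, not the paper's estimator).
converter = Ridge(alpha=10.0).fit(A_train, B_train)
B_pred = converter.predict(A_test)  # A's patterns translated into B's space

# A decoder trained only on converted patterns can then be applied to
# B's measured patterns; the labels below are placeholders.
labels = rng.integers(0, 2, size=n_test)
decoder = LogisticRegression(max_iter=1000).fit(B_pred, labels)
accuracy = decoder.score(B_test, labels)
```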