Sharing deep generative representation for perceived image reconstruction from human brain activity
Decoding human brain activity via functional magnetic resonance imaging
(fMRI) has gained increasing attention in recent years. While encouraging
results have been reported in brain-state classification tasks, reconstructing
the details of human visual experience remains difficult. Two main
challenges hinder the development of effective models: the perplexing
measurement noise of fMRI and the high dimensionality of the limited data instances.
Existing methods generally suffer from one or both of these issues and yield
unsatisfactory results. In this paper, we tackle this problem by casting the
reconstruction of the visual stimulus as Bayesian inference of the missing view in
a multiview latent variable model. Sharing a common latent representation, our
joint generative model of external stimulus and brain response is not only
"deep" in extracting nonlinear features from visual images, but also powerful
in capturing correlations among voxel activities of fMRI recordings. The
nonlinearity and deep structure endow our model with strong representation
ability, while the correlations of voxel activities are critical for
suppressing noise and improving prediction. We devise an efficient variational
Bayesian method to infer the latent variables and the model parameters. To
further improve the reconstruction accuracy, the latent representations of
test instances are encouraged to be close to those of their neighbours in the
training set via posterior regularization. Experiments on three fMRI recording
datasets demonstrate that our approach reconstructs visual stimuli more
accurately.
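The "missing view" idea above can be sketched in a deliberately simplified form. The paper's model is deep and nonlinear; the sketch below is a linear two-view latent model in which the image view is reconstructed from the posterior mean of the shared latent given only the fMRI view. All dimensions, noise levels, and variable names are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): latent k, image d_x, fMRI d_y, instances n.
k, d_x, d_y, n = 4, 16, 32, 200

# Two linear "views" sharing one latent z (the paper uses deep nonlinear
# mappings; this linear version only illustrates the structure).
W_x = rng.normal(size=(d_x, k))
W_y = rng.normal(size=(d_y, k))
Z = rng.normal(size=(n, k))
X = Z @ W_x.T + 0.1 * rng.normal(size=(n, d_x))   # image view
Y = Z @ W_y.T + 0.5 * rng.normal(size=(n, d_y))   # noisy fMRI view

# Missing-view inference: under a standard Gaussian prior on z and
# Gaussian observation noise, the posterior over z given the fMRI view
# is Gaussian with a ridge-like mean; the image view is then decoded
# through W_x.
sigma2 = 0.25  # assumed fMRI noise variance (illustrative)
post_cov = np.linalg.inv(W_y.T @ W_y / sigma2 + np.eye(k))

def reconstruct(y):
    z = post_cov @ W_y.T @ y / sigma2   # posterior mean of the latent
    return W_x @ z                      # predicted (reconstructed) image view

x_hat = reconstruct(Y[0])
err = np.linalg.norm(x_hat - X[0]) / np.linalg.norm(X[0])
```

Inferring the posterior mean of the shared latent from the brain response and decoding it through the image mapping is the linear analogue of the Bayesian missing-view inference the abstract describes; the variational method in the paper plays the role that the closed-form Gaussian posterior plays here.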
Double-Flow GAN model for the reconstruction of perceived faces from brain activities
Faces play an important role in human visual perception, and reconstructing
perceived faces from brain activity is challenging because of the difficulty
of extracting high-level features and maintaining consistency across multiple
face attributes, such as expression, identity, and gender. In this study, we
proposed a novel reconstruction framework, called Double-Flow GAN, that
enhances the capability of the discriminator and handles imbalances in
images from certain domains that are too easy for generators. We also designed
a pretraining process that uses features extracted from images as conditions,
making it possible to pretrain the conditional fMRI reconstruction model on a
larger pure-image dataset. Moreover, we developed a simple pretrained
model to perform fMRI alignment, alleviating the cross-subject
reconstruction problem caused by variations in brain structure across
subjects. We conducted experiments comparing our proposed method with
state-of-the-art reconstruction models. The results demonstrated that our
method achieved strong reconstruction performance, outperformed previous
reconstruction models, and exhibited good generation ability.
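The cross-subject alignment step mentioned above can be illustrated with a toy stand-in: when two subjects have viewed the same stimuli, a regularized linear map (here ridge regression) can project one subject's voxel space into the other's. The paper uses a pretrained alignment model; the synthetic data, dimensions, and ridge penalty below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: 100 shared stimuli seen by two subjects whose
# voxel responses are different linear mixtures of the same underlying
# stimulus signal, plus noise for subject B.
n, v_a, v_b = 100, 50, 60
shared = rng.normal(size=(n, 20))                       # stimulus-driven signal
A = shared @ rng.normal(size=(20, v_a))                 # subject A voxels
B = shared @ rng.normal(size=(20, v_b)) + 0.1 * rng.normal(size=(n, v_b))

# Functional alignment as ridge regression from B's voxel space to A's:
# W = argmin ||B W - A||^2 + lam ||W||^2, solved in closed form.
lam = 1.0
W = np.linalg.solve(B.T @ B + lam * np.eye(v_b), B.T @ A)

B_aligned = B @ W
r = np.corrcoef(B_aligned.ravel(), A.ravel())[0, 1]     # alignment quality
```

After alignment, a reconstruction model trained on subject A's responses can in principle be applied to subject B's aligned responses, which is the role the alignment module plays in the pipeline the abstract describes.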
Constraint-free Natural Image Reconstruction from fMRI Signals Based on Convolutional Neural Network
In recent years, research on decoding brain activity based on functional
magnetic resonance imaging (fMRI) has made remarkable achievements. However,
constraint-free natural image reconstruction from brain activity remains a
challenge. Existing methods simplified the problem by using semantic prior
information or by reconstructing only simple images such as letters and digits.
Without semantic prior information, we present a novel method to reconstruct
natural images from fMRI signals of the human visual cortex based on a
convolutional neural network (CNN). Firstly, we extracted the unit outputs
in each layer of a pre-trained CNN for the viewed natural images as CNN
features. Secondly, we transformed image reconstruction from fMRI signals
into a CNN feature-visualization problem by training a sparse linear
regression to map fMRI patterns to CNN features. By iteratively optimizing
to find the matched image, whose CNN unit features are most similar to those
predicted from the brain activity, we achieved promising results for the
challenging task of constraint-free natural image reconstruction. As no
semantic prior information about the stimuli was used when training the
decoding model, any category of images (not constrained by the training set)
could in theory be reconstructed. We found that the reconstructed images
resembled the natural stimuli, especially in position and shape. The
experimental results suggest that hierarchical visual features can
effectively express the visual perception process of the human brain.
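The iterative feature-matching step described above can be sketched with a toy stand-in for the CNN: a fixed random linear map followed by ReLU. The "brain-predicted" features are simulated by evaluating this map on a hidden target image (in the paper they come from the sparse linear regression on fMRI patterns); gradient descent then adjusts a candidate image until its features match. All sizes, step counts, and the feature extractor itself are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "CNN layer": fixed random linear map + ReLU. The paper uses the
# unit outputs of a pre-trained CNN; this toy extractor only demonstrates
# the optimization loop.
d_img, d_feat = 64, 128
W = rng.normal(size=(d_feat, d_img)) / np.sqrt(d_img)

def features(img):
    return np.maximum(W @ img, 0.0)

# Simulated brain-predicted features: in the real method these are the
# outputs of the sparse linear regression from fMRI patterns.
target_img = rng.normal(size=d_img)
predicted_feat = features(target_img)

# Iterative optimization: minimize 0.5 * ||features(img) - predicted_feat||^2
# by gradient descent, backpropagating through the ReLU.
img = 0.1 * rng.normal(size=d_img)   # small random start (zeros would stall)
lr = 0.05
for _ in range(500):
    f = features(img)
    grad = W.T @ ((f - predicted_feat) * (f > 0))   # chain rule through ReLU
    img -= lr * grad

# How well the optimized image's features match the predicted ones.
match = np.corrcoef(features(img), predicted_feat)[0, 1]
```

The resulting image is the one whose feature representation best explains the brain-predicted features, which is the feature-visualization formulation the abstract describes; with a real multi-layer CNN the same loop is run with automatic differentiation instead of this hand-written gradient.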