23 research outputs found

    Generic decoding of seen and imagined objects using hierarchical visual features

    Get PDF
    Object recognition is a key function in both human and machine vision. While recent studies have achieved fMRI decoding of seen and imagined contents, the prediction is limited to training examples. We present a decoding approach for arbitrary objects, using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those from a convolutional neural network, can be predicted from fMRI patterns and that greater accuracy is achieved for low/high-level features with lower/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, the decoding of imagined objects reveals progressive recruitment of higher to lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
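    As a rough illustration of the approach described above, the sketch below pairs a ridge-regression feature decoder with correlation-based category identification. The array names (fmri_train, feat_train, category_features) and the use of scikit-learn's Ridge are illustrative assumptions, not the study's exact implementation.

```python
# Minimal sketch: decode CNN feature values from fMRI, then identify the
# object category by matching decoded features to category-averaged features.
import numpy as np
from sklearn.linear_model import Ridge

def decode_features(fmri_train, feat_train, fmri_test):
    """Train a linear decoder from fMRI patterns to CNN feature values."""
    decoder = Ridge(alpha=1.0)
    decoder.fit(fmri_train, feat_train)      # samples x voxels -> samples x features
    return decoder.predict(fmri_test)

def identify_category(decoded_feat, category_features):
    """Return the index of the candidate category whose averaged features
    best correlate with the decoded feature vector."""
    corrs = [np.corrcoef(decoded_feat, cat_feat)[0, 1]
             for cat_feat in category_features]
    return int(np.argmax(corrs))
```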

    Attention modulates neural representation to render reconstructions according to subjective appearance

    Get PDF
    Stimulus images can be reconstructed from visual cortical activity. However, our perception of stimuli is shaped by both stimulus-induced and top-down processes, and it is unclear whether and how reconstructions reflect top-down aspects of perception. Here, we investigate the effect of attention on reconstructions using fMRI activity measured while subjects attend to one of two superimposed images. A state-of-the-art method is used for image reconstruction, in which brain activity is translated (decoded) into deep neural network (DNN) features of hierarchical layers and then into an image. Reconstructions resemble the attended rather than unattended images. They can be modeled by superimposed images with biased contrasts, comparable to the appearance during attention. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses, modulating neural representations to render reconstructions in accordance with subjective appearance.
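    The "superimposed images with biased contrasts" model lends itself to a very small formulation; the sketch below is a minimal version, assuming float images in [0, 1] and an illustrative attention weight (w_attended) that is not taken from the paper.

```python
# Sketch: model the subjective appearance under attention as a weighted blend
# of the two stimulus images, with higher contrast for the attended one.
import numpy as np

def biased_superposition(img_attended, img_unattended, w_attended=0.7):
    """Blend two same-shaped images with a contrast bias toward the attended image."""
    w_unattended = 1.0 - w_attended
    blend = w_attended * img_attended + w_unattended * img_unattended
    return np.clip(blend, 0.0, 1.0)
```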

    Characterization of deep neural network features by decodability from human brain activity

    Get PDF
    Achievements of near human-level performance in object recognition by deep neural networks (DNNs) have triggered a flood of comparative studies between the brain and DNNs. Using a DNN as a proxy for hierarchical visual representations, our recent study found that human brain activity patterns measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into DNN feature values given the same inputs. However, not all DNN features are equally decoded, indicating a gap between the DNN and human vision. Here, we present a dataset derived from DNN feature decoding analyses, which includes fMRI signals of five human subjects during image viewing, decoded feature values of DNNs (AlexNet and VGG19), and decoding accuracies of individual DNN features with their rankings. The decoding accuracies of individual features were highly correlated between subjects, suggesting systematic differences between the brain and DNNs. We hope the present dataset will contribute to revealing the gap between the brain and DNNs and provide an opportunity to make use of the decoded features for further applications.
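    A plausible way to compute per-feature decoding accuracy and its between-subject consistency is sketched below; the matrix shapes and function names are assumptions for illustration, not the dataset's published analysis code.

```python
# Sketch: per-feature decoding accuracy (Pearson correlation across test images)
# and the correlation of accuracy profiles between two subjects.
import numpy as np

def featurewise_accuracy(decoded, true):
    """Pearson correlation between decoded and true values for each DNN feature.
    Both inputs have shape (test_images, features)."""
    d = decoded - decoded.mean(axis=0)
    t = true - true.mean(axis=0)
    denom = np.sqrt((d ** 2).sum(axis=0) * (t ** 2).sum(axis=0))
    return (d * t).sum(axis=0) / denom

def between_subject_consistency(acc_subj_a, acc_subj_b):
    """Correlate two subjects' feature-wise accuracy profiles."""
    return np.corrcoef(acc_subj_a, acc_subj_b)[0, 1]
```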

    Inter-individual deep image reconstruction via hierarchical neural code conversion

    Get PDF
    The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, measured brain activity given identical input exhibits substantially different patterns across individuals. Although anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual content. In this study, we trained a functional alignment method called a neural code converter, which predicts a target subject’s brain activity pattern from a source subject’s pattern given the same stimulus, and analyzed the converted patterns by decoding hierarchical visual features and reconstructing perceived images. The converters were trained on fMRI responses to identical sets of natural images presented to pairs of individuals, using voxels in the visual cortex covering V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into the hierarchical visual features of a deep neural network using decoders pre-trained on the target subject and then reconstructed images via the decoded features. Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas of the same levels. Deep neural network feature decoding at each layer showed higher decoding accuracies from corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. The visual images were reconstructed with recognizable silhouettes of objects even with relatively small amounts of data for converter training. Decoders trained on data pooled from multiple individuals through conversion led to a slight improvement over those trained on a single individual. These results demonstrate that hierarchical and fine-grained representations can be converted by functional alignment, while preserving sufficient visual information to enable inter-individual visual image reconstruction.
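    The convert-then-decode procedure can be sketched as two linear steps; the snippet below assumes paired source/target fMRI matrices and a hypothetical pre-trained target_decoder object with a scikit-learn-style predict method, and uses ridge regression as a stand-in for the actual converter model.

```python
# Sketch: train a voxel-to-voxel converter on shared-stimulus responses,
# then decode DNN features from converted activity with the target's decoder.
from sklearn.linear_model import Ridge

def train_converter(src_train, tgt_train, alpha=100.0):
    """Learn a linear mapping from source-subject voxels to target-subject voxels."""
    converter = Ridge(alpha=alpha)
    converter.fit(src_train, tgt_train)
    return converter

def convert_and_decode(converter, src_test, target_decoder):
    """Convert source activity into the target's voxel space, then decode
    hierarchical DNN features with the target's pre-trained decoder."""
    converted = converter.predict(src_test)
    return target_decoder.predict(converted)
```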

    Circadian Gene Circuitry Predicts Hyperactive Behavior in a Mood Disorder Mouse Model

    Get PDF
    Bipolar disorder, also known as manic-depressive illness, causes swings in mood and activity levels at irregular intervals. Such changes are difficult to predict, and their molecular basis remains unknown. Here, we use infradian (longer than a day) cyclic activity levels in αCaMKII (Camk2a) mutant mice as a proxy for such mood-associated changes. We report that gene-expression patterns in the hippocampal dentate gyrus could retrospectively predict whether the mice were in a state of high or low locomotor activity (LA). Expression of a subset of circadian genes, as well as levels of cAMP and pCREB, possible upstream regulators of circadian genes, were correlated with LA states, suggesting that the intrinsic molecular circuitry changes concomitant with infradian oscillatory LA. Taken together, these findings shed light on the molecular basis of how irregular biological rhythms and behavior are controlled by the brain.
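    For illustration only, the retrospective prediction of locomotor-activity state from expression profiles could be framed as a simple cross-validated classifier, as in the sketch below; the expression matrix, labels, and choice of logistic regression are assumptions, not the study's analysis.

```python
# Sketch: cross-validated classification of high vs. low LA state from
# dentate-gyrus gene-expression profiles (samples x genes) and binary labels.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def predict_la_state(expression, la_labels):
    """Return mean cross-validated accuracy of LA-state prediction."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, expression, la_labels, cv=5)
    return scores.mean()
```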

    Reconstructing visual illusory experiences from human brain activity

    Get PDF
    Visual illusions provide valuable insights into the brain’s interpretation of the world given sensory inputs. However, the precise manner in which brain activity translates into illusory experiences remains largely unknown. Here, we leverage a brain decoding technique combined with deep neural network (DNN) representations to reconstruct illusory percepts as images from brain activity. The reconstruction model was trained on natural images to establish a link between brain activity and perceptual features and then tested on two types of illusions: illusory lines and neon color spreading. Reconstructions revealed lines and colors consistent with illusory experiences, which varied across the source visual cortical areas. This framework offers a way to materialize subjective experiences, shedding light on the brain’s internal representations of the world.
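    The test stage can be summarized as "decode, then render"; the outline below assumes hypothetical pre-trained components (feature_decoders, one per DNN layer, trained on natural-image data, and a reconstruct_from_features module) and is a sketch rather than the paper's implementation.

```python
# Sketch: apply natural-image-trained decoders to illusion-evoked fMRI activity,
# then render an image consistent with the decoded DNN features.

def reconstruct_illusion(fmri_illusion, feature_decoders, reconstruct_from_features):
    """Decode layer-wise DNN features from illusion-evoked activity and
    pass them to the image-reconstruction module."""
    decoded = {layer: dec.predict(fmri_illusion)
               for layer, dec in feature_decoders.items()}
    return reconstruct_from_features(decoded)
```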

    End-to-End Deep Image Reconstruction From Human Brain Activity

    Get PDF
    Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the size of available data is thought to be insufficient for training a complex network with numerous parameters. Instead, a pre-trained DNN usually serves as a proxy for hierarchical visual representations, and fMRI data are used to decode individual DNN features of a stimulus image using a simple linear model, which are then passed to a reconstruction module. Here, we directly trained a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We accomplished this by training a generative adversarial network with an additional loss term that was defined in high-level feature space (feature loss) using up to 6,000 training data samples (natural images and fMRI responses). The model was then tested on independent datasets and directly reconstructed images using fMRI patterns as input. Reconstructions obtained with the proposed method resembled the test stimuli (natural and artificial images), and reconstruction accuracy increased as a function of training-data size. Ablation analyses indicated that the feature loss that we employed played a critical role in achieving accurate reconstruction. Our results show that the end-to-end model can learn a direct mapping between brain activity and perception.
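    A simplified version of the combined training objective (adversarial loss plus a feature loss defined in a high-level feature space, plus a pixel term) might look like the PyTorch sketch below; the generator G, discriminator D, comparator feat_net, and loss weights are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: generator loss for one batch, combining adversarial, high-level
# feature, and pixel-level terms.
import torch
import torch.nn.functional as F

def generator_loss(G, D, feat_net, fmri, stim_img,
                   w_adv=1.0, w_feat=100.0, w_pix=1.0):
    """fmri: batch of fMRI patterns; stim_img: corresponding stimulus images."""
    recon = G(fmri)
    logits = D(recon)
    adv = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))                 # fool the discriminator
    feat = F.mse_loss(feat_net(recon), feat_net(stim_img))  # feature loss
    pix = F.mse_loss(recon, stim_img)                     # pixel loss
    return w_adv * adv + w_feat * feat + w_pix * pix
```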

    CNVs in Three Psychiatric Disorders

    Get PDF
    BACKGROUND: We aimed to determine the similarities and differences in the roles of genic and regulatory copy number variations (CNVs) in bipolar disorder (BD), schizophrenia (SCZ), and autism spectrum disorder (ASD). METHODS: Based on high-resolution CNV data from 8708 Japanese samples, we performed, to our knowledge, the largest cross-disorder analysis of genic and regulatory CNVs in BD, SCZ, and ASD. RESULTS: In genic CNVs, we found an increased burden of smaller (<500 kb) exonic CNVs in SCZ/ASD. Pathogenic CNVs linked to neurodevelopmental disorders were significantly associated with the risk for each disorder, but BD and SCZ/ASD differed in terms of the effect size (smaller in BD) and subtype distribution of CNVs linked to neurodevelopmental disorders. We identified 3 synaptic genes (DLG2, PCDH15, and ASTN2) as risk factors for BD. Whereas gene set analysis showed that BD-associated pathways were restricted to chromatin biology, SCZ and ASD involved more extensive and similar pathways. Nevertheless, a correlation analysis of gene set results indicated weak but significant pathway similarities between BD and SCZ or ASD (r = 0.25–0.31). In SCZ and ASD, but not BD, CNVs were significantly enriched in enhancers and promoters in brain tissue. CONCLUSIONS: BD and SCZ/ASD differ in terms of CNV burden, characteristics of CNVs linked to neurodevelopmental disorders, and regulatory CNVs. On the other hand, they have shared molecular mechanisms, including chromatin biology. The BD risk genes identified here could provide insight into the pathogenesis of BD.
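    As a generic illustration of a CNV burden comparison (not the study's actual statistical pipeline), one could test enrichment of CNV carriers in cases versus controls with Fisher's exact test, as sketched below with hypothetical counts.

```python
# Sketch: one-sided Fisher's exact test for enrichment of exonic-CNV carriers
# in cases relative to controls.
from scipy.stats import fisher_exact

def cnv_burden_test(case_carriers, case_total, control_carriers, control_total):
    """Return odds ratio and p-value for carrier enrichment in cases."""
    table = [[case_carriers, case_total - case_carriers],
             [control_carriers, control_total - control_carriers]]
    odds_ratio, p_value = fisher_exact(table, alternative='greater')
    return odds_ratio, p_value
```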

    Hierarchical Neural Representation of Dreamed Objects Revealed by Brain Decoding with Deep Neural Network Features

    Get PDF
    Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
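    The matching step for dreamed objects can be sketched as a correlation between decoded features and category-averaged features at a given DNN layer; the data structures below (decoded_dream_feat, category_avg_feat) are hypothetical stand-ins for the decoded outputs and the image-database averages.

```python
# Sketch: identify the dreamed category by correlating decoded features with
# category-averaged features at one DNN layer.
import numpy as np

def identify_dreamed_category(decoded_dream_feat, category_avg_feat, layer):
    """decoded_dream_feat[layer]: decoded feature vector for the given layer;
    category_avg_feat[layer]: dict mapping category -> averaged feature vector."""
    corrs = {cat: np.corrcoef(decoded_dream_feat[layer], feat)[0, 1]
             for cat, feat in category_avg_feat[layer].items()}
    return max(corrs, key=corrs.get)
```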

    Neural Decoding of Visual Dream Contents

    No full text
    Doctoral dissertation No. 1176 (Kō No. 1176), Doctor of Science, Nara Institute of Science and Technology