
    Social-sparsity brain decoders: faster spatial sparsity

    Spatially-sparse predictors are good models for brain decoding: they give accurate predictions, and their weight maps are interpretable as they focus on a small number of regions. However, the state of the art, based on total variation or graph-net, is computationally costly. Here we introduce sparsity in the local neighborhood of each voxel with social-sparsity, a structured shrinkage operator. We find that, on brain-imaging classification problems, social-sparsity performs almost as well as total-variation models and better than graph-net, for a fraction of the computational cost. It also very clearly outlines predictive regions. We give details of the model and the algorithm.
    Comment: in Pattern Recognition in NeuroImaging, Jun 2016, Trento, Italy.
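
    To make the shrinkage idea concrete, here is a minimal sketch of a social-sparsity-style proximal step on a 3D weight volume, assuming a fixed 3x3x3 neighborhood; the function name, neighborhood size, and solver context are illustrative, not the paper's exact operator.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def social_shrinkage(w, lam):
        """Shrink each voxel's weight using the energy of its local neighborhood."""
        # Mean squared weight over a 3x3x3 neighborhood centered on each voxel.
        neigh_norm = np.sqrt(np.maximum(uniform_filter(w ** 2, size=3), 1e-12))
        # Group-style soft-thresholding: a voxel's weight survives only if its
        # neighborhood collectively carries enough energy.
        return w * np.maximum(0.0, 1.0 - lam / neigh_norm)

    # One proximal step, e.g., inside an ISTA-style solver for the decoder.
    rng = np.random.default_rng(0)
    w = social_shrinkage(rng.normal(size=(8, 8, 8)), lam=0.5)

    Because the step reduces to a smoothing pass plus elementwise operations, each iteration stays close to the cost of plain lasso shrinkage, which is where the speed advantage over total-variation solvers would come from.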

    Geometric Convolutional Neural Network for Analyzing Surface-Based Neuroimaging Data

    The conventional CNN, widely used for two-dimensional images, is not directly applicable to irregular geometric surfaces such as the cortical surface, over which measures like cortical thickness are defined. We propose a Geometric CNN (gCNN) that handles data represented over a spherical surface and performs pattern recognition on a multi-shell mesh structure. Its classification accuracy for sex was significantly higher than that of an SVM and an image-based CNN. Although the method uses only MRI thickness data to classify sex, it can be extended to classify disease from other MRI or fMRI data.
    Comment: 29 pages
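
    As a toy illustration of convolution on a mesh rather than a pixel grid, here is a minimal sketch of one vertex-neighborhood convolution step, assuming vertex features and a fixed per-vertex neighbor index table; the layer, shapes, and random adjacency below are illustrative, not the paper's exact gCNN architecture.

    import numpy as np

    def mesh_conv(features, neighbors, weights, bias):
        """features: (V, C_in); neighbors: (V, K) vertex indices, self first;
        weights: (K, C_in, C_out); bias: (C_out,)."""
        out = np.zeros((features.shape[0], weights.shape[2]))
        for k in range(neighbors.shape[1]):
            # Apply a distinct filter to the k-th neighbor of every vertex,
            # mimicking the positional weight sharing of a 2D convolution.
            out += features[neighbors[:, k]] @ weights[k]
        return np.maximum(out + bias, 0.0)  # ReLU

    # Toy mesh: 12 vertices, each with itself plus 5 neighbors.
    rng = np.random.default_rng(0)
    V, K, C_in, C_out = 12, 6, 1, 8
    thickness = rng.normal(size=(V, C_in))        # cortical thickness per vertex
    neighbors = rng.integers(0, V, size=(V, K))   # placeholder adjacency
    neighbors[:, 0] = np.arange(V)                # first slot is the vertex itself
    filters = rng.normal(size=(K, C_in, C_out)) * 0.1
    out = mesh_conv(thickness, neighbors, filters, np.zeros(C_out))

    Stacking such layers over progressively coarser mesh resolutions would give the surface-based analogue of a pooled CNN hierarchy.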

    Reading the mind's eye: Decoding category information during mental imagery

    Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for the non-special categories, i.e., food and tools) from ventral-temporal cortex in both conditions, but from retinotopic areas only during actual viewing. Interestingly, in temporal cortex, when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to that within the imagery condition. These results held even when we excluded information from the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are surprisingly similar to each other. Consistent with this observation, the maps of “diagnostic voxels” (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that, in the absence of any bottom-up input, cortical back-projections can selectively re-activate specific patterns of neural activity.
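
    The cross-decoding logic can be sketched with a standard linear classifier, assuming trial-wise voxel patterns have already been extracted; the array names, shapes, and the use of scikit-learn's LinearSVC are illustrative, not the study's actual pipeline.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 500
    X_viewed = rng.normal(size=(n_trials, n_voxels))   # patterns during viewing
    X_imagery = rng.normal(size=(n_trials, n_voxels))  # patterns during imagery
    y = rng.integers(0, 4, size=n_trials)              # food/tools/faces/buildings

    # Within-condition decoding: cross-validate inside the imagery data.
    within_imagery = cross_val_score(LinearSVC(), X_imagery, y, cv=5).mean()

    # Cross-condition decoding: train on viewing, test on imagery; accuracy
    # comparable to the within-imagery score (against 25% chance for four
    # categories) would indicate shared category representations.
    clf = LinearSVC().fit(X_viewed, y)
    cross_viewed_to_imagery = clf.score(X_imagery, y)
    print(within_imagery, cross_viewed_to_imagery)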
