
    Social-sparsity brain decoders: faster spatial sparsity

    Spatially sparse predictors are good models for brain decoding: they give accurate predictions, and their weight maps are interpretable because they focus on a small number of regions. However, the state of the art, based on total variation or graph-net, is computationally costly. Here we introduce sparsity in the local neighborhood of each voxel with social-sparsity, a structured shrinkage operator. We find that, on brain-imaging classification problems, social-sparsity performs almost as well as total-variation models and better than graph-net, at a fraction of the computational cost. It also clearly outlines predictive regions. We give details of the model and the algorithm.
    Comment: in Pattern Recognition in NeuroImaging, Jun 2016, Trento, Italy. 201
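
    The abstract names a structured shrinkage operator but does not spell it out. Below is a minimal, hedged sketch of the kind of neighborhood-based ("social") shrinkage step it describes: each voxel's coefficient is kept or killed based on the energy of its local neighborhood rather than its own magnitude alone. The function name `social_shrinkage`, the cubic window, and the use of `scipy.ndimage.uniform_filter` are illustrative assumptions, not the paper's exact operator.

```python
# Hedged sketch of a social-sparsity ("neighborhood group") shrinkage step.
# Assumptions (not from the abstract): a 3D weight map, a cubic neighborhood,
# and scipy's uniform_filter to average squared weights over that neighborhood.
import numpy as np
from scipy.ndimage import uniform_filter

def social_shrinkage(weights, threshold, neighborhood=3):
    """Shrink each voxel's weight based on the energy of its local neighborhood.

    weights      : 3D array of decoder weights (one value per voxel)
    threshold    : shrinkage level (plays the role of lambda times the step size)
    neighborhood : edge length of the cubic window defining each voxel's neighbors
    """
    # Average squared weight over the local window -> neighborhood energy.
    energy = uniform_filter(weights ** 2, size=neighborhood, mode="constant")
    norm = np.sqrt(np.maximum(energy, 1e-12))
    # Scale each coefficient by (1 - threshold / neighborhood_norm)_+ :
    # a voxel survives only if its neighborhood carries enough energy.
    scale = np.maximum(0.0, 1.0 - threshold / norm)
    return weights * scale

# Typical use inside a proximal-gradient loop (ISTA/FISTA-style, assumed):
#   w = social_shrinkage(w - step * gradient_of_loss(w), step * lam)
```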

    On sparsity averaging

    Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend the associated simulations, establishing the superiority of SARA over regularization methods based on sparsity in a single frame, both for a generic spread-spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.
    Comment: 4 pages, 3 figures, Proceedings of 10th International Conference on Sampling Theory and Applications (SampTA). Code available at https://github.com/basp-group/sopt. Full journal letter available at http://arxiv.org/abs/arXiv:1208.233
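
    For context, here is a hedged sketch of the two ingredients the abstract names: an analysis operator that averages sparsity over several frames, and the reweighted $\ell_1$ weight update. The toy 1D frames, the choice of `delta`, and the omitted inner solve are assumptions made purely for illustration; the actual implementation is the sopt code linked above.

```python
# Hedged sketch of "sparsity averaging" plus reweighted l1, with toy 1D frames.
# None of the names below come from the SARA/sopt code; they illustrate the idea:
# concatenate analysis coefficients from several frames (average sparsity) and
# give small coefficients large l1 weights (reweighting).
import numpy as np

def average_analysis(x, frames):
    """Stack analysis coefficients of q frames, scaled by 1/sqrt(q)."""
    return np.concatenate([analyze(x) for analyze in frames]) / np.sqrt(len(frames))

def reweight(coeffs, delta):
    """Reweighted-l1 update: weight ~ delta / (delta + |coefficient|)."""
    return delta / (delta + np.abs(coeffs))

# Two toy analysis operators standing in for the coherent frames of the paper:
frames = [
    lambda x: x,                          # Dirac (identity) analysis
    lambda x: np.diff(x, prepend=x[:1]),  # first finite differences
]

x = np.random.randn(64)                   # stand-in for the current image estimate
alpha = average_analysis(x, frames)
weights = reweight(alpha, delta=np.std(alpha))

# In the full scheme (omitted here), one would now solve the weighted-l1
# analysis problem  min_x || weights * average_analysis(x) ||_1  subject to
# a data-fidelity constraint, update x, recompute the weights, and repeat.
```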