7 research outputs found

    Learning brain regions via large-scale online structured sparse dictionary-learning

    Get PDF
    We propose a multivariate online dictionary-learning method for obtaining decompositions of brain images with structured and sparse components (aka atoms). Sparsity is to be understood in the usual sense: the dictionary atoms are constrained to contain mostly zeros. This is imposed via an ℓ1-norm constraint. By "structured", we mean that the atoms are piecewise smooth and compact, thus making up blobs, as opposed to scattered patterns of activation. We propose to use a Sobolev (Laplacian) penalty to impose this type of structure. Combining the two penalties, we obtain decompositions that properly delineate brain structures from functional images. This non-trivially extends the online dictionary-learning work of Mairal et al. (2010), at the price of only a factor of 2 or 3 on the overall running time. Just like the Mairal et al. (2010) reference method, the online nature of our proposed algorithm allows it to scale to arbitrarily sized datasets. Experiments on brain data show that our proposed method extracts structured and denoised dictionaries that are more interpretable and better capture inter-subject variability in small-, medium-, and large-scale regimes alike, compared to state-of-the-art models.
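
    As a minimal sketch of the combined penalty (not the authors' code), the snippet below runs one proximal-gradient pass of a dictionary update under the smooth objective 0.5‖X − CD‖²_F + λ_lap‖DLᵀ‖²_F, with the ℓ1 term handled by soft-thresholding, for atoms living on a 1-D grid; all names, shapes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code) of one dictionary-update pass under
# the combined l1 (sparsity) + Laplacian/Sobolev (smoothness) penalty, for
# atoms on a 1-D grid. Shapes and hyperparameters are illustrative.

def laplacian_1d(p):
    """Discrete 1-D Laplacian as a (p, p) matrix."""
    return -2.0 * np.eye(p) + np.eye(p, k=1) + np.eye(p, k=-1)

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (promotes mostly-zero atoms)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def update_dictionary(X, C, D, lam_l1=0.1, lam_lap=1.0, n_iter=50):
    """Proximal gradient on 0.5*||X - C D||_F^2 + lam_lap*||D L^T||_F^2,
    with the l1 prox (soft-thresholding) applied to the atoms (rows of D)."""
    L = laplacian_1d(D.shape[1])
    # Conservative Lipschitz bound for the smooth part -> safe step size.
    step = 1.0 / (np.linalg.norm(C, 2) ** 2
                  + 2 * lam_lap * np.linalg.norm(L, 2) ** 2)
    for _ in range(n_iter):
        grad = C.T @ (C @ D - X) + 2 * lam_lap * D @ (L.T @ L)
        D = soft_threshold(D - step * grad, step * lam_l1)
    return D

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))  # 20 "images", 100 voxels on a line
C = rng.standard_normal((20, 5))    # codes for 5 atoms
D = update_dictionary(X, C, rng.standard_normal((5, 100)))
print("fraction of zero entries in the atoms:", np.mean(D == 0))
```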

    Network insensitivity to parameter noise via adversarial regularization

    Full text link
    Neuromorphic neural network processors, in the form of compute-in-memory crossbar arrays of memristors, or in the form of subthreshold analog and mixed-signal ASICs, promise enormous advantages in compute density and energy efficiency for NN-based ML tasks. However, these technologies are prone to computational non-idealities, due to process variation and intrinsic device physics. This degrades the task performance of networks deployed to the processor, by introducing parameter noise into the deployed model. While it is possible to calibrate each device, or train networks individually for each processor, these approaches are expensive and impractical for commercial deployment. Alternative methods are therefore needed to train networks that are inherently robust against parameter variation, as a consequence of network architecture and parameters. We present a new adversarial network optimisation algorithm that attacks network parameters during training, and promotes robust performance during inference in the face of parameter variation. Our approach introduces a regularization term penalising the susceptibility of a network to weight perturbation. We compare against previous approaches for producing parameter insensitivity, such as dropout, weight smoothing, and introducing parameter noise during training. We show that our approach produces models that are more robust to targeted parameter variation, and equally robust to random parameter variation. Our approach finds minima in flatter locations in the weight-loss landscape compared with other approaches, highlighting that the networks found by our technique are less sensitive to parameter perturbation. Our work provides an approach to deploy neural network architectures to inference devices that suffer from computational non-idealities, with minimal loss of performance.
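
    A hedged sketch of one way to realise such a regulariser (a first-order gradient-norm proxy, not the paper's exact adversarial attack): to first order, the loss change under a weight perturbation of size ε is bounded by ε‖∇_θL‖, so penalising the squared gradient norm of the loss with respect to the parameters discourages susceptibility to weight perturbation. The model, data, and the weight `beta` below are illustrative.

```python
import torch
import torch.nn as nn

# Hedged sketch, not the paper's algorithm: penalise the susceptibility of
# the loss to small weight perturbations via the squared gradient norm of
# the loss w.r.t. the parameters (a first-order proxy for flat minima).

def robust_loss(model, loss_fn, x, y, beta=0.1):
    task_loss = loss_fn(model(x), y)
    # create_graph=True so the penalty itself can be backpropagated.
    grads = torch.autograd.grad(task_loss, list(model.parameters()),
                                create_graph=True)
    penalty = sum((g ** 2).sum() for g in grads)
    return task_loss + beta * penalty

# Illustrative model and data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

loss = robust_loss(model, nn.CrossEntropyLoss(), x, y)
opt.zero_grad()
loss.backward()
opt.step()
```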

    Filtered Variation method for denoising and sparse signal processing

    Get PDF
    We propose a new framework, called Filtered Variation (FV), for denoising and sparse signal processing applications. These problems are inherently ill-posed. Hence, we provide regularization to overcome this challenge by using discrete-time filters that are widely used in signal processing. We mathematically define the FV problem and solve it using alternating projections in the space and transform domains. We provide a globally convergent algorithm based on the projections-onto-convex-sets approach. We apply our algorithm to real denoising problems and compare it with total variation recovery.
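
    A minimal POCS-style sketch of the idea (not the authors' implementation): alternate a transform-domain projection, which enforces the filter constraint by zeroing the frequencies an assumed discrete-time low-pass filter rejects, with a space-domain projection onto a noise ball around the observation. Both constraint sets are convex, so the alternating projections converge to their intersection when it is nonempty; `cutoff` and `delta` below are illustrative parameters.

```python
import numpy as np

# POCS-style sketch inspired by the description above (not the authors'
# implementation): alternate projections in the transform (frequency) and
# space domains. `cutoff` and `delta` are illustrative parameters.

def project_lowpass(x, cutoff):
    """Transform-domain projection: zero frequencies above `cutoff`."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x)) > cutoff] = 0.0
    return np.fft.irfft(X, n=len(x))

def project_ball(x, y, delta):
    """Space-domain projection onto {x : ||x - y||_2 <= delta}."""
    r = x - y
    norm = np.linalg.norm(r)
    return y + r * min(1.0, delta / norm) if norm > 0 else x

def filtered_variation_denoise(y, cutoff=0.05, delta=None, n_iter=100):
    delta = 0.5 * np.linalg.norm(y) if delta is None else delta
    x = y.copy()
    for _ in range(n_iter):
        x = project_lowpass(x, cutoff)  # filter (transform) constraint
        x = project_ball(x, y, delta)   # stay close to the observation
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = filtered_variation_denoise(noisy)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```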

    On alternating direction methods for monotropic semidefinite programming

    Get PDF
    Ph.D. thesis (Doctor of Philosophy)