
    Neural Sampling by Irregular Gating Inhibition of Spiking Neurons and Attractor Networks

    A long tradition in theoretical neuroscience casts sensory processing in the brain as the process of inferring the maximally consistent interpretations of imperfect sensory input. Recently, it has been shown that gamma-band inhibition can enable neural attractor networks to approximately carry out such inference by sampling. In this paper, we propose a novel neural network model based on irregular gating inhibition, show analytically how it implements a Markov chain Monte Carlo (MCMC) sampler, and describe how it can be used to model both networks of neural attractors and networks of single spiking neurons. Finally, we show how this model, applied to spiking neurons, gives rise to a new putative mechanism that could be used to implement stochastic synaptic weights in biological neural networks and in neuromorphic hardware.
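
    To make the abstract's central idea concrete, here is a minimal sketch of neural sampling as MCMC, assuming a Boltzmann-machine-style network of binary units in which one irregularly chosen ("ungated") unit resamples its state per step. The network size, the weights W, and the biases b are illustrative assumptions; this is not the paper's irregular-gating-inhibition model.

```python
# Minimal sketch: neural sampling as MCMC in a Boltzmann-machine-style network.
# Irregular gating is modeled simply as picking a random unit to update each step;
# W, b, and the network size are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

n = 5                                   # number of binary units (hypothetical)
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                       # symmetric couplings
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.2, size=n)       # per-unit biases

def gibbs_step(s):
    """One MCMC step: a randomly 'ungated' unit resamples its state."""
    i = rng.integers(n)                 # irregular gating: random unit updates
    drive = W[i] @ s + b[i]             # net input from the rest of the network
    p_on = 1.0 / (1.0 + np.exp(-drive)) # sigmoid firing probability
    s[i] = rng.random() < p_on
    return s

# Run the chain and estimate marginal firing probabilities from the samples.
s = rng.integers(0, 2, size=n).astype(float)
samples = []
for t in range(20000):
    s = gibbs_step(s)
    if t > 2000:                        # discard burn-in
        samples.append(s.copy())

print("estimated marginals:", np.mean(samples, axis=0))
```

    Over many steps the visited states are samples from the network's stationary distribution, so marginal firing rates approximate posterior probabilities; this is the sense in which sampling implements inference.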

    Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees

    Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, and in particular in neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As most popular contemporary deep neural networks lead to nonsmooth and nonconvex objectives, there is now a pressing need for such convergence guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where the shared-memory-based model is asynchronously updated by concurrent processes. To this end, we first introduce a probabilistic model which captures key features of real asynchronous scheduling between concurrent processes; under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with the family of methods we consider is that it is not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory-based training of deep neural networks, whereby concurrent parameter servers are used to train a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve on average a 1.2x speed-up over state-of-the-art training methods on popular image classification tasks without compromising accuracy.
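
    As a rough illustration of the training setup described above, the following sketch runs block-partitioned, asynchronous momentum SGD on a shared parameter vector, with each concurrent worker owning one block of coordinates. The toy least-squares objective, the thread-based workers, and all hyperparameters are assumptions made for illustration, not the paper's implementation or its parameter-server system.

```python
# Minimal sketch: asynchronous, block-partitioned stochastic gradient training
# on a shared parameter vector. The toy regression problem, thread workers,
# block split, and hyperparameters are illustrative assumptions only.
import threading
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: linear regression on synthetic data.
d, n_samples = 20, 2000
X = rng.normal(size=(n_samples, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

w = np.zeros(d)                               # shared model, updated in place
blocks = np.array_split(np.arange(d), 4)      # block variable partitioning

def worker(block, steps=5000, lr=0.01, beta=0.9):
    """Each concurrent worker owns one block of coordinates and applies
    momentum SGD updates to it while reading a possibly stale full model."""
    local_rng = np.random.default_rng()
    v = np.zeros(len(block))                  # momentum buffer for this block
    for _ in range(steps):
        idx = local_rng.integers(0, n_samples, size=32)          # minibatch
        grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx) # full gradient
        v = beta * v + grad[block]            # momentum on this block only
        w[block] -= lr * v                    # asynchronous in-place update

threads = [threading.Thread(target=worker, args=(blk,)) for blk in blocks]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to ground truth:", np.linalg.norm(w - w_true))
```

    Because each worker writes only its own block, concurrency conflicts are confined to reads of the other, possibly stale blocks; this read-side staleness is the kind of behavior the probabilistic scheduling model in the abstract is meant to capture.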