
    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, the current state of the art among readout stages, our readout architecture and learning algorithm attain better performance with significantly fewer synaptic resources, making them attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons with multiple dendrites, each with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm searches for the best 'combination' of input connections on each branch to reduce the error. Hence, learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that, compared to a single perceptron using analog weights, this readout architecture can attain up to 3.3 times less error on a two-class spike train classification problem and 2.4 times less error on an input rate approximation task, even when using the same number of binary-valued synapses. Even with 60 times more synapses, a group of 60 parallel perceptrons cannot match the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be implemented easily by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, the proposed method is more robust against statistical variations. Comment: 14 pages, 19 figures, Journal
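    As a rough illustration of the architecture and learning rule described above, the sketch below builds a readout with a few dendritic branches of binary synapses and rewires connections greedily. The squaring branch nonlinearity, the fixed-threshold classifier, the accept-if-error-does-not-increase swap rule, and all sizes are assumptions for the example, not the authors' implementation.

```python
# Hedged sketch of a dendritically enhanced readout with network rewiring.
# The branch nonlinearity (squaring), the fixed-threshold classifier, and the
# greedy swap rule are illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
N_LIQUID, N_BRANCHES, SYN_PER_BRANCH = 100, 10, 5

# Each branch holds the indices of the liquid neurons it samples (binary synapses).
branches = [rng.choice(N_LIQUID, SYN_PER_BRANCH, replace=False)
            for _ in range(N_BRANCHES)]

def readout(x, brs):
    """Sum of lumped branch nonlinearities (squared branch input assumed)."""
    return sum(x[b].sum() ** 2 for b in brs)

def classify(x, brs, threshold):
    return readout(x, brs) > threshold

def error(X, y, brs, threshold):
    return np.mean([classify(x, brs, threshold) != t for x, t in zip(X, y)])

def rewire_step(X, y, brs, threshold):
    """Network rewiring: swap one randomly chosen synapse onto a new liquid
    neuron and keep the change only if the training error does not increase."""
    i, j = rng.integers(N_BRANCHES), rng.integers(SYN_PER_BRANCH)
    trial = [b.copy() for b in brs]
    trial[i][j] = rng.integers(N_LIQUID)
    return trial if error(X, y, trial, threshold) <= error(X, y, brs, threshold) else brs

# Toy usage: X holds liquid-state feature vectors, y holds binary class labels.
X = rng.random((50, N_LIQUID))
y = rng.integers(0, 2, 50).astype(bool)
threshold = float(np.median([readout(x, branches) for x in X]))
for _ in range(300):
    branches = rewire_step(X, y, branches, threshold)
print("training error:", error(X, y, branches, threshold))
```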

    Counting solutions from finite samplings

    We formulate the solution counting problem within the framework of the inverse Ising problem and use fast belief propagation equations to estimate the entropy, whose value provides an approximation to the true one. We test this idea on both diluted models (random 2-SAT and 3-SAT problems) and a fully connected model (the binary perceptron), and show that when the constraint density is small, the estimate can be very close to the true value. The information stored by the salamander retina under natural movie stimuli can also be estimated, and our result is consistent with that obtained by a Monte Carlo method. Of particular significance, the sizes of other metastable states of this real neuronal network are also predicted. Comment: 9 pages, 4 figures and 1 table; further discussions added
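    To make the entropy-from-belief-propagation idea concrete, here is a generic sketch that runs sum-product BP on a small pairwise Ising model and reports the Bethe entropy of the resulting beliefs. The graph, couplings, and inverse temperature are toy assumptions, and this is a textbook construction rather than the paper's inverse-Ising procedure.

```python
# Hedged sketch: sum-product BP on a small pairwise Ising model, followed by
# the Bethe entropy of the beliefs. Couplings, graph, and temperature are toy
# assumptions; no zero-field or zero-temperature specialization is attempted.
import numpy as np

rng = np.random.default_rng(1)
N, BETA = 8, 1.0
edges = [(i, (i + 1) % N) for i in range(N)] + [(0, 4), (2, 6)]   # ring + chords
J = {e: rng.normal(scale=0.5) for e in edges}
spins = np.array([-1.0, 1.0])

neighbors = {i: [] for i in range(N)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def coupling(i, j):
    return J[(i, j)] if (i, j) in J else J[(j, i)]

def psi(i, j, si, sj):                      # pair factor exp(beta * J_ij * si * sj)
    return np.exp(BETA * coupling(i, j) * si * sj)

# msgs[(i, j)] is the message from i to j, a length-2 distribution over s_j.
msgs = {(i, j): np.full(2, 0.5) for a, b in edges for (i, j) in [(a, b), (b, a)]}

for _ in range(200):                        # plain BP iterations, no damping
    new = {}
    for (i, j) in msgs:
        out = np.zeros(2)
        for bj, sj in enumerate(spins):
            for ai, si in enumerate(spins):
                prod = np.prod([msgs[(k, i)][ai] for k in neighbors[i] if k != j])
                out[bj] += psi(i, j, si, sj) * prod
        new[(i, j)] = out / out.sum()
    msgs = new

def entropy(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def belief_i(i):
    b = np.array([np.prod([msgs[(k, i)][a] for k in neighbors[i]]) for a in range(2)])
    return b / b.sum()

def belief_ij(i, j):
    b = np.zeros((2, 2))
    for a, si in enumerate(spins):
        for c, sj in enumerate(spins):
            pi = np.prod([msgs[(k, i)][a] for k in neighbors[i] if k != j])
            pj = np.prod([msgs[(k, j)][c] for k in neighbors[j] if k != i])
            b[a, c] = psi(i, j, si, sj) * pi * pj
    return b / b.sum()

# Bethe entropy: pair-belief entropies minus over-counted single-site terms.
S = sum(entropy(belief_ij(i, j)) for i, j in edges)
S += sum((1 - len(neighbors[i])) * entropy(belief_i(i)) for i in range(N))
print("Bethe entropy estimate:", S)
```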

    The jamming transition in high dimension: an analytical study of the TAP equations and the effective thermodynamic potential

    We present a parallel derivation of the Thouless-Anderson-Palmer (TAP) equations and of an effective potential for the negative perceptron and soft sphere models in high dimension. Both models are continuous constraint satisfaction problems with a critical jamming transition characterized by the same exponents. Our analysis reveals that a power expansion of the potential up to second order provides a successful framework for approaching the jamming line from the SAT phase (the region of the phase diagram where at least one configuration satisfies all the constraints), where the ground-state energy is zero. An interesting outcome is that, close to jamming, the effective thermodynamic potential acquires a logarithmic contribution, which turns out to be dominant in a proper scaling regime. Our approach is quite general and can be applied directly to other interesting models. Finally, we study the spectrum of small harmonic fluctuations in the SAT phase, recovering the typical scaling $D(\omega) \sim \omega^2$ below the cutoff frequency, but a different behavior, characterized by a non-trivial exponent, above it. Comment: 11 pages; a few typos corrected
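    For orientation, the lines below restate the standard negative-perceptron setup that this kind of analysis builds on; the notation and the exact form of the energy are assumptions for illustration and are not taken from the paper.

```latex
% Hedged restatement (assumed notation): the negative perceptron asks for a
% configuration x on the sphere |x|^2 = N whose gaps to M random constraints,
% with margin sigma < 0, are all non-negative.
\begin{align}
  h_\mu &= \frac{\boldsymbol{\xi}^{\mu}\cdot \mathbf{x}}{\sqrt{N}} - \sigma \;\ge\; 0,
  \qquad \mu = 1,\dots,M, \qquad \sigma < 0, \\
  H(\mathbf{x}) &= \tfrac{1}{2}\sum_{\mu=1}^{M} h_\mu^{2}\,\theta(-h_\mu).
\end{align}
% The SAT phase is the region where configurations with H = 0 exist; there the
% abstract's spectrum of small harmonic fluctuations behaves as
% D(\omega) \sim \omega^{2} below the cutoff frequency.
```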

    On the role of synaptic stochasticity in training low-precision neural networks

    Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes. Here we show that a neural network model with stochastic binary weights naturally gives prominence to exponentially rare dense regions of solutions with a number of desirable properties, such as robustness and good generalization performance, while typical solutions are isolated and hard to find. Binary solutions of the standard perceptron problem are obtained from a simple gradient descent procedure on a set of real values parametrizing a probability distribution over the binary synapses. Both analytical and numerical results are presented. An algorithmic extension aimed at training discrete deep neural networks is also investigated. Comment: 7 pages + 14 pages of supplementary material
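    The gradient-descent-over-a-distribution idea can be sketched as follows: each binary synapse is parametrized by a real value whose tanh gives its mean, the expected misclassification probability of a pattern is evaluated through a Gaussian approximation of the preactivation, and plain gradient descent is run on the real parameters before binarizing them. The Gaussian surrogate, the planted-teacher setup, and all hyperparameters below are assumptions for the example, not the authors' exact procedure.

```python
# Hedged sketch: binary perceptron solutions from gradient descent on real
# parameters theta defining a factorized distribution over binary weights,
# with w_i = ±1, mean m_i = tanh(theta_i), variance 1 - m_i^2. The Gaussian
# surrogate for the expected error and all hyperparameters are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N, P = 201, 120                               # number of synapses, patterns
X = rng.choice([-1.0, 1.0], size=(P, N))      # random ±1 input patterns
teacher = rng.choice([-1.0, 1.0], size=N)     # planted binary teacher
y = np.sign(X @ teacher)                      # labels to be reproduced

theta = 0.01 * rng.normal(size=N)             # real parameters over binary synapses
lr = 0.5

for step in range(2000):
    m = np.tanh(theta)                        # mean of each binary weight
    v = 1.0 - m ** 2                          # variance of each binary weight
    mu = X @ m                                # preactivation mean, per pattern
    sigma = np.sqrt((X ** 2) @ v) + 1e-9      # preactivation std, per pattern
    z = y * mu / sigma
    # Expected misclassification probability under the factorized distribution,
    # with the preactivation approximated as Gaussian (central limit).
    loss = norm.cdf(-z).mean()
    # Chain rule: d loss / d theta through z(m) and m(theta).
    pdf = norm.pdf(z)
    dz_dm = (y[:, None] * X) / sigma[:, None] \
            + (y * mu / sigma ** 3)[:, None] * (X ** 2) * m[None, :]
    grad = (-pdf[:, None] * dz_dm).mean(axis=0) * (1.0 - m ** 2)
    theta -= lr * grad

w_binary = np.sign(theta)                     # read out a binary solution
print("loss:", loss, "training error:", np.mean(np.sign(X @ w_binary) != y))
```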