On the role of synaptic stochasticity in training low-precision neural networks
Stochasticity and limited precision of synaptic weights in neural network
models are key aspects of both biological and hardware modeling of learning
processes. Here we show that a neural network model with stochastic binary
weights naturally gives prominence to exponentially rare dense regions of
solutions with a number of desirable properties such as robustness and good
generalization performance, while typical solutions are isolated and hard to
find. Binary solutions of the standard perceptron problem are obtained from a
simple gradient descent procedure on a set of real values parametrizing a
probability distribution over the binary synapses. Both analytical and
numerical results are presented. An algorithmic extension aimed at training
discrete deep neural networks is also investigated.
Comment: 7 pages + 14 pages of supplementary material
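The training idea described above lends itself to a compact illustration. The following is a minimal sketch of the general approach, not the paper's exact algorithm: each binary weight is parametrized by a real value h_i through its mean m_i = tanh(h_i), plain gradient descent is run on h for a toy perceptron problem, and the final binary weights are obtained by taking signs. The teacher-student setup, dimensions, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher-student perceptron: learn binary weights reproducing the
# labels of a random +-1 teacher on random +-1 inputs (illustrative setup).
N, P = 101, 300                      # input dimension, number of patterns
teacher = rng.choice([-1, 1], size=N)
X = rng.choice([-1, 1], size=(P, N))
y = np.sign(X @ teacher)

# Real parameters h define a product distribution over binary synapses:
# each w_i = +-1 with mean m_i = tanh(h_i).
h = 0.01 * rng.standard_normal(N)
lr = 0.05

for epoch in range(200):
    m = np.tanh(h)                   # expected binary weight
    margins = y * (X @ m)
    mis = margins < 0                # patterns violated by the mean weights
    # Perceptron-style gradient on the distribution parameters: push h to
    # fix misclassified patterns, via the chain rule dm/dh = 1 - m**2.
    grad = (y[mis, None] * X[mis]).sum(axis=0) * (1 - m**2)
    h += lr * grad

w_binary = np.sign(h)                # binarize at the end
train_acc = np.mean(np.sign(X @ w_binary) == y)
print(f"binary-weight training accuracy: {train_acc:.3f}")
```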
Dreaming neural networks: forgetting spurious memories and reinforcing pure ones
The standard Hopfield model for associative neural networks accounts for
biological Hebbian learning and acts as the harmonic oscillator for pattern
recognition; however, its maximal storage capacity is $\alpha \sim 0.14$, far
from the theoretical bound for symmetric networks, i.e. $\alpha = 1$. Inspired
by sleeping and dreaming mechanisms in mammal brains, we propose an extension
of this model displaying the standard on-line (awake) learning mechanism (that
allows the storage of external information in terms of patterns) and an
off-line (sleep) unlearning & consolidating mechanism (that allows
spurious-pattern removal and pure-pattern reinforcement): the resulting daily
prescription is able to saturate the theoretical bound $\alpha = 1$, remaining
also extremely robust against thermal noise. Neural and synaptic features
are analyzed both analytically and numerically. In particular, beyond obtaining
a phase diagram for neural dynamics, we focus on synaptic plasticity and we
give explicit prescriptions on the temporal evolution of the synaptic matrix.
We analytically prove that our algorithm makes the Hebbian kernel converge with
high probability to the projection matrix built over the pure stored patterns.
Furthermore, we obtain a sharp and explicit estimate for the "sleep rate" in
order to ensure such a convergence. Finally, we run extensive numerical
simulations (mainly Monte Carlo sampling) to check the approximations
underlying the analytical investigations (e.g., we developed the whole theory
at the so-called replica-symmetric level, as is standard in the
Amit-Gutfreund-Sompolinsky reference framework) and possible finite-size
effects, finding overall full agreement with the theory.
Comment: 31 pages, 12 figures
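As a rough numerical check of the claimed convergence, the sketch below assumes a "dreaming"-type coupling matrix J(t) = (1/N) ξᵀ(1+t)(I + tC)⁻¹ ξ, with C the pattern correlation matrix, a form consistent with the unlearning/consolidating prescription described above (the exact formula is an assumption for illustration, not a quotation of the paper). It verifies that this kernel reduces to the Hebbian matrix at sleep time t = 0 and approaches the projection matrix over the stored patterns as t grows.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 500, 50                       # neurons, stored patterns
xi = rng.choice([-1, 1], size=(K, N))

C = xi @ xi.T / N                    # pattern correlation matrix (K x K)

def dreaming_kernel(t):
    """Assumed 'dreaming' coupling matrix at sleep time t:
    J(t) = (1/N) xi^T (1+t)(I + t C)^{-1} xi.
    At t = 0 the middle factor is the identity, so J(0) is the standard
    Hebbian kernel; as t -> infinity it tends to C^{-1}."""
    M = (1 + t) * np.linalg.inv(np.eye(K) + t * C)
    return xi.T @ M @ xi / N

# Projection (pseudo-inverse) matrix built over the pure stored patterns.
J_proj = xi.T @ np.linalg.inv(C) @ xi / N

for t in (0.0, 1.0, 10.0, 1000.0):
    dist = np.linalg.norm(dreaming_kernel(t) - J_proj)
    print(f"t = {t:7.1f}   ||J(t) - J_proj|| = {dist:.4f}")
```

The printed distance shrinks monotonically with t, which is the interpolation from Hebbian to projector behavior that the abstract's convergence result describes.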
Generalized Approximate Survey Propagation for High-Dimensional Estimation
In Generalized Linear Estimation (GLE) problems, we seek to estimate a signal
that is observed through a linear transform followed by a component-wise,
possibly nonlinear and noisy, channel. In the Bayesian optimal setting,
Generalized Approximate Message Passing (GAMP) is known to achieve optimal
performance for GLE. However, its performance can significantly degrade
whenever there is a mismatch between the assumed and the true generative model,
a situation frequently encountered in practice. In this paper, we propose a new
algorithm, named Generalized Approximate Survey Propagation (GASP), for solving
GLE in the presence of prior or model mis-specifications. As a prototypical
example, we consider the phase retrieval problem, where we show that GASP
outperforms the corresponding GAMP, reducing the reconstruction threshold and,
for certain choices of its parameters, approaching Bayesian optimal
performance. Furthermore, we present a set of State Evolution equations that
exactly characterize the dynamics of GASP in the high-dimensional limit.
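To make the GLE setting concrete, here is a minimal sketch generating the prototypical instance the abstract mentions: a Gaussian signal passed through an i.i.d. Gaussian linear transform, followed by the componentwise phase retrieval channel y = |z|. The dimensions and measurement rate are illustrative choices, and no reconstruction algorithm (GAMP or GASP) is implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy Generalized Linear Estimation (GLE) instance, as described above:
# signal -> linear transform -> componentwise (possibly nonlinear) channel.
# The channel here is noiseless phase retrieval, y = |z|, which discards
# the sign/phase of each linear measurement.
n = 1000                                      # signal dimension
alpha = 2.0                                   # measurement rate m/n
m = int(alpha * n)

x0 = rng.standard_normal(n)                   # ground-truth signal
A = rng.standard_normal((m, n)) / np.sqrt(n)  # i.i.d. Gaussian sensing matrix
z = A @ x0                                    # linear stage
y = np.abs(z)                                 # componentwise channel

# A model mismatch of the kind GASP targets would be, e.g., running the
# reconstruction with a sparse prior while x0 is actually dense, or
# assuming the wrong noise level in the channel.
print(f"GLE instance: A is {A.shape}, y holds {y.size} phaseless measurements")
```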