Adiabatic evolution on a spatial-photonic Ising machine
Combinatorial optimization problems are crucial for widespread applications
but remain difficult to solve on a large scale with conventional hardware.
Novel optical platforms, known as coherent or photonic Ising machines, are
attracting considerable attention as accelerators for optimization tasks
that can be formulated as Ising models. Annealing is a well-known technique
based on adiabatic evolution for finding optimal solutions in classical and
quantum systems made of atoms, electrons, or photons. Although various Ising
machines employ annealing in some form, adiabatic computing in optical
settings has only been partially investigated. Here, we realize the
adiabatic evolution of
frustrated Ising models with 100 spins programmed by spatial light modulation.
We use holographic and optical control to change the spin couplings
adiabatically, and exploit experimental noise to explore the energy landscape.
Annealing enhances convergence to the Ising ground state and allows us to
find the problem solution with probability close to unity. Our results
demonstrate a photonic scheme for combinatorial optimization in analogy
with adiabatic quantum algorithms, enabled by optical vector-matrix
multiplications and scalable photonic technology.
Comment: 9 pages, 4 figures
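The abstract describes annealing as adiabatic evolution toward the Ising ground state, with experimental noise exploring the energy landscape. As background, here is a minimal classical sketch of that principle: Metropolis simulated annealing on a small frustrated Ising model. This is not the paper's optical scheme; the couplings, cooling schedule, and step count are all illustrative.

```python
# Minimal sketch: classical simulated annealing on a frustrated Ising model.
# The paper anneals optically via spatial light modulation; here, thermal
# noise plays the role of experimental noise in exploring the landscape.
import math
import random

def ising_energy(spins, J):
    """E = -sum_{i<j} J[i][j] * s_i * s_j (no external field)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, n_steps=20000, T0=2.0, T1=0.01, seed=0):
    """Geometric cooling from T0 to T1 with single-spin Metropolis flips."""
    rng = random.Random(seed)
    n = len(J)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for step in range(n_steps):
        T = T0 * (T1 / T0) ** (step / n_steps)  # slow cooling schedule
        i = rng.randrange(n)
        # Energy change from flipping spin i (J is symmetric)
        dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n) if j != i)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]  # accept the flip (Metropolis rule)
    return spins, ising_energy(spins, J)

# Frustrated triangle: three spins, all couplings antiferromagnetic, so no
# configuration satisfies every bond. Ground-state energy is -1.
J = [[0, -1, -1], [-1, 0, -1], [-1, -1, 0]]
spins, E = anneal(J)
```

Slow cooling is the classical analogue of the adiabatic condition: the system stays near its instantaneous equilibrium, so it ends near the ground state rather than trapped in a local minimum.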
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly
used training algorithm known as Contrastive Divergence (CD) are based on
discrete updates and exact arithmetic, which do not directly map onto a
dynamical neural substrate. Here, we present an event-driven variation of CD
to train an RBM constructed with Integrate & Fire (I&F) neurons that is
constrained by the limitations of existing and near-future neuromorphic
hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike Time Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
Comment: (Under review - …
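The abstract's event-driven method replaces the discrete steps of CD with recurrent spiking activity and STDP. As background for what is being approximated, here is a minimal sketch of standard CD-1 on a tiny binary RBM, in pure Python. This is the conventional algorithm the paper moves away from, not its spiking variant; the sizes, learning rate, and single training pattern are illustrative, and biases are held fixed for brevity.

```python
# Background sketch: one CD-1 weight update for a binary RBM. The paper's
# event-driven variant realizes the same learning signal with I&F neurons
# and STDP; this shows only the discrete, synchronous rule it approximates.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_update(v0, W, b_h, b_v, lr, rng):
    """One CD-1 update given a data vector v0. Biases are not updated."""
    n_v, n_h = len(b_v), len(b_h)
    # Positive phase: hidden activations driven by the data
    p_h0 = [sigmoid(b_h[j] + sum(W[i][j] * v0[i] for i in range(n_v)))
            for j in range(n_h)]
    h0 = [1 if rng.random() < p else 0 for p in p_h0]
    # Negative phase: one Gibbs step back to a visible reconstruction
    p_v1 = [sigmoid(b_v[i] + sum(W[i][j] * h0[j] for j in range(n_h)))
            for i in range(n_v)]
    v1 = [1 if rng.random() < p else 0 for p in p_v1]
    p_h1 = [sigmoid(b_h[j] + sum(W[i][j] * v1[i] for i in range(n_v)))
            for j in range(n_h)]
    # Weight update: lr * (<v h>_data - <v h>_reconstruction)
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * p_h0[j] - v1[i] * p_h1[j])
    return v1

rng = random.Random(0)
n_v, n_h = 6, 3
W = [[rng.gauss(0, 0.1) for _ in range(n_h)] for _ in range(n_v)]
b_h, b_v = [0.0] * n_h, [0.0] * n_v
data = [1, 1, 1, 0, 0, 0]  # a single illustrative pattern
for _ in range(200):
    cd1_update(data, W, b_h, b_v, lr=0.1, rng=rng)
```

The positive and negative phases here are the discrete steps that, in the paper's scheme, dissolve into continuous recurrent spiking, with STDP accumulating the same correlation difference online and asynchronously.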