Quantum annealing for the number partitioning problem using a tunable spin glass of ions
Exploiting quantum properties to outperform classical ways of information processing is an outstanding goal of modern physics. A promising
route is quantum simulation, which aims at implementing relevant and
computationally hard problems in controllable quantum systems. Here we
demonstrate that in a trapped ion setup, with present day technology, it is
possible to realize a spin model of the Mattis type that exhibits spin glass
phases. Remarkably, our method produces the glassy behavior without the need
for any disorder potential, just by controlling the detuning of the spin-phonon
coupling. Applying a transverse field, the system can be used to benchmark
quantum annealing strategies which aim at reaching the ground state of the spin
glass starting from the paramagnetic phase. In the vicinity of a phonon
resonance, the problem maps onto number partitioning, and instances which are
difficult to address classically can be implemented. Comment: accepted version (11 pages, 7 figures)
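The mapping referred to above is the textbook reduction of number partitioning to an Ising ground-state search: assign a spin s_j = +1 or -1 to each number n_j and minimize E(s) = (sum_j n_j s_j)^2, which vanishes exactly when the two subsets balance. The brute-force Python sketch below (an illustrative instance, not the paper's trapped-ion implementation) simply spells out this cost function.

# Minimal sketch of the standard mapping from number partitioning to an
# Ising-type cost function E(s) = (sum_j n_j * s_j)^2 with s_j = +/-1.
# Brute force over all spin configurations; illustrative only, exponential in N.
from itertools import product

def partition_energy(numbers, spins):
    """Squared signed sum; zero iff the two subsets balance exactly."""
    return sum(n * s for n, s in zip(numbers, spins)) ** 2

def best_partition(numbers):
    best = None
    for spins in product((-1, 1), repeat=len(numbers)):
        e = partition_energy(numbers, spins)
        if best is None or e < best[0]:
            best = (e, spins)
    return best

if __name__ == "__main__":
    numbers = [8, 7, 6, 5, 4]          # example instance (not from the paper)
    energy, spins = best_partition(numbers)
    print("residual energy:", energy, "assignment:", spins)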
Learned Belief-Propagation Decoding with Simple Scaling and SNR Adaptation
We consider the weighted belief-propagation (WBP) decoder recently proposed
by Nachmani et al. where different weights are introduced for each Tanner graph
edge and optimized using machine learning techniques. Our focus is on
simple-scaling models that use the same weights across certain edges to reduce
the storage and computational burden. The main contribution is to show that
simple scaling with few parameters often achieves the same gain as the full
parameterization. Moreover, several training improvements for WBP are proposed.
For example, it is shown that minimizing average binary cross-entropy is
suboptimal in general in terms of bit error rate (BER) and a new "soft-BER"
loss is proposed which can lead to better performance. We also investigate
parameter adapter networks (PANs) that learn the relation between the
signal-to-noise ratio and the WBP parameters. As an example, for the (32,16)
Reed-Muller code with a highly redundant parity-check matrix, training a PAN
with soft-BER loss gives near-maximum-likelihood performance assuming simple
scaling with only three parameters. Comment: 5 pages, 5 figures, submitted to ISIT 201
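As a rough illustration of why the choice of loss matters, the Python sketch below contrasts average binary cross-entropy with one plausible sign-based "soft-BER" surrogate evaluated on decoder output LLRs. The exact loss, weights, and training setup used in the paper may differ; all names here are assumptions.

# Sketch contrasting average binary cross-entropy with a "soft-BER"-style loss
# on decoder soft outputs (LLRs), using the convention LLR = log P(0)/P(1).
# The exact loss used in the paper may differ; sigmoid(-llr * sign) is taken
# here as a smooth surrogate for a hard bit error.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(llr, bits):
    """Average binary cross-entropy of P(bit = 1) = sigmoid(-llr) vs the true bits."""
    p1 = sigmoid(-llr)
    return -np.mean(bits * np.log(p1 + 1e-12) + (1 - bits) * np.log(1 - p1 + 1e-12))

def soft_ber_loss(llr, bits):
    """Smooth surrogate for the bit error rate: close to 1 when the LLR sign is wrong."""
    signs = 1.0 - 2.0 * bits           # bit 0 -> +1, bit 1 -> -1
    return np.mean(sigmoid(-llr * signs))

if __name__ == "__main__":
    llr = np.array([4.0, -0.5, 2.0, -3.0])   # toy decoder outputs
    bits = np.array([0, 0, 1, 1])            # true transmitted bits
    print("BCE:", bce_loss(llr, bits), "soft-BER:", soft_ber_loss(llr, bits))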
Neuron as a reward-modulated combinatorial switch and a model of learning behavior
This paper proposes a neuronal circuitry layout and synaptic plasticity
principles that allow the (pyramidal) neuron to act as a "combinatorial
switch". Namely, the neuron learns to be more prone to generate spikes given
those combinations of firing input neurons for which a previous spiking of the
neuron had been followed by a positive global reward signal. The reward signal
may be mediated by certain modulatory hormones or neurotransmitters, e.g., dopamine. More generally, a trial-and-error learning paradigm is suggested in
which a global reward signal triggers long-term enhancement or weakening of a
neuron's spiking response to the preceding neuronal input firing pattern. Thus,
rewards provide a feedback pathway that informs neurons whether their spiking
was beneficial or detrimental for a particular input combination. The neuron's
ability to discern specific combinations of firing input neurons is achieved
through a random or predetermined spatial distribution of input synapses on
dendrites that creates synaptic clusters that represent various permutations of
input neurons. The corresponding dendritic segments, or the enclosed individual spines, can become particularly excited, due to local sigmoidal thresholding involving voltage-gated channel conductances, when the segment's excitatory inputs are temporally coincident and its inhibitory inputs are absent. Such
nonlinear excitation corresponds to a particular firing combination of input
neurons, and it is posited that the excitation strength encodes the
combinatorial memory and is regulated by long-term plasticity mechanisms. It is
also suggested that the spine calcium influx that may result from the
spatiotemporal synaptic input coincidence may cause the spine head actin
filaments to undergo mechanical (muscle-like) contraction, with the ensuing
cytoskeletal deformation transmitted to the axon initial segment where it
may... Comment: Version 5: added computer code in the ancillary files section
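The trial-and-error rule described above can be caricatured in a few lines: each stored combination of firing inputs (a "synaptic cluster") carries an excitability that a global reward signal strengthens or weakens after a spike. The Python sketch below is only a toy of that idea; the class, update rule, and parameters are illustrative assumptions, not the paper's model.

# Toy sketch of a reward-gated "combinatorial switch": a global reward that
# follows a spike adjusts the excitability of only the cluster matching the
# preceding input pattern. Illustrative assumptions, not the paper's model.
import random

class CombinatorialSwitchNeuron:
    def __init__(self, lr=0.2):
        self.excitability = {}         # input combination -> spiking propensity
        self.lr = lr

    def respond(self, inputs):
        """Spike probabilistically according to the stored excitability."""
        key = frozenset(inputs)
        w = self.excitability.setdefault(key, 0.5)
        return random.random() < w

    def reward(self, inputs, spiked, r):
        """Reward after a spike strengthens/weakens the matching cluster only."""
        if spiked:
            key = frozenset(inputs)
            w = self.excitability[key] + self.lr * r
            self.excitability[key] = min(1.0, max(0.0, w))

if __name__ == "__main__":
    neuron = CombinatorialSwitchNeuron()
    rewarded = frozenset({"A", "C"})   # only this input combination earns reward
    for _ in range(500):
        pattern = random.choice([{"A", "C"}, {"A", "B"}, {"B", "C"}])
        spiked = neuron.respond(pattern)
        r = 1.0 if frozenset(pattern) == rewarded else -0.5
        neuron.reward(pattern, spiked, r)
    print(neuron.excitability)         # the rewarded combination should end up near 1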
The effect of negative feedback loops on the dynamics of Boolean networks
Feedback loops in a dynamic network play an important role in determining the
dynamics of that network. Through a computational study, in this paper we show
that networks with fewer independent negative feedback loops tend to exhibit
more regular behavior than those with more negative loops. To be precise, we
study the relationship between the number of independent feedback loops and the
number and length of the limit cycles in the phase space of dynamic Boolean
networks. We show that, as the number of independent negative feedback loops
increases, the number (length) of limit cycles tends to decrease (increase).
These conclusions are consistent with the observation that certain natural biological networks, on the one hand, exhibit generally regular behavior and, on the other hand, contain fewer negative feedback loops than randomized networks with the same number of nodes and connectivity.
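The kind of computational study described here can be reproduced in miniature: exhaustively iterate a small synchronous Boolean network over its 2^n states and record the number and length of its limit cycles. The Python sketch below uses an illustrative three-node network, not the networks analyzed in the paper.

# Minimal sketch: enumerate the state space of a small synchronous Boolean
# network and report its limit cycles (count and length). Toy example only.
from itertools import product

def step(state, rules):
    """Synchronous update: every node applies its Boolean rule to the full state."""
    return tuple(rule(state) for rule in rules)

def limit_cycles(rules, n):
    cycles = set()
    for start in product((0, 1), repeat=n):
        seen, state = {}, start
        while state not in seen:
            seen[state] = len(seen)
            state = step(state, rules)
        cycle_len = len(seen) - seen[state]
        # identify the cycle by its lexicographically smallest state
        cyc, s = [], state
        for _ in range(cycle_len):
            cyc.append(s)
            s = step(s, rules)
        cycles.add((cycle_len, min(cyc)))
    return cycles

if __name__ == "__main__":
    # three-node example containing a negative feedback loop: x0 -> x1 -> x2 -| x0
    rules = [lambda s: 1 - s[2], lambda s: s[0], lambda s: s[1]]
    for length, rep in sorted(limit_cycles(rules, 3)):
        print("cycle of length", length, "containing state", rep)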
Meta-learning computational intelligence architectures
In computational intelligence, the term 'memetic algorithm' has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a 'meme' has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate: 'memetic algorithm' is too specific, and ultimately a misnomer, while 'meme' is defined too generally to be of scientific use. In this dissertation the notion of memes and meta-learning is extended from a computational viewpoint, and the purpose, definitions, design guidelines, and architecture for effective meta-learning are explored. The background and structure of meta-learning architectures are discussed, incorporating viewpoints from psychology, sociology, computational intelligence, and engineering. The benefits and limitations of meme-based learning are demonstrated through two experimental case studies -- Meta-Learning Genetic Programming and Meta-Learning Traveling Salesman Problem Optimization. Additionally, the development and properties of several new algorithms are detailed, inspired by the previous case studies. With applications ranging from cognitive science to machine learning, meta-learning has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher order learning --Abstract, page iii
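For readers unfamiliar with the pairing the abstract refers to, the Python sketch below shows the bare global-plus-local structure commonly called a memetic algorithm: a toy genetic algorithm whose offspring are refined by bit-flip hill climbing. The objective, operators, and parameters are illustrative only and are not taken from the dissertation.

# Minimal sketch of the global-plus-local pairing described above: a toy
# genetic algorithm whose offspring are refined by greedy bit-flip hill
# climbing (the "local search" / individual-learning step). Toy problem only.
import random

def fitness(bits):                     # toy objective: maximize the number of 1s
    return sum(bits)

def local_search(bits):
    """Greedy bit-flip hill climbing applied to one individual."""
    bits = bits[:]
    for i in range(len(bits)):
        flipped = bits[:]
        flipped[i] ^= 1
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return bits

def memetic_algorithm(n_bits=20, pop_size=10, generations=30):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)              # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                      # mutation
                child[random.randrange(n_bits)] ^= 1
            children.append(local_search(child))           # local refinement
        pop = children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = memetic_algorithm()
    print("best fitness:", fitness(best))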
The Dynamic Phase Transition for Decoding Algorithms
The state-of-the-art error correcting codes are based on large random
constructions (random graphs, random permutations, ...) and are decoded by
linear-time iterative algorithms. Because of these features, they are
remarkable examples of diluted mean-field spin glasses, both from the static
and from the dynamic points of view. We analyze the behavior of decoding
algorithms using the mapping onto statistical-physics models. This allows us to understand the intrinsic (i.e., algorithm-independent) features of this behavior. Comment: 40 pages, 29 eps figures
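As a concrete reminder of what linear-time iterative decoding looks like in its simplest form, the Python sketch below runs a Gallager-style bit-flipping decoder on a toy parity-check matrix over a binary symmetric channel; the large random code constructions and the message-passing dynamics analyzed in the paper are considerably richer.

# Minimal sketch of a linear-time iterative decoder: bit flipping on a small
# parity-check matrix. The matrix and channel are toy examples, not the large
# random constructions analyzed in the paper.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(received, H, max_iters=20):
    word = received.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                # all parity checks satisfied
        # flip the bit involved in the largest number of unsatisfied checks
        unsatisfied = H[syndrome == 1].sum(axis=0)
        word[np.argmax(unsatisfied)] ^= 1
    return word

if __name__ == "__main__":
    codeword = np.zeros(6, dtype=int)          # the all-zero codeword is always valid
    received = codeword.copy()
    received[2] ^= 1                           # single channel flip
    print("decoded:", bit_flip_decode(received, H))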