Mapping constrained optimization problems to quantum annealing with application to fault diagnosis
Current quantum annealing (QA) hardware suffers from practical limitations
such as finite temperature, sparse connectivity, small qubit numbers, and
control error. We propose new algorithms for mapping Boolean constraint
satisfaction problems (CSPs) onto QA hardware, mitigating these limitations. In
particular we develop a new embedding algorithm for mapping a CSP onto a
hardware Ising model with a fixed sparse set of interactions, and propose two
new decomposition algorithms for solving problems too large to map directly
into hardware.
The mapping technique is locally structured, as hardware-compatible Ising
models are generated for each problem constraint, and variables appearing in
different constraints are chained together using ferromagnetic couplings. In
contrast, global embedding techniques generate a hardware-independent Ising
model for all the constraints, and then use a minor-embedding algorithm to
generate a hardware-compatible Ising model. We give an example of a class of
CSPs for which the scaling performance of D-Wave's QA hardware using the local
mapping technique is significantly better than global embedding.
We validate the approach by applying D-Wave's hardware to circuit-based
fault diagnosis. For circuits that embed directly, we find that the hardware is
typically able to find all solutions from a min-fault diagnosis set of size N
using 1000N samples, at an annealing rate 25 times faster than a
leading SAT-based sampling method. Further, we apply the decomposition
algorithms to find min-cardinality faults for circuits up to 5 times larger
than can be solved directly on current hardware.
Comment: 22 pages, 4 figures
Inferring hidden states in Langevin dynamics on large networks: Average case performance
We present average performance results for dynamical inference problems in
large networks, where a set of nodes is hidden while the time trajectories of
the others are observed. Examples of this scenario can occur in signal
transduction and gene regulation networks. We focus on the linear stochastic
dynamics of continuous variables interacting via random Gaussian couplings of
generic symmetry. We analyze the inference error, given by the variance of the
posterior distribution over hidden paths, in the thermodynamic limit and as a
function of the system parameters and the ratio {\alpha} between the number of
hidden and observed nodes. By applying Kalman filter recursions we find that
the posterior dynamics is governed by an "effective" drift that incorporates
the effect of the observations. We present two approaches for characterizing
the posterior variance that allow us to tackle, respectively, equilibrium and
nonequilibrium dynamics. The first appeals to Random Matrix Theory and reveals
average spectral properties of the inference error and typical posterior
relaxation times; the second is based on dynamical functionals and yields the
inference error as the solution of an algebraic equation.
Comment: 20 pages, 5 figures
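The Kalman filter recursions mentioned above can be sketched on a small discretized linear system. All parameters below (system size, coupling scale, noise levels) are hypothetical; the paper's analysis is in the thermodynamic limit, while this only shows how the posterior variance of the hidden nodes falls out of the predict/update recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_obs, T, dt = 6, 4, 200, 0.1

# Discretized linear dynamics x_{t+1} = A x_t + noise, with random
# Gaussian couplings; the first n_obs components are observed.
A = np.eye(N) + dt * (rng.normal(size=(N, N)) / np.sqrt(N) - np.eye(N))
Q = dt * np.eye(N)                 # dynamics noise covariance
C = np.eye(N)[:n_obs]              # observation matrix: first n_obs nodes
R = 1e-4 * np.eye(n_obs)           # small observation noise

# Simulate one trajectory and record the observations.
x, ys = np.zeros(N), []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(N), Q)
    ys.append(C @ x + rng.multivariate_normal(np.zeros(n_obs), R))

# Kalman filter: the posterior covariance P tracks the inference error.
m, P = np.zeros(N), np.eye(N)
for y in ys:
    m, P = A @ m, A @ P @ A.T + Q            # predict
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
    m = m + K @ (y - C @ m)                  # update mean
    P = P - K @ C @ P                        # update covariance

hidden_var = np.diag(P)[n_obs:]   # posterior variance of the hidden nodes
```

The observed nodes' posterior variance collapses to the observation-noise scale, while the hidden nodes retain a finite variance set by the dynamics noise and their couplings to observed nodes, which is the quantity the paper characterizes as a function of the hidden-to-observed ratio.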
On the number of limit cycles in diluted neural networks
We consider the storage properties of temporal patterns, i.e. cycles of
finite lengths, in neural networks represented by (generally asymmetric) spin
glasses defined on random graphs. Inspired by the observation that dynamics on
sparse systems have more basins of attraction than the dynamics of densely
connected ones, we consider the attractors of a greedy dynamics in sparse
topologies as a proxy for the stored memories. We enumerate them
using numerical simulations and extend the analysis to large system sizes using
belief propagation. We find that the logarithm of the number of such cycles is
a non-monotonic function of the mean connectivity, and we discuss the
similarities with models of biological neural networks describing the memory capacity of
the hippocampus.
Comment: 10 pages, 11 figures
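The enumeration step can be sketched by brute force for very small systems (the belief-propagation extension is not shown). This is an illustrative sketch, not the paper's code: it runs parallel greedy (zero-temperature) dynamics s_i(t+1) = sign(sum_j J_ij s_j(t)) on a sparse random graph with asymmetric couplings, with the tie-breaking convention sign(0) = +1 as an assumption, and counts the distinct limit cycles reached from all initial conditions.

```python
import itertools
import random

def make_sparse_couplings(n, c, rng):
    """Asymmetric +-1 couplings: each ordered pair present with prob c/n."""
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < c / n:
                J[i][j] = rng.choice([-1.0, 1.0])
    return J

def step(J, s):
    """One parallel greedy update; ties broken as sign(0) = +1 (assumption)."""
    return tuple(1 if sum(J[i][j] * s[j] for j in range(len(s))) >= 0 else -1
                 for i in range(len(s)))

def count_cycles(J, n):
    """Enumerate distinct limit cycles over all 2^n initial conditions."""
    cycles = set()
    for s0 in itertools.product([-1, 1], repeat=n):
        seen, s = set(), s0
        while s not in seen:          # iterate until a state repeats
            seen.add(s)
            s = step(J, s)
        cyc, t = [], s                # collect the cycle starting from s
        while True:
            cyc.append(t)
            t = step(J, t)
            if t == s:
                break
        # canonical representative: lexicographically smallest rotation
        cycles.add(min(tuple(cyc[k:] + cyc[:k]) for k in range(len(cyc))))
    return len(cycles)

rng = random.Random(1)
n = 8
J = make_sparse_couplings(n, 3.0, rng)   # mean connectivity c = 3 (illustrative)
n_cycles = count_cycles(J, n)            # number of distinct attractor cycles
```

Averaging log(n_cycles) over coupling samples while sweeping the mean connectivity c is what reveals the non-monotonic behavior; brute force is only feasible for tiny n, which is why the large-size analysis relies on belief propagation.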