Geometrical organization of solutions to random linear Boolean equations
The random XORSAT problem deals with large random linear systems of Boolean
variables. The difficulty of such problems is controlled by the ratio of number
of equations to number of variables. It is known that in some range of values
of this parameter, the space of solutions breaks into many disconnected
clusters. Here we study precisely the corresponding geometrical organization.
In particular, the distribution of distances between these clusters is computed
by the cavity method. This allows us to study the `x-satisfiability'
threshold: the critical density of equations at which there exist two solutions
at a given distance.
Comment: 20 pages
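As a minimal illustration of the setting (not the cavity-method analysis itself), the sketch below generates a random k-XORSAT instance and decides satisfiability by Gaussian elimination over GF(2); all function names are illustrative.

```python
import numpy as np

def random_xorsat(n, m, k=3, seed=None):
    """Random k-XORSAT: m parity checks over n Boolean variables.
    Row i of A marks the k variables in equation i; b holds the parities."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.uint8)
    for row in A:
        row[rng.choice(n, size=k, replace=False)] = 1
    b = rng.integers(0, 2, size=m, dtype=np.uint8)
    return A, b

def is_satisfiable(A, b):
    """Gaussian elimination over GF(2): the system is satisfiable iff no
    row reduces to the contradiction 0 = 1."""
    A, b = A.copy(), b.copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        b[[row, pivot]] = b[[pivot, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]   # XOR is addition mod 2
                b[r] ^= b[row]
        row += 1
    return not any(b[r] and not A[r].any() for r in range(m))
```

Sweeping the ratio m/n and averaging `is_satisfiable` over many instances reproduces the sharp satisfiability transition this abstract refers to.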
Tailoring surface codes for highly biased noise
The surface code, with a simple modification, exhibits ultra-high error
correction thresholds when the noise is biased towards dephasing. Here, we
identify features of the surface code responsible for these ultra-high
thresholds. We provide strong evidence that the threshold error rate of the
surface code tracks the hashing bound exactly for all biases, and show how to
exploit these features to achieve significant improvement in logical failure
rate. First, we consider the infinite bias limit, meaning pure dephasing. We
prove that the error threshold of the modified surface code for pure dephasing
noise is 50%, i.e., that all qubits are fully dephased, and that this threshold
can be achieved by a polynomial-time decoding algorithm. We demonstrate that
the sub-threshold behavior of the code depends critically on the precise shape
and boundary conditions of the code. That is, for rectangular surface codes
with standard rough/smooth open boundaries, it is controlled by the parameter
g = gcd(j, k), where j and k are the dimensions of the surface code lattice. We
demonstrate a significant improvement in logical failure rate with pure
dephasing for co-prime codes, which have g = 1, and for closely related rotated
codes, which have a modified boundary. The effect is dramatic: the same logical
failure rate achievable with a square surface code and n physical qubits can
be obtained with a co-prime or rotated surface code using only O(√n)
physical qubits. Finally, we use approximate maximum likelihood decoding to
demonstrate that this improvement persists for a general Pauli noise biased
towards dephasing. In particular, comparing with a square surface code, we
observe a significant improvement in logical failure rate against biased noise
using a rotated surface code with approximately half the number of physical
qubits.
Comment: 18+4 pages, 24 figures; v2 includes an additional coauthor (ASD) and
new results on the performance of surface codes in the finite-bias regime,
obtained with beveled surface codes and an improved tensor network decoder;
v3: published version
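The bias parameterization used in this line of work can be made concrete with a short sketch. It assumes the standard convention pX = pY with bias eta = pZ/(pX + pY), so eta = 1/2 recovers depolarizing noise and eta → ∞ is pure dephasing; the function names are illustrative.

```python
import numpy as np

def biased_pauli_probs(p, eta):
    """Split a total error rate p into (pX, pY, pZ) with bias
    eta = pZ / (pX + pY), assuming pX = pY."""
    p_xy = p / (eta + 1)
    return p_xy / 2, p_xy / 2, eta * p / (eta + 1)

def sample_errors(n, p, eta, seed=None):
    """Draw i.i.d. single-qubit Pauli errors ('I','X','Y','Z') on n qubits."""
    rng = np.random.default_rng(seed)
    px, py, pz = biased_pauli_probs(p, eta)
    return rng.choice(list("IXYZ"), size=n, p=[1 - p, px, py, pz])
```

At large eta almost every error drawn this way is a Z, which is the regime where the tailored surface code performs so well.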
Study of fault tolerant software technology for dynamic systems
The major aim of this study is to investigate the feasibility of using systems-based failure detection, isolation, and compensation (FDIC) techniques in building fault-tolerant software, and of extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated: possible system and version instabilities and the functional performance degradation that may occur when N-version programming is applied to flight software are illustrated, and a comparative analysis of N-version and recovery-block techniques in the context of generic blocks in flight software is presented.
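The recovery-block technique compared in this study follows a simple pattern: run the primary version, validate its result with an acceptance test, and fall back to alternate versions on failure. The sketch below is a generic illustration, not the study's implementation; the square-root acceptance test in the usage example is purely illustrative.

```python
def recovery_block(inputs, primary, alternates, acceptance_test):
    """Recovery-block scheme: run the primary version; if its result fails
    the acceptance test (or it crashes), try each alternate in turn."""
    for version in [primary, *alternates]:
        try:
            result = version(inputs)
        except Exception:
            continue  # a crashed version counts as a failed attempt
        if acceptance_test(inputs, result):
            return result
    raise RuntimeError("all versions failed the acceptance test")
```

A systems-based consistency check, as advocated in the study, would slot in as `acceptance_test`; the abstract's point is that such checks can be easier to write than specification-based acceptance tests.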
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum
error correction as it allows the decoder to adapt to the error distribution of
a quantum chip. An additional motivation for using neural networks is the fact
that they can be evaluated by dedicated hardware, which is very fast and
consumes little power. Machine learning has previously been applied to decode
the surface code. However, these approaches are not scalable, as the training
has to be redone for every system size, which becomes increasingly difficult. In
this work, the existence of local decoders for higher-dimensional codes leads us
to use a low-depth convolutional neural network to locally assign a likelihood
of error on each qubit. For noiseless syndrome measurements, numerical
simulations show that the decoder has a threshold of around when
applied to the 4D toric code. When the syndrome measurements are noisy, the
decoder performs better for larger code sizes when the error probability is
low. We also give a theoretical and numerical analysis showing how a
convolutional neural network differs from the 1-nearest-neighbor algorithm, a
baseline machine learning method.
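The 1-nearest-neighbor baseline against which the convolutional network is compared can be sketched as a lookup over stored (syndrome, correction) pairs; the data layout below is an assumption for illustration, not the paper's setup.

```python
import numpy as np

def nn1_predict(train_syndromes, train_labels, syndrome):
    """1-nearest-neighbour baseline: return the stored label whose
    syndrome is closest in Hamming distance to the observed one."""
    dists = (train_syndromes != syndrome).sum(axis=1)
    return train_labels[int(np.argmin(dists))]
```

Unlike a low-depth convolutional network, this memorizes training syndromes rather than exploiting locality, which is the distinction the paper's analysis draws out.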
Statistical Physics of Irregular Low-Density Parity-Check Codes
Low-density parity-check codes with irregular constructions have recently been
shown to outperform the most advanced error-correcting codes to date.
In this paper we apply methods of statistical physics to study the typical
properties of simple irregular codes.
We use the replica method to find a phase transition which coincides with
Shannon's coding bound when appropriate parameters are chosen.
The decoding by belief propagation is also studied using statistical physics
arguments; the theoretical solutions obtained are in good agreement with
simulations. We compare the performance of irregular with that of regular codes
and discuss the factors that contribute to the improvement in performance.
Comment: 20 pages, 9 figures, revised version submitted to JP
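The belief-propagation decoding studied here can be sketched as a generic sum-product implementation over a parity-check matrix (an illustration, not the paper's code); messages are kept as log-likelihood ratios, with positive values favoring bit 0.

```python
import numpy as np

def bp_decode(H, llr, iters=20):
    """Sum-product belief propagation for a binary parity-check code.
    H is the (m, n) parity-check matrix; llr holds the channel
    log-likelihood ratios. Returns hard decisions after `iters` rounds
    of check/variable message passing (iters >= 1)."""
    H = H.astype(float)
    M = H * llr                          # variable-to-check messages
    for _ in range(iters):
        # check-to-variable update: tanh rule with leave-one-out products
        T = np.tanh(np.clip(M, -30, 30) / 2)
        T = np.where(H == 1, np.where(np.abs(T) < 1e-12, 1e-12, T), 1.0)
        ext = T.prod(axis=1, keepdims=True) / T
        E = 2 * np.arctanh(np.clip(ext, -0.999999, 0.999999)) * H
        # variable-to-check update: channel LLR plus other incoming messages
        total = llr + E.sum(axis=0)
        M = H * (total - E)
    return (total < 0).astype(int)
```

For the 3-bit repetition code with checks [[1,1,0],[0,1,1]] and channel LLRs [+2, -1, +2], the two agreeing bits outvote the flipped one and the decoder returns the all-zero codeword.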
Learning Curves for Mutual Information Maximization
An unsupervised learning procedure based on maximizing the mutual information
between the outputs of two networks receiving different but statistically
dependent inputs is analyzed (Becker and Hinton, Nature 355, 161 (1992)). For a
generic data model, I show that in the large sample limit the structure in the
data is recognized by mutual information maximization. For a more restricted
model, where the networks are similar to perceptrons, I calculate the learning
curves for zero-temperature Gibbs learning. These show that convergence can be
rather slow, and a way of regularizing the procedure is considered.
Comment: 13 pages, to appear in Phys.Rev.
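The objective can be illustrated with a plug-in estimate of mutual information: two "perceptrons" with aligned weights, viewing noisy copies of the same data, produce highly informative output pairs, while a mismatched pair does not. The toy setup below is an assumption for illustration, not the paper's model.

```python
import numpy as np

def empirical_mi(a, b):
    """Plug-in estimate (in nats) of the mutual information between two
    binary sequences, from their joint histogram."""
    joint = np.zeros((2, 2))
    for x, y in zip(a, b):
        joint[x, y] += 1
    joint /= joint.sum()
    px, py = joint.sum(1), joint.sum(0)
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log(joint[i, j] / (px[i] * py[j]))
    return mi

# two perceptron-like units seeing noisy views of the same latent signal
rng = np.random.default_rng(0)
s = rng.standard_normal((5000, 10))           # shared structure
x1 = s + 0.5 * rng.standard_normal(s.shape)   # input to network 1
x2 = s + 0.5 * rng.standard_normal(s.shape)   # input to network 2
w = rng.standard_normal(10)                   # aligned weights find the structure
out1 = (x1 @ w > 0).astype(int)
out2 = (x2 @ w > 0).astype(int)
w_bad = rng.standard_normal(10)               # mismatched second network
out2_bad = (x2 @ w_bad > 0).astype(int)
```

Maximizing `empirical_mi(out1, out2)` over the second network's weights drives it toward the shared structure, which is the learning procedure whose sample-size behavior the paper analyzes.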
Quantum XOR Games
We introduce quantum XOR games, a model of two-player one-round games that
extends the model of XOR games by allowing the referee's questions to the
players to be quantum states. We give examples showing that quantum XOR games
exhibit a wide range of behaviors that are known not to exist for standard XOR
games, such as cases in which the use of entanglement leads to an arbitrarily
large advantage over the use of no entanglement. By invoking two deep
extensions of Grothendieck's inequality, we present an efficient algorithm that
gives a constant-factor approximation to the best performance players can
obtain in a given game, both in case they have no shared entanglement and in
case they share unlimited entanglement. As a byproduct of the algorithm we
prove some additional interesting properties of quantum XOR games, such as the
fact that sharing a maximally entangled state of arbitrary dimension gives only
a small advantage over having no entanglement at all.
Comment: 43 pages
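For a standard XOR game (classical questions, the baseline that quantum XOR games generalize), the best classical winning probability can be found by brute force over deterministic strategies. The sketch below uses the textbook CHSH predicate, whose classical value is 3/4 while quantum strategies with shared entanglement reach cos²(π/8) ≈ 0.854.

```python
from itertools import product

def classical_xor_value(pred, n_questions=2):
    """Best classical winning probability of an XOR game, by brute force
    over deterministic strategies a(x), b(y). pred(x, y) is the parity
    the players' answers must XOR to; questions are uniform."""
    best = 0.0
    qs = range(n_questions)
    for a in product((0, 1), repeat=n_questions):
        for b in product((0, 1), repeat=n_questions):
            wins = sum((a[x] ^ b[y]) == pred(x, y) for x in qs for y in qs)
            best = max(best, wins / n_questions ** 2)
    return best

chsh = lambda x, y: x & y   # CHSH: answers must XOR to x AND y
```

The quantum XOR games of this paper replace the classical questions x, y with quantum states, which is what opens the gap the abstract describes; computing their value requires the Grothendieck-type machinery, not this brute force.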
Simplifying Random Satisfiability Problem by Removing Frustrating Interactions
How can we remove some interactions in a constraint satisfaction problem
(CSP) such that it still remains satisfiable? In this paper we study a modified
survey propagation algorithm that enables us to address this question for a
prototypical CSP, the random K-satisfiability problem. The average number of
removed interactions is controlled by a tuning parameter in the algorithm. If
the original problem is satisfiable, then we are able to construct satisfiable
subproblems ranging from the original one down to a minimal one with the minimum
possible number of interactions. The minimal satisfiable subproblems directly
provide the solutions of the original problem.
Comment: 21 pages, 16 figures
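The basic object here can be illustrated with a deliberately naive sketch: generate a random K-SAT instance and, given any assignment, remove exactly the clauses it violates, leaving a satisfiable subproblem by construction. The paper's modified survey propagation chooses which interactions to remove far more economically; everything below is illustrative.

```python
import random

def random_ksat(n, m, k=3, seed=0):
    """Random k-SAT: m clauses over variables 1..n.
    A negative literal -v means 'v is False'."""
    rng = random.Random(seed)
    return [
        [v * rng.choice((1, -1)) for v in rng.sample(range(1, n + 1), k)]
        for _ in range(m)
    ]

def satisfiable_subproblem(clauses, assignment):
    """Keep only the clauses satisfied by `assignment` (dict var -> bool).
    The result is satisfiable by construction: `assignment` solves it."""
    sat = lambda clause: any(assignment[abs(l)] == (l > 0) for l in clause)
    kept = [c for c in clauses if sat(c)]
    removed = len(clauses) - len(kept)
    return kept, removed
```

The interesting question, addressed in the paper, is how few clauses must be removed: a good choice of which interactions to drop yields subproblems much closer to the original than this assignment-first construction.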
Parameter likelihood of intrinsic ellipticity correlations
The subject of this paper is the statistical properties of ellipticity
alignments between galaxies induced by their coupled angular momenta. Starting
from physical angular momentum models, we bridge the gap towards ellipticity
correlations, ellipticity spectra and derived quantities such as aperture
moments, comparing the intrinsic signals with those generated by gravitational
lensing, with the projected galaxy sample of EUCLID in mind. We investigate the
dependence of intrinsic ellipticity correlations on cosmological parameters and
show that intrinsic ellipticity correlations give rise to non-Gaussian
likelihoods as a result of nonlinear functional dependencies. Comparing
intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude
of their contaminating effect on the estimation of cosmological parameters and
find that biases on dark energy parameters are very small in an
angular-momentum based model in contrast to the linear alignment model commonly
used. Finally, we quantify whether intrinsic ellipticities can be measured in
the presence of the much stronger weak lensing induced ellipticity
correlations, if prior knowledge on a cosmological model is assumed.Comment: 14 pages, 8 figures, submitted to MNRA