42,087 research outputs found
A Google-inspired error-correcting graph matching algorithm
Graphs and graph algorithms are applied in many different areas, including
civil engineering, telecommunications, bioinformatics, and software engineering.
While exact graph matching is grounded in a consolidated theory and
has well-known results, approximate graph matching is still an open research
subject.
This paper presents an error-tolerant approximate graph matching algorithm
based on tabu search using the Google-like PageRank algorithm. We report preliminary
results obtained on two graph benchmarks. The first is the TC-15
database [14], a graph database from the University of Naples, Italy. These graphs
are limited to exact matching. The second is a novel data set of large graphs
generated by randomly mutating TC-15 graphs in order to evaluate the performance
of our algorithm. Such a mutation approach allows us to gain insight not
only into running time but also into matching accuracy.
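The abstract gives no implementation details; as a rough, hypothetical sketch of the core idea of using PageRank scores as node signatures for approximate matching (the paper's tabu-search refinement is omitted, and the function names are invented for illustration):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on an adjacency dict {node: [neighbours]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if not out:  # dangling node: spread its rank uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in out:
                    new[u] += damping * rank[v] / len(out)
        rank = new
    return rank

def match_by_rank(adj_a, adj_b):
    """Greedily pair nodes of two same-size graphs by sorted PageRank score."""
    ra = sorted(pagerank(adj_a).items(), key=lambda kv: kv[1])
    rb = sorted(pagerank(adj_b).items(), key=lambda kv: kv[1])
    return {a: b for (a, _), (b, _) in zip(ra, rb)}

# Two isomorphic 3-node paths should be matched consistently.
g1 = {0: [1], 1: [0, 2], 2: [1]}
g2 = {'x': ['y'], 'y': ['x', 'z'], 'z': ['y']}
print(match_by_rank(g1, g2)[1])   # the centre maps to the centre: 'y'
```

An error-tolerant matcher would then locally improve this initial assignment, e.g. with tabu search as the paper proposes.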
Data-driven decoding of quantum error correcting codes using graph neural networks
To leverage the full potential of quantum error-correcting stabilizer codes
it is crucial to have an efficient and accurate decoder. Accurate
maximum-likelihood decoders are computationally very expensive, whereas decoders
based on more efficient algorithms give sub-optimal performance. In addition, the
accuracy will depend on the quality of models and estimates of error rates for
idling qubits, gates, measurements, and resets, and will typically assume
symmetric error channels. In this work, instead, we explore a model-free,
data-driven, approach to decoding, using a graph neural network (GNN). The
decoding problem is formulated as a graph classification task in which a set of
stabilizer measurements is mapped to an annotated detector graph for which the
neural network predicts the most likely logical error class. We show that the
GNN-based decoder can outperform a matching decoder for circuit level noise on
the surface code given only simulated experimental data, even if the matching
decoder is given full information of the underlying error model. Although
training is computationally demanding, inference is fast and scales
approximately linearly with the space-time volume of the code. We also find
that we can use large, but more limited, datasets of real experimental data
[Google Quantum AI, Nature {\bf 614}, 676 (2023)] for the repetition code,
giving decoding accuracies that are on par with minimum weight perfect
matching. The results show that a purely data-driven approach to decoding may
be a viable future option for practical quantum error correction that is
competitive in terms of speed, accuracy, and versatility.
Comment: 15 pages, 12 figures
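As an illustrative, hypothetical sketch of the graph-construction step described above (detection events where consecutive stabilizer rounds differ, with edges annotated by space-time separation), not the paper's actual pipeline or its GNN:

```python
from itertools import combinations

def detector_graph(syndrome_rounds):
    """Build detection events and edge annotations from repeated
    stabilizer measurements.

    syndrome_rounds: list of equal-length 0/1 tuples, one per round.
    A detection event marks a stabilizer whose outcome differs from
    the previous round (round 0 is compared against all-zero).
    Returns (nodes, edges): nodes are (round, stabilizer) pairs, and
    every pair of nodes gets an edge annotated with its space-time
    L1 separation.
    """
    nodes = []
    prev = (0,) * len(syndrome_rounds[0])
    for t, row in enumerate(syndrome_rounds):
        for i, (a, b) in enumerate(zip(prev, row)):
            if a != b:
                nodes.append((t, i))
        prev = row
    edges = {(u, v): abs(u[0] - v[0]) + abs(u[1] - v[1])
             for u, v in combinations(nodes, 2)}
    return nodes, edges

rounds = [(0, 1, 0), (0, 1, 0), (1, 0, 0)]
nodes, edges = detector_graph(rounds)
print(nodes)   # [(0, 1), (2, 0), (2, 1)]
```

A GNN classifier would then take such an annotated graph as input and predict the logical error class.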
A Hungarian Algorithm for Error-Correcting Graph Matching
Bipartite graph matching algorithms have become increasingly popular for solving error-correcting graph matching problems and for approximating the graph edit distance between two graphs. However, the memory requirements and execution time of this method are proportional to (n + m)^2 and (n + m)^3 respectively, where n and m are the orders of the two graphs. Subsequent developments have reduced these complexities, but the improvements are valid only under certain constraints on the parameters of the graph edit distance. We propose in this paper a new formulation of the Hungarian algorithm designed to solve the associated graph edit distance problem efficiently. The resulting algorithm requires O(nm) memory space and O(min(n, m)^2 max(n, m)) execution time.
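For very small graphs, the assignment step underlying such bipartite approximations can be sketched by brute force (a stand-in for the Hungarian algorithm; the uniform unit edit costs are an assumption, not the paper's cost model):

```python
from itertools import permutations

def approx_ged(adj_a, adj_b, ins_del_cost=1):
    """Upper bound on graph edit distance for small graphs: try every
    node assignment (a brute-force stand-in for the Hungarian
    algorithm) and count edge mismatches between mapped node pairs,
    plus insertions of unmatched nodes. Edges incident to unmatched
    nodes are ignored in this simplified sketch.
    adj_a, adj_b: dicts {node: set_of_neighbours}.
    """
    na, nb = sorted(adj_a), sorted(adj_b)
    if len(na) > len(nb):                  # ensure |A| <= |B|
        return approx_ged(adj_b, adj_a, ins_del_cost)
    best = float('inf')
    for perm in permutations(nb, len(na)):
        mapping = dict(zip(na, perm))
        cost = ins_del_cost * (len(nb) - len(na))
        for i, u in enumerate(na):
            for v in na[i + 1:]:
                ea = v in adj_a[u]
                eb = mapping[v] in adj_b[mapping[u]]
                cost += ea != eb          # edge present in one graph only
        best = min(best, cost)
    return best

tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}    # triangle
path = {0: {1}, 1: {0, 2}, 2: {1}}         # 3-node path
print(approx_ged(tri, path))   # one edge deletion: 1
```

The paper's contribution is doing this assignment in O(min(n, m)^2 max(n, m)) time and O(nm) space rather than by enumeration.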
Coding with Constraints: Minimum Distance Bounds and Systematic Constructions
We examine an error-correcting coding framework in which each coded symbol is
constrained to be a function of a fixed subset of the message symbols. With an
eye toward distributed storage applications, we seek to design systematic codes
with good minimum distance that can be decoded efficiently. To this end, we
provide theoretical bounds on the minimum distance of such a code based on the
coded symbol constraints. We refine these bounds in the case where we demand a
systematic linear code. Finally, we provide conditions under which each of
these bounds can be achieved by choosing our code to be a subcode of a
Reed-Solomon code, allowing for efficient decoding. This problem has been
considered in the context of multisource multicast network error correction. The problem
setup is also reminiscent of locally repairable codes.
Comment: Submitted to ISIT 201
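To make the minimum-distance notion concrete, here is a small sketch that brute-forces the minimum distance of a systematic binary linear code whose parity symbols each depend on a fixed subset of the message symbols (the example generator matrix is invented for illustration, not taken from the paper):

```python
from itertools import product

def min_distance(gen_rows):
    """Brute-force minimum Hamming distance of a binary linear code
    given its generator matrix (list of rows, each a 0/1 tuple).
    For a linear code this equals the minimum nonzero codeword weight.
    """
    k = len(gen_rows)
    n = len(gen_rows[0])
    best = n
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue
        cw = [sum(m * g for m, g in zip(msg, col)) % 2
              for col in zip(*gen_rows)]  # codeword = msg * G over GF(2)
        best = min(best, sum(cw))
    return best

# A systematic [6, 3] code: the first three columns are the identity,
# and each parity column is constrained to two of the message symbols.
G = [(1, 0, 0, 1, 1, 0),
     (0, 1, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1)]
print(min_distance(G))   # 3
```

Bounds like those in the paper predict how large this minimum distance can be given which message symbols each parity column may use.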
Non-asymptotic Upper Bounds for Deletion Correcting Codes
Explicit non-asymptotic upper bounds on the sizes of multiple-deletion
correcting codes are presented. In particular, the largest single-deletion
correcting code over a q-ary alphabet with string length n is shown to be of
size at most (q^n - q)/((q - 1)(n - 1)). An improved bound on the asymptotic
rate function is obtained as a corollary. Upper bounds are also derived on
sizes of codes for a constrained source that does not necessarily comprise
all strings of a particular length; this idea is demonstrated by
application to sets of run-length limited strings.
The problem of finding the largest deletion correcting code is modeled as a
matching problem on a hypergraph. This problem is formulated as an integer
linear program. The upper bound is obtained by the construction of a feasible
point for the dual of the linear programming relaxation of this integer linear
program.
The non-asymptotic bounds derived imply the known asymptotic bounds of
Levenshtein and Tenengolts and improve on known non-asymptotic bounds.
Numerical results support the conjecture that in the binary case, the
Varshamov-Tenengolts codes are the largest single-deletion correcting codes.
Comment: 18 pages, 4 figures
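Since the abstract refers to Varshamov-Tenengolts codes, a small sketch constructing VT_0(n) and verifying by brute force that its single-deletion balls are disjoint (this is the standard definition, not the paper's LP machinery):

```python
from itertools import product

def vt_code(n, a=0):
    """Binary Varshamov-Tenengolts code VT_a(n): all x in {0,1}^n
    with sum(i * x_i) = a (mod n+1), positions i = 1..n."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == a]

def deletions(x):
    """All strings obtained by deleting exactly one symbol of x."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def corrects_single_deletion(code):
    """A code corrects one deletion iff its deletion balls are disjoint."""
    for i, x in enumerate(code):
        for y in code[i + 1:]:
            if deletions(x) & deletions(y):
                return False
    return True

C = vt_code(5)
print(len(C))                          # 6 codewords for n = 5
print(corrects_single_deletion(C))     # True
```

The conjecture cited above is that no binary single-deletion code beats the size of these VT codes.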
Multi-path Summation for Decoding 2D Topological Codes
Fault tolerance is a prerequisite for scalable quantum computing.
Architectures based on 2D topological codes are effective for near-term
implementations of fault tolerance. To obtain high performance with these
architectures, we require a decoder which can adapt to the wide variety of
error models present in experiments. The typical approach to the problem of
decoding the surface code is to reduce it to minimum-weight perfect matching in
a way that provides a suboptimal threshold error rate, and is specialized to
correct a specific error model. Recently, optimal threshold error rates for a
variety of error models have been obtained by methods which do not use
minimum-weight perfect matching, showing that such thresholds can be achieved
in polynomial time. It is an open question whether these results can also be
achieved by minimum-weight perfect matching. In this work, we use belief
propagation and a novel algorithm for producing edge weights to increase the
utility of minimum-weight perfect matching for decoding surface codes. This
allows us to correct depolarizing errors using the rotated surface code,
obtaining a threshold of 17.76%. This is larger than the thresholds
achieved by previous matching-based decoders, though
still below the known upper bound of 18.9%.
Comment: 19 pages, 13 figures; published in Quantum, available at
https://quantum-journal.org/papers/q-2018-10-19-102
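For intuition, minimum-weight perfect matching of syndrome defects can be brute-forced on tiny instances (a stand-in for a Blossom-based matcher; the 1D defect positions and distance weights are invented for illustration):

```python
def mwpm(defects, weight):
    """Brute-force minimum-weight perfect matching over an even set of
    defects (a stand-in for Blossom-style matching on small instances).
    weight(u, v) gives the edge weight; returns (total, pairing).
    """
    def solve(items):
        if not items:
            return 0, []
        u, rest = items[0], items[1:]
        best = (float('inf'), [])
        for i, v in enumerate(rest):
            sub_total, sub_pairs = solve(rest[:i] + rest[i + 1:])
            total = weight(u, v) + sub_total
            if total < best[0]:
                best = (total, [(u, v)] + sub_pairs)
        return best

    return solve(tuple(defects))

# Defects at syndrome sites 0, 1, 4, 5 on a 1D repetition code;
# edge weight = distance between sites.
total, pairs = mwpm([0, 1, 4, 5], lambda u, v: abs(u - v))
print(total, pairs)   # 2 [(0, 1), (4, 5)]
```

The decoder improvements described above amount to choosing better edge weights (here, the `weight` function) before running the matcher.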
Near-Linear Time Insertion-Deletion Codes and (1+ε)-Approximating Edit Distance via Indexing
We introduce fast-decodable indexing schemes for edit distance which can be
used to speed up edit distance computations to near-linear time if one of the
strings is indexed by an indexing string I. In particular, for every length n
and every ε > 0, one can in near-linear time construct a string I of length n,
such that indexing any string S of length n, symbol by symbol, with I results
in a string S' for which edit distance computations are easy, i.e., one can
compute a (1+ε)-approximation of the edit distance between S' and any other
string in near-linear time.
Our indexing schemes can be used to improve the decoding complexity of
state-of-the-art error correcting codes for insertions and deletions. In
particular, they lead to near-linear time decoding algorithms for the
insertion-deletion codes of [Haeupler, Shahrasbi; STOC '17] and faster decoding
algorithms for list-decodable insertion-deletion codes of [Haeupler, Shahrasbi,
Sudan; ICALP '18]. Interestingly, the latter codes are a crucial ingredient in
the construction of fast-decodable indexing schemes.
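For reference, the quadratic baseline that these indexing schemes accelerate is the classic edit distance dynamic program:

```python
def edit_distance(a, b):
    """Classic O(|a| * |b|) dynamic program for edit distance
    (insertions, deletions, substitutions) -- the quadratic baseline
    that near-linear-time indexing schemes aim to beat."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete ca
                           cur[j - 1] + 1,            # insert cb
                           prev[j - 1] + (ca != cb))) # substitute
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # 3
```

The indexing result above replaces this quadratic computation with a near-linear one, up to a (1+ε) approximation factor.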
Analysing correlated noise on the surface code using adaptive decoding algorithms
Laboratory hardware is rapidly progressing towards a state where quantum
error-correcting codes can be realised. As such, we must learn how to deal with
the complex nature of the noise that may occur in real physical systems. Single
qubit Pauli errors are commonly used to study the behaviour of error-correcting
codes, but in general we might expect the environment to introduce correlated
errors to a system. Given some knowledge of structures that errors commonly
take, it may be possible to adapt the error-correction procedure to compensate
for this noise, but performing full state tomography on a physical system to
analyse this structure quickly becomes impossible as the size increases beyond
a few qubits. Here we develop and test new methods to analyse a particular
class of spatially correlated errors by making use of parametrised families of
decoding algorithms. We demonstrate our method numerically using a diffusive
noise model. We show that information can be learnt about the parameters of the
noise model, and additionally that the logical error rates can be improved. We
conclude by discussing how our method could be utilised in a practical setting
and propose extensions of our work to study more general error models.
Comment: 19 pages, 8 figures, comments welcome; v2: minor typos corrected,
some references added; v3: accepted to Quantum
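As a loose, hypothetical illustration of adapting a parametrised decoder family to observed noise (a toy repetition code under asymmetric bit-flip noise, not the paper's surface-code setting):

```python
import random

def logical_error_rate(t, p01, p10, trials=4000, n=5, seed=1):
    """Monte-Carlo logical error rate of an n-bit repetition code under
    an asymmetric bit-flip channel (0 reads as 1 with prob p01, 1 reads
    as 0 with prob p10), decoded by 'declare 1 iff at least t bits read 1'."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        logical = rng.randrange(2)
        p_one = p01 if logical == 0 else 1.0 - p10
        ones = sum(rng.random() < p_one for _ in range(n))
        errors += (ones >= t) != (logical == 1)
    return errors / trials

# A one-parameter decoder family: scan the vote threshold t and keep
# the setting that minimises the observed logical error rate.
rates = {t: logical_error_rate(t, p01=0.3, p10=0.02) for t in range(1, 6)}
best = min(rates, key=rates.get)
print(best, rates[best])   # the bias pushes the best threshold above majority
```

The paper's parametrised decoding algorithms play the role of this threshold scan, but for families of surface-code decoders under spatially correlated noise.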
- …