Towards practical classical processing for the surface code
The surface code is unarguably the leading quantum error correction code for
2-D nearest neighbor architectures, featuring a high threshold error rate of
approximately 1%, low overhead implementations of the entire Clifford group,
and flexible, arbitrarily long-range logical gates. These highly desirable
features come at the cost of significant classical processing complexity. We
show how to perform the processing associated with an n×n lattice of qubits,
each being manipulated in a realistic, fault-tolerant manner, in O(n^2) average
time per round of error correction. We also describe how to parallelize the
algorithm to achieve O(1) average processing per round, using only constant
computing resources per unit area and local communication. Both of these
complexities are optimal.
Comment: 5 pages, 6 figures, published version with some additional text
Towards practical classical processing for the surface code: timing analysis
Topological quantum error correction codes have high thresholds and are well
suited to physical implementation. The minimum weight perfect matching
algorithm can be used to efficiently handle errors in such codes. We perform a
timing analysis of our current implementation of the minimum weight perfect
matching algorithm. Our implementation performs the classical processing
associated with an n×n lattice of qubits realizing a square surface code
storing a single logical qubit of information in a fault-tolerant manner. We
empirically demonstrate that our implementation requires only O(n^2) average
time per round of error correction for code distances ranging from 4 to 512 and
a range of depolarizing error rates. We also describe tests we have performed
to verify that it always obtains a true minimum weight perfect matching.
Comment: 13 pages, 13 figures, version accepted for publication
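Both abstracts above rely on minimum weight perfect matching to pair up syndrome defects on the lattice. As an illustrative aid only — the papers use an optimized polynomial-time blossom-algorithm implementation, not the exponential brute force below — here is a self-contained sketch that pairs toy defects by Manhattan distance; the function names and the Manhattan metric are assumptions for illustration, not the authors' code:

```python
def manhattan(a, b):
    """Manhattan distance between two lattice coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_weight_matching(defects):
    """Brute-force minimum-weight perfect matching over an even number of
    defects.  Exponential in len(defects): only for tiny toy instances;
    real decoders use Edmonds' blossom algorithm for polynomial time."""
    if not defects:
        return []
    first, rest = defects[0], defects[1:]
    best_cost, best = float("inf"), []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        sub = min_weight_matching(remaining)
        cost = manhattan(first, partner) + sum(manhattan(a, b) for a, b in sub)
        if cost < best_cost:
            best_cost, best = cost, [(first, partner)] + sub
    return best

# Four defects forming two nearby pairs: the matching pairs neighbors.
pairs = min_weight_matching([(0, 0), (0, 1), (5, 5), (5, 6)])
# pairs == [((0, 0), (0, 1)), ((5, 5), (5, 6))]
```

The exponential blow-up of this brute force is exactly why the papers' O(n^2)-per-round result for a carefully engineered matching implementation is non-trivial.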
2-D Compass Codes
The compass model on a square lattice provides a natural template for
building subsystem stabilizer codes. The surface code and the Bacon-Shor code
represent two extremes of possible codes depending on how many gauge qubits are
fixed. We explore threshold behavior in this broad class of local codes by
trading locality for asymmetry and gauge degrees of freedom for stabilizer
syndrome information. We analyze these codes with asymmetric and spatially
inhomogeneous Pauli noise in the code capacity and phenomenological models. In
these idealized settings, we observe considerably higher thresholds against
asymmetric noise. At the circuit level, these codes inherit the bare-ancilla
fault-tolerance of the Bacon-Shor code.
Comment: 10 pages, 7 figures, added discussion on fault-tolerance
High-threshold fault-tolerant quantum computation with analog quantum error correction
To implement fault-tolerant quantum computation with continuous variables,
the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important
technological element. However, it is still challenging to experimentally
generate the GKP qubit with the required squeezing level, 14.8 dB, of the
existing fault-tolerant quantum computation. To reduce this requirement, we
propose a high-threshold fault-tolerant quantum computation with GKP qubits
using topologically protected measurement-based quantum computation with the
surface code. By harnessing analog information contained in the GKP qubits, we
apply analog quantum error correction to the surface code. Furthermore, we
develop a method to prevent the squeezing level from decreasing during the
construction of the large-scale cluster states for the topologically protected
measurement-based quantum computation. We numerically show that the required
squeezing level can be relaxed to less than 10 dB, which is within the reach of
the current experimental technology. Hence, this work can considerably
alleviate this experimental requirement and take a step closer to the
realization of large-scale quantum computation.
Comment: 14 pages, 7 figures
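The "analog information" in a GKP qubit is the continuous deviation of a homodyne measurement outcome from the nearest grid point: a small deviation means a reliable bit, a deviation near half the grid spacing means the outcome is likely flipped. A minimal sketch, assuming the standard square-lattice GKP encoding with spacing sqrt(pi); the function name `gkp_measure` is hypothetical:

```python
import math

def gkp_measure(x):
    """Decode a homodyne outcome x against the GKP grid (spacing sqrt(pi)).
    Returns the bit value and the analog deviation from the nearest grid
    point; a large |deviation| flags an unreliable outcome, which analog
    quantum error correction can feed to the surface-code decoder as a
    soft error weight instead of a hard 0/1."""
    unit = math.sqrt(math.pi)
    k = round(x / unit)
    bit = k % 2
    deviation = x - k * unit
    return bit, deviation

bit, dev = gkp_measure(1.9)  # sqrt(pi) ~ 1.77, so the nearest point is k=1
# bit == 1, dev ~ 0.13 (small deviation: a fairly reliable outcome)
```

Keeping `deviation` rather than discarding it after binarization is what lets the matching decoder down-weight unreliable measurements, which is the mechanism behind the relaxed squeezing threshold reported above.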
Error-tolerant Finite State Recognition with Applications to Morphological Analysis and Spelling Correction
Error-tolerant recognition enables the recognition of strings that deviate
mildly from any string in the regular set recognized by the underlying finite
state recognizer. Such recognition has applications in error-tolerant
morphological processing, spelling correction, and approximate string matching
in information retrieval. After a description of the concepts and algorithms
involved, we give examples from two applications: In the context of
morphological analysis, error-tolerant recognition allows misspelled input word
forms to be corrected, and morphologically analyzed concurrently. We present an
application of this to error-tolerant analysis of agglutinative morphology of
Turkish words. The algorithm can be applied to morphological analysis of any
language whose morphology is fully captured by a single (and possibly very
large) finite state transducer, regardless of the word formation processes and
morphographemic phenomena involved. In the context of spelling correction,
error-tolerant recognition can be used to enumerate correct candidate forms
from a given misspelled string within a certain edit distance. Again, it can be
applied to any language with a word list comprising all inflected forms, or
whose morphology is fully described by a finite state transducer. We present
experimental results for spelling correction for a number of languages. These
results indicate that such recognition works very efficiently for candidate
generation in spelling correction for many European languages such as English,
Dutch, French, German, Italian (and others) with very large word lists of root
and inflected forms (some containing well over 200,000 forms), generating all
candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a
SparcStation 10/41. For spelling correction in Turkish, error-tolerant
Comment: Replaces 9504031. gzipped, uuencoded postscript file. To appear in Computational Linguistics Volume 22 No:1, 1996. Also available as ftp://ftp.cs.bilkent.edu.tr/pub/ko/clpaper9512.ps.
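As a toy illustration of edit-distance-bounded candidate generation: the paper prunes candidates during traversal of the finite state recognizer itself, which is what makes it efficient, whereas the sketch below naively scans a flat word list. All names and the sample lexicon are illustrative assumptions:

```python
def edit_distance(s, t):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (cs != ct))) # substitution
        prev = cur
    return prev[-1]

def candidates(word, lexicon, threshold=1):
    """Enumerate lexicon entries within the edit-distance threshold of word."""
    return [w for w in lexicon if edit_distance(word, w) <= threshold]

lexicon = ["recognition", "correction", "collection", "connection"]
candidates("corection", lexicon)  # -> ["correction"]
```

Traversing the transducer with a running distance bound, as the paper does, avoids computing the full distance against every form — essential when the lexicon holds hundreds of thousands of inflected forms.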
Spectra: Robust Estimation of Distribution Functions in Networks
Distributed aggregation allows the derivation of a given global aggregate
property from many individual local values in nodes of an interconnected
network system. Simple aggregates such as minima/maxima, counts, sums and
averages have been thoroughly studied in the past and are important tools for
distributed algorithms and network coordination. Nonetheless, such aggregates
may not be comprehensive enough to characterize biased data distributions or
data containing outliers, making the case for richer estimates of the values on
the network. This work presents Spectra, a
distributed algorithm for the estimation of distribution functions over
large-scale networks. The estimate is available at all nodes, and the technique
exhibits important properties: robustness to high levels of message loss, fast
convergence, and fine precision in the estimate. It can
also dynamically cope with changes of the sampled local property, not requiring
algorithm restarts, and is highly resilient to node churn. The proposed
approach is experimentally evaluated and contrasted with a competing
state-of-the-art distribution aggregation technique.
Comment: Full version of the paper published at 12th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS), Stockholm (Sweden), June 201
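The abstract does not spell out Spectra's protocol, so the following is only a generic gossip-averaging sketch of how a distribution function can be estimated at every node: each node starts from an indicator vector (is my value <= query point?) and repeatedly averages it with random peers, so every node's vector converges to the global fraction of nodes below each query point, i.e. the empirical CDF. All names and parameters are illustrative assumptions, not the Spectra algorithm itself:

```python
import random

def gossip_cdf(values, points, rounds=500, seed=0):
    """Estimate the global empirical CDF at the given query points by
    pairwise gossip averaging.  Node i starts with the indicator vector
    [1 if v_i <= p else 0 for p in points]; pairwise averaging conserves
    the global mean while shrinking disagreement, so every node's state
    converges to the fraction of nodes with value <= p."""
    rng = random.Random(seed)  # deterministic for the simulation
    state = [[1.0 if v <= p else 0.0 for p in points] for v in values]
    n = len(values)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        for k in range(len(points)):
            avg = (state[i][k] + state[j][k]) / 2.0
            state[i][k] = state[j][k] = avg
    return state[0]  # after convergence, all nodes hold the same estimate

# Four nodes holding 1..4: the CDF at 2.5 converges toward 0.5.
est = gossip_cdf([1, 2, 3, 4], points=[2.5])
```

This centralized simulation glosses over what the paper actually addresses — message loss, node churn, and changing local values without restarts — but it shows why pairwise averaging of indicator vectors yields a distribution estimate at every node.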