Modern Approaches to Topological Quantum Error Correction
The construction of a large-scale fault-tolerant quantum computer is an outstanding scientific and technological goal. It holds the promise of solving a variety of complex problems, such as factoring large numbers, fast database search, and the quantum simulation of many-body quantum systems in fields as diverse as condensed matter, quantum chemistry, and even high-energy physics. Sophisticated theoretical protocols have been developed for reliable quantum information processing under imperfect conditions, in which errors affect and corrupt the fragile quantum states during storage and computation. Arguably the most realistic and promising approach towards practical fault-tolerant quantum computation is topological quantum error-correcting codes, where quantum information is stored in interacting, topologically ordered 2D or 3D many-body quantum systems. This approach offers the highest known error thresholds, which are already today within reach of the experimental accuracy of state-of-the-art setups. A combination of theoretical and experimental research is needed to store, protect, and process fragile quantum information in logical qubits effectively, so that they can outperform their constituent physical qubits. Whereas small-scale quantum error-correction codes have been implemented, one of the main theoretical challenges remains to develop new, and improve existing, efficient strategies (so-called decoders) for deriving (near-)optimal error-correction operations from experimentally accessible measurement information in the presence of realistic noise sources. One main focus of this project is the development and numerical implementation of scalable, efficient decoders to operate topological color codes. Additionally, we study the feasibility of implementing quantum error-correcting codes fault-tolerantly in near-term ion traps.
To this end, we use realistic modeling of the different noise sources, computer simulations, and state-of-the-art quantum information approaches to quantum circuitry and noise-suppression techniques.
Resource optimization for fault-tolerant quantum computing
In this thesis we examine a variety of techniques for reducing the resources
required for fault-tolerant quantum computation. First, we show how to simplify
universal encoded computation by using only transversal gates and standard
error correction procedures, circumventing existing no-go theorems. We then
show how to simplify ancilla preparation, reducing the cost of error correction
by more than a factor of four. Using this optimized ancilla preparation, we
develop improved techniques for proving rigorous lower bounds on the noise
threshold.
Additional overhead can be incurred because quantum algorithms must be
translated into sequences of gates that are actually available in the quantum
computer. In particular, arbitrary single-qubit rotations must be decomposed
into a discrete set of fault-tolerant gates. We find that by using a special
class of non-deterministic circuits, the cost of decomposition can be reduced
by as much as a factor of four over state-of-the-art techniques, which
typically use deterministic circuits.
Finally, we examine global optimization of fault-tolerant quantum circuits
under physical connectivity constraints. We adapt techniques from VLSI in order
to minimize time and space usage for computations in the surface code, and we
develop a software prototype to demonstrate the potential savings.
Comment: 231 pages, Ph.D. thesis, University of Waterloo
Large-Scale Topological Quantum Computing with and without Majorana Fermions
Quantum computers are devices that can solve certain problems faster than ordinary, classical computers. The fundamental units of quantum information are qubits, superpositions of two states, a "zero" state and a "one" state. There are various approaches to construct such two-level systems, among others, using superconducting circuits, trapped ions or photons. A common feature of these physical systems is that their coherence times are relatively short compared to the length of useful computations. Superconducting qubits, for instance, are currently the most advanced solid-state qubits, but they decohere after around 100 microseconds, and any information stored in these qubits is lost. On the other hand, useful quantum computations may require quantum information to survive on time scales that are many orders of magnitude longer, as their runtimes can reach several hours or even days. Topological quantum computing is an approach to construct qubits that survive for the entire duration of such a long computation.
Topological quantum computing comes in two flavors. The condensed matter approach is to build error-resilient qubits using exotic quasiparticles in topological materials, most prominently Majorana zero modes in topological superconductors. Even though no such qubit has been built to date, the hope is that their coherence times may be significantly longer than the coherence times of currently available solid-state qubits, but are still expected to be too short for large-scale quantum computing. The quantum information approach is to combine many error-prone qubits to build more robust logical qubits using topological error-correcting codes, e.g., surface codes. Even though the first approach is hardware-based and the second approach is software-based, they are deeply related. With Majorana-based qubits, the main logical operations are Majorana fermion parity measurements. By replacing Majorana-based qubits with surface-code patches and parity measurements with lattice-surgery operations, schemes for quantum computation with Majorana-based qubits or with surface codes can be identical.
In this thesis, we explore how to construct a large-scale topological fault-tolerant quantum computer that can perform useful quantum computations. Here, topological refers to the nature of the quantum error-correcting code, while the underlying hardware may be based on non-topological qubits, but could also be composed of Majorana-based qubits. We provide a complete picture of such a large-scale device, breaking down large quantum computations into logical qubits and logical operations, describing how these logical operations are performed on the level of physical qubits and physical gates, and finally discussing how these physical qubits can be pieced together in a Majorana-based system using topological superconducting nanowires
Q-Pandora Unboxed: Characterizing Noise Resilience of Quantum Error Correction Codes
Quantum error correction codes (QECCs) are critical for realizing reliable
quantum computing by protecting fragile quantum states against noise and
errors. However, limited research has analyzed the noise resilience of QECCs to
help select optimal codes. This paper conducts a comprehensive study analyzing
two QECCs - rotated and unrotated surface codes - under different error types
and noise models using simulations. Among them, rotated surface codes perform
best, with higher thresholds attributable to their simplicity and lower qubit overhead.
The noise threshold, or the point at which QECCs become ineffective, surpasses
the error rate found in contemporary quantum processors. When confronting
quantum hardware where a specific error or noise model is dominant, a
discernible hierarchy emerges for surface code implementation in terms of
resource demand. This ordering is consistently observed across unrotated and
rotated surface codes. Our noise-model analysis ranks the code-capacity model
as the most pessimistic and the circuit-level model as the most realistic. The
study maps error thresholds, revealing the surface code's advantage over modern
quantum processors. It also shows that higher code distances and more rounds consistently
improve performance. However, excessive distances needlessly increase qubit
overhead. By matching target logical error rates and a feasible number of qubits
to optimal surface-code parameters, our study demonstrates the necessity of
tailoring these codes to balance reliability and qubit resources. In conclusion,
we underscore the significance of addressing the notable challenges associated
with surface-code overheads and qubit improvements.
Comment: 15 pages; 9 figures; 3 tables
A practical phase gate for producing Bell violations in Majorana wires
The Gottesman-Knill theorem holds that operations from the Clifford group,
when combined with preparation and detection of qubit states in the
computational basis, are insufficient for universal quantum computation.
Indeed, any measurement results in such a system could be reproduced within a
local hidden variable theory, so that there is no need for a quantum mechanical
explanation and therefore no possibility of quantum speedup. Unfortunately,
Clifford operations are precisely the ones available through braiding and
measurement in systems supporting non-Abelian Majorana zero modes, which are
otherwise an excellent candidate for topologically protected quantum
computation. In order to move beyond the classically simulable subspace, an
additional phase gate is required. This phase gate allows the system to violate
the Bell-like CHSH inequality that would constrain a local hidden variable
theory. In this article, we both demonstrate the procedure for measuring Bell
violations in Majorana systems and introduce a new type of phase gate for the
already existing semiconductor-based Majorana wire systems. We conclude with an
experimentally feasible schematic combining the two, which should potentially
lead to the demonstration of Bell violation in a Majorana experiment in the
near future. Our work also naturally leads to a well-defined platform for
universal fault-tolerant quantum computation using Majorana zero modes, which
we describe.
Comment: 11 pages, 13 figures; title and references updated
Parallel window decoding enables scalable fault tolerant quantum computation
Quantum Error Correction (QEC) continuously generates a stream of syndrome
data that contains information about the errors in the system. Useful
fault-tolerant quantum computation requires online decoders that are capable of
processing this syndrome data at the rate it is received. Otherwise, a data
backlog is created that grows exponentially with the T-gate depth of the
computation. Superconducting quantum devices can perform QEC rounds in under
1 µs, setting a stringent requirement on the speed of the decoders. All
current decoder proposals have a maximum code size beyond which the processing
of syndromes becomes too slow to keep up with the data acquisition, thereby
making the fault-tolerant computation not scalable. Here, we will present a
methodology that parallelizes the decoding problem and achieves almost
arbitrary syndrome processing speed. Our parallelization requires some
classical feedback decisions to be delayed, leading to a slow-down of the
logical clock speed. However, the slow-down is now polynomial in code size and
so an exponential backlog is averted. Furthermore, using known
auto-teleportation gadgets the slow-down can be eliminated altogether in
exchange for increased qubit overhead, all polynomially scaling. We demonstrate
our parallelization speed-up using a Python implementation, combining it with
both union-find and minimum weight perfect matching. Furthermore, we show that
the algorithm imposes no noticeable reduction in logical fidelity compared to
the original global decoder. Finally, we discuss how the same methodology can
be implemented in online hardware decoders.
Comment: 12 pages, 7 figures
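As a schematic illustration of the windowing idea (not the authors' implementation): the syndrome stream is split into overlapping windows, the windows are decoded concurrently, and only the non-overlapping core of each window is committed. The inner decoder below is a hypothetical stub standing in for union-find or minimum-weight perfect matching.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_window(window):
    """Placeholder inner decoder (stand-in for union-find or MWPM).
    Here it trivially returns the window's syndrome rounds unchanged."""
    return window

def parallel_window_decode(syndrome_rounds, window=4, overlap=1):
    """Split the syndrome stream into overlapping windows, decode them
    concurrently, and commit only each window's core (non-overlap) region."""
    step = window - overlap
    chunks = [syndrome_rounds[i:i + window]
              for i in range(0, len(syndrome_rounds), step)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(decode_window, chunks))
    committed = []
    for i, res in enumerate(results):
        # All but the last window commit only their first `step` rounds;
        # the overlap region is re-decoded by the next window.
        core = res if i == len(results) - 1 else res[:step]
        committed.extend(core)
    return committed
```

With a real inner decoder, the overlap region is where corrections from neighboring windows are reconciled; since each window is decoded independently, the syndrome processing rate scales with the number of workers, which is the essence of the parallelization described above.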
Towards practical linear optical quantum computing
Quantum computing promises a new paradigm of computation where information is processed in a way that has no classical analogue. There are a number of physical platforms conducive to quantum computation, each with a number of advantages and challenges. Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. Their low decoherence rates make them particularly favourable, however the inability to perform deterministic two-qubit gates and the issue of photon loss are challenges that need to be overcome.
In this thesis we explore the construction of a linear optical quantum computer based on the cluster state model. We identify the different necessary stages: state preparation, cluster state construction and implementation of quantum error correcting codes, and address the challenges that arise in each of these stages. For the state preparation, we propose a series of linear optical circuits for the generation of small entangled states, assessing their performance under different scenarios. For the cluster state construction, we introduce a ballistic scheme which not only consumes an order of magnitude fewer resources than previously proposed schemes, but also benefits from a natural loss tolerance. Based on this scheme, we propose a full architectural blueprint with fixed physical depth. We investigate the resource efficiency of this architecture and propose a new multiplexing scheme which optimises the use of resources. Finally, we study the integration of quantum error-correcting codes in the linear optical scheme proposed and suggest three ways in which the linear optical scheme can be made fault-tolerant.
Long-range-enhanced surface codes
The surface code is a quantum error-correcting code for one logical qubit,
protected by spatially localized parity checks in two dimensions. Due to
fundamental constraints from spatial locality, storing more logical qubits
requires either sacrificing the robustness of the surface code against errors
or increasing the number of physical qubits. We bound the minimal number of
spatially non-local parity checks necessary to add logical qubits to a surface
code while maintaining, or improving, robustness to errors. We asymptotically
saturate this bound using a family of hypergraph product codes, interpolating
between the surface code and constant-rate low-density parity-check codes.
Fault-tolerant protocols for logical operations generalize naturally to these
longer-range codes, based on those from ordinary surface codes. We provide
near-term practical implementations of this code for hardware based on trapped
ions or neutral atoms in mobile optical tweezers. Long-range-enhanced surface
codes outperform conventional surface codes using hundreds of physical qubits,
and represent a practical strategy to enhance the robustness of logical qubits
to errors in near-term devices.
Comment: 16 pages, 12 figures; v2 changes: fixed typos and added citation
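The hypergraph product construction mentioned in this abstract has a compact matrix description: given classical parity-check matrices H1 and H2, the X- and Z-type check matrices are HX = (H1 ⊗ I | I ⊗ H2ᵀ) and HZ = (I ⊗ H2 | H1ᵀ ⊗ I), which commute over GF(2) by construction. A minimal numpy sketch (our illustration, not the paper's code), where taking both inputs to be a repetition code recovers a surface-code-like CSS code:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph product of two classical parity-check matrices,
    returning the X- and Z-check matrices of a CSS quantum code."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return HX % 2, HZ % 2

# Distance-3 repetition code: two parity checks on three bits.
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)
# CSS commutation condition: every X check commutes with every Z check.
assert not ((HX @ HZ.T) % 2).any()
```

Interpolating between the surface code and constant-rate LDPC codes, as the abstract describes, amounts to feeding this construction classical codes with progressively less local parity checks.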