Optimizing quantum gates towards the scale of logical qubits
A foundational assumption of quantum error correction theory is that quantum
gates can be scaled to large processors without exceeding the error-threshold
for fault tolerance. Two major challenges that could become fundamental
roadblocks are manufacturing high-performance quantum hardware and engineering
a control system that can reach its performance limits. The control challenge
of scaling quantum gates from small to large processors without degrading
performance often maps to non-convex, high-constraint, and time-dependent
control optimization over an exponentially expanding configuration space. Here
we report on a control optimization strategy that can scalably overcome the
complexity of such problems. We demonstrate it by choreographing the frequency
trajectories of 68 frequency-tunable superconducting qubits to execute single-
and two-qubit gates while mitigating computational errors. When combined with a
comprehensive model of physical errors across our processor, the strategy
suppresses physical error rates by a substantial factor compared with the case of no
optimization. Furthermore, it is projected to achieve a similar performance
advantage on a distance-23 surface code logical qubit with 1057 physical
qubits. Our control optimization strategy solves a generic scaling challenge in
a way that can be adapted to other quantum algorithms, operations, and
computing architectures.
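The optimization problem described above can be illustrated with a toy model. The sketch below is not the authors' algorithm: it invents a 1-D chain of tunable qubits, a quadratic "detuning from sweet spot" error, and a collision penalty when neighbouring qubits approach resonance, then applies a simple stochastic local search to the resulting non-convex cost. All frequencies, weights, and the search procedure are assumptions made for illustration only.

```python
import random

# Toy illustration (NOT the paper's method): pick operating frequencies
# for a chain of frequency-tunable qubits so each stays near its sweet
# spot while avoiding frequency collisions with its neighbours.
# A distance-d rotated surface code uses 2*d**2 - 1 physical qubits,
# so d = 23 gives 1057, matching the abstract's figure.

N = 8                                        # qubits in a 1-D chain
SWEET = [6.0 + 0.05 * q for q in range(N)]   # hypothetical sweet spots (GHz)

def cost(freqs):
    """Detuning-from-sweet-spot error plus near-resonance collision penalty."""
    c = sum((f - s) ** 2 for f, s in zip(freqs, SWEET))
    for q in range(N - 1):
        gap = abs(freqs[q] - freqs[q + 1])
        c += 1.0 / (1e-3 + gap ** 2)         # blows up as neighbours collide
    return c

def optimize(steps=20000, seed=0):
    """Greedy stochastic local search over the non-convex cost landscape."""
    rng = random.Random(seed)
    best = SWEET[:]                          # start at sweet spots (collides!)
    best_c = cost(best)
    for _ in range(steps):
        trial = best[:]
        trial[rng.randrange(N)] += rng.uniform(-0.1, 0.1)
        c = cost(trial)
        if c < best_c:
            best, best_c = trial, c
    return best, best_c

freqs, final_cost = optimize()
```

Even in this toy, the optimizer must trade detuning error against collision penalties across the whole chain at once, a small-scale analogue of the coupled, high-constraint structure the abstract describes; the real problem adds time-dependent trajectories and an exponentially growing configuration space.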
Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits
Scalable quantum computing can become a reality with error correction,
provided coherent qubits can be constructed in large arrays. The key premise is
that physical errors can remain both small and sufficiently uncorrelated as
devices scale, so that logical error rates can be exponentially suppressed.
However, energetic impacts from cosmic rays and latent radioactivity violate
both of these assumptions. An impinging particle ionizes the substrate,
radiating high energy phonons that induce a burst of quasiparticles, destroying
qubit coherence throughout the device. High-energy radiation has been
identified as a source of error in pilot superconducting quantum devices, but,
lacking a measurement technique able to resolve single events in detail, its
effect on large-scale algorithms, and on error correction in particular, remains
an open question. Elucidating the physics involved requires operating large
numbers of qubits at the same rapid timescales as in error correction, exposing
the event's evolution in time and spread in space. Here, we directly observe
high-energy rays impacting a large-scale quantum processor. We introduce a
rapid space and time-multiplexed measurement method and identify large bursts
of quasiparticles that simultaneously and severely limit the energy coherence
of all qubits, causing chip-wide failure. We track the events from their
initial localised impact to high error rates across the chip. Our results
provide direct insights into the scale and dynamics of these damaging error
bursts in large-scale devices, and highlight the necessity of mitigation to
enable quantum computing to scale.