Zero noise extrapolation on logical qubits by scaling the error correction code distance
In this work, we migrate the quantum error mitigation technique of Zero-Noise
Extrapolation (ZNE) to fault-tolerant quantum computing. We employ ZNE on
\emph{logically encoded} qubits rather than \emph{physical} qubits. This
approach will be useful in a regime where quantum error correction (QEC) is
implementable but the number of qubits available for QEC is limited. Apart from
illustrating the utility of a traditional ZNE approach (circuit-level unitary
folding) for the QEC regime, we propose a novel noise scaling ZNE method
specifically tailored to QEC: \emph{distance scaled ZNE (DS-ZNE)}. DS-ZNE
scales the distance of the error correction code, and thereby the resulting
logical error rate, using the code distance as the scaling `knob' for ZNE.
Logical error rates are measured up to the maximum code distance achievable
with a fixed number of physical qubits, and lower error rates (i.e.,
effectively higher code distances) are then obtained via extrapolation
techniques migrated from traditional ZNE. Furthermore, to maximize physical qubit
utilization over the ZNE experiments, logical executions at code distances
lower than the maximum allowed by the physical qubits on the quantum device are
parallelized to optimize device utilization. We validate our proposal with
numerical simulation and confirm that ZNE lowers the logical error rates and
increases the effective code distance beyond the physical capability of the
quantum device. For instance, at a physical code distance of 11, the DS-ZNE
effective code distance is 17, and at a physical code distance of 13, the
DS-ZNE effective code distance is 21. When the proposed technique is compared
against unitary folding ZNE under the constraint of a fixed number of
executions of the quantum device, DS-ZNE outperforms unitary folding by up to
92\% in terms of the post-ZNE logical error rate. Comment: 8 pages, 5 figures
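The extrapolation step at the heart of DS-ZNE can be sketched numerically: fit the logical error rates measured at the code distances a device can reach, then evaluate the fit at a larger, physically unreachable distance. The sketch below assumes the standard exponential suppression model for surface-code logical error rates, $p_L(d) \approx A\,\Lambda^{-d/2}$, which is linear in $d$ after a log transform; the distances and rates are illustrative values, not the paper's data:

```python
import numpy as np

def ds_zne_extrapolate(distances, logical_error_rates, target_distance):
    """Fit log(p_L) linearly in the code distance d and extrapolate.

    Assumes the standard suppression model p_L(d) ~ A * Lambda**(-d/2),
    i.e. log p_L is linear in d.
    """
    log_p = np.log(np.asarray(logical_error_rates, dtype=float))
    slope, intercept = np.polyfit(np.asarray(distances, dtype=float), log_p, 1)
    return float(np.exp(slope * target_distance + intercept))

# Hypothetical logical error rates at the distances a small device can reach
distances = [5, 7, 9, 11]
rates = [1e-2, 3e-3, 9e-4, 2.7e-4]  # roughly constant suppression per distance step

# Extrapolate past the device's maximum physical distance
extrapolated = ds_zne_extrapolate(distances, rates, target_distance=17)
print(f"extrapolated p_L at d=17: {extrapolated:.2e}")
```

Because the fit is done in log space, the extrapolated rate at the larger distance is strictly below every measured rate whenever the measured rates decrease with distance.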
VarSaw: Application-tailored Measurement Error Mitigation for Variational Quantum Algorithms
For potential quantum advantage, Variational Quantum Algorithms (VQAs) need
high accuracy beyond the capability of today's NISQ devices, and thus will
benefit from error mitigation. In this work we are interested in mitigating
measurement errors which occur during qubit measurements after circuit
execution and tend to be the most error-prone operations, especially
detrimental to VQAs. Prior work, JigSaw, has shown that measuring only small
subsets of circuit qubits at a time and collecting results across all such
subset circuits can reduce measurement errors. The entire (global) original
circuit is then run, and the extracted qubit-qubit measurement correlations
are used in conjunction with the subsets to construct a high-fidelity output
distribution of the original circuit. Unfortunately, the execution cost of
JigSaw scales polynomially in the number of qubits in the circuit, and when
compounded by the number of circuits and iterations in VQAs, the resulting
execution cost quickly becomes insurmountable.
To combat this, we propose VarSaw, which improves JigSaw in an
application-tailored manner, by identifying considerable redundancy in the
JigSaw approach for VQAs: spatial redundancy across subsets from different VQA
circuits and temporal redundancy across globals from different VQA iterations.
VarSaw then eliminates these forms of redundancy by commuting the subset
circuits and selectively executing the global circuits, reducing computational
cost (in terms of the number of circuits executed) over naive JigSaw for VQA by
25x on average and up to 1000x, for the same VQA accuracy. Further, it can
recover, on average, 45% of the infidelity from measurement errors in the noisy
VQA baseline. Finally, it improves fidelity by 55%, on average, over JigSaw for
a fixed computational budget. VarSaw can be accessed here:
https://github.com/siddharthdangwal/VarSaw. Comment: Appears at the International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS) 2024. First two authors
contributed equally.
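JigSaw's core update, reweighting the noisy global distribution so that its marginal over a measured subset of qubits matches the higher-fidelity result from the subset circuit, can be sketched in a few lines. The counts below are toy values, and this single-subset reweighting is a simplification of the full JigSaw/VarSaw pipeline, which combines many subsets:

```python
from collections import Counter

def refine_with_subset(global_counts, subset_counts, subset_qubits):
    """Reweight the global bitstring distribution so its marginal over
    `subset_qubits` matches the subset circuit's (cleaner) marginal."""
    total_g = sum(global_counts.values())
    total_s = sum(subset_counts.values())

    # Marginal of the noisy global distribution over the subset qubits
    global_marginal = Counter()
    for bitstring, n in global_counts.items():
        key = "".join(bitstring[q] for q in subset_qubits)
        global_marginal[key] += n / total_g

    # Scale each bitstring's probability by (subset marginal / global marginal)
    refined = {}
    for bitstring, n in global_counts.items():
        key = "".join(bitstring[q] for q in subset_qubits)
        subset_p = subset_counts.get(key, 0) / total_s
        if global_marginal[key] > 0:
            refined[bitstring] = (n / total_g) * subset_p / global_marginal[key]
    norm = sum(refined.values())
    return {b: p / norm for b, p in refined.items()}

# Toy example: 3-qubit global counts, subset circuit measured qubits (0, 1)
global_counts = {"000": 400, "001": 100, "110": 400, "111": 100}
subset_counts = {"00": 600, "11": 400}  # cleaner marginal from the subset run
refined = refine_with_subset(global_counts, subset_counts, (0, 1))
```

Here the global data splits the (0, 1) marginal 50/50, while the subset circuit reports 60/40; the refinement shifts probability mass accordingly without disturbing the qubit-qubit correlations captured by the global run. Bit-ordering conventions are left-to-right for simplicity.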
DISQ: Dynamic Iteration Skipping for Variational Quantum Algorithms
This paper proposes DISQ to craft a stable landscape for VQA training and
tackle the noise drift challenge. DISQ adopts a "drift detector" with a
reference circuit to identify and skip iterations that are severely affected by
noise drift errors. Specifically, the circuits from the previous training
iteration are re-executed as a reference circuit in the current iteration to
estimate noise drift impacts. The iteration is deemed compromised by noise
drift errors and thus skipped if noise drift flips the direction of the ideal
optimization gradient. To enhance noise drift detection reliability, we further
propose to leverage multiple reference circuits from previous iterations to
provide a well-founded judgment of current noise drift. However, multiple
reference circuits also introduce considerable execution overhead. To mitigate
this overhead, we propose Pauli-term subsetting: only observable circuits with
large coefficient magnitudes (the prime subset) are executed during drift
detection, while the minor subset alone is executed when the current iteration
is drift-free. Evaluations across various applications and
QPUs demonstrate that DISQ can mitigate a significant portion of the noise
drift impact on VQAs and achieve 1.51-2.24x fidelity improvement over the
traditional baseline. DISQ's benefit is 1.1-1.9x over the best alternative
approach while boosting average noise detection speed by 2.07x.
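The drift-detector's decision rule can be sketched directly: re-execute the previous iteration's circuit in the current noise environment, and skip the iteration when the apparent optimization step disagrees in sign with the drift-free, same-iteration comparison. The energies below are invented illustrative values:

```python
def iteration_compromised(prev_value, ref_value_now, curr_value):
    """DISQ-style drift check (simplified sketch).

    prev_value:    objective of the previous iteration's circuit, measured then
    ref_value_now: the same circuit re-executed as a reference in this iteration
    curr_value:    objective of the current iteration's circuit

    The step is deemed compromised when noise drift flips the apparent
    direction of progress.
    """
    measured_step = curr_value - prev_value       # spans two noise environments
    drift_free_step = curr_value - ref_value_now  # both under current noise
    return measured_step * drift_free_step < 0    # sign flip => skip iteration

# Drift raised all energies between iterations: the step looks like a
# regression (-1.00 -> -0.95) but is actually an improvement (-0.90 -> -0.95).
print(iteration_compromised(-1.00, -0.90, -0.95))  # → True
# No drift: both comparisons agree, so the iteration is kept.
print(iteration_compromised(-1.00, -1.00, -1.05))  # → False
```

Using the product of the two step estimates makes the check symmetric: it fires whenever drift turns a genuine improvement into an apparent regression, or vice versa.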
SnCQA: A hardware-efficient equivariant quantum convolutional circuit architecture
We propose SnCQA, a set of hardware-efficient variational circuits:
equivariant quantum convolutional circuits that respect permutation symmetries
and spatial lattice symmetries for a given number of qubits. By exploiting
permutation symmetries of the system, such as those of lattice Hamiltonians
common to many quantum many-body and quantum chemistry problems, our quantum
neural networks are suitable for solving machine learning problems where
permutation symmetries are present, which can lead to significant savings in
computational cost. Aside from its theoretical novelty, we find that our
simulations perform well in practical instances of learning ground states in
quantum computational chemistry, where we achieve performance comparable to
traditional methods with a few tens of parameters. Compared to other
traditional variational quantum circuits, such as the pure hardware-efficient
ansatz (pHEA), we show that SnCQA is more scalable, accurate, and noise
resilient, with better performance on square lattices and resource savings
across various lattice sizes in key criteria such as the number of layers,
parameters, and time to converge, suggesting a potentially favorable
experiment on near-term quantum devices. Comment: 10 pages, many figures. IEEE QCE 2023, 1st best paper award in
quantum algorithms.
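The defining property of an equivariant circuit layer, that it commutes with the symmetry action, is easy to verify numerically. The sketch below checks that a two-qubit layer generated by the permutation-invariant Heisenberg exchange term $XX + YY + ZZ$ commutes with SWAP, while a single-qubit generator does not. The generator and rotation angle are illustrative, not SnCQA's actual ansatz:

```python
import numpy as np

# Single-qubit Paulis
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# SWAP exchanges the two qubits; a permutation-equivariant layer must commute with it
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def is_equivariant(generator, perm, theta=0.37):
    """Check P @ exp(-i*theta*G) == exp(-i*theta*G) @ P via eigendecomposition."""
    w, V = np.linalg.eigh(generator)
    U = V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T
    return bool(np.allclose(perm @ U, U @ perm))

H = kron(X, X) + kron(Y, Y) + kron(Z, Z)  # Heisenberg exchange, permutation invariant

print(is_equivariant(H, SWAP))           # → True
print(is_equivariant(kron(X, I), SWAP))  # → False: single-sided rotation breaks it
```

Building every layer from generators that pass this check is what restricts the ansatz to the symmetric subspace, which is the source of the parameter savings claimed above.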
Towards an end-to-end analysis and prediction system for weather, climate, and marine applications in the Red Sea
Author Posting. © American Meteorological Society, 2021. This article is posted here by permission of American Meteorological Society for personal use, not for redistribution. The definitive version was published in Bulletin of the American Meteorological Society 102(1), (2021): E99-E122, https://doi.org/10.1175/BAMS-D-19-0005.1.

The Red Sea, home to the second-longest coral reef system in the world, is a vital resource for the Kingdom of Saudi Arabia. The Red Sea provides 90% of the Kingdom’s potable water by desalinization, supporting tourism, shipping, aquaculture, and fishing industries, which together contribute about 10%–20% of the country’s GDP. All these activities, and those elsewhere in the Red Sea region, critically depend on oceanic and atmospheric conditions. At a time of mega-development projects along the Red Sea coast and of global warming, authorities are working on optimizing the harnessing of environmental resources, including renewable energy and rainwater harvesting. All these require high-resolution weather and climate information. Toward this end, we have undertaken a multipronged research and development activity in which we are developing an integrated data-driven regional coupled modeling system. The telescopically nested components include 5-km- to 600-m-resolution atmospheric models to address weather and climate challenges, 4-km- to 50-m-resolution ocean models with regional and coastal configurations to simulate and predict the general and mesoscale circulation, 4-km- to 100-m-resolution ecosystem models to simulate the biogeochemistry, and 1-km- to 50-m-resolution wave models. In addition, a complementary probabilistic transport modeling system predicts dispersion of contaminant plumes, oil spills, and marine ecosystem connectivity. Advanced ensemble data assimilation capabilities have also been implemented for accurate forecasting.
Resulting achievements include significant advancement in our understanding of the regional circulation and its connection to the global climate, and the development and validation of long-term Red Sea regional atmospheric–oceanic–wave reanalyses and forecasting capacities. These products are being extensively used by academia, government, and industry in various weather and marine studies and operations, environmental policies, renewable energy applications, impact assessment, flood forecasting, and more.

The development of the Red Sea modeling system is being supported by the Virtual Red Sea Initiative and the Competitive Research Grants (CRG) program from the Office of Sponsored Research at KAUST, Saudi Aramco Company through the Saudi ARAMCO Marine Environmental Center at KAUST, and by funds from KAEC, NEOM, and RSP through Beacon Development Company at KAUST.
Clifford Assisted Optimal Pass Selection for Quantum Transpilation
The fidelity of quantum programs in the NISQ era is limited by high levels of
device noise. To increase the fidelity of quantum programs running on NISQ
devices, a variety of optimizations have been proposed. These include mapping
passes, routing passes, scheduling methods, and standalone optimizations, which
are usually incorporated into a transpiler as passes. Popular transpilers such
as those proposed by Qiskit, Cirq and Cambridge Quantum Computing make use of
these extensively. However, choosing the right set of transpiler passes and the
right configuration for each pass is a challenging problem. Transpilers often
make critical decisions using heuristics since the ideal choices are impossible
to identify without knowing the target application outcome. Further, the
transpiler also makes simplifying assumptions about device noise that often do
not hold in the real world. As a result, we often see cases where the
fidelity of a target application decreases despite using state-of-the-art
optimizations. To overcome this challenge, we propose OPTRAN, a framework for
Choosing an Optimal Pass Set for Quantum Transpilation. OPTRAN uses classically
simulable quantum circuits composed entirely of Clifford gates, that resemble
the target application, to estimate how different passes interact with each
other in the context of the target application. OPTRAN then uses this
information to choose the optimal combination of passes that maximizes the
target application's fidelity when run on the actual device. Our experiments on
IBM machines show that OPTRAN improves fidelity by 87.66% of the maximum
possible limit over the baseline used by IBM Qiskit. We also propose low-cost
variants of OPTRAN, called OPTRAN-E-3 and OPTRAN-E-1 that improve fidelity by
78.33% and 76.66% of the maximum permissible limit over the baseline, at a
58.33% and 69.44% reduction in cost compared to OPTRAN, respectively.
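OPTRAN's selection loop, scoring each candidate pass set on a classically simulable Clifford proxy of the target circuit and keeping the best combination, can be sketched as follows. The pass names and the toy fidelity scorer are invented stand-ins for a real transpiler and device run:

```python
import itertools

def select_pass_set(candidate_passes, proxy_fidelity, max_set_size=3):
    """OPTRAN-style selection sketch: score every pass combination on a
    proxy and keep the best. In OPTRAN the scorer would compile and run
    a Clifford version of the target circuit, whose ideal output is
    classically simulable; here it is any user-supplied function."""
    best_set, best_score = (), 0.0
    for k in range(1, max_set_size + 1):
        for pass_set in itertools.combinations(candidate_passes, k):
            score = proxy_fidelity(pass_set)
            if score > best_score:
                best_set, best_score = pass_set, score
    return best_set, best_score

# Toy stand-in for a Clifford-proxy run. Pass names and numbers are
# invented: the point is that passes interact, so the best single pass
# need not belong to the best combination.
def toy_fidelity(pass_set):
    base = {"map_A": 0.60, "map_B": 0.55, "route_X": 0.50}
    score = max(base[p] for p in pass_set)
    if set(pass_set) == {"map_B", "route_X"}:
        score += 0.20  # these two compose well on the proxy circuit
    return score

best, score = select_pass_set(["map_A", "map_B", "route_X"], toy_fidelity)
print(best)  # → ('map_B', 'route_X'), even though map_A is the best single pass
```

Exhaustive enumeration is exponential in the number of candidate passes; the low-cost OPTRAN-E variants described above correspond to pruning this search.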