VarSaw: Application-tailored Measurement Error Mitigation for Variational Quantum Algorithms
For potential quantum advantage, Variational Quantum Algorithms (VQAs) need
high accuracy beyond the capability of today's NISQ devices, and thus will
benefit from error mitigation. In this work we are interested in mitigating
measurement errors, which occur during qubit measurement after circuit
execution; measurement tends to be among the most error-prone operations and is
especially detrimental to VQAs. Prior work, JigSaw, has shown that measuring only small
subsets of circuit qubits at a time and collecting results across all such
subset circuits can reduce measurement errors. The qubit-qubit measurement
correlations extracted by then running the entire (global) original circuit
can be used in conjunction with the subsets to construct a high-fidelity
output distribution of the original circuit. Unfortunately, the
execution cost of JigSaw scales polynomially in the number of qubits in the
circuit, and when compounded by the number of circuits and iterations in VQAs,
the resulting execution cost quickly turns insurmountable.
To combat this, we propose VarSaw, which improves JigSaw in an
application-tailored manner, by identifying considerable redundancy in the
JigSaw approach for VQAs: spatial redundancy across subsets from different VQA
circuits and temporal redundancy across globals from different VQA iterations.
VarSaw then eliminates these forms of redundancy by commuting the subset
circuits and selectively executing the global circuits, reducing computational
cost (in terms of the number of circuits executed) over naive JigSaw for VQA by
25x on average and up to 1000x, for the same VQA accuracy. Further, it can
recover, on average, 45% of the infidelity from measurement errors in the noisy
VQA baseline. Finally, it improves fidelity by 55%, on average, over JigSaw for
a fixed computational budget. VarSaw can be accessed here:
https://github.com/siddharthdangwal/VarSaw
Comment: Appears at the International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS) 2024. First two authors
contributed equally.
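To make the JigSaw-style reconstruction that VarSaw streamlines concrete, here is a minimal sketch of combining a noisy global distribution with high-fidelity subset marginals via multiplicative reweighting. The function names and the exact update rule are illustrative, not VarSaw's actual implementation:

```python
def marginal(dist, qubits):
    """Marginalize a full bitstring distribution onto a subset of qubit positions."""
    out = {}
    for bits, p in dist.items():
        key = "".join(bits[q] for q in qubits)
        out[key] = out.get(key, 0.0) + p
    return out

def jigsaw_update(global_dist, subset_dists):
    """Reweight the global distribution so its marginals track the
    high-fidelity subset measurements (illustrative update rule).

    subset_dists: {(q0, ...): {'00': p, ...}} from subset circuits.
    """
    # Precompute each global marginal once per subset.
    global_marginals = {q: marginal(global_dist, q) for q in subset_dists}
    updated = {}
    for bits, p in global_dist.items():
        w = p
        for qubits, sub in subset_dists.items():
            key = "".join(bits[q] for q in qubits)
            g = global_marginals[qubits].get(key, 0.0)
            w *= (sub.get(key, 0.0) / g) if g > 0 else 0.0
        updated[bits] = w
    total = sum(updated.values())
    return {b: w / total for b, w in updated.items()} if total else dict(global_dist)
```

The global run supplies the qubit-qubit correlations; the subset runs correct the per-subset marginals, which is where most measurement error concentrates.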
A Synergistic Compilation Workflow for Tackling Crosstalk in Quantum Machines
Near-term quantum systems tend to be noisy. Crosstalk noise has been
recognized as one of several major types of noises in superconducting Noisy
Intermediate-Scale Quantum (NISQ) devices. Crosstalk arises from the concurrent
execution of two-qubit gates, such as \texttt{CX}, on nearby qubits, and can
significantly raise gate error rates compared with running those gates
individually. Crosstalk can be mitigated through scheduling or hardware machine
tuning. Prior studies, however, manage crosstalk at a late phase in the
compilation process, usually after hardware mapping is done, and may therefore
miss opportunities to optimize algorithm logic, routing, and crosstalk
jointly. In this paper, we push the envelope by considering
all these factors simultaneously at the very early compilation stage. We
propose a crosstalk-aware quantum program compilation framework called CQC that
can enhance crosstalk mitigation while achieving satisfactory circuit depth.
Moreover, we identify opportunities for translation from intermediate
representation to the circuit for application-specific crosstalk mitigation,
for instance, the \texttt{CX} ladder construction in variational quantum
eigensolvers (VQE). Evaluations through simulation and on real IBM-Q devices
show that our framework can significantly reduce the error rate by up to
6x, with only 60\% of the circuit depth compared to state-of-the-art gate
scheduling approaches. In particular, for VQE, we demonstrate 49\% circuit
depth reduction with 9.6\% fidelity improvement over prior art on the H4
molecule using IBMQ Guadalupe. Our CQC framework will be released on GitHub.
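The scheduling side of crosstalk mitigation can be illustrated with a toy pass; this is not CQC's algorithm, only a sketch of the underlying idea: pack two-qubit gates greedily into layers while forbidding pairs of couplers known to interfere, trading a little depth for less concurrent crosstalk.

```python
def schedule_gates(gates, crosstalk_pairs):
    """Greedily pack two-qubit gates into parallel layers, never
    co-scheduling two gates whose qubit pairs are known to interfere.

    gates: list of (q_a, q_b) tuples in program order.
    crosstalk_pairs: set of frozensets, each holding two gate qubit-pairs
                     that must not run concurrently.
    """
    layers = []
    for gate in gates:
        placed = False
        for layer in layers:
            busy = {q for g in layer for q in g}
            conflict = any(
                frozenset((gate, g)) in crosstalk_pairs for g in layer
            )
            if not conflict and not (set(gate) & busy):
                layer.append(gate)
                placed = True
                break
        if not placed:
            layers.append([gate])
    return layers
```

A real compiler would weigh this against routing and logical optimization, which is precisely the joint view the paper argues for.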
Scaling Qubit Readout with Hardware Efficient Machine Learning Architectures
Reading a qubit is a fundamental operation in quantum computing. It
translates quantum information into classical information enabling subsequent
classification to assign the qubit states `0' or `1'. Unfortunately, qubit
readout is one of the most error-prone and slowest operations on a
superconducting quantum processor. On state-of-the-art superconducting quantum
processors, readout errors can range from 1-10%. High readout accuracy is
essential for enabling high fidelity for near-term noisy quantum computers and
error-corrected quantum computers of the future.
Prior works have used machine-learning-assisted single-shot qubit-state
classification, where a deep neural network was used for more robust
discrimination by compensating for crosstalk errors. However, the neural
network size can limit the scalability of systems, especially if fast hardware
discrimination is required. This state-of-the-art baseline design cannot be
implemented on off-the-shelf FPGAs used for the control and readout of
superconducting qubits in most systems, which increases the overall readout
latency as discrimination has to be performed in software.
In this work, we propose HERQULES, a scalable approach to improve qubit-state
discrimination by using a hierarchy of matched filters in conjunction with a
significantly smaller and scalable neural network for qubit-state
discrimination. We achieve substantially higher readout accuracies (16.4%
relative improvement) than the baseline with a scalable design that can be
readily implemented on off-the-shelf FPGAs. We also show that HERQULES is more
versatile and can support shorter readout durations than the baseline design
without additional training overheads.
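A matched filter, the building block HERQULES composes hierarchically, can be sketched as follows. The calibration-and-threshold scheme shown is a generic single-qubit version for illustration, not the paper's design:

```python
import numpy as np

def build_templates(shots0, shots1):
    """Average calibration traces for |0> and |1> into a single
    difference template plus a midpoint decision threshold."""
    t0 = np.mean(shots0, axis=0)
    t1 = np.mean(shots1, axis=0)
    diff = t1 - t0  # one projection axis separates the two states
    threshold = 0.5 * (np.dot(t0, diff) + np.dot(t1, diff))
    return diff, threshold

def discriminate(trace, diff, threshold):
    """Single-shot state assignment: project the trace onto the
    difference template and compare against the threshold."""
    return int(np.dot(trace, diff) > threshold)
```

Because the classifier reduces to one dot product and a comparison, it maps naturally onto FPGA logic, which is the scalability argument the abstract makes.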
Quantum Vulnerability Analysis to Guide Robust Quantum Computing System Design
While quantum computers provide exciting opportunities for information processing, they currently suffer from noise during computation that is not fully understood. Incomplete noise models have led to discrepancies between quantum program success rate (SR) estimates and actual machine outcomes. For example, the estimated probability of success (ESP) is the state-of-the-art metric used to gauge quantum program performance. The ESP suffers from poor prediction accuracy because it fails to account for the unique combination of circuit structure, quantum state, and quantum computer properties specific to each program execution. Thus, an urgent need exists for a systematic approach that can elucidate various noise impacts and accurately and robustly predict quantum computer success rates, with an emphasis on application and device scaling. In this article, we propose quantum vulnerability analysis (QVA) to systematically quantify the error impact on quantum applications and address the gap between current SR estimators and real quantum computer results. The QVA determines the cumulative quantum vulnerability (CQV) of the target quantum computation, which quantifies the quantum error impact based on the entire algorithm applied to the target quantum machine. Evaluated with well-known benchmarks on three 27-qubit quantum computers, CQV-based success estimation outperforms the state-of-the-art ESP prediction technique, achieving on average six times less relative prediction error, and up to 30 times less in the best cases, for benchmarks with a real SR above 0.1%. A direct application of QVA is also provided that helps researchers choose a promising compilation strategy at compile time.
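The ESP baseline being criticized is straightforward to state: the product of per-operation success probabilities, with no view of circuit structure or quantum state. A minimal sketch, with illustrative error rates:

```python
def estimated_success_probability(gate_errors, readout_errors):
    """ESP: multiply (1 - error) over every gate and every measurement
    in the circuit. Note what is missing: circuit structure, the quantum
    state, and error interactions, which is exactly the gap QVA targets."""
    esp = 1.0
    for e in gate_errors:
        esp *= 1.0 - e
    for e in readout_errors:
        esp *= 1.0 - e
    return esp
```

For instance, two gates at 1% error and one readout at 2% error give ESP = 0.99 * 0.99 * 0.98, regardless of how those operations interact in the actual circuit.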
Reliability of IBM's Public Quantum Computers
One of the challenges of the current ecosystem of quantum computers (QC) is the stabilization of the coherence associated with the entanglement of the states of their inner qubits. In this empirical study, we monitor the reliability of IBM's public-access QC network on a daily basis. Each of these state-of-the-art machines has a totally different qubit association, and this entails that, for a given (same) input program, they may output a different set of probabilities for the assembly of results (including both the right and the wrong ones). Although we focus on the computing structure provided by the "Big Blue" company, our survey can be easily transferred to other currently available quantum mainframes. In more detail, we probe these quantum processors with an ad hoc designed, computationally demanding quaternary search algorithm. This quantum program is executed every 24 hours (for nearly 100 days) and its goal is to push to the limit the operational capacity of this novel and genuine type of equipment. Next, we perform a comparative analysis of the obtained results according to the singularities of each computer and over the total number of executions. In addition, we subsequently apply (for 50 days) an improvement filtering proposed by IBM to perform noise mitigation on the results obtained. The Yorktown 5-qubit computer reaches noise filtering of up to 33% in one day; that is, a 90% confidence level is reached in the expected results. From our continuous and long-term tests, we derive that room still remains for improving quantum calculators in order to guarantee enough confidence in the returned outcomes.
Classical Optimizers for Noisy Intermediate-Scale Quantum Devices
We present a collection of optimizers tuned for usage on Noisy Intermediate-Scale Quantum (NISQ) devices. Optimizers have a range of applications in quantum computing, including the Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization (QAOA) algorithms. They are also used for calibration tasks, hyperparameter tuning, in machine learning, etc. We analyze the efficiency and effectiveness of different optimizers in a VQE case study. VQE is a hybrid algorithm, with a classical minimizer step driving the next evaluation on the quantum processor. While most results to date concentrated on tuning the quantum VQE circuit, we show that, in the presence of quantum noise, the classical minimizer step needs to be carefully chosen to obtain correct results. We explore state-of-the-art gradient-free optimizers capable of handling noisy, black-box, cost functions and stress-test them using a quantum circuit simulation environment with noise injection capabilities on individual gates. Our results indicate that specifically tuned optimizers are crucial to obtaining valid science results on NISQ hardware, and will likely remain necessary even for future fault tolerant circuits
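As an example of the kind of derivative-free, noise-tolerant minimizer such a collection contains, here is a minimal SPSA loop. The hyperparameters are illustrative defaults, not the tuned settings the study advocates, and SPSA itself stands in for whichever optimizers the collection actually ships:

```python
import numpy as np

def spsa_minimize(cost, x0, iters=500, a=0.1, c=0.1, seed=0):
    """Simultaneous-perturbation stochastic approximation: estimates a
    descent direction from just two noisy cost evaluations per step,
    which keeps the quantum-circuit evaluation count low."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602  # standard SPSA gain-decay exponents
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # Two-point estimate along a random +/-1 perturbation direction.
        g = (cost(x + ck * delta) - cost(x - ck * delta)) / (2 * ck) * delta
        x -= ak * g
    return x
```

On a noisy quadratic, the decaying gains average out evaluation noise, which is the property that makes this family of optimizers suitable for NISQ cost functions.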
- …