An integrated tool-set for Control, Calibration and Characterization of quantum devices applied to superconducting qubits
Efforts to scale up quantum computation have reached a point where the principal limiting factor is not the number of qubits but the entangling-gate infidelity. However, the highly detailed system characterization required to understand the underlying errors is an arduous process and impractical with increasing chip size. Open-loop optimal control techniques allow for the improvement of gates but are limited by the models they are based on. To rectify the situation, we provide a new integrated open-source tool-set for Control, Calibration and Characterization (C3), capable of open-loop pulse optimization, model-free calibration, model fitting and refinement. We present a methodology to combine these tools to find a quantitatively accurate system model, high-fidelity gates and an approximate error budget, all based on a high-performance, feature-rich simulator. We illustrate our methods using fixed-frequency superconducting qubits, for which we learn model parameters to an accuracy of and derive a coherence-limited cross-resonance (CR) gate that achieves fidelity without the need for calibration.

Comment: Source code available at http://q-optimize.org; added reference
Exploring the MLDA benchmark on the Nevergrad platform
This work presents the integration of the recently released benchmark suite MLDA into Nevergrad, a likewise recently released platform for derivative-free optimization. Benchmarking evolutionary and other optimization methods on this collection lets us learn how algorithms deal with problems that are usually treated by standard methods such as clustering or gradient descent. Since available computation power nowadays allows much 'slower' methods to run without a noticeable performance difference, it is an open question which of these standard methods may be replaced by derivative-free and (in terms of quality) better-performing optimization algorithms. Additionally, most MLDA problems are suitable for landscape analysis and other means of understanding problem difficulty or algorithm behavior, thanks to their tangible nature. We present the open-source reimplementation of MLDA inside the Nevergrad platform and discuss some first findings from exploratory experiments with it. These include superior performance of advanced quasi-random sequences in some highly multimodal cases (even in non-parallel optimization), strong performance of CMA on the perceptron and Sammon tasks, success of DE on clustering problems, and straightforward implementations of highly competitive algorithm-selection models by means of competence maps.
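To make the notion of derivative-free optimization concrete, here is a minimal (1+1) evolution strategy with a 1/5th-style step-size rule, applied to a classic multimodal test function. This is a generic, self-contained sketch, not Nevergrad's implementation or any algorithm from the MLDA suite; the function names and the specific adaptation factors are assumptions for illustration:

```python
import math
import random

def rastrigin(x):
    """Classic multimodal benchmark; global minimum is 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def one_plus_one_es(f, dim, budget, sigma=0.5, seed=0):
    """Minimal (1+1) evolution strategy.

    Mutates the current point with Gaussian noise; keeps the candidate
    if it is no worse. The step size sigma grows on success and shrinks
    on failure (a simple 1/5th-success-style rule), so no gradients are
    ever needed.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-2, 2) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget):
        cand = [xi + rng.gauss(0, sigma) for xi in x]
        fc = f(cand)
        if fc <= fx:
            x, fx = cand, fc
            sigma *= 1.5   # success: widen the search
        else:
            sigma *= 0.9   # failure: narrow it
    return x, fx

best_x, best_f = one_plus_one_es(rastrigin, dim=2, budget=2000)
```

Because acceptance is elitist, the best value found only improves over the run; on multimodal landscapes like this one the method can still stall in a local optimum, which is exactly the regime where the abstract reports quasi-random sequences and CMA doing well.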