Learning a quantum computer's capability using convolutional neural networks
The computational power of contemporary quantum processors is limited by
hardware errors that cause computations to fail. In principle, each quantum
processor's computational capabilities can be described with a capability
function that quantifies how well a processor can run each possible quantum
circuit (i.e., program), as a map from circuits to the processor's success
rates on those circuits. However, capability functions are typically unknown
and challenging to model, as the particular errors afflicting a specific
quantum processor are a priori unknown and difficult to completely
characterize. In this work, we investigate using artificial neural networks to
learn an approximation to a processor's capability function. We explore how to
define the capability function, and we explain how data for training neural
networks can be efficiently obtained for a capability function defined using
process fidelity. We then investigate using convolutional neural networks to
model a quantum computer's capability. Using simulations, we show that
convolutional neural networks can accurately model a processor's capability
when that processor experiences gate-dependent, time-dependent, and
context-dependent stochastic errors. We then discuss some challenges to
creating useful neural network capability models for experimental processors,
such as generalizing beyond training distributions and modelling the effects of
coherent errors. Lastly, we apply our neural networks to model the capabilities
of cloud-access quantum computing systems, obtaining moderate prediction
accuracy (average absolute error around 2-5%).
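The core idea of feeding circuits to a convolutional network is to encode each circuit as an image-like tensor. The sketch below illustrates one such encoding under assumptions of our own: the gate set, the channel-per-gate-type layout, and the `(gate, qubit, time_step)` circuit representation are all hypothetical, not the paper's exact scheme.

```python
import numpy as np

# Illustrative encoding: one channel per gate type, with axes
# (channel, qubit, time step). A CNN can then regress the capability
# function (e.g., process fidelity) from this tensor.
GATE_CHANNELS = {"X": 0, "H": 1, "CZ": 2}  # assumed gate set

def encode_circuit(circuit, n_qubits, depth):
    """circuit: list of (gate_name, qubit, time_step) tuples."""
    tensor = np.zeros((len(GATE_CHANNELS), n_qubits, depth))
    for gate, qubit, t in circuit:
        tensor[GATE_CHANNELS[gate], qubit, t] = 1.0
    return tensor

# Example: a depth-3 circuit on 2 qubits.
circuit = [("H", 0, 0), ("CZ", 0, 1), ("X", 1, 2)]
x = encode_circuit(circuit, n_qubits=2, depth=3)
print(x.shape)  # (3, 2, 3)
```

Because the encoding preserves locality in qubit index and time, convolutional filters can pick up the gate-dependent and context-dependent error patterns the abstract describes.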
Goal-Oriented Bayesian Optimal Experimental Design for Nonlinear Models using Markov Chain Monte Carlo
Optimal experimental design (OED) provides a systematic approach to quantify
and maximize the value of experimental data. Under a Bayesian approach,
conventional OED maximizes the expected information gain (EIG) on model
parameters. However, we are often interested in not the parameters themselves,
but predictive quantities of interest (QoIs) that depend on the parameters in a
nonlinear manner. We present a computational framework of predictive
goal-oriented OED (GO-OED) suitable for nonlinear observation and prediction
models, which seeks the experimental design providing the greatest EIG on the
QoIs. In particular, we propose a nested Monte Carlo estimator for the QoI EIG,
featuring Markov chain Monte Carlo for posterior sampling and kernel density
estimation for evaluating the posterior-predictive density and its
Kullback-Leibler divergence from the prior-predictive. The GO-OED design is
then found by maximizing the EIG over the design space using Bayesian
optimization. We demonstrate the effectiveness of the overall nonlinear GO-OED
method, and illustrate its differences versus conventional non-GO-OED, through
various test problems and an application of sensor placement for source
inversion in a convection-diffusion field.
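The nested Monte Carlo estimator described above can be sketched on a toy problem: an outer loop samples synthetic data, an inner Markov chain Monte Carlo run draws posterior samples, and kernel density estimation evaluates the KL divergence of the posterior-predictive QoI density from the prior-predictive one. Everything here is an illustrative assumption, not the paper's models: the prior, the linear observation model `y = d*theta + noise`, and the nonlinear QoI `q = theta**2` are chosen only to make the estimator concrete.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy model (assumed for illustration):
# theta ~ N(0, 1); observation y = d*theta + N(0, 0.5^2);
# nonlinear QoI q = theta**2 (so parameter EIG != QoI EIG).
def log_prior(theta):
    return -0.5 * theta**2

def log_like(y, theta, d, sigma=0.5):
    return -0.5 * ((y - d * theta) / sigma) ** 2

def metropolis(y, d, n=2000, step=0.8):
    """Random-walk Metropolis sampler for the posterior p(theta | y, d)."""
    theta = 0.0
    lp = log_prior(theta) + log_like(y, theta, d)
    samples = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_prior(prop) + log_like(y, prop, d)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[n // 2:])  # discard burn-in

def qoi_eig(d, n_outer=30):
    """Nested MC estimate of the expected KL divergence between the
    posterior-predictive and prior-predictive QoI densities."""
    prior_q = rng.standard_normal(4000) ** 2
    prior_kde = gaussian_kde(prior_q)
    total = 0.0
    for _ in range(n_outer):
        # Outer loop: simulate data from the prior-predictive.
        theta_true = rng.standard_normal()
        y = d * theta_true + 0.5 * rng.standard_normal()
        # Inner loop: MCMC posterior samples, pushed through the QoI map.
        post_q = metropolis(y, d) ** 2
        post_kde = gaussian_kde(post_q)
        # MC estimate of KL(posterior-predictive || prior-predictive).
        total += np.mean(np.log(post_kde(post_q) / prior_kde(post_q)))
    return total / n_outer
```

In the full method this estimator is the objective handed to a Bayesian optimizer over the design space; in this one-dimensional toy, larger `|d|` yields a more informative experiment and hence a larger QoI EIG.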