An Improved Classical Singular Value Transformation for Quantum Machine Learning
We study quantum speedups in quantum machine learning (QML) by analyzing the
quantum singular value transformation (QSVT) framework. QSVT, introduced by
[GSLW, STOC'19, arXiv:1806.01838], unifies all major types of quantum speedup;
in particular, a wide variety of QML proposals are applications of QSVT on
low-rank classical data. We challenge these proposals by providing a classical
algorithm that matches the performance of QSVT in this regime up to a small
polynomial overhead.
We show that, given a matrix $A$, a vector $b$, a bounded degree-$d$ polynomial $p$, and linear-time
pre-processing, we can output a description of a vector $v$ such that $\|v - p(A)b\| \le \varepsilon\|b\|$ in
$\widetilde{O}(d^{11}\|A\|_{\mathrm{F}}^4/(\varepsilon^2\|A\|^4))$ time. This improves upon the
best known classical algorithm [CGLLTW, STOC'20, arXiv:1910.06151], which
requires $\widetilde{O}(d^{22}\|A\|_{\mathrm{F}}^6/(\varepsilon^6\|A\|^6))$ time, and narrows the gap with QSVT, which, after linear-time
pre-processing to load input into a quantum-accessible memory, can estimate the
magnitude of an entry of $p(A)b$ to $\varepsilon\|b\|$ error in $\widetilde{O}(d\|A\|_{\mathrm{F}}/(\varepsilon\|A\|))$
time.
Our key insight is to combine the Clenshaw recurrence, an iterative method
for computing matrix polynomials, with sketching techniques to simulate QSVT
classically. We introduce several new classical techniques in this work,
including (a) a non-oblivious matrix sketch for approximately preserving
bi-linear forms, (b) a new stability analysis for the Clenshaw recurrence, and
(c) a new technique to bound arithmetic progressions of the coefficients
appearing in the Chebyshev series expansion of bounded functions, each of which
may be of independent interest.
Comment: 62 pages. v3: fixed bug, runtime exponent now 11 instead of 9; v2: revised abstract to clarify result.
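To make the Clenshaw recurrence mentioned in the abstract above concrete, here is a minimal Python sketch of the plain (unsketched) recurrence for applying a Chebyshev-series polynomial of a square matrix to a vector. It is not the paper's algorithm: the paper handles singular value transformations of general matrices and replaces the exact matrix-vector products below with sketched approximations; the function name `clenshaw_matrix_poly` and the toy check are illustrative assumptions.

```python
import numpy as np

def clenshaw_matrix_poly(A, b, cheb_coeffs):
    """Apply p(A) to b via the Clenshaw recurrence, where p is given by its
    Chebyshev coefficients c_0, ..., c_d (p = sum_k c_k T_k) and the spectrum
    of the square matrix A is assumed to lie in [-1, 1]."""
    d = len(cheb_coeffs) - 1
    u_next = np.zeros_like(b, dtype=float)   # plays the role of u_{k+2}
    u_curr = np.zeros_like(b, dtype=float)   # plays the role of u_{k+1}
    # Backward recurrence: u_k = c_k * b + 2 A u_{k+1} - u_{k+2}, for k = d, ..., 1
    for k in range(d, 0, -1):
        u_prev = cheb_coeffs[k] * b + 2.0 * (A @ u_curr) - u_next
        u_next, u_curr = u_curr, u_prev
    # Final step uses A once (not 2A): p(A) b = c_0 b + A u_1 - u_2
    return cheb_coeffs[0] * b + A @ u_curr - u_next

# Sanity check with p = T_2, so p(A) b should equal (2 A^2 - I) b.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = (M + M.T) / (2 * np.linalg.norm(M + M.T, 2))  # symmetric, spectrum in [-1, 1]
b = rng.standard_normal(50)
v = clenshaw_matrix_poly(A, b, [0.0, 0.0, 1.0])
print(np.linalg.norm(v - (2 * A @ (A @ b) - b)))  # should be ~1e-15
```

The matrix-vector product `A @ u_curr` is the expensive step; roughly speaking, the classical simulation replaces it with a sampled approximation, and the paper's contributions (the non-oblivious bilinear-form sketch and the new stability analysis) control how the resulting errors accumulate across the d iterations.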
Reduced Order and Surrogate Models for Gravitational Waves
We present an introduction to some of the state of the art in reduced order
and surrogate modeling in gravitational wave (GW) science. Approaches that we
cover include Principal Component Analysis, Proper Orthogonal Decomposition,
the Reduced Basis approach, the Empirical Interpolation Method, Reduced Order
Quadratures, and Compressed Likelihood evaluations. We divide the review into
three parts: representation/compression of known data, predictive models, and
data analysis. The targeted audience is that of practitioners in GW
science, a field in which building predictive models and data analysis tools
that are both accurate and fast to evaluate is necessary yet can be
challenging, especially when dealing with large amounts of data and intensive
computations. As such, practical presentations and, sometimes, heuristic
approaches are preferred here over rigor when the latter is not available. This
review aims to be self-contained, within reasonable page limits, and to require
little previous knowledge (at the undergraduate level) in mathematics,
scientific computing, and other disciplines. Emphasis is placed on optimality,
as well as the curse of dimensionality and approaches that might have the
promise of beating it. We also review most of the state of the art of GW
surrogates. Some numerical algorithms, conditioning details, scalability,
parallelization and other practical points are discussed. The approaches
presented are to a large extent non-intrusive and data-driven and can therefore
be applied to other disciplines. We close with open challenges in
high-dimensional surrogates, which are not unique to GW science.
Comment: Invited article for Living Reviews in Relativity. 93 pages.
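As a minimal illustration of the representation/compression step (the PCA/POD part of the review above), here is a hedged Python sketch, not taken from the article: it builds a linear basis from a training set of synthetic waveforms via an SVD and projects a waveform onto it. The function names (`pod_basis`, `project`), the energy-tolerance rule, and the damped-sinusoid toy family are illustrative assumptions; the greedy reduced-basis and empirical-interpolation methods discussed in the review refine this basic idea.

```python
import numpy as np

def pod_basis(training_waveforms, tol=1e-8):
    """Build a POD/PCA basis from training waveforms stacked as rows
    (shape: n_waveforms x n_samples). Keep the leading right singular
    vectors needed to capture a (1 - tol) fraction of the total energy."""
    _, s, Vt = np.linalg.svd(training_waveforms, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return Vt[:r]                        # rows are orthonormal basis vectors

def project(waveform, basis):
    """Compress a waveform to r coefficients and reconstruct it."""
    coeffs = basis @ waveform            # reduced representation
    return coeffs, basis.T @ coeffs      # coefficients and reconstruction

# Toy family: damped sinusoids parametrized by frequency.
t = np.linspace(0.0, 1.0, 2048)
training = np.array([np.exp(-3.0 * t) * np.sin(2.0 * np.pi * f * t)
                     for f in np.linspace(20.0, 40.0, 200)])
B = pod_basis(training)
_, recon = project(training[100], B)
print(B.shape[0], np.max(np.abs(recon - training[100])))
```

The number of retained basis vectors (the printed `B.shape[0]`) is typically far smaller than the number of time samples, which is the compression the review exploits; predictive surrogate models then fit the projection coefficients as functions of the physical parameters.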