Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics
Quantum computing is powerful because unitary operators describing the
time-evolution of a quantum system have exponential size in terms of the number
of qubits present in the system. We develop a new "Singular value
transformation" algorithm capable of harnessing this exponential advantage,
that can apply polynomial transformations to the singular values of a block of
a unitary, generalizing the optimal Hamiltonian simulation results of Low and
Chuang. The proposed quantum circuits have a very simple structure, often give
rise to optimal algorithms, and have appealing constant factors, while usually
using only a constant number of ancilla qubits. We show that singular value
transformation leads to novel algorithms. We give an efficient solution to a
certain "non-commutative" measurement problem and propose a new method for
singular value estimation. We also show how to exponentially improve the
complexity of implementing fractional queries to unitaries with a gapped
spectrum. Finally, as a quantum machine learning application we show how to
efficiently implement principal component regression. "Singular value
transformation" is conceptually simple and efficient, and leads to a unified
framework of quantum algorithms incorporating a variety of quantum speed-ups.
We illustrate this by showing how it generalizes a number of prominent quantum
algorithms, including: optimal Hamiltonian simulation, implementing the
Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude
amplification, robust oblivious amplitude amplification, fast QMA
amplification, fast quantum OR lemma, certain quantum walk results and several
quantum machine learning algorithms. In order to exploit the strengths of the
presented method, it is useful to know its limitations too; therefore we also
prove a lower bound on the efficiency of singular value transformation, which
often gives optimal bounds.
Comment: 67 pages, 1 figure
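A small classical analogue may help make the abstract's central object concrete: any matrix A with spectral norm at most 1 can sit as a block of a unitary, and the transformation applies a polynomial to A's singular values. The sketch below is illustrative only (a plain NumPy SVD, not the paper's quantum circuits; all names are ours) and checks the odd polynomial p(x) = x^3:

```python
import numpy as np

def svt(A, poly):
    """Apply poly to the singular values of A: returns U diag(p(s)) V^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(poly(s)) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A /= np.linalg.norm(A, 2)        # scale so the singular values lie in [0, 1]

# For the odd polynomial p(x) = x^3, singular value transformation
# coincides with the matrix product A A^T A.
cubed = svt(A, lambda s: s**3)
print(np.allclose(cubed, A @ A.T @ A))  # True
```

Writing A = U S V^T shows why: A A^T A = U S^3 V^T, i.e. the same factors with each singular value cubed.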
Practical parallel self-testing of Bell states via magic rectangles
Self-testing is a method to verify that one has a particular quantum state
from purely classical statistics. For practical applications, such as
device-independent delegated verifiable quantum computation, it is crucial that
one self-tests multiple Bell states in parallel while keeping the quantum
capabilities required of one side to a minimum. In this work, we use magic
rectangle games (generalizations of the magic square game) to obtain a
self-test for Bell states in which one side needs only to measure
single-qubit Pauli observables. The protocol requires small input sizes
(constant for Alice) and is robust, with robustness quantified by the
closeness of the observed correlations to the ideal (perfect) ones. To
achieve the desired self-test, we introduce a one-side-local quantum
strategy for the magic square game that wins with certainty, generalize this
strategy to the family of magic rectangle games, and supplement these
nonlocal games with extra
check rounds (of single observables and of pairs of observables).
Comment: 29 pages, 6 figures; v3 minor corrections and changes in response to
comments
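The magic square game underlying these self-tests has classical value 8/9, a fact a short brute-force check makes concrete. The sketch below is ours, not the paper's one-side-local quantum strategy: Alice deterministically fills rows whose product is +1, Bob fills columns whose product is -1, and with shared-randomness-free deterministic strategies they win a question pair exactly when they agree on the intersection cell.

```python
from itertools import product

def alice_grids():
    # each row has product +1: the third entry is forced by the first two
    row_opts = [(a, b, a * b) for a in (1, -1) for b in (1, -1)]
    return [tuple(rows[r][c] for r in range(3) for c in range(3))
            for rows in product(row_opts, repeat=3)]

def bob_grids():
    # each column has product -1: again the third entry is forced
    col_opts = [(a, b, -a * b) for a in (1, -1) for b in (1, -1)]
    return [tuple(cols[c][r] for r in range(3) for c in range(3))
            for cols in product(col_opts, repeat=3)]

# Alice answers her row, Bob his column; they win a question pair exactly
# when their grids agree on the intersection cell.
best = max(sum(a[i] == b[i] for i in range(9))
           for a in alice_grids() for b in bob_grids())
print(best)  # 8: classical value 8/9, while the quantum strategy wins all 9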
A regularized nonnegative canonical polyadic decomposition algorithm with preprocessing for 3D fluorescence spectroscopy
We consider blind source separation in chemical analysis, focusing on the 3D fluorescence spectroscopy framework. We present an alternative method to process the Fluorescence Excitation-Emission Matrices (FEEM): first, a preprocessing step is applied to eliminate the Raman and Rayleigh scattering peaks that clutter the FEEM. To improve its robustness against possible improper settings, we suggest combining the classical Zepp method with a morphological image filtering technique. Then, in a second stage, the Canonical Polyadic (CP, or Candecomp/Parafac) decomposition of a nonnegative 3-way array has to be computed. In the fluorescence spectroscopy context, the constituent vectors of the loading matrices should be nonnegative (since they stand for spectra and concentrations). Thus, we suggest a new NonNegative third-order CP decomposition algorithm (NNCP) based on a nonlinear conjugate gradient optimisation algorithm with regularization terms and periodic restarts. Computer simulations performed on real experimental data are provided to demonstrate the effectiveness and robustness of the whole processing chain and to validate the approach.
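As a rough illustration of the second stage, here is a much-simplified nonnegative CP decomposition of a 3-way array. This is our sketch, not the paper's algorithm: it uses projected alternating least squares rather than the regularized nonlinear conjugate gradient the authors propose.

```python
import numpy as np

def kr(B, C):
    # column-wise Khatri-Rao product; rows indexed by (j, k) with j slow
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def nncp_als(T, rank, iters=200, seed=0):
    """Projected ALS for a nonnegative CP decomposition T ~ [[A, B, C]]."""
    rng = np.random.default_rng(seed)
    F = [rng.random((n, rank)) for n in T.shape]
    for _ in range(iters):
        for m in range(3):
            M = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)  # mode-m unfolding
            others = [F[i] for i in range(3) if i != m]
            Z = kr(others[0], others[1])
            # least-squares solve of F[m] @ Z.T ~ M, clipped to stay nonnegative
            F[m] = np.clip(M @ np.linalg.pinv(Z.T), 0, None)
    return F

# rank-2 nonnegative test tensor built from known factors
rng = np.random.default_rng(1)
A, B, C = (rng.random((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

Ah, Bh, Ch = nncp_als(T, rank=2)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)) / np.linalg.norm(T)
print(round(err, 6))
```

The clipping step is the crudest way to enforce nonnegativity; the paper's regularization and periodic restarts address the convergence issues such projections can cause.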
Parameter Estimation of Gaussian Stationary Processes using the Generalized Method of Moments
We consider the class of all stationary Gaussian processes with explicit
parametric spectral density. Under some conditions on the autocovariance
function, we define a GMM estimator that satisfies consistency and asymptotic
normality, using the Breuer-Major theorem and previous results on ergodicity.
This result is applied to the joint estimation of the three parameters of a
stationary fractional Ornstein-Uhlenbeck (fOU) process, driven by a fractional
Brownian motion. The asymptotic normality of its GMM estimator applies for any
H in (0,1) and under some restrictions on the remaining parameters. A numerical
study is performed in the fOU case to illustrate the estimator's practical
performance when the number of data points is moderate.
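A toy moment-matching example in the spirit of the paper (our sketch, for the non-fractional case H = 1/2 only): the parameters (theta, sigma) of a stationary OU process observed on a unit grid can be recovered by matching the empirical autocovariances at lags 0 and 1, since gamma(0) = sigma^2/(2 theta) and gamma(1)/gamma(0) = exp(-theta).

```python
import numpy as np

def simulate_ou(theta, sigma, n, seed=0):
    # exact AR(1) discretization of a stationary OU process on integer times
    rng = np.random.default_rng(seed)
    phi = np.exp(-theta)
    var = sigma**2 / (2 * theta)            # stationary variance
    eps = rng.normal(0.0, np.sqrt(var * (1 - phi**2)), n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(var))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def mom_estimate(x):
    x = x - x.mean()
    g0 = np.mean(x * x)                      # empirical gamma(0)
    g1 = np.mean(x[1:] * x[:-1])             # empirical gamma(1)
    theta_hat = -np.log(g1 / g0)             # gamma(1)/gamma(0) = exp(-theta)
    sigma_hat = np.sqrt(2 * theta_hat * g0)  # gamma(0) = sigma^2 / (2 theta)
    return theta_hat, sigma_hat

x = simulate_ou(theta=0.5, sigma=1.0, n=200_000)
theta_hat, sigma_hat = mom_estimate(x)
print(round(theta_hat, 2), round(sigma_hat, 2))  # close to 0.5 and 1.0
```

The fractional case is harder precisely because the driving noise is no longer Markov, which is where the Breuer-Major machinery cited in the abstract comes in.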
A sparse-grid isogeometric solver
Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS
as a basis for the approximation of the solution of PDEs. In this work, we
investigate the extent to which IGA solvers can benefit from the so-called
sparse-grid construction in its combination technique form, which was first
introduced in the early 1990s in the context of the approximation of
high-dimensional PDEs. The tests that we report show that, in accordance with the
literature, a sparse-grid construction can indeed be useful if the solution of
the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the
case of non-smooth solutions when some a priori knowledge of the location of
the singularities of the solution can be exploited to devise suitable
non-equispaced meshes. Finally, we remark that sparse grids can be seen as a
simple way to parallelize pre-existing serial IGA solvers, which can be
beneficial in many practical situations.
Comment: updated version after review
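The combination technique the abstract refers to can be illustrated on a problem far simpler than an IGA solve (our example, not the paper's solver): the sparse approximation is a signed sum of cheap anisotropic full-grid results, here 2D trapezoidal quadratures.

```python
import numpy as np

def trap2d(f, lx, ly):
    # trapezoidal rule on a tensor-product grid of levels (lx, ly) over [0,1]^2
    x = np.linspace(0.0, 1.0, 2**lx + 1)
    y = np.linspace(0.0, 1.0, 2**ly + 1)
    wx = np.full(x.size, 1.0 / 2**lx)
    wx[[0, -1]] /= 2
    wy = np.full(y.size, 1.0 / 2**ly)
    wy[[0, -1]] /= 2
    X, Y = np.meshgrid(x, y, indexing='ij')
    return wx @ f(X, Y) @ wy

def combination(f, L):
    # Q_L = sum_{lx+ly=L} Q_{lx,ly} - sum_{lx+ly=L-1} Q_{lx,ly}
    return (sum(trap2d(f, l, L - l) for l in range(L + 1))
            - sum(trap2d(f, l, L - 1 - l) for l in range(L)))

f = lambda x, y: np.exp(x + y)   # smooth integrand, the favourable case
exact = (np.e - 1.0)**2
print(abs(combination(f, 8) - exact))  # small error with far fewer points
```

Each anisotropic grid in the sum is independent of the others, which is exactly why the technique parallelizes a pre-existing serial solver so easily, and why smoothness of the solution matters: the signed sum relies on the mixed-difference terms decaying.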
Higher order influence functions and minimax estimation of nonlinear functionals
We present a theory of point and interval estimation for nonlinear
functionals in parametric, semi-, and non-parametric models based on higher
order influence functions (Robins (2004), Section 9; Li et al. (2004), Tchetgen
et al. (2006), Robins et al. (2007)). Higher order influence functions are
higher order U-statistics. Our theory extends the first order semiparametric
theory of Bickel et al. (1993) and van der Vaart (1991) by incorporating the
theory of higher order scores considered by Pfanzagl (1990), Small and McLeish
(1994) and Lindsay and Waterman (1996). The theory reproduces many previous
results, produces new non-√n results, and opens up the ability to
perform optimal non-√n inference in complex high-dimensional models. We
present novel rate-optimal point and interval estimators for various
functionals of central importance to biostatistics in settings in which
estimation at the expected 1/√n rate is not possible, owing to the curse
of dimensionality. We also show that our higher order influence functions have
a multi-robustness property that extends the double robustness property of
first order influence functions described by Robins and Rotnitzky (2001) and
van der Laan and Robins (2003).
Comment: Published at http://dx.doi.org/10.1214/193940307000000527 in the IMS
Collections (http://www.imstat.org/publications/imscollections.htm) by the
Institute of Mathematical Statistics (http://www.imstat.org)
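The abstract's statement that higher order influence functions are higher order U-statistics has a familiar low-tech instance (our illustration, not an estimator from the paper): the sample variance is exactly a second order U-statistic, the average of the symmetric kernel h(x, y) = (x - y)^2 / 2 over all distinct pairs.

```python
from itertools import combinations
import random
import statistics

random.seed(1)
xs = [random.gauss(0, 2) for _ in range(400)]

# Second order U-statistic for the variance: average the symmetric kernel
# h(x, y) = (x - y)^2 / 2 over all distinct pairs of observations.
u_stat = statistics.mean((a - b) ** 2 / 2 for a, b in combinations(xs, 2))
print(round(u_stat, 3) == round(statistics.variance(xs), 3))  # True
```

The theory in the paper concerns kernels of order well beyond two, chosen to cancel higher order bias terms that first order (single-sum) influence functions cannot reach.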
Generalized Integrated Brownian Fields for Simulation Metamodeling
We introduce a novel class of Gaussian random fields (GRFs), called generalized integrated Brownian fields (GIBFs), focusing on the use of GIBFs for Gaussian process regression in deterministic and stochastic simulation metamodeling. We build GIBFs from the well-known Brownian motion and discuss several of their properties, including differentiability that can differ in each coordinate, no mean reversion, and the Markov property. We explain why we desire to use GRFs with these properties and provide formal definitions of mean reversion and the Markov property for real-valued, differentiable random fields. We show how to use GIBFs with stochastic kriging, covering trend modeling and parameter fitting, discuss their approximation capability, and show that the resulting metamodel also has differentiability that can differ in each coordinate. Last, we use several examples to demonstrate superior prediction capability as compared with the GRFs corresponding to the Gaussian and Matérn covariance functions.
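Since GIBFs are built from Brownian motion, the simplest member of the family already shows the kriging mechanics. The sketch below (ours, one-dimensional, zero prior mean; the paper's stochastic kriging adds trend modeling and noise handling) uses the Brownian-motion covariance k(s, t) = min(s, t):

```python
import numpy as np

def bm_cov(s, t):
    # Brownian-motion covariance k(s, t) = min(s, t)
    return np.minimum.outer(s, t)

def krige(x_obs, y_obs, x_new, nugget=1e-10):
    # Gaussian process (simple kriging) predictor with zero prior mean
    K = bm_cov(x_obs, x_obs) + nugget * np.eye(x_obs.size)
    k = bm_cov(x_new, x_obs)
    return k @ np.linalg.solve(K, y_obs)

x_obs = np.array([0.2, 0.5, 0.9])
y_obs = np.array([0.1, -0.3, 0.4])
pred = krige(x_obs, y_obs, x_obs)
print(np.allclose(pred, y_obs, atol=1e-6))  # True: the predictor interpolates
```

With this covariance the predictor is piecewise linear and non-differentiable, illustrating why the paper integrates Brownian motion (possibly a different number of times per coordinate) to control smoothness direction by direction.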