Search on a Hypercubic Lattice using a Quantum Random Walk: I. d>2
Random walks describe diffusion processes, where movement at every time step
is restricted to only the neighbouring locations. We construct a quantum random
walk algorithm, based on discretisation of the Dirac evolution operator
inspired by staggered lattice fermions. We use it to investigate the spatial
search problem, i.e. finding a marked vertex on a d-dimensional hypercubic
lattice. The restriction on movement hardly matters for d>2, and scaling
behaviour close to Grover's optimal algorithm (which has no restriction on
movement) can be achieved. Using numerical simulations, we optimise the
proportionality constants of the scaling behaviour, and demonstrate the
approach to that for Grover's algorithm (equivalent to the mean field theory or
the d → ∞ limit). In particular, the scaling behaviour for d=3 is only
about 25% higher than the optimal value.
Comment: 11 pages, Revtex (v2) Introduction and references expanded. Published version
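The unrestricted Grover baseline the abstract compares against can be simulated directly. A minimal sketch (illustrative only, not the authors' staggered-fermion lattice walk; the search-space size and marked index are arbitrary choices): after roughly (π/4)√N oracle-plus-diffusion iterations, the marked item is found with probability close to one.

```python
import numpy as np

# Grover search on N unstructured items: the "no movement restriction"
# baseline. Illustrative sketch; N and the marked index are arbitrary.
N = 64
marked = 17

psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition
s = psi.copy()                          # diffusion axis |s>

steps = round(np.pi / 4 * np.sqrt(N))   # Grover's optimal iteration count
for _ in range(steps):
    psi[marked] *= -1                   # oracle: flip sign of the marked item
    psi = 2 * np.dot(s, psi) * s - psi  # diffusion: reflect about |s>

p_success = psi[marked] ** 2
print(f"{steps} iterations, success probability = {p_success:.3f}")
```

The O(√N) iteration count is the optimal scaling the lattice walk approaches for d>2.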
Entropic Wasserstein Gradient Flows
This article details a novel numerical scheme to approximate gradient flows
for optimal transport (i.e. Wasserstein) metrics. These flows have proved
useful to tackle theoretically and numerically non-linear diffusion equations
that model for instance porous media or crowd evolutions. These gradient flows
define a suitable notion of weak solutions for these evolutions and they can be
approximated in a stable way using discrete flows. These discrete flows are
implicit Euler time stepping according to the Wasserstein metric. A bottleneck
of these approaches is the high computational load induced by the resolution of
each step. Indeed, this corresponds to the resolution of a convex optimization
problem involving a Wasserstein distance to the previous iterate. Following
several recent works on the approximation of Wasserstein distances, we consider
a discrete flow induced by an entropic regularization of the transportation
coupling. This entropic regularization allows one to trade the initial
Wasserstein fidelity term for a Kullback-Leibler divergence, which is easier to
deal with numerically. We show how KL proximal schemes, and in particular
Dykstra's algorithm, can be used to compute each step of the regularized flow.
The resulting algorithm is fast, parallelizable and versatile, because it
only requires multiplications by a Gibbs kernel. On Euclidean domains
discretized on a uniform grid, this corresponds to a linear filtering (for
instance a Gaussian filtering when the ground cost is the squared Euclidean
distance) which
can be computed in nearly linear time. On more general domains, such as
(possibly non-convex) shapes or on manifolds discretized by a triangular mesh,
following a recently proposed numerical scheme for optimal transport, this
Gibbs kernel multiplication is approximated by a short-time heat diffusion.
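A minimal sketch of the scheme's ingredients, under the assumption that the functional is the entropy f(p) = Σᵢ pᵢ(log pᵢ − 1), whose Wasserstein gradient flow is the heat equation. For this f the KL proximal map has the elementwise closed form q ↦ q^(1/(1+σ)), so Dykstra's algorithm reduces to Sinkhorn-like diagonal scalings. Grid size, regularization γ and step τ below are illustrative choices, not values from the article.

```python
import numpy as np

# One entropic-regularized implicit Euler (JKO) step, iterated a few times,
# for the entropy functional on a 1D uniform grid. Illustrative sketch.
n = 200
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2          # squared Euclidean ground cost
gamma = 1e-2                                # entropic regularization
tau = 1e-2                                  # implicit Euler time step
sigma = tau / gamma                         # strength of the KL proximal step
K = np.exp(-C / gamma)                      # Gibbs kernel (here: a Gaussian filter)

p = np.exp(-((x - 0.5) / 0.05) ** 2) + 1e-6  # bump + small floor (avoids underflow)
p /= p.sum()

def variance(q):
    m = (x * q).sum() / q.sum()
    return (((x - m) ** 2) * q).sum() / q.sum()

var_before = variance(p)

for _ in range(3):                          # three discrete flow steps
    u = np.ones(n)
    v = np.ones(n)
    for _ in range(300):                    # Sinkhorn-type inner iterations
        u = p / (K @ v)                     # enforce first marginal = current p
        Ktu = K.T @ u
        v = Ktu ** (-sigma / (1 + sigma))   # KL prox of the entropy functional
    p = v * (K.T @ u)                       # second marginal = next density

var_after = variance(p)
print(f"variance: {var_before:.5f} -> {var_after:.5f}, mass = {p.sum():.4f}")
```

The only expensive operations are the matrix-vector products `K @ v` and `K.T @ u`; on this uniform grid they are exactly the Gaussian filtering mentioned in the abstract, computable in nearly linear time with separable filters (a dense matvec is used here for clarity).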
Random Hamiltonian in thermal equilibrium
A framework for the investigation of disordered quantum systems in thermal
equilibrium is proposed. The approach is based on a dynamical model--which
consists of a combination of a double-bracket gradient flow and a uniform
Brownian fluctuation--that `equilibrates' the Hamiltonian into a canonical
distribution. The resulting equilibrium state is used to calculate quenched and
annealed averages of quantum observables.
Comment: 8 pages, 4 figures. To appear in DICE 2008 conference proceedings
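The deterministic part of such a dynamical model, a double-bracket gradient flow dH/dt = [H, [H, N]], can be sketched numerically: it is isospectral and drives H toward a diagonal matrix. This illustrative version omits the Brownian fluctuation, and the diagonal reference matrix N, the matrix size and the step size are assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Double-bracket flow dH/dt = [H, [H, N]] integrated with explicit Euler.
# Deterministic part only; the uniform Brownian term is omitted. Illustrative.
n = 4
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                           # random symmetric Hamiltonian
N = np.diag(np.arange(n, dtype=float))      # fixed diagonal reference matrix

offdiag = lambda M: np.linalg.norm(M - np.diag(np.diag(M)))
eigs_before = np.sort(np.linalg.eigvalsh(H))
off_before = offdiag(H)

dt = 0.001
for _ in range(10_000):
    HN = H @ N - N @ H                      # commutator [H, N]
    H = H + dt * (H @ HN - HN @ H)          # Euler step of [H, [H, N]]

eigs_after = np.sort(np.linalg.eigvalsh(H))
off_after = offdiag(H)
print("spectrum drift:", np.max(np.abs(eigs_after - eigs_before)))
print(f"off-diagonal norm: {off_before:.3f} -> {off_after:.3f}")
```

The spectrum is preserved up to the Euler discretization error, while the off-diagonal part decays; adding a small stochastic term on top of this drift is what the proposed model uses to reach a canonical distribution.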
Multilevel Richardson-Romberg extrapolation
We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which
combines the higher order bias cancellation of the Multistep Richardson-Romberg
method introduced in [Pa07] and the variance control resulting from the
stratification introduced in the Multilevel Monte Carlo (MLMC) method (see
[Hei01, Gi08]). Thus, in standard frameworks like discretization schemes of
diffusion processes, a root mean squared error (RMSE) ε > 0 can
be achieved with our MLRR estimator with a global complexity of ε^{-2} log(1/ε)
instead of ε^{-2} (log(1/ε))^2 with the standard MLMC method, at least when the weak
error E[Y_h] - E[Y_0] of the biased implemented estimator Y_h
can be expanded at any order in h and ||Y_h - Y_0||_2 = O(h^{1/2}). The MLRR estimator is then halfway between a regular MLMC
and a virtual unbiased Monte Carlo. When the strong error ||Y_h - Y_0||_2 = O(h^{β/2}), β > 1, the gain of MLRR over MLMC becomes even
more striking. We carry out numerical simulations to compare these estimators
in two settings: vanilla and path-dependent option pricing by Monte Carlo
simulation and the less classical Nested Monte Carlo simulation.
Comment: 38 pages
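For context, the plain MLMC telescoping estimator that MLRR refines can be sketched for a geometric Brownian motion, with coupled fine/coarse Euler schemes sharing the same Brownian increments. The Richardson-Romberg weights themselves are omitted, and all parameters are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Plain MLMC estimator of E[S_T] for dS = r S dt + sig S dW, Euler schemes
# with step T / 2^l. Illustrative sketch of the telescopic structure only.
S0, r, sig, T = 1.0, 0.05, 0.2, 1.0
L = 4                                       # finest level: 2^L time steps
n_samples = 40_000                          # paths per level

def euler_coupled(level, n):
    """Fine (2^level steps) and coarse (2^(level-1) steps) Euler paths driven
    by the SAME Brownian increments; returns the two terminal values."""
    m = 2 ** level
    h = T / m
    dW = rng.normal(0.0, np.sqrt(h), size=(n, m))
    Sf = np.full(n, S0)
    for k in range(m):
        Sf = Sf * (1 + r * h + sig * dW[:, k])
    if level == 0:
        return Sf, None
    Sc = np.full(n, S0)
    for k in range(m // 2):                 # one coarse step = two fine increments
        Sc = Sc * (1 + r * 2 * h + sig * (dW[:, 2 * k] + dW[:, 2 * k + 1]))
    return Sf, Sc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
fine0, _ = euler_coupled(0, n_samples)
estimate = fine0.mean()
for level in range(1, L + 1):
    fine, coarse = euler_coupled(level, n_samples)
    estimate += (fine - coarse).mean()

exact = S0 * np.exp(r * T)                  # E[S_T] for GBM, known in closed form
print(f"MLMC estimate = {estimate:.4f}, exact = {exact:.4f}")
```

The correction terms `fine - coarse` have small variance because both paths share increments; MLRR reweights the levels to cancel the bias expansion in h to higher order.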