
    Search on a Hypercubic Lattice using a Quantum Random Walk: I. d>2

    Random walks describe diffusion processes, where movement at every time step is restricted to only the neighbouring locations. We construct a quantum random walk algorithm, based on discretisation of the Dirac evolution operator inspired by staggered lattice fermions. We use it to investigate the spatial search problem, i.e. finding a marked vertex on a $d$-dimensional hypercubic lattice. The restriction on movement hardly matters for $d>2$, and scaling behaviour close to Grover's optimal algorithm (which has no restriction on movement) can be achieved. Using numerical simulations, we optimise the proportionality constants of the scaling behaviour, and demonstrate the approach to that for Grover's algorithm (equivalent to the mean field theory or the $d\to\infty$ limit). In particular, the scaling behaviour for $d=3$ is only about 25% higher than the optimal $d\to\infty$ value. Comment: 11 pages, RevTeX. (v2) Introduction and references expanded. Published version.
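
    The lattice-search setup the abstract discusses can be illustrated with a minimal sketch. The code below uses a generic coined discrete-time quantum walk (Grover coin, flip-flop shift) on a small periodic $d=3$ lattice rather than the paper's staggered-fermion discretisation of the Dirac operator, and the lattice size, marked vertex, and step count are arbitrary demonstration values.

import numpy as np

# Minimal sketch: coined discrete-time quantum walk search on a small d = 3
# periodic lattice with a Grover coin and flip-flop shift. This is NOT the
# paper's staggered-fermion (Dirac) walk; it only illustrates the generic
# spatial-search setup whose scaling the abstract analyses.

L, d = 8, 3                       # lattice side and dimension (demo values)
N, C = L**d, 2 * d                # number of sites, coin dimension (+-x, +-y, +-z)
marked = 0                        # flattened index of the marked vertex (arbitrary)

# psi[x, c]: amplitude at site x (flattened index) with coin direction c,
# initialised to the uniform superposition.
psi = np.full((N, C), 1.0 / np.sqrt(N * C), dtype=complex)

grover_coin = 2.0 / C * np.ones((C, C)) - np.eye(C)            # Grover diffusion coin
coords = np.array(np.unravel_index(np.arange(N), (L,) * d)).T  # site coordinates

def step(psi):
    out = psi @ grover_coin       # Grover coin at every site (the coin is symmetric)
    out[marked] = -psi[marked]    # marked site gets -I instead (the oracle)
    shifted = np.empty_like(out)
    for c in range(C):            # flip-flop shift: |x, c> -> |x + e_c, c-bar>
        axis, sign = divmod(c, 2)
        move = np.zeros(d, dtype=int)
        move[axis] = 1 if sign == 0 else -1
        new_idx = np.ravel_multi_index(((coords + move) % L).T, (L,) * d)
        shifted[new_idx, axis * 2 + (1 - sign)] = out[:, c]
    return shifted

for t in range(200):              # illustrative number of steps, not optimised
    psi = step(psi)
print("P(marked vertex):", np.sum(np.abs(psi[marked])**2))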

    Entropic Wasserstein Gradient Flows

    This article details a novel numerical scheme to approximate gradient flows for optimal transport (i.e. Wasserstein) metrics. These flows have proved useful to tackle, both theoretically and numerically, non-linear diffusion equations that model for instance porous media or crowd evolutions. These gradient flows define a suitable notion of weak solutions for these evolutions, and they can be approximated in a stable way using discrete flows. These discrete flows are implicit Euler time steps with respect to the Wasserstein metric. A bottleneck of these approaches is the high computational load induced by solving each step, which amounts to a convex optimization problem involving a Wasserstein distance to the previous iterate. Following several recent works on the approximation of Wasserstein distances, we consider a discrete flow induced by an entropic regularization of the transportation coupling. This entropic regularization allows one to trade the initial Wasserstein fidelity term for a Kullback-Leibler divergence, which is easier to deal with numerically. We show how KL proximal schemes, and in particular Dykstra's algorithm, can be used to compute each step of the regularized flow. The resulting algorithm is fast, parallelizable, and versatile, because it only requires multiplications by a Gibbs kernel. On Euclidean domains discretized on a uniform grid, this corresponds to a linear filtering (for instance a Gaussian filtering when $c$ is the squared Euclidean distance) which can be computed in nearly linear time. On more general domains, such as (possibly non-convex) shapes or manifolds discretized by a triangular mesh, following a recently proposed numerical scheme for optimal transport, this Gibbs kernel multiplication is approximated by a short-time heat diffusion.
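
    The computational primitive named above, multiplication by a Gibbs kernel $K = \exp(-c/\varepsilon)$, is easiest to see in the plain entropic optimal-transport setting. The sketch below runs Sinkhorn iterations between two one-dimensional histograms; it is not the paper's KL-proximal (Dykstra) flow, and the grid size, regularisation strength, and iteration count are illustrative choices.

import numpy as np

# Minimal sketch: entropic regularisation of optimal transport between two
# 1-D histograms via Sinkhorn iterations. The paper's scheme builds KL
# proximal (Dykstra) steps on the same primitive, multiplication by the
# Gibbs kernel K = exp(-c / epsilon); only that primitive is shown here.

n, eps = 200, 1e-2
x = np.linspace(0.0, 1.0, n)
cost = (x[:, None] - x[None, :])**2            # squared Euclidean ground cost c
K = np.exp(-cost / eps)                        # Gibbs kernel

# Two normalised histograms (arbitrary bumps) to transport between.
p = np.exp(-(x - 0.25)**2 / 0.01); p /= p.sum()
q = np.exp(-(x - 0.70)**2 / 0.02); q /= q.sum()

u = np.ones(n)
for _ in range(500):                           # Sinkhorn fixed-point iterations
    v = q / (K.T @ u)
    u = p / (K @ v)

plan = u[:, None] * K * v[None, :]             # entropic transport coupling
print("regularised transport cost:", np.sum(plan * cost))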

    Random Hamiltonian in thermal equilibrium

    A framework for the investigation of disordered quantum systems in thermal equilibrium is proposed. The approach is based on a dynamical model, consisting of a combination of a double-bracket gradient flow and a uniform Brownian fluctuation, that `equilibrates' the Hamiltonian into a canonical distribution. The resulting equilibrium state is used to calculate quenched and annealed averages of quantum observables. Comment: 8 pages, 4 figures. To appear in the DICE 2008 conference proceedings.
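
    The combination of a double-bracket drift and a Brownian term can be sketched schematically as an Euler-Maruyama matrix iteration. The snippet below is only an assumption-laden illustration of that idea: the matrix size, time step, noise scaling, and fixed reference matrix N_ref are invented for the demonstration, and the paper's precise drift and noise structure may differ.

import numpy as np

# Schematic sketch only: Euler-Maruyama steps for a double-bracket matrix flow
# with an additive Hermitian Brownian term, in the spirit of the "gradient flow
# plus uniform Brownian fluctuation" model named in the abstract. All parameters
# (n, dt, beta, N_ref) are illustrative assumptions, not the paper's choices.

rng = np.random.default_rng(0)
n, dt, beta, n_steps = 8, 1e-3, 1.0, 2000

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def commutator(A, B):
    return A @ B - B @ A

H = random_hermitian(n)                       # a disordered Hamiltonian sample
N_ref = np.diag(np.arange(n, dtype=float))    # fixed reference matrix for the bracket

for _ in range(n_steps):
    drift = commutator(H, commutator(H, N_ref))       # double-bracket drift term
    noise = np.sqrt(dt / beta) * random_hermitian(n)  # Hermitian Brownian kick
    H = H + dt * drift + noise

# Without noise the drift alone drives H toward the eigenbasis of N_ref
# (vanishing off-diagonal part); the noise keeps that weight fluctuating.
print("off-diagonal norm:", np.linalg.norm(H - np.diag(np.diag(H))))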

    Multilevel Richardson-Romberg extrapolation

    We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which combines the higher order bias cancellation of the Multistep Richardson-Romberg method introduced in [Pa07] and the variance control resulting from the stratification introduced in the Multilevel Monte Carlo (MLMC) method (see [Hei01, Gi08]). Thus, in standard frameworks like discretization schemes of diffusion processes, the root mean squared error (RMSE) $\varepsilon > 0$ can be achieved with our MLRR estimator with a global complexity of $\varepsilon^{-2}\log(1/\varepsilon)$ instead of $\varepsilon^{-2}(\log(1/\varepsilon))^2$ with the standard MLMC method, at least when the weak error $\mathbf{E}[Y_h]-\mathbf{E}[Y_0]$ of the biased implemented estimator $Y_h$ can be expanded at any order in $h$ and $\|Y_h - Y_0\|_2 = O(h^{\frac{1}{2}})$. The MLRR estimator is then halfway between a regular MLMC and a virtual unbiased Monte Carlo. When the strong error $\|Y_h - Y_0\|_2 = O(h^{\frac{\beta}{2}})$, $\beta < 1$, the gain of MLRR over MLMC becomes even more striking. We carry out numerical simulations to compare these estimators in two settings: vanilla and path-dependent option pricing by Monte Carlo simulation, and the less classical Nested Monte Carlo simulation. Comment: 38 pages.
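
    For context, a plain MLMC estimator, the baseline whose $\varepsilon^{-2}(\log(1/\varepsilon))^2$ complexity the MLRR estimator improves on, can be sketched in a few lines. The toy problem below (estimating $\mathbf{E}[S_T]$ for a geometric Brownian motion under Euler discretisation) and all numerical values are illustrative assumptions; this is the standard MLMC telescoping construction of [Gi08], not the MLRR estimator itself.

import numpy as np

# Minimal sketch: standard multilevel Monte Carlo (MLMC), the baseline the MLRR
# estimator in the abstract improves on. Toy problem (an assumption made for
# illustration): estimate E[S_T] for geometric Brownian motion
# dS = r S dt + sigma S dW via Euler schemes, coupling fine and coarse paths
# through shared Brownian increments. Level and sample counts are demo values.

rng = np.random.default_rng(0)
S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

def euler_terminal(n_paths, n_steps, dW):
    """Terminal value of the Euler scheme driven by the given increments."""
    S = np.full(n_paths, S0)
    dt = T / n_steps
    for k in range(n_steps):
        S = S + r * S * dt + sigma * S * dW[:, k]
    return S

def level_estimator(level, n_paths, M=2):
    """Monte Carlo estimate of E[Y_l - Y_{l-1}] with coupled fine/coarse paths."""
    n_fine = M**level
    dW_fine = rng.normal(scale=np.sqrt(T / n_fine), size=(n_paths, n_fine))
    Y_fine = euler_terminal(n_paths, n_fine, dW_fine)
    if level == 0:
        return Y_fine.mean()
    # The coarse path reuses the summed fine increments (the MLMC coupling).
    dW_coarse = dW_fine.reshape(n_paths, n_fine // M, M).sum(axis=2)
    Y_coarse = euler_terminal(n_paths, n_fine // M, dW_coarse)
    return (Y_fine - Y_coarse).mean()

# The telescoping sum reproduces E[Y_h] at the finest level h = T / 2**4; its
# residual bias with respect to E[Y_0] is controlled by the number of levels.
estimate = sum(level_estimator(l, n_paths=20000) for l in range(5))
print("MLMC estimate of E[S_T]:", estimate, " exact:", S0 * np.exp(r * T))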