The Fastest Mixing Markov Process on a Graph and a Connection to a Maximum Variance Unfolding Problem
We consider a Markov process on a connected graph, with edges labeled with transition rates between the adjacent vertices. The distribution of the Markov process converges to the uniform distribution at a rate determined by the second smallest eigenvalue $\lambda_2$ of the Laplacian of the weighted graph. In this paper we consider the problem of assigning transition rates to the edges so as to maximize $\lambda_2$ subject to a linear constraint on the rates. This is the problem of finding the fastest mixing Markov process (FMMP) on the graph. We show that the FMMP problem is a convex optimization problem, which can in turn be expressed as a semidefinite program, and therefore effectively solved numerically. We formulate a dual of the FMMP problem and show that it has a natural geometric interpretation as a maximum variance unfolding (MVU) problem, i.e., the problem of choosing a set of points to be as far apart as possible, measured by their variance, while respecting local distance constraints. This MVU problem is closely related to a problem recently proposed by Weinberger and Saul as a method for "unfolding" high-dimensional data that lies on a low-dimensional manifold. The duality between the FMMP and MVU problems sheds light on both problems, and allows us to characterize and, in some cases, find optimal solutions.
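The semidefinite formulation can be prototyped in a few lines. Below is a minimal sketch, assuming a small hypothetical graph, a unit budget on the total rate, and the cvxpy modeling library (none of which come from the paper); it uses the fact that $\lambda_2(L) \ge s$ exactly when $L - s(I - \frac{1}{n}\mathbf{1}\mathbf{1}^T)$ is positive semidefinite, since $L\mathbf{1} = 0$.

```python
import numpy as np
import cvxpy as cp

# A minimal sketch of the FMMP semidefinite program, assuming a small
# hypothetical graph and a unit budget on the total edge rate.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]  # hypothetical 4-vertex graph
n = 4

w = cp.Variable(len(edges), nonneg=True)  # transition rate on each edge
s = cp.Variable()                         # lower bound on lambda_2

# Weighted Laplacian L(w) = sum_e w_e * b_e b_e^T, affine in w.
L = 0
for k, (i, j) in enumerate(edges):
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0
    L = L + w[k] * np.outer(b, b)

# lambda_2(L) >= s  iff  L - s*(I - (1/n) 11^T) is positive semidefinite,
# because L always has eigenvalue 0 with eigenvector 1.
J = np.eye(n) - np.ones((n, n)) / n
S = cp.Variable((n, n), symmetric=True)   # symmetric alias for the PSD constraint
constraints = [S == L - s * J, S >> 0, cp.sum(w) <= 1]

cp.Problem(cp.Maximize(s), constraints).solve()
print("optimal lambda_2:", s.value)
print("optimal edge rates:", w.value)
```

In principle the dual variables of the semidefinite constraint encode the MVU point configuration, which is one way the paper's duality could be inspected numerically.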
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed-up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3 33 pages; typos corrected and references added
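One of the practical questions mentioned, loading classical data into quantum form, is often illustrated with amplitude encoding. A minimal numpy sketch, with made-up example data, assuming the vector length is a power of two:

```python
import numpy as np

# Amplitude encoding (a common textbook proposal, not a claim about this
# review's methods): a length-2^m classical vector, once normalized, can
# serve as the amplitude vector of an m-qubit state.
x = np.array([3.0, 1.0, 2.0, 1.0])   # hypothetical classical data (length 2^2)
psi = x / np.linalg.norm(x)          # amplitudes of a 2-qubit state |psi>
probs = np.abs(psi) ** 2             # computational-basis measurement probabilities
print(psi, probs.sum())              # the probabilities sum to 1
```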
Quantum speedup of classical mixing processes
Most approximation algorithms for #P-complete problems (e.g., evaluating the
permanent of a matrix or the volume of a polytope) work by reduction to the
problem of approximate sampling from a distribution $\pi$ over a large set
$S$. This problem is solved using the {\em Markov chain Monte Carlo} method: a
sparse, reversible Markov chain $P$ on $S$ with stationary distribution $\pi$
is run to near equilibrium. The running time of this random walk algorithm,
the so-called {\em mixing time} of $P$, is $O(\delta^{-1} \log 1/\pi_*)$ as
shown by Aldous, where $\delta$ is the spectral gap of $P$ and $\pi_*$ is the
minimum value of $\pi$. A natural question is whether a speedup of this
classical method to $O(\delta^{-1/2} \log 1/\pi_*)$, the diameter of the graph
underlying $P$, is possible using {\em quantum walks}.
We provide evidence for this possibility using quantum walks that {\em
decohere} under repeated randomized measurements. We show: (a) decoherent
quantum walks always mix, just like their classical counterparts, (b) the
mixing time is a robust quantity, essentially invariant under any smooth form
of decoherence, and (c) the mixing time of the decoherent quantum walk on a
periodic lattice $\mathbb{Z}_n^d$ is $O(n d \log d)$, which is indeed
$O(\delta^{-1/2} \log 1/\pi_*)$ and is asymptotically no worse than the
diameter of $\mathbb{Z}_n^d$ (the obvious lower bound) up to at most a
logarithmic factor.
Comment: 13 pages; v2 revised several parts
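The bounds above are easy to probe numerically. A small sketch, assuming a lazy simple random walk on the cycle $\mathbb{Z}_n$ (a hypothetical example, not the paper's lattice $\mathbb{Z}_n^d$): it computes the spectral gap $\delta$ and compares the classical bound with the conjectured quantum target and the graph diameter.

```python
import numpy as np

# Lazy simple random walk on the cycle Z_n: the transition matrix is
# symmetric, so the stationary distribution is uniform and pi_* = 1/n.
n = 64
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5                    # laziness ensures aperiodicity
    P[i, (i + 1) % n] += 0.25
    P[i, (i - 1) % n] += 0.25

eigs = np.sort(np.linalg.eigvalsh(P))
delta = 1.0 - eigs[-2]               # spectral gap of P
log_term = np.log(n)                 # log(1/pi_*) with pi_* = 1/n

print("classical bound ~ delta^-1 * log(1/pi_*):", log_term / delta)
print("quantum target ~ delta^-1/2 * log(1/pi_*):", log_term / np.sqrt(delta))
print("diameter of the cycle:", n // 2)
```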
Non-reversible Metropolis-Hastings
The classical Metropolis-Hastings (MH) algorithm can be extended to generate
non-reversible Markov chains. This is achieved by means of a modification of
the acceptance probability, using the notion of a vorticity matrix; the
resulting non-reversible chain is called non-reversible Metropolis-Hastings
(NRMH). Results from the literature on asymptotic variance, large deviations
theory and mixing time are reviewed, and a large deviations result is adapted
to explain how non-reversible Markov chains have favorable properties in
these respects.
We provide an application of NRMH in a continuous setting by developing the
necessary theory and applying, as first examples, the theory to Gaussian
distributions in three and nine dimensions. The empirical autocorrelation and
estimated asymptotic variance for NRMH applied to these examples show
significant improvement compared to MH with identical stepsize.
Comment: in Statistics and Computing, 201
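For orientation, here is a minimal reversible random-walk MH sampler for a standard Gaussian target (the dimension, step count, and stepsize are arbitrary illustrative choices). The paper's non-reversible variant modifies the acceptance probability computed below via a vorticity matrix; that modification is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # standard Gaussian in d dimensions, up to an additive constant
    return -0.5 * x @ x

d, steps, stepsize = 3, 10_000, 1.0
x = np.zeros(d)
samples = []
for _ in range(steps):
    y = x + stepsize * rng.standard_normal(d)   # symmetric Gaussian proposal
    # reversible MH acceptance; NRMH would add a vorticity term here
    if np.log(rng.random()) < log_target(y) - log_target(x):
        x = y
    samples.append(x)

print("sample mean (should be near 0):", np.mean(samples, axis=0))
```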
Fast MCMC sampling algorithms on polytopes
We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and
the John walk, for generating samples from the uniform distribution over a
polytope. Both random walks are sampling algorithms derived from interior point
methods. The former is based on the volumetric-logarithmic barrier introduced by
Vaidya whereas the latter uses John's ellipsoids. We show that the Vaidya walk
mixes in significantly fewer steps than the logarithmic-barrier based Dikin
walk studied in past work. For a polytope in $\mathbb{R}^d$ defined by $n$
linear constraints, we show that the mixing time from a warm start is bounded
as $O(n^{0.5} d^{1.5})$, compared to the $O(n d)$ mixing time
bound for the Dikin walk. The cost of each step of the Vaidya walk is of the
same order as the Dikin walk, and at most twice as large in terms of constant
pre-factors. For the John walk, we prove an $O(d^{2.5} \log^4(n/d))$
bound on its mixing time and conjecture
that an improved variant of it could achieve a mixing time of
$O(d^2\,\mathrm{polylog}(n/d))$. Additionally, we propose variants
of the Vaidya and John walks that mix in polynomial time from a deterministic
starting point. The speed-up of the Vaidya walk over the Dikin walk is
illustrated in numerical examples.
Comment: 86 pages, 9 figures, First two authors contributed equally
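For context, here is a hedged sketch of one step of the Gaussian Dikin walk that serves as the paper's baseline, for a polytope $\{x : Ax \le b\}$; the toy box, radius $r$, and iteration count are illustrative assumptions, and the Vaidya and John walks replace the log-barrier Hessian below with their respective barrier Hessians.

```python
import numpy as np

rng = np.random.default_rng(1)

def barrier_hessian(A, b, x):
    # Hessian of the log-barrier: H(x) = sum_i a_i a_i^T / (b_i - a_i^T x)^2
    s = b - A @ x                        # slacks, positive in the interior
    return (A / s[:, None] ** 2).T @ A

def dikin_step(A, b, x, r=0.4):
    d = x.size
    H = barrier_hessian(A, b, x)
    # propose z ~ N(x, (r^2/d) H(x)^{-1}) via a Cholesky solve
    C = np.linalg.cholesky(H)
    z = x + (r / np.sqrt(d)) * np.linalg.solve(C.T, rng.standard_normal(d))
    if np.any(A @ z >= b):
        return x                         # proposal left the polytope: reject
    Hz = barrier_hessian(A, b, z)
    step = z - x
    # Metropolis correction for the state-dependent proposal covariance
    log_ratio = (0.5 * (np.linalg.slogdet(Hz)[1] - np.linalg.slogdet(H)[1])
                 - 0.5 * d / r ** 2 * (step @ Hz @ step - step @ H @ step))
    return z if np.log(rng.random()) < log_ratio else x

# usage: the box [-1, 1]^2 as a toy polytope
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
x = np.zeros(2)
for _ in range(1000):
    x = dikin_step(A, b, x)
print("final point, inside box:", x, bool(np.all(A @ x < b)))
```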