A Survey on Reinforcement Learning for Combinatorial Optimization
This paper gives a detailed review of reinforcement learning in combinatorial
optimization, introduces the history of combinatorial optimization starting in
the 1960s, and compares it with reinforcement learning algorithms of recent
years. We look closely at a famous combinatorial problem, the Traveling
Salesperson Problem (TSP). We compare the approach of modern reinforcement
learning algorithms on the TSP with an approach published in 1970. We then
discuss the similarities between these algorithms and how the reinforcement
learning approach has changed with the evolution of machine learning
techniques and computing power. We also cover the deep learning approach to
the TSP, known as Deep Reinforcement Learning. We argue that deep learning is
a generic approach that can be integrated with traditional reinforcement
learning algorithms to improve outcomes on the TSP.
Comment: manuscript submitted to Management Science
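The sequential tour-construction view this abstract takes can be made concrete with a toy baseline: modern RL policies build a tour one city at a time, much like the greedy nearest-neighbor heuristic below, but learn the selection rule instead of fixing it. This is an illustrative sketch only; the instance data and function names are assumptions, not taken from the paper.

```python
import itertools
import math

# Hypothetical 5-city instance (coordinates chosen for illustration).
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Length of the closed tour visiting cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(start=0):
    """Greedy construction: always move to the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def brute_force():
    """Exact optimum by enumerating all permutations (fine for 5 cities)."""
    best = min(itertools.permutations(range(1, len(cities))),
               key=lambda p: tour_length((0,) + p))
    return list((0,) + best)

greedy = nearest_neighbor()
optimal = brute_force()
print(tour_length(greedy) >= tour_length(optimal) - 1e-9)  # True
```

An RL method in the spirit the abstract describes would replace the fixed `min(...)` selection rule with a learned, state-dependent policy trained to shorten the resulting tours.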
An Incidence Geometry approach to Dictionary Learning
We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a
sparse representation of data points, by learning \emph{dictionary vectors}
upon which the data points can be written as sparse linear combinations. We
view this problem from a geometry perspective as the spanning set of a subspace
arrangement, and focus on understanding the case when the underlying hypergraph
of the subspace arrangement is specified. For this Fitted Dictionary Learning
problem, we completely characterize the combinatorics of the associated
subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, a
combinatorial rigidity-type theorem is proven for a type of geometric incidence
system. The theorem characterizes the hypergraphs of subspace arrangements that
generically yield (a) at least one dictionary (b) a locally unique dictionary
(i.e.\ at most a finite number of isolated dictionaries) of the specified size.
We are unaware of prior application of combinatorial rigidity techniques in the
setting of Dictionary Learning, or even in machine learning. We also provide a
systematic classification of problems related to Dictionary Learning together
with various algorithms, their assumptions, and their performance.
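The sparse-representation objective this abstract describes, writing data points as sparse linear combinations of dictionary vectors, can be illustrated with a greedy matching-pursuit sketch. The dictionary, dimensions, and function name here are illustrative assumptions, not drawn from the paper (which studies the combinatorial structure of the problem, not this algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a dictionary of 8 unit-norm atoms in R^5, and a
# data point built as a sparse combination of 2 of them.
D = rng.standard_normal((5, 8))
D /= np.linalg.norm(D, axis=0)          # normalize dictionary columns
true_code = np.zeros(8)
true_code[[1, 6]] = [2.0, -1.5]
y = D @ true_code                        # data point = sparse combination

def matching_pursuit(y, D, k):
    """Greedy sparse coding: k rounds of picking the best-correlated atom."""
    residual, code = y.copy(), np.zeros(D.shape[1])
    for _ in range(k):
        atom = int(np.argmax(np.abs(D.T @ residual)))
        coef = D[:, atom] @ residual     # projection onto the unit atom
        code[atom] += coef
        residual -= coef * D[:, atom]    # residual norm never increases
    return code

code = matching_pursuit(y, D, k=10)
print(np.linalg.norm(y - D @ code) < np.linalg.norm(y))  # True
```

Dictionary Learning proper also learns `D` itself from many such data points; the Fitted variant in the abstract fixes which atoms each point may use (the hypergraph) and asks when a consistent dictionary exists.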
Faster quantum mixing for slowly evolving sequences of Markov chains
Markov chain methods are remarkably successful in computational physics,
machine learning, and combinatorial optimization. The cost of such methods
often reduces to the mixing time, i.e., the time required to reach the steady
state of the Markov chain, which scales as $\delta^{-1}$, the inverse of the
spectral gap. It has long been conjectured that quantum computers offer nearly
generic quadratic improvements for mixing problems. However, except in special
cases, quantum algorithms achieve a run-time of
$\tilde{\mathcal{O}}(\sqrt{\delta^{-1}}\,\sqrt{N})$, which introduces a costly
dependence on the Markov chain size $N$,
not present in the classical case. Here, we re-address the problem of mixing of
Markov chains when these form a slowly evolving sequence. This setting is akin
to the simulated annealing setting and is commonly encountered in physics,
material sciences and machine learning. We provide a quantum memory-efficient
algorithm with a run-time of $\tilde{\mathcal{O}}(\sqrt{\delta^{-1}}\,\sqrt[4]{N})$,
neglecting logarithmic terms, which is an important improvement for large state
spaces. Moreover, our algorithms output quantum encodings of distributions,
which has advantages over classical outputs. Finally, we discuss the run-time
bounds of mixing algorithms and show that, under certain assumptions, our
algorithms are optimal.
Comment: 20 pages, 2 figures
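The scaling this abstract starts from, a classical mixing time proportional to the inverse spectral gap, can be checked numerically on a toy chain. The transition matrix below is an illustrative assumption, not from the paper.

```python
import numpy as np

# Toy 3-state reversible Markov chain (rows sum to 1); symmetric, so its
# eigenvalues are real and its stationary distribution is uniform.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# A stochastic matrix has top eigenvalue 1; the spectral gap is
# delta = 1 - |lambda_2|, and classical mixing time scales as 1/delta.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
delta = 1.0 - eigvals[1]
print(round(delta, 2))        # 0.7
print(round(1.0 / delta, 2))  # classical mixing-time scale, O(1/delta)
```

The quantum speedups discussed in the abstract improve the $\delta^{-1}$ factor to roughly $\sqrt{\delta^{-1}}$, at the price of an extra dependence on the chain size that the paper works to reduce.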