Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
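To make the contrast with symbolic differentiation concrete, here is a minimal sketch (not from the survey) of forward-mode AD using dual numbers in Python; the Dual class, the sin wrapper, and the example function f are illustrative names, and a real AD system would cover many more operations and reverse mode as well.

```python
import math

class Dual:
    """Dual number val + dot*eps with eps**2 == 0; carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: d(uv) = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def sin(x):
    # Chain rule for sin: d(sin u) = cos(u) * u'
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)

def f(x):
    return x * x + sin(x)

# Seed the derivative slot with 1.0 to differentiate with respect to x.
x = Dual(2.0, 1.0)
y = f(x)
print(y.val, y.dot)  # f(2) = 4 + sin(2), f'(2) = 2*2 + cos(2)
```

The derivative falls out of running the ordinary program on augmented values, with no expression swell and no numeric finite-difference error, which is the property the abstract highlights.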
Multistart Methods for Quantum Approximate Optimization
Hybrid quantum-classical algorithms such as the quantum approximate
optimization algorithm (QAOA) are considered one of the most promising
approaches for leveraging near-term quantum computers for practical
applications. Such algorithms are often implemented in a variational form,
combining classical optimization methods with a quantum machine to find
parameters to maximize performance. The quality of the QAOA solution depends
heavily on the quality of the parameters produced by the classical optimizer.
Moreover, the presence of multiple local optima in the parameter space makes
the classical optimizer's task harder. In this paper we study the use of
a multistart optimization approach within a QAOA framework to improve the
performance of quantum machines on important graph clustering problems. We also
demonstrate that reusing the optimal parameters from similar problems can
improve the performance of classical optimization methods, expanding on similar
results for MAXCUT.
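As a rough sketch of the multistart idea (not the paper's implementation), the Python below restarts a gradient-free local optimizer from many random initial angles and keeps the best result. Here qaoa_objective is a hypothetical stand-in for the circuit evaluation, which in practice would run the depth-p circuit on a simulator or quantum device; the names multistart and n_starts are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def qaoa_objective(params):
    """Hypothetical stand-in for the (typically multimodal) QAOA energy landscape.

    A real implementation would evaluate the circuit with angles (gamma, beta)
    and return the negated expected objective (e.g., cut value or modularity).
    """
    gamma, beta = params
    return -(np.sin(2 * gamma) * np.cos(2 * beta) + 0.3 * np.cos(5 * gamma))

def multistart(objective, n_starts=20, seed=0):
    """Run a local optimizer from many random initial points and keep the best."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(0, np.pi, size=2)  # random (gamma, beta) start
        res = minimize(objective, x0, method="COBYLA")
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart(qaoa_objective)
print("best angles:", best.x, "value:", best.fun)
```

COBYLA is used here only because it is gradient-free and common in variational settings; seeding one of the starts with the optimal parameters from a similar problem instance is the parameter-reuse idea the abstract describes.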