Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
Comment: 43 pages, 5 figures
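As a concrete illustration of the idea (a minimal sketch, not code from the survey), forward-mode AD can be implemented with dual numbers: every intermediate value carries its derivative alongside its primal value, and each arithmetic operation applies the chain rule. The Dual class, the sin wrapper, and the example function below are all illustrative choices, not a reference implementation:

    import math
    from dataclasses import dataclass

    @dataclass
    class Dual:
        val: float   # primal value
        dot: float   # derivative with respect to the chosen input

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other, 0.0)
            return Dual(self.val + other.val, self.dot + other.dot)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other, 0.0)
            # product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)

        __rmul__ = __mul__

    def sin(x: Dual) -> Dual:
        # chain rule: sin(u)' = cos(u) * u'
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    # d/dx [x * sin(x) + 2x] at x = 1.5, obtained by simply running the
    # program on dual numbers; the result is exact to machine precision.
    x = Dual(1.5, 1.0)          # seed dx/dx = 1
    y = x * sin(x) + 2 * x
    print(y.val, y.dot)         # f(1.5) and f'(1.5)

Reverse-mode AD, of which backpropagation is the special case for scalar loss functions, instead records the operations and propagates adjoints from the output back to the inputs, which is the efficient choice when a function has many inputs and few outputs.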
An efficient null space inexact Newton method for hydraulic simulation of water distribution networks
Null space Newton algorithms are efficient in solving the nonlinear equations
arising in hydraulic analysis of water distribution networks. In this article,
we propose and evaluate an inexact Newton method that relies on partial updates
of the network pipes' frictional headloss computations to solve the linear
systems more efficiently and with numerical reliability. The update set
parameters are studied and appropriate values are proposed. Different null
space basis generation schemes are analysed, and methods are chosen that
yield sparse, well-conditioned null space bases and hence a smaller update
set. The Newton
steps are computed in the null space by solving sparse, symmetric positive
definite systems with sparse Cholesky factorizations. Because the null space
system matrices have constant sparsity structure, a single symbolic
factorization is reused across the Cholesky decompositions, reducing the
computational cost of the linear solves. The algorithms and analyses are
validated using medium
to large-scale water network models.
Comment: 15 pages, 9 figures, Preprint extension of Abraham and Stoianov, 2015 (https://dx.doi.org/10.1061/(ASCE)HY.1943-7900.0001089), September 2015. Includes extended exposition, additional case studies and new simulations and analysis.
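As a dense toy illustration of the null space Newton iteration (a minimal sketch, not the authors' implementation; the three-pipe network, resistance coefficients R, exponent n, and fixed-head terms c below are all made up), every iterate satisfies the linear mass-balance constraints exactly while Newton steps are taken in the reduced null space coordinates:

    import numpy as np
    from scipy.linalg import null_space, cho_factor, cho_solve

    # Toy frictional headloss model f(q) = R * q * |q|**(n-1) + c
    # (Hazen-Williams-like, n = 1.852); R and c are made-up values.
    A = np.array([[1.0, -1.0, 1.0]])   # one mass-balance constraint A q = d
    d = np.array([1.0])                # nodal demand
    R = np.array([2.0, 1.5, 3.0])      # hypothetical pipe resistances
    c = np.array([-1.0, 0.5, -0.5])    # hypothetical fixed-head terms
    n = 1.852

    Z = null_space(A)                           # basis with A @ Z == 0
    q0, *_ = np.linalg.lstsq(A, d, rcond=None)  # particular solution of A q = d

    f = lambda q: R * np.abs(q) ** (n - 1) * q + c
    # diagonal Jacobian of the headloss, lightly regularized at q = 0
    Jdiag = lambda q: n * R * np.abs(q) ** (n - 1) + 1e-12

    z = np.zeros(Z.shape[1])
    for it in range(50):
        q = q0 + Z @ z              # every iterate satisfies A q = d exactly
        g = Z.T @ f(q)              # reduced residual in the null space
        if np.linalg.norm(g) < 1e-10:
            break
        # Reduced Newton system Z^T J Z dz = -g. The matrix is symmetric
        # positive definite, so a Cholesky factorization applies.
        H = Z.T @ (Jdiag(q)[:, None] * Z)
        z -= cho_solve(cho_factor(H), g)

    print("iterations:", it, "flows:", q0 + Z @ z)

At realistic network scale the reduced matrices Z^T J Z are sparse with a fixed sparsity pattern, so a sparse Cholesky solver (e.g. CHOLMOD) can perform the symbolic analysis once and repeat only the numeric factorization at each Newton step, which is the cost saving the article exploits.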