EIT Reconstruction Algorithms: Pitfalls, Challenges and Recent Developments
We review developments, issues and challenges in Electrical Impedance
Tomography (EIT), for the 4th Workshop on Biomedical Applications of EIT,
Manchester 2003. We focus on the necessity for three dimensional data
collection and reconstruction, efficient solution of the forward problem and
present and future reconstruction algorithms. We also suggest common pitfalls
or ``inverse crimes'' to avoid.
Comment: A review paper for the 4th Workshop on Biomedical Applications of EIT, Manchester, UK, 2003.
Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging
Many graphics and vision problems can be expressed as non-linear least
squares optimizations of objective functions over visual data, such as images
and meshes. The mathematical descriptions of these functions are extremely
concise, but their implementation in real code is tedious, especially when
optimized for real-time performance on modern GPUs in interactive applications.
In this work, we propose a new language, Opt (available under
http://optlang.org), for writing these objective functions over image- or
graph-structured unknowns concisely and at a high level. Our compiler
automatically transforms these specifications into state-of-the-art GPU solvers
based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate
different variations of the solver, so users can easily explore tradeoffs in
numerical precision, matrix-free methods, and solver approaches. In our
results, we implement a variety of real-world graphics and vision applications.
Their energy functions are expressible in tens of lines of code, and produce
highly-optimized GPU solver implementations. These solvers achieve performance
competitive with the best published hand-tuned, application-specific GPU
solvers, and are orders of magnitude faster than a general-purpose
auto-generated solver.
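The Gauss-Newton method that Opt's compiler targets can be illustrated with a minimal NumPy sketch. This is not Opt's generated code or API, only a hand-written example of the underlying iteration; the Rosenbrock-style residual is an illustrative choice:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize 0.5 * ||r(x)||^2 by solving the normal equations each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Gauss-Newton step: (J^T J) dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Rosenbrock in least-squares form: r(p) = [10(p1 - p0^2), 1 - p0]
res = lambda p: np.array([10.0 * (p[1] - p[0] ** 2), 1.0 - p[0]])
jac = lambda p: np.array([[-20.0 * p[0], 10.0], [-1.0, 0.0]])
p = gauss_newton(res, jac, np.array([-1.2, 1.0]))
print(p)  # converges to the minimizer [1. 1.]
```

A Levenberg-Marquardt variant, one of the solver options the abstract mentions, would add a damping term lambda * I to J.T @ J before solving.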
A Simple and Efficient Algorithm for Nonlinear Model Predictive Control
We present PANOC, a new algorithm for solving optimal control problems
arising in nonlinear model predictive control (NMPC). A common approach to this
type of problem is sequential quadratic programming (SQP), which requires the
solution of a quadratic program at every iteration and, consequently, inner
iterative procedures. As a result, when the problem is ill-conditioned or the
prediction horizon is large, each outer iteration becomes computationally very
expensive. We propose a line-search algorithm that combines forward-backward
iterations (FB) and Newton-type steps over the recently introduced
forward-backward envelope (FBE), a continuous, real-valued, exact merit
function for the original problem. The curvature information of Newton-type
methods enables asymptotic superlinear rates under mild assumptions at the
limit point, and the proposed algorithm is based on very simple operations:
access to first-order information of the cost and dynamics and low-cost direct
linear algebra. No inner iterative procedure nor Hessian evaluation is
required, making our approach computationally simpler than SQP methods. The
low-memory requirements and simple implementation make our method particularly
suited for embedded NMPC applications.
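The forward-backward iterations that PANOC builds on can be sketched in a few lines. This is only the basic proximal-gradient (forward-backward) splitting on a toy box-constrained problem, not PANOC itself (which adds Newton-type steps along the forward-backward envelope); all names here are illustrative:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, gamma, iters=100):
    """Forward-backward splitting: x <- prox_g(x - gamma * grad_f(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Forward (gradient) step on the smooth part, then
        # backward (proximal) step on the nonsmooth part.
        x = prox_g(x - gamma * grad_f(x))
    return x

# Toy problem: minimize 0.5 * ||x - c||^2 subject to x in the box [0, 1]^3.
c = np.array([1.5, -0.3, 0.4])
grad_f = lambda x: x - c
prox_box = lambda x: np.clip(x, 0.0, 1.0)  # prox of the box indicator = projection
x = forward_backward(grad_f, prox_box, np.zeros(3), gamma=0.5)
print(x)  # the projection of c onto the box: [1. 0. 0.4]
```

Each iteration uses only a gradient evaluation and a cheap projection, which matches the abstract's point that the method avoids inner iterative procedures and Hessian evaluations.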
Elimination Techniques for Algorithmic Differentiation Revisited
All known elimination techniques for (first-order) algorithmic
differentiation (AD) rely on the Jacobians of a set of relevant
elemental functions being given. Realistically, elemental tangents and adjoints are given
instead. They can be obtained by applying software tools for AD to the parts of
a given modular numerical simulation. The novel generalized face elimination
rule proposed in this article facilitates the rigorous exploitation of
associativity of the chain rule of differentiation at arbitrary levels of
granularity ranging from elemental scalar (state of the art) to multivariate
vector functions with given elemental tangents and adjoints. The implied
combinatorial Generalized Face Elimination problem asks for a face elimination
sequence of minimal computational cost. Simple branch and bound and greedy
heuristic methods are employed as a baseline for further research into more
powerful algorithms motivated by promising first test results. The latter can
be reproduced with the help of an open-source reference implementation.
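The associativity of the chain rule that elimination techniques exploit has a direct cost interpretation: for a composition F = f3 ∘ f2 ∘ f1, the Jacobian product J3 · J2 · J1 can be bracketed in different orders, and the bracketing changes the floating-point cost. A small sketch with illustrative Jacobian shapes (not the paper's face-elimination algorithm, which works on the computational graph at finer granularity):

```python
def matmul_cost(m, k, n):
    """Dense FLOP count (multiplies) of an (m x k) times (k x n) product."""
    return m * k * n

# Illustrative Jacobian shapes for F = f3 . f2 . f1:
# J1 is 100x2, J2 is 5x100, J3 is 1x5.
# Left-to-right bracketing (J3 @ J2) @ J1 -- reverse-mode-like accumulation:
cost_reverse = matmul_cost(1, 5, 100) + matmul_cost(1, 100, 2)
# Right-to-left bracketing J3 @ (J2 @ J1) -- forward-mode-like accumulation:
cost_forward = matmul_cost(5, 100, 2) + matmul_cost(1, 5, 2)

print(cost_reverse, cost_forward)  # 700 1010
```

Finding the cheapest accumulation order over a general computational graph is the combinatorial problem the generalized face elimination rule addresses.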
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
Comment: 43 pages, 5 figures.
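The distinction the survey draws between AD and symbolic differentiation is easiest to see in code: forward-mode AD propagates exact derivative values alongside ordinary values, without ever building a symbolic expression for the derivative. A minimal dual-number sketch (illustrative only, supporting just addition and multiplication):

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: derivatives add.
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + a'eps)(b + b'eps) = ab + (a'b + ab')eps
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x   # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(3.0, 1.0)             # seed the input's derivative with 1
y = f(x)
print(y.val, y.dot)            # 33.0 29.0  (f(3) = 33, f'(3) = 29)
```

Because f is an ordinary program, the same mechanism handles control flow and loops, which is what makes AD more general than symbolic differentiation of closed-form expressions.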