Improved Convergence Bounds For Operator Splitting Algorithms With Rare Extreme Errors
In this paper, we improve upon our previous work [24, 22] and establish
convergence bounds on the objective function values of approximate
proximal-gradient descent (AxPGD), approximate accelerated proximal-gradient
descent (AxAPGD) and approximate proximal ADMM (AxWLM-ADMM) schemes. We
consider approximation errors that exhibit rare extreme events and propagate
their effects through the iterations. We establish probabilistic
asymptotic and non-asymptotic convergence bounds as functions of the range
(upper/lower bounds) and variance of approximation errors. We use the derived
bound to assess AxPGD in sparse model predictive control of a spacecraft
system and compare its accuracy with previously derived bounds.
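The abstract does not reproduce the AxPGD iteration itself; the following is a minimal sketch, assuming a LASSO-type objective 0.5*||Ax - b||^2 + lam*||x||_1, of a generic inexact proximal-gradient step in which an additive heavy-tailed perturbation stands in for the rare extreme approximation errors studied in the paper. The function names (axpgd, prox_l1) and the Cauchy error model are illustrative assumptions, not the authors' formulation.

import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def axpgd(A, b, lam, x0, n_iter=200, err_scale=1e-6, rng=None):
    # Inexact proximal-gradient descent for 0.5*||Ax - b||^2 + lam*||x||_1,
    # with each gradient perturbed by a heavy-tailed error whose occasional
    # large spikes mimic rare extreme approximation events.
    rng = np.random.default_rng() if rng is None else rng
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        e = err_scale * rng.standard_cauchy(size=x.shape)   # rare extreme errors
        x = prox_l1(x - step * (grad + e), step * lam)
    return x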
Custom optimization algorithms for efficient hardware implementation
The focus is on real-time optimal decision making with applications in advanced control
systems. These computationally intensive schemes, which involve the repeated solution of
(convex) optimization problems within a sampling interval, require more efficient computational
methods than are currently available if their application is to be extended to highly dynamical
systems and to setups with resource-constrained embedded computing platforms.
A range of techniques are proposed to exploit synergies between digital hardware, numerical
analysis and algorithm design. These techniques build on top of parameterisable
hardware code generation tools that generate VHDL code describing custom computing
architectures for interior-point methods and a range of first-order constrained optimization
methods. Since memory limitations are often important in embedded implementations, we
develop a custom storage scheme for KKT matrices arising in interior-point methods for
control, which reduces memory requirements significantly and prevents I/O bandwidth
limitations from affecting the performance in our implementations. To take advantage of
the trend towards parallel computing architectures and to exploit the special characteristics
of our custom architectures, we propose several high-level parallel optimal control
schemes that can reduce computation time. We also devise a novel optimization formulation
that reduces the computational effort of solving certain problems independently of the computing
platform used. To solve optimization problems in fixed-point arithmetic, which is significantly
more resource-efficient than floating-point arithmetic, we develop tailored linear algebra
algorithms for solving the linear systems that form the computational bottleneck in many
optimization methods; these algorithms come with guarantees of reliable operation. In addition,
we provide a finite-precision error analysis for fixed-point implementations
of first-order methods that can be used to minimize the use of resources while
meeting accuracy specifications. The suggested techniques are demonstrated on several
practical examples, including a hardware-in-the-loop setup for optimization-based control
of a large airliner.
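Neither the hardware architectures nor the thesis's fixed-point algorithms are given in the abstract; the sketch below only illustrates, under assumed details, how a fixed-point implementation of a first-order method can be emulated in software by quantizing every intermediate result, a crude counterpart to the arithmetic that the finite-precision error analysis addresses. The function names (to_fixed, projected_gradient_fixed) and the box-constrained QP instance are hypothetical.

import numpy as np

def to_fixed(x, frac_bits):
    # Round to the nearest value representable with `frac_bits`
    # fractional bits (quantization only; overflow is ignored here).
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def projected_gradient_fixed(H, f, lb, ub, n_iter=100, frac_bits=12):
    # Projected gradient method for the box-constrained QP
    #     min_x 0.5*x'Hx + f'x   s.t.   lb <= x <= ub,
    # with every arithmetic result quantized to emulate fixed-point hardware.
    step = 1.0 / np.linalg.norm(H, 2)        # 1/L for the quadratic term
    x = np.zeros_like(f)
    for _ in range(n_iter):
        grad = to_fixed(H @ x + f, frac_bits)
        x = to_fixed(x - to_fixed(step * grad, frac_bits), frac_bits)
        x = np.clip(x, lb, ub)               # projection onto the box
    return x

Sweeping frac_bits in such a simulation gives a rough empirical counterpart to the analysis described above: fewer fractional bits reduce resource usage but raise the achievable accuracy floor.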
Newton-type Alternating Minimization Algorithm for Convex Optimization
We propose NAMA (Newton-type Alternating Minimization Algorithm) for solving
structured nonsmooth convex optimization problems where the sum of two
functions is to be minimized, one being strongly convex and the other composed
with a linear mapping. The proposed algorithm is a line-search method over a
continuous, real-valued, exact penalty function for the corresponding dual
problem, which is computed by evaluating the augmented Lagrangian at the primal
points obtained by alternating minimizations. As a consequence, NAMA relies on
exactly the same computations as the classical alternating minimization
algorithm (AMA), also known as the dual proximal gradient method. Under
standard assumptions the proposed algorithm possesses strong convergence
properties, while under mild additional assumptions the asymptotic convergence
is superlinear, provided that the search directions are chosen according to
quasi-Newton formulas. Due to its simplicity, the proposed method is well
suited for embedded applications and large-scale problems. Experiments show
that using limited-memory directions in NAMA greatly improves the convergence
speed over AMA and its accelerated variant.
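The abstract states that NAMA relies on exactly the computations of classical AMA (the dual proximal gradient method); the sketch below shows those AMA iterations for a small assumed instance with f(x) = 0.5*||x - c||^2 (strongly convex) and g = lam*||.||_1 composed with a linear map L. NAMA's line search over the dual penalty with quasi-Newton directions is not reproduced here, and the names (ama, prox_l1) are illustrative.

import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ama(L, c, lam, n_iter=500):
    # Classical AMA (dual proximal gradient) for
    #     min_x 0.5*||x - c||^2 + lam*||L x||_1.
    rho = 1.0 / np.linalg.norm(L, 2) ** 2     # dual step size <= sigma_f / ||L||^2
    y = np.zeros(L.shape[0])
    x = c.copy()
    for _ in range(n_iter):
        x = c - L.T @ y                           # minimize f(x) + <y, L x>
        z = prox_l1(L @ x + y / rho, lam / rho)   # prox (soft-threshold) step on g
        y = y + rho * (L @ x - z)                 # dual (multiplier) update
    return x, y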