A Convex Feasibility Approach to Anytime Model Predictive Control
This paper proposes to decouple performance optimization and enforcement of
asymptotic convergence in Model Predictive Control (MPC) so that convergence to
a given terminal set is achieved independently of how much performance is
optimized at each sampling step. By embedding an explicit decrease condition
in the MPC constraints, and thanks to a novel, easy-to-implement convex
feasibility solver proposed in the paper, it is possible to run an outer
performance optimization algorithm on top of the feasibility solver and
optimize for an amount of time that depends on the available CPU resources
within the current sampling step (possibly going open-loop at a given sampling
step in the extreme case that no resources are available) and still guarantee
convergence to the terminal set. While the MPC setup and the solver proposed in
the paper can deal with quite general classes of functions, we highlight the
synthesis method and show numerical results in the case of linear MPC with
ellipsoidal and polyhedral terminal sets.
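The abstract does not detail the proposed solver, but a classical convex feasibility method it can be compared against is the projections-onto-convex-sets (POCS) scheme, which finds a point in the intersection of convex sets by cycling through their Euclidean projections. The sketch below is a generic illustration, not the paper's algorithm; all function names (`project_ball`, `project_halfspace`, `pocs`) are our own.

```python
import numpy as np

def project_ball(x, c, r):
    # Euclidean projection onto the ball {x : ||x - c|| <= r}
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def project_halfspace(x, a, b):
    # Euclidean projection onto the halfspace {x : a^T x <= b}
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def pocs(x0, projections, tol=1e-9, max_iter=1000):
    # Cycle through the projections until the iterate stops moving;
    # for a nonempty intersection this converges to a feasible point.
    x = x0.astype(float)
    for _ in range(max_iter):
        x_prev = x
        for P in projections:
            x = P(x)
        if np.linalg.norm(x - x_prev) <= tol:
            break
    return x
```

Each projection here has a closed form, which is what keeps the per-iteration cost low enough for real-time MPC use.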
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one retains the global efficiency
estimates of the corresponding first-order methods while achieving fast
asymptotic convergence rates. Furthermore, both are computationally attractive
since each Newton iteration requires the approximate solution of a linear
system of usually small dimension.
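The forward-backward envelope has a closed form once the smooth term, the nonsmooth term, and its proximal mapping are available. As a hedged sketch (the concrete choice of f(x) = 0.5||Ax - b||^2 and g(x) = lam*||x||_1 and all names below are ours, not the paper's), it can be evaluated as:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t*||.||_1 (componentwise soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe(x, A, b, lam, gamma):
    # Forward-backward envelope of f(x) + g(x) with
    # f(x) = 0.5*||Ax - b||^2 (smooth) and g(x) = lam*||x||_1:
    #   FBE(x) = f(x) + grad_f(x)'(z - x) + ||z - x||^2/(2*gamma) + g(z),
    # where z is the forward-backward step from x.
    r = A @ x - b
    f = 0.5 * (r @ r)
    grad = A.T @ r
    z = soft_threshold(x - gamma * grad, gamma * lam)  # FB step
    d = z - x
    return f + grad @ d + (d @ d) / (2 * gamma), z  # g(z) added below
```

A useful property to check: since z minimizes the model that defines the envelope, FBE(x) + g(z) never exceeds the original objective f(x) + g(x), with equality exactly at fixed points of the forward-backward map.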
A Simple and Efficient Algorithm for Nonlinear Model Predictive Control
We present PANOC, a new algorithm for solving optimal control problems
arising in nonlinear model predictive control (NMPC). A usual approach to this
type of problem is sequential quadratic programming (SQP), which requires the
solution of a quadratic program at every iteration and, consequently, inner
iterative procedures. As a result, when the problem is ill-conditioned or the
prediction horizon is large, each outer iteration becomes computationally very
expensive. We propose a line-search algorithm that combines forward-backward
iterations (FB) and Newton-type steps over the recently introduced
forward-backward envelope (FBE), a continuous, real-valued, exact merit
function for the original problem. The curvature information of Newton-type
methods enables asymptotic superlinear rates under mild assumptions at the
limit point, and the proposed algorithm is based on very simple operations:
access to first-order information of the cost and dynamics and low-cost direct
linear algebra. No inner iterative procedure nor Hessian evaluation is
required, making our approach computationally simpler than SQP methods. The
low-memory requirements and simple implementation make our method particularly
suited for embedded NMPC applications.
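The structure of such a line-search loop can be sketched on a small convex instance. This is not the authors' implementation: it uses a memory-1 Barzilai-Borwein direction where PANOC uses L-BFGS, and a lasso problem stands in for the NMPC single-shooting formulation; all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def panoc_sketch(A, b, lam, x0, max_iter=500, tol=1e-8):
    # PANOC-flavored loop for min 0.5*||Ax - b||^2 + lam*||x||_1.
    # The candidate step blends the forward-backward (FB) step with a
    # quasi-Newton direction on the fixed-point residual; backtracking on
    # the forward-backward envelope (FBE) enforces decrease, and tau -> 0
    # falls back to the plain FB step, which always decreases the FBE.
    gamma = 0.95 / np.linalg.norm(A, 2) ** 2  # stepsize < 1/L
    x = x0.astype(float)

    def fb_step(x):
        grad = A.T @ (A @ x - b)
        return grad, soft_threshold(x - gamma * grad, gamma * lam)

    def fbe(x, grad, z):
        d = z - x
        f = 0.5 * np.linalg.norm(A @ x - b) ** 2
        return f + grad @ d + (d @ d) / (2 * gamma) + lam * np.abs(z).sum()

    grad, z = fb_step(x)
    x_old, r_old = x.copy(), x - z
    for _ in range(max_iter):
        r = x - z
        if np.linalg.norm(r) <= tol:
            break
        s, y = x - x_old, r - r_old        # memory-1 (BB) curvature pair
        h = (s @ y) / (y @ y) if y @ y > 0 else 1.0
        d = -h * r                         # quasi-Newton direction
        phi, tau = fbe(x, grad, z), 1.0
        while True:
            x_new = x + (1 - tau) * (z - x) + tau * d
            grad_new, z_new = fb_step(x_new)
            if fbe(x_new, grad_new, z_new) <= phi - 1e-4 * (r @ r) / (2 * gamma) or tau < 1e-8:
                break
            tau /= 2                       # backtrack toward the FB step
        x_old, r_old = x, r
        x, grad, z = x_new, grad_new, z_new
    return z
```

Note how the loop uses only first-order information and cheap vector operations, which is the property the abstract emphasizes for embedded use.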
Newton-type Alternating Minimization Algorithm for Convex Optimization
We propose NAMA (Newton-type Alternating Minimization Algorithm) for solving
structured nonsmooth convex optimization problems where the sum of two
functions is to be minimized, one being strongly convex and the other composed
with a linear mapping. The proposed algorithm is a line-search method over a
continuous, real-valued, exact penalty function for the corresponding dual
problem, which is computed by evaluating the augmented Lagrangian at the primal
points obtained by alternating minimizations. As a consequence, NAMA relies on
exactly the same computations as the classical alternating minimization
algorithm (AMA), also known as the dual proximal gradient method. Under
standard assumptions the proposed algorithm possesses strong convergence
properties, while under mild additional assumptions the asymptotic convergence
is superlinear, provided that the search directions are chosen according to
quasi-Newton formulas. Due to its simplicity, the proposed method is well
suited for embedded applications and large-scale problems. Experiments show
that using limited-memory directions in NAMA greatly improves the convergence
speed over AMA and its accelerated variant.
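Since NAMA relies on the same computations as classical AMA, a minimal AMA loop illustrates the per-iteration structure. This is a generic sketch, not the paper's code: the strongly convex quadratic, the nonnegativity constraint, and the function name `ama` are our own choices.

```python
import numpy as np

def ama(Q, q, rho=1.0, max_iter=500, tol=1e-8):
    # Alternating Minimization Algorithm (dual proximal gradient) for
    #   min 0.5*x'Qx - q'x + g(z)  s.t.  x = z,
    # with Q strongly convex and g the indicator of the nonnegative orthant.
    n = q.size
    y = np.zeros(n)   # dual variable
    z = np.zeros(n)
    for _ in range(max_iter):
        # x-step: minimize the Lagrangian in x (strongly convex)
        x = np.linalg.solve(Q, q - y)
        # z-step: minimize the augmented Lagrangian in z -> projection
        z = np.maximum(x + y / rho, 0.0)
        # dual ascent step on the constraint residual
        y = y + rho * (x - z)
        if np.linalg.norm(x - z) <= tol:
            break
    return z, y
```

NAMA evaluates the augmented Lagrangian at exactly these primal points to build its penalty function, so the extra cost of the line search over plain AMA is a few inner products.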
Optimal Active Control of a Wave Energy Converter
This paper investigates optimal active control schemes applied to a point absorber wave energy converter in a receding horizon fashion. A variational formulation of the power maximization problem is adapted to solve the optimal control problem. The optimal control method is shown to be of a bang-bang type for a power take-off mechanism that incorporates both linear dampers and active control elements. We also consider a direct transcription of the optimal control problem as a general nonlinear program. A variation of the projected gradient optimization scheme is formulated and shown to be feasible and computationally inexpensive compared to a standard NLP solver. Since the system model is bilinear and the cost function is non-convex quadratic, the resulting optimization problem is not a convex quadratic program. Results are compared with an optimal command latching method to demonstrate the improvement in absorbed power. Time domain simulations are generated under irregular sea conditions.
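The projected gradient scheme the abstract refers to is cheap because, for box-bounded controls, the projection is a componentwise clip. The sketch below shows the generic method on a toy quadratic, not the paper's WEC formulation; the function name and test problem are illustrative.

```python
import numpy as np

def projected_gradient(grad, x0, lb, ub, step, max_iter=500, tol=1e-8):
    # Projected gradient descent for  min f(x)  s.t.  lb <= x <= ub.
    # The box projection is np.clip, so each iteration costs one gradient
    # evaluation plus O(n) vector work.
    x = np.clip(x0.astype(float), lb, ub)
    for _ in range(max_iter):
        x_new = np.clip(x - step * grad(x), lb, ub)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

The projection naturally produces iterates saturated at the control bounds, which is consistent with the bang-bang structure of the optimal control identified in the paper.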