Reinforcement Learning Based on Real-Time Iteration NMPC
Reinforcement Learning (RL) has demonstrated a striking ability to learn optimal
policies from data without any prior knowledge of the process. The main
drawback of RL is that it is typically very difficult to guarantee stability
and safety. On the other hand, Nonlinear Model Predictive Control (NMPC) is an
advanced model-based control technique which does guarantee safety and
stability, but only yields optimality for the nominal model. It has therefore
recently been proposed to use NMPC as a function approximator within RL. While
this approach has been shown to yield good performance, its applicability is
hindered by the computational burden of NMPC, which must be solved to full
convergence. In practice,
however, computationally efficient algorithms such as the Real-Time Iteration
(RTI) scheme are deployed in order to return an approximate NMPC solution in
very short time. In this paper we bridge this gap by extending the existing
theoretical framework to also cover RL based on RTI NMPC. We demonstrate the
effectiveness of this new RL approach with a nontrivial example modeling a
challenging nonlinear system subject to stochastic perturbations with the
objective of optimizing an economic cost.
Comment: accepted for the IFAC World Congress 202
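The core idea above, that RTI performs a single optimization iteration per sampling instant instead of iterating NMPC to full convergence, can be sketched on a toy problem. This is only an illustration, not the paper's method: the scalar integrator system, horizon, stepsize, and gradient-descent solver below are all invented for the example.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): NMPC for a scalar integrator
# x+ = x + dt*u with cost sum(x^2 + 0.1*u^2) over a horizon, solved by
# gradient descent on the control sequence u. "Full" NMPC iterates to
# near-convergence at every sampling time; an RTI-style controller does
# ONE iteration per sampling time, warm-started by shifting the
# previous solution.
dt, N, alpha = 0.1, 10, 0.5   # sampling time, horizon, stepsize (invented)

def rollout(x0, u):
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + dt * uk)
    return np.array(xs)

def grad(x0, u):
    # d/du_j of sum_k x_k^2 + 0.1*sum_j u_j^2 for integrator dynamics
    xs = rollout(x0, u)
    g = 0.2 * u.copy()
    for j in range(len(u)):
        g[j] += 2.0 * dt * xs[j + 1:].sum()
    return g

def closed_loop(x0, inner_iters, steps=50):
    x, u = x0, np.zeros(N)
    for _ in range(steps):
        for _ in range(inner_iters):      # inner_iters=1 -> RTI-style
            u = u - alpha * grad(x, u)
        x = x + dt * u[0]                 # apply first control only
        u = np.append(u[1:], u[-1])       # shift solution (warm start)
    return x

x_full = closed_loop(1.0, inner_iters=50)  # "fully converged" NMPC
x_rti = closed_loop(1.0, inner_iters=1)    # one iteration per sample
```

Both controllers stabilize the toy system, but the RTI-style loop spends a small, fixed amount of computation per sampling instant, which is the property that makes RTI attractive in practice.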
An Inequality Constrained SL/QP Method for Minimizing the Spectral Abscissa
We consider a problem in eigenvalue optimization: finding a parameter value
that locally minimizes the spectral abscissa, the largest real part of the
spectrum of a parameter-dependent matrix. This is an important problem for the
stabilization of control
systems. Many systems require the spectrum to lie in the open left half-plane
in order to be stable. The optimization problem, however, is difficult to
solve because the underlying objective function is nonconvex, nonsmooth, and
non-Lipschitz. In addition, local minima tend to correspond to points of
non-differentiability and locally non-Lipschitz behavior. We present a
sequential linear and quadratic programming algorithm that solves a series of
linear or quadratic subproblems formed by linearizing the surfaces
corresponding to the largest eigenvalues. We present numerical results
comparing the algorithm to the state of the art.
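The nonsmoothness described above is easy to see on a small example. The following sketch (an invented illustration, not from the paper) evaluates the spectral abscissa of a parameter-dependent 2x2 matrix, a damped oscillator with gain k, whose minimizer sits exactly where two eigenvalues coalesce, a point of non-differentiability.

```python
import numpy as np

def spectral_abscissa(A):
    # largest real part of the spectrum of A
    return np.linalg.eigvals(A).real.max()

def A(k):
    # companion matrix of s^2 + k*s + 1; eigenvalues are its roots:
    # complex with real part -k/2 for k < 2, real and splitting for k > 2
    return np.array([[0.0, 1.0], [-1.0, -k]])

ks = np.linspace(1.0, 3.0, 201)
alphas = [spectral_abscissa(A(k)) for k in ks]
k_best = ks[int(np.argmin(alphas))]  # minimizer near k = 2, where the
                                     # two eigenvalue surfaces meet
```

At k = 2 the abscissa reaches its minimum value of -1, and the objective has a kink there: decreasing like -k/2 on one side and increasing like (-k + sqrt(k^2 - 4))/2 on the other, matching the abstract's point that local minima tend to occur at non-differentiable points.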
Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
We consider nonconvex constrained optimization problems and propose a new
approach to the convergence analysis based on penalty functions. We make use of
classical penalty functions in an unconventional way, in that penalty functions
only enter in the theoretical analysis of convergence while the algorithm
itself is penalty-free. Based on this idea, we are able to establish several
new results, including the first general analysis for diminishing stepsize
methods in nonconvex constrained optimization, showing convergence to
generalized stationary points, and a complexity study for SQP-type algorithms.
Comment: To appear in Mathematics of Operations Research
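As a minimal instance of the diminishing-stepsize setting analyzed above (not the paper's general algorithm), the sketch below runs projected gradient descent with stepsizes gamma_k = gamma0/(k+1) on an invented nonconvex objective over a convex box constraint; the iterates approach a stationary point.

```python
import numpy as np

# Illustrative sketch only: projected gradient with diminishing
# stepsizes on a nonconvex problem with a convex (box) constraint.
# Objective, box, and stepsize rule are invented for the example.
def f(x):
    return x ** 4 - 3.0 * x ** 2

def df(x):
    return 4.0 * x ** 3 - 6.0 * x

def proj(x):
    return np.clip(x, -2.0, 2.0)  # projection onto the box [-2, 2]

x, gamma0 = 0.5, 0.5
for k in range(20000):
    # diminishing stepsizes: sum gamma_k diverges, gamma_k -> 0
    x = proj(x - gamma0 / (k + 1) * df(x))
# x approaches a stationary point (here +/- sqrt(1.5), where df = 0)
```

The stepsize rule satisfies the classical diminishing conditions (the steps are not summable but vanish), so no line search or penalty parameter ever enters the iteration, consistent with the abstract's point that penalty functions appear only in the analysis.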
Hybrid Random/Deterministic Parallel Algorithms for Nonconvex Big Data Optimization
We propose a decomposition framework for the parallel optimization of the sum
of a differentiable (possibly nonconvex) function and a nonsmooth (possibly
nonseparable), convex one. The latter term is usually employed to enforce
structure in the solution, typically sparsity. The main contribution of this
work is a novel parallel, hybrid random/deterministic decomposition
scheme wherein, at each iteration, a subset of (block) variables is updated at
the same time by minimizing local convex approximations of the original
nonconvex function. To tackle huge-scale problems, the (block) variables
to be updated are chosen according to a mixed random and deterministic
procedure, which captures the advantages of both pure deterministic and random
update-based schemes. Almost sure convergence of the proposed scheme is
established. Numerical results show that on huge-scale problems the proposed
hybrid random/deterministic algorithm outperforms both random and deterministic
schemes.
Comment: The order of the authors is alphabetical
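The structure described above, a smooth (possibly nonconvex) term plus a nonsmooth convex sparsity term, with a mixed random/deterministic choice of which coordinates to update, can be sketched on a small lasso-type problem. This is an invented illustration, not the paper's algorithm: the data are random, and for simplicity the selected coordinates are updated sequentially rather than in parallel.

```python
import numpy as np

# Sketch: coordinate descent on 0.5*||Ax - b||^2 + lam*||x||_1, where
# each iteration updates a MIXED selection of coordinates: a random
# subset plus, deterministically, the coordinate with the largest
# gradient of the smooth part (a greedy rule).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1
col_sq = (A * A).sum(axis=0)  # per-coordinate curvature ||a_i||^2

def soft(z, t):
    # soft-thresholding: proximal operator of t*|.|
    return np.sign(z) * np.maximum(abs(z) - t, 0.0)

def objective(x):
    return 0.5 * ((A @ x - b) ** 2).sum() + lam * np.abs(x).sum()

x = np.zeros(10)
r = b - A @ x  # residual, kept up to date incrementally
for _ in range(200):
    g = -A.T @ r                                       # smooth gradient
    picks = set(rng.choice(10, size=3, replace=False)) # random part
    picks.add(int(np.argmax(np.abs(g))))               # greedy part
    for i in picks:
        # exact minimization of the local convex model in coordinate i
        rho = A[:, i] @ r + col_sq[i] * x[i]
        xi = soft(rho, lam) / col_sq[i]
        r += A[:, i] * (x[i] - xi)
        x[i] = xi
```

Each coordinate update exactly minimizes a local convex surrogate, so the objective never increases; the mixed selection rule combines the cheap exploration of random sampling with the fast progress of greedy (deterministic) picks, which is the trade-off the abstract highlights.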