
    Certification aspects of the fast gradient method for solving the dual of parametric convex programs

    This paper examines the computational complexity certification of the fast gradient method for solving the dual of a parametric convex program. To this end, a lower iteration bound is derived such that, for all parameters from a compact set, a solution with a specified level of suboptimality is obtained. Because of its practical importance, the derivation of the smallest such lower iteration bound is considered. In order to determine it, we investigate both the computation of the worst-case minimal Euclidean distance between an initial iterate and a Lagrange multiplier, and the issue of finding the largest step size for the fast gradient method. In addition, we argue that optimal preconditioning of the dual problem cannot be proven to decrease the smallest lower iteration bound. The findings of this paper are of importance in embedded optimization, for instance, in model predictive control.
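    The two quantities investigated above map directly onto the textbook a priori bound for the fast gradient method: with an L-smooth dual and an optimal multiplier within distance R of the starting point, eps-suboptimality is guaranteed after roughly sqrt(2*L*R^2/eps) iterations, and 1/L is the largest safe step size. A minimal Python sketch of this certified-iteration usage (the dual gradient oracle `grad`, the constants L and R, and the projection onto nonnegative multipliers are assumptions; the constants are the generic ones, not the paper's sharpened bound):

```python
import numpy as np

def fast_gradient_dual(grad, L, R, eps, mu0):
    """Projected fast gradient method on an L-smooth dual function.

    Runs for the a priori (textbook) iteration bound
        k_min = ceil(sqrt(2 * L * R**2 / eps) - 1),
    which guarantees eps-suboptimality whenever some optimal Lagrange
    multiplier lies within Euclidean distance R of the start mu0.
    """
    k_min = max(1, int(np.ceil(np.sqrt(2.0 * L * R**2 / eps) - 1.0)))
    mu = y = np.asarray(mu0, dtype=float)
    t = 1.0
    for _ in range(k_min):
        # 1/L is the largest step size preserving the guarantee;
        # the projection keeps inequality multipliers nonnegative.
        mu_next = np.maximum(y - grad(y) / L, 0.0)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = mu_next + ((t - 1.0) / t_next) * (mu_next - mu)
        mu, t = mu_next, t_next
    return mu, k_min
```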

    A Parametric Multi-Convex Splitting Technique with Application to Real-Time NMPC

    A novel splitting scheme to solve parametric multi-convex programs is presented. It consists of a fixed number of proximal alternating minimisations and a dual update per time step, which makes it attractive in a real-time Nonlinear Model Predictive Control (NMPC) framework and for distributed computing environments. Assuming that the parametric program is semi-algebraic and that its KKT points are strongly regular, a contraction estimate is derived and it is proven that the sub-optimality error remains stable if two key parameters are tuned properly. Efficacy of the method is demonstrated by solving a bilinear NMPC problem to control a DC motor. (To appear in Proceedings of the 53rd IEEE Conference on Decision and Control, 2014.)
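    To make the fixed-sweep structure concrete, here is a minimal Python sketch (not the paper's algorithm; the toy problem and the parameters rho and c are illustrative) of proximal alternating minimisation with a single dual update per sweep, applied to the multi-convex program minimize x^2 + z^2 subject to x*z = p:

```python
import numpy as np

# Augmented Lagrangian of the toy bilinear program:
#   L(x, z, lam) = x**2 + z**2 + lam*(x*z - p) + (rho/2)*(x*z - p)**2.
# For fixed z it is a convex quadratic in x (and vice versa), so each
# proximal block minimisation is available in closed form.

def prox_alt_min_step(x, z, lam, p, rho=10.0, c=1.0):
    # Exact minimiser over x of L + (c/2)*(x - x_prev)**2
    x = (z * (rho * p - lam) + c * x) / (2.0 + rho * z**2 + c)
    # Exact minimiser over z of L + (c/2)*(z - z_prev)**2
    z = (x * (rho * p - lam) + c * z) / (2.0 + rho * x**2 + c)
    # Single dual (multiplier) update on the coupling constraint
    lam = lam + rho * (x * z - p)
    return x, z, lam

x, z, lam = 1.0, 1.0, 0.0
for _ in range(50):               # fixed number of sweeps, real-time style
    x, z, lam = prox_alt_min_step(x, z, lam, p=2.0)
print(x, z, x * z)                # x*z approaches p = 2
```

    Because each sweep has the same fixed cost, the sub-optimality reached after a fixed number of sweeps is predictable, which is the property the contraction estimate quantifies.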

    Newton-type Alternating Minimization Algorithm for Convex Optimization

    We propose NAMA (Newton-type Alternating Minimization Algorithm) for solving structured nonsmooth convex optimization problems where the sum of two functions is to be minimized, one being strongly convex and the other composed with a linear mapping. The proposed algorithm is a line-search method over a continuous, real-valued, exact penalty function for the corresponding dual problem, which is computed by evaluating the augmented Lagrangian at the primal points obtained by alternating minimizations. As a consequence, NAMA relies on exactly the same computations as the classical alternating minimization algorithm (AMA), also known as the dual proximal gradient method. Under standard assumptions the proposed algorithm possesses strong convergence properties, while under mild additional assumptions the asymptotic convergence is superlinear, provided that the search directions are chosen according to quasi-Newton formulas. Due to its simplicity, the proposed method is well suited for embedded applications and large-scale problems. Experiments show that using limited-memory directions in NAMA greatly improves the convergence speed over AMA and its accelerated variant.
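    For reference, the AMA iterations that NAMA reuses look as follows on a small splitting, minimize 0.5*||x - a||^2 + ||z||_1 subject to x = z (a sketch under assumed data; the quasi-Newton line search that distinguishes NAMA is omitted):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ama(a, rho=0.5, iters=200):
    """Classical AMA / dual proximal gradient for
        minimize 0.5*||x - a||^2 + ||z||_1  subject to  x - z = 0,
    whose solution is soft_threshold(a, 1)."""
    lam = np.zeros_like(a)
    for _ in range(iters):
        x = a - lam                                   # minimise the strongly convex part plus <lam, x>
        z = soft_threshold(x + lam / rho, 1.0 / rho)  # prox step on the nonsmooth part
        lam = lam + rho * (x - z)                     # dual proximal gradient update
    return x, z

a = np.array([3.0, -0.5, 1.2])
x, z = ama(a)
print(z)   # approaches soft_threshold(a, 1) = [2.0, 0.0, 0.2]
```

    Since the quadratic term is 1-strongly convex, any rho below 1 keeps the dual update a valid proximal gradient step; NAMA would wrap these same two minimisations in a line search along quasi-Newton directions.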

    Stability and Performance Verification of Optimization-based Controllers

    This paper presents a method to verify closed-loop properties of optimization-based controllers for deterministic and stochastic constrained polynomial discrete-time dynamical systems. The closed-loop properties amenable to the proposed technique include global and local stability, performance with respect to a given cost function (both in a deterministic and stochastic setting) and the $\mathcal{L}_2$ gain. The method applies to a wide range of practical control problems: for instance, a dynamical controller (e.g., a PID) plus input saturation, model predictive control with state estimation, inexact model and soft constraints, or a general optimization-based controller where the underlying problem is solved with a fixed number of iterations of a first-order method are all amenable to the proposed approach. The approach is based on the observation that the control input generated by an optimization-based controller satisfies the associated Karush-Kuhn-Tucker (KKT) conditions which, provided all data is polynomial, are a system of polynomial equalities and inequalities. The closed-loop properties can then be analyzed using sum-of-squares (SOS) programming.
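    The polynomial-KKT observation is easy to see on a toy optimization-based controller, input saturation: u(v) = argmin_u (u - v)^2 subject to -1 <= u <= 1. A small sympy sketch (the variable names and the checked point are illustrative, not the paper's benchmark):

```python
import sympy as sp

u, v, l1, l2 = sp.symbols('u v lam1 lam2', real=True)

# KKT conditions of u(v) = argmin (u - v)^2 s.t. -1 <= u <= 1:
stationarity = sp.Eq(2*(u - v) + l1 - l2, 0)   # gradient of the Lagrangian
comp_upper   = sp.Eq(l1 * (u - 1), 0)          # complementarity for u <= 1
comp_lower   = sp.Eq(l2 * (u + 1), 0)          # complementarity for u >= -1
# together with the polynomial inequalities
#   u - 1 <= 0,  -u - 1 <= 0,  lam1 >= 0,  lam2 >= 0.

# Check the saturated branch u = 1 at v = 2:
sol = sp.solve([stationarity.subs({u: 1, v: 2}),
                comp_lower.subs(u, 1)], [l1, l2], dict=True)
print(sol)   # [{lam1: 2, lam2: 0}]: a valid KKT certificate for u = 1
```

    Adjoining these polynomial equalities and inequalities to the polynomial closed-loop dynamics yields a single polynomial system, which is exactly the object SOS programming can certify.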

    Credible Autocoding of Convex Optimization Algorithms

    The efficiency of modern optimization methods, coupled with increasing computational resources, has led to the possibility of real-time optimization algorithms acting in safety-critical roles. There is a considerable body of mathematical proofs on online optimization programs which can be leveraged to assist in the development and verification of their implementation. In this paper, we demonstrate how theoretical proofs of real-time optimization algorithms can be used to describe functional properties at the level of the code, thereby making them accessible to the formal methods community. The running example used in this paper is a generic semi-definite programming (SDP) solver. Semi-definite programs can encode a wide variety of optimization problems and can be solved in polynomial time to a given accuracy. We describe a top-down approach that transforms a high-level analysis of the algorithm into useful code annotations. We formulate some general remarks about how such a task can be incorporated into a convex programming autocoder. We then take a first step towards the automatic verification of the optimization program by identifying key issues to be addressed in future work.
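    The flavour of such proof-backed annotations can be sketched in a few lines: a convergence proof supplies a contraction invariant, and autocoding restates it as checks at the level of the code. The example below deliberately uses gradient descent on a strongly convex quadratic instead of the paper's SDP solver, with illustrative data; the asserts stand in for the formal annotations (e.g., ACSL contracts) an autocoder would emit:

```python
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 4.0]])   # mu = 2, L = 4 (illustrative)
b = np.array([1.0, -1.0])
x_star = np.linalg.solve(Q, b)           # reference point for the invariant
mu, L = 2.0, 4.0
alpha = 2.0 / (mu + L)                   # classical optimal fixed step size
rate = (L - mu) / (L + mu)               # certified contraction factor

x = np.array([10.0, -10.0])
for _ in range(50):
    d_prev = np.linalg.norm(x - x_star)
    x = x - alpha * (Q @ x - b)          # gradient step
    # Loop invariant taken from the convergence proof: the distance
    # to the optimum contracts by at least `rate` per iteration.
    assert np.linalg.norm(x - x_star) <= rate * d_prev + 1e-12
print(x)                                 # approaches x_star = [0.5, -0.25]
```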