
    A Statistical Learning Theory Approach for Uncertain Linear and Bilinear Matrix Inequalities

    In this paper, we consider the problem of minimizing a linear functional subject to uncertain linear and bilinear matrix inequalities, which depend in a possibly nonlinear way on a vector of uncertain parameters. Motivated by recent results in statistical learning theory, we show that probabilistically guaranteed solutions can be obtained by means of randomized algorithms. In particular, we show that the Vapnik-Chervonenkis dimension (VC-dimension) of the two problems is finite, and we compute upper bounds on it. In turn, these bounds allow us to derive explicitly the sample complexity of these problems. Using these bounds, in the second part of the paper, we derive a sequential scheme based on a sequence of optimization and validation steps. The algorithm is along the same lines as recent schemes proposed for similar problems, but improves on them in terms of both complexity and generality. The effectiveness of this approach is shown using a linear model of a robot manipulator subject to uncertain parameters.
    Comment: 19 pages, 2 figures; accepted for publication in Automatica
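    As a rough illustration of the optimization-and-validation pattern described in this abstract (not the paper's actual algorithm, sample-size rules, or LMI/BMI setting), the sketch below runs a sequential sample-and-validate loop on a toy uncertain linear program; the constraint function, sample sizes, and tolerances are illustrative assumptions.

```python
# Hedged sketch of a generic sequential sample-and-validate scheme: at each
# iteration, solve an optimization problem over sampled constraints, then
# validate the candidate on a fresh, larger batch of uncertainty samples.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([1.0, 1.0])                      # objective: minimize x1 + x2
b = 1.0

def a(q):
    # uncertain constraint vector a(q) in the sampled constraint a(q)'x <= b
    return np.array([-1.0 - 0.2 * q[0], -1.0 - 0.2 * q[1]])

def draw_q(n):
    return rng.uniform(-1.0, 1.0, size=(n, 2))

eps, max_iter = 0.05, 10                      # target violation level, iteration cap
for k in range(1, max_iter + 1):
    Nk, Mk = 50 * k, 200 * k                  # growing design / validation sample sizes
    Q = draw_q(Nk)
    A_ub = np.array([a(q) for q in Q])        # sampled constraints a(q)'x <= b
    res = linprog(c, A_ub=A_ub, b_ub=np.full(Nk, b), bounds=[(-10, 10)] * 2)
    x_cand = res.x
    Qval = draw_q(Mk)
    viol = np.mean([a(q) @ x_cand > b + 1e-9 for q in Qval])
    if viol <= eps:                            # empirical violation small enough: accept
        print(f"iteration {k}: accepted x = {x_cand}, empirical violation {viol:.3f}")
        break
    print(f"iteration {k}: rejected (violation {viol:.3f}), increasing sample size")
```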

    Chance Constrained Mixed Integer Program: Bilinear and Linear Formulations, and Benders Decomposition

    In this paper, we study a chance constrained mixed integer program that accounts for recourse decisions and their incurred cost, developed on a finite discrete scenario set. By studying a non-traditional bilinear mixed integer formulation, we derive its linear counterparts and show that they can be stronger than existing linear formulations. We also develop a variant of Jensen's inequality that extends the one for stochastic programs. To solve this challenging problem, we present a variant of the Benders decomposition method in bilinear form, which provides an easy-to-use algorithmic framework for further improvements, along with a few enhancement strategies based on structural properties or Jensen's inequality. A computational study shows that the presented Benders decomposition method, together with appropriate enhancement techniques, outperforms a commercial solver by an order of magnitude when solving chance constrained programs or detecting their infeasibility.
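    For context, the block below writes out a generic big-M mixed integer reformulation of a chance constraint over a finite scenario set with recourse. It is a baseline formulation of the type the paper's bilinear and strengthened linear formulations aim to improve; the symbols (c, d, T_k, W, h_k, p_k, M) are placeholders rather than the paper's notation.

```latex
% Generic big-M reformulation of a chance constraint over scenarios {1,...,K}
% with probabilities p_k: the binary z_k marks a scenario whose constraints may
% be violated. Exact treatment of recourse costs varies across models; this is
% only a baseline that stronger formulations are designed to dominate.
\begin{align}
\min_{x,\;y_k,\;z_k}\quad & c^\top x + \sum_{k=1}^{K} p_k\, d^\top y_k \\
\text{s.t.}\quad & T_k x + W y_k \ge h_k - M z_k \mathbf{1}, && k = 1,\dots,K,\\
& \sum_{k=1}^{K} p_k\, z_k \le \epsilon,\\
& x \in X, \quad y_k \ge 0, \quad z_k \in \{0,1\}, && k = 1,\dots,K.
\end{align}
```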

    A scenario approach for non-convex control design

    Randomized optimization is an established tool for control design with modulated robustness. While for uncertain convex programs there exist randomized approaches with efficient sampling, this is not the case for non-convex problems. Approaches based on statistical learning theory are applicable to non-convex problems, but they are usually conservative in terms of performance and require a high sample complexity to achieve the desired probabilistic guarantees. In this paper, we derive a novel scenario approach for a wide class of random non-convex programs, with a sample complexity similar to that of uncertain convex programs and with probabilistic guarantees that hold not only for the optimal solution of the scenario program, but for all feasible solutions inside a set of a priori chosen complexity. We also address measure-theoretic issues for uncertain convex and non-convex programs. Among the family of non-convex control-design problems that can be addressed via randomization, we apply our scenario approach to randomized Model Predictive Control for chance-constrained nonlinear control-affine systems.
    Comment: Submitted to IEEE Transactions on Automatic Control
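    To give a sense of the "convex-like" sample complexity the abstract refers to, the snippet below evaluates one common sufficient sample-size bound for convex scenario programs, roughly N >= (2/eps) * (ln(1/beta) + d) with d decision variables; the paper's non-convex result replaces d with its own complexity measure, which is not reproduced here.

```python
# Hedged sketch: a classical sufficient sample-size bound for convex scenario
# programs, quoted only to illustrate the order of magnitude involved.
import math

def scenario_sample_size(eps: float, beta: float, d: int) -> int:
    """Samples ensuring, with confidence at least 1 - beta, that the scenario
    solution violates the chance constraint with probability at most eps
    (convex case, d decision variables)."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

print(scenario_sample_size(eps=0.05, beta=1e-6, d=10))   # e.g. 953 samples
```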

    Robust Region-of-Attraction Estimation

    We propose a method to compute invariant subsets of the region of attraction for asymptotically stable equilibrium points of polynomial dynamical systems with bounded parametric uncertainty. Parameter-independent Lyapunov functions are used to characterize invariant subsets of the robust region of attraction. A branch-and-bound type refinement procedure reduces the conservatism. We demonstrate the method on an example from the literature and on uncertain controlled short-period aircraft dynamics.
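    The certificate behind such estimates can be summarized as follows; this is a hedged paraphrase of the standard Lyapunov sublevel-set argument, not the paper's exact formulation.

```latex
% For \dot{x} = f(x,\delta) with uncertain parameters \delta \in \Delta, suppose a
% single (parameter-independent) function V satisfies
\begin{align}
& V(0) = 0, \qquad V(x) > 0 \quad \forall x \neq 0, \\
& \Omega_c := \{\, x : V(x) \le c \,\} \ \text{is bounded}, \\
& \nabla V(x)^{\top} f(x,\delta) < 0 \quad \forall x \in \Omega_c \setminus \{0\},\ \forall \delta \in \Delta.
\end{align}
% Then \Omega_c is invariant and contained in the region of attraction of the
% origin for every admissible \delta. Searching over polynomial V and the level c,
% with a branch-and-bound refinement over \Delta, reduces the conservatism.
```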

    Stochastic Variance Reduction Methods for Saddle-Point Problems

    We consider convex-concave saddle-point problems where the objective functions may be split into many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply, and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities; (b) there are two notions of splits, in terms of functions or in terms of partial derivatives; (c) the split does need to be done with convex-concave terms; (d) non-uniform sampling is key to an efficient algorithm, both in theory and in practice; and (e) these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework, leading to an algorithm which is always superior to accelerated batch algorithms.
    Comment: Neural Information Processing Systems (NIPS), 2016, Barcelona, Spain
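    The following sketch illustrates the variance-reduced monotone-operator idea on a toy regularized bilinear saddle-point problem. The problem data, uniform sampling, and step size are illustrative assumptions; it does not reproduce the paper's algorithm, its non-uniform sampling, or its acceleration.

```python
# Hedged sketch of an SVRG-style variance-reduced step for the toy saddle point
# min_x max_y (lam/2)||x||^2 + (1/n) sum_i x'A_i y - (lam/2)||y||^2, whose unique
# saddle point is (0, 0).
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 50, 5, 1.0
A = rng.normal(size=(n, d, d)) / np.sqrt(d)    # components of the bilinear coupling

def op_i(i, x, y):
    # monotone operator of component i: (grad_x, -grad_y) of the i-th term
    return lam * x + A[i] @ y, lam * y - A[i].T @ x

def op_full(x, y):
    gx = lam * x + A.mean(axis=0) @ y
    gy = lam * y - A.mean(axis=0).T @ x
    return gx, gy

x, y = np.ones(d), np.ones(d)
step, epochs, m = 0.05, 30, n                  # step size, outer epochs, inner loop length
for epoch in range(epochs):
    x0, y0 = x.copy(), y.copy()                # snapshot point
    fx0, fy0 = op_full(x0, y0)                 # full operator at the snapshot
    for _ in range(m):
        i = rng.integers(n)                    # uniform sampling of a component
        gx_i, gy_i = op_i(i, x, y)
        hx_i, hy_i = op_i(i, x0, y0)
        # unbiased variance-reduced estimate of the full operator
        gx, gy = gx_i - hx_i + fx0, gy_i - hy_i + fy0
        x, y = x - step * gx, y - step * gy
print("distance to the saddle point (0, 0):", np.linalg.norm(np.r_[x, y]))
```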

    Stability and Performance Verification of Optimization-based Controllers

    This paper presents a method to verify closed-loop properties of optimization-based controllers for deterministic and stochastic constrained polynomial discrete-time dynamical systems. The closed-loop properties amenable to the proposed technique include global and local stability, performance with respect to a given cost function (both in a deterministic and stochastic setting) and the $\mathcal{L}_2$ gain. The method applies to a wide range of practical control problems: For instance, a dynamical controller (e.g., a PID) plus input saturation, model predictive control with state estimation, inexact model and soft constraints, or a general optimization-based controller where the underlying problem is solved with a fixed number of iterations of a first-order method are all amenable to the proposed approach. The approach is based on the observation that the control input generated by an optimization-based controller satisfies the associated Karush-Kuhn-Tucker (KKT) conditions which, provided all data is polynomial, are a system of polynomial equalities and inequalities. The closed-loop properties can then be analyzed using sum-of-squares (SOS) programming.
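    As a concrete instance of the KKT observation above, consider a generic parametric QP controller (placeholder data H, F, G, E, g; not necessarily the paper's exact problem class): u(x) = argmin_u (1/2) u'Hu + x'Fu subject to Gu <= g + Ex. Its KKT conditions are polynomial, indeed bilinear, in (x, u, lambda):

```latex
% KKT conditions of the parametric QP above, written as a polynomial system in
% (x, u, \lambda); adjoining them to the closed-loop dynamics is what makes the
% closed loop amenable to sum-of-squares analysis.
\begin{align}
& H u + F^\top x + G^\top \lambda = 0, \\
& G u - g - E x \le 0, \qquad \lambda \ge 0, \\
& \lambda_i \,\bigl(G u - g - E x\bigr)_i = 0, \qquad i = 1,\dots,m.
\end{align}
```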

    Finite-Time Control of Uncertain Linear Systems Using Statistical Learning Methods

    In this paper we show how some difficult linear algebra problems can be “approximately” solved using statistical learning methods. We illustrate our results by considering the state and output feedback, finite-time robust stabilization problems for linear systems subject to time-varying norm-bounded uncertainties and to unknown disturbances. In the state feedback case, we obtained in an earlier paper a sufficient condition for finite-time stabilization in the presence of time-varying disturbances; this condition requires the solution of a Linear Matrix Inequality (LMI) feasibility problem, which is by now a standard application of linear algebraic methods. In the output feedback case, however, we end up with a Bilinear Matrix Inequality (BMI) problem, which we attack by resorting to a statistical approach.
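    The sketch below illustrates the generic "randomize over the hard variable" strategy evoked above, not the paper's algorithm or its finite-time conditions: sample a static output-feedback gain K, build a candidate Lyapunov matrix P from the nominal closed loop, and validate a quadratic stability inequality over sampled norm-bounded uncertainties. All matrices and bounds are illustrative assumptions.

```python
# Hedged sketch: randomized search over the bilinear variable (the gain K),
# with a sampled robustness check of A(q)'P + P A(q) < 0 for a candidate P.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
A0 = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def A(q):
    # a sampled realization of a norm-bounded perturbation of A0
    return A0 + q * np.array([[0.0, 0.0], [0.3, 0.0]])

qs = rng.uniform(-1.0, 1.0, size=200)          # validation samples of the uncertainty
for trial in range(100):
    K = rng.uniform(-5.0, 0.0, size=(1, 1))    # randomly sampled output-feedback gain
    Acl0 = A(0.0) + B @ K @ C                  # nominal closed loop
    if np.max(np.linalg.eigvals(Acl0).real) >= 0:
        continue                                # skip destabilizing samples
    P = solve_continuous_lyapunov(Acl0.T, -np.eye(2))   # Acl0' P + P Acl0 = -I
    ok = all(
        np.max(np.linalg.eigvalsh((A(q) + B @ K @ C).T @ P + P @ (A(q) + B @ K @ C))) < 0
        for q in qs
    )
    if ok:
        print(f"trial {trial}: K = {K.ravel()} passed the sampled robustness check")
        break
else:
    print("no sampled gain passed the check")
```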