2,137 research outputs found

    Stochastic model predictive control of LPV systems via scenario optimization

    A stochastic receding-horizon control approach for constrained Linear Parameter-Varying discrete-time systems is proposed in this paper. It is assumed that the time-varying parameters have a stochastic nature and that the system's matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. By using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, where only a finite number M of sampled predicted parameter trajectories ('scenarios') is considered. This problem is convex, and its solution is a priori guaranteed to be probabilistically robust up to a user-defined probability level p. The level p is linked to M by an analytic relationship, which establishes a tradeoff between computational complexity and robustness of the solution. A receding-horizon strategy is then presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result shows that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
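
    The analytic relationship between the number of scenarios M and the probability level p is not spelled out in this abstract. As a rough illustration only, the sketch below evaluates the commonly cited scenario-optimization sample-size bound M >= (2/eps)(ln(1/beta) + d) for a violation level eps, confidence beta, and number of decision variables d; the function name and this specific bound are assumptions, not necessarily the relationship used in the paper.

```python
import math

def scenario_sample_size(eps: float, beta: float, d: int) -> int:
    """Commonly cited scenario bound (illustrative; may differ from the
    paper's exact relationship): with M >= (2/eps) * (ln(1/beta) + d)
    sampled scenarios, the scenario solution of a convex problem with d
    decision variables violates the chance constraint with probability
    at most eps, with confidence at least 1 - beta."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

# Example: 10 decision variables, 5% violation level, 1e-6 confidence level.
print(scenario_sample_size(eps=0.05, beta=1e-6, d=10))  # -> 953
```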

    On Repetitive Scenario Design

    Repetitive Scenario Design (RSD) is a randomized approach to robust design based on iterating two phases: a standard scenario design phase that uses N scenarios (design samples), followed by a randomized feasibility phase that uses N_o test samples on the scenario solution. We give a full and exact probabilistic characterization of the number of iterations required by the RSD approach to return a solution, as a function of N, N_o, and of the desired levels of probabilistic robustness in the solution. This novel approach broadens the applicability of the scenario technology, since the user is now presented with a clear tradeoff between the number N of design samples and the ensuing expected number of repetitions required by the RSD algorithm. Plain (one-shot) scenario design becomes just one of the possibilities, sitting at one extreme of the tradeoff curve, in which one insists on finding a solution in a single repetition: this comes at the cost of a possibly high N. Other possibilities along the tradeoff curve use lower values of N, but may require more than one repetition.
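
    As a concrete, toy illustration of the two-phase iteration described above, the following sketch runs a scenario design phase on N samples and a randomized feasibility check on N_o fresh test samples, repeating until the candidate is accepted. The one-dimensional problem, the sampling model, the violation budget, and all names are invented for illustration; only the overall design-then-test loop mirrors the RSD idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_constraints(n):
    # Toy uncertainty: a ~ Uniform(0.5, 1.5); each sample induces a * x <= 1.
    return rng.uniform(0.5, 1.5, size=n)

def scenario_solve(a):
    # Design phase: maximize x subject to a_i * x <= 1 for every sampled a_i.
    return float(np.min(1.0 / a))

def repetitive_scenario_design(N, N_o, max_violations, max_iters=100):
    """Design on N scenarios, then test the candidate on N_o fresh samples;
    accept it if the empirical violations do not exceed max_violations."""
    for k in range(1, max_iters + 1):
        x = scenario_solve(sample_constraints(N))                    # design phase
        violations = int(np.sum(sample_constraints(N_o) * x > 1.0))  # test phase
        if violations <= max_violations:
            return x, k
    raise RuntimeError("no candidate accepted within max_iters")

# Allow up to 3% empirical violations on the test set (arbitrary choice).
x, iters = repetitive_scenario_design(N=50, N_o=500, max_violations=15)
print(f"accepted x = {x:.4f} after {iters} iteration(s)")
```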

    Direct Data-Driven Portfolio Optimization with Guaranteed Shortfall Probability

    This paper proposes a novel methodology for the optimal allocation of a portfolio of risky financial assets. Most existing methods that aim at compromising between portfolio performance (e.g., expected return) and its risk (e.g., volatility or shortfall probability) need some statistical model of the asset returns. This means that: (i) one needs to make rather strong assumptions on the market to elicit a return distribution, and (ii) the parameters of this distribution need to be estimated somehow, which is quite a critical aspect, since optimal portfolios will then depend on the way the parameters are estimated. Here we propose instead a direct, data-driven route to portfolio optimization that avoids both of these issues: the optimal portfolios are computed directly from historical data by solving a sequence of convex optimization problems (typically, linear programs). More importantly, the resulting portfolios are theoretically backed by a guarantee that their expected shortfall is no larger than an a-priori assigned level. This result is obtained assuming efficiency of the market, under no hypotheses on the shape of the joint distribution of the asset returns, which can remain unknown and need not be estimated.
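
    The paper's shortfall-constrained formulation is not reproduced here. As a simplified, purely illustrative stand-in for "optimal portfolios computed directly from historical data by solving linear programs", the sketch below builds a max-min allocation (maximize the worst historical scenario return) as a single LP over synthetic return data; all names and numbers are invented, and the paper's actual sequence of LPs and its shortfall guarantee are not implemented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy historical return scenarios: T observations of n assets.
rng = np.random.default_rng(1)
T, n = 250, 4
R = rng.normal(loc=0.0005, scale=0.01, size=(T, n))

# Max-min data-driven allocation as one LP:
#   maximize t  subject to  R @ w >= t,  sum(w) = 1,  w >= 0.
c = np.concatenate([np.zeros(n), [-1.0]])            # decision vector [w, t]; minimize -t
A_ub = np.hstack([-R, np.ones((T, 1))])              # t - R @ w <= 0 for every scenario
b_ub = np.zeros(T)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # budget constraint sum(w) = 1
b_eq = np.array([1.0])
bounds = [(0.0, None)] * n + [(None, None)]          # long-only weights, t free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, t = res.x[:n], res.x[n]
print("weights:", np.round(w, 3), "worst historical scenario return:", round(t, 5))
```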

    Robust Model Predictive Control via Scenario Optimization

    This paper discusses a novel probabilistic approach for the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding-horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbances belong to non-convex sets, and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result of this paper concerns the analysis of the closed-loop system under receding-horizon implementation of the scenario FHOCP, and essentially states that the devised control law guarantees constraint satisfaction at each step with an a-priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem. Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Automatic Control, with DOI: 10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of record will be available at http://ieeexplore.ieee.or
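
    The following sketch illustrates the kind of scenario FHOCP the abstract refers to: a single input sequence is optimized against M randomly sampled realizations of a toy uncertain system and disturbance, with convex constraints imposed on every sampled state trajectory. The dynamics, dimensions, constraint levels, and the omission of the command selection rule are all simplifications for illustration (cvxpy is assumed available).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
T, M = 10, 30                               # control horizon, number of sampled scenarios
x0 = np.array([2.0, 0.0])
B = np.array([[0.0], [1.0]])

def sample_A():
    # Toy parametric uncertainty in one entry of the system matrix.
    return np.array([[1.0, 0.1], [0.0, rng.uniform(0.8, 1.2)]])

u = cp.Variable((1, T))                     # one input sequence shared by all scenarios
cost, constraints = 0, []
for _ in range(M):
    A = sample_A()
    x = x0
    for k in range(T):
        w = rng.normal(0.0, 0.02, size=2)   # sampled additive disturbance
        x = A @ x + B @ u[:, k] + w         # state stays affine in u, so the problem is convex
        cost += cp.sum_squares(x)
        constraints.append(cp.norm(x, "inf") <= 5.0)   # state constraint per scenario
cost = cost / M + cp.sum_squares(u)
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print("first optimal input of the scenario FHOCP:", float(u.value[0, 0]))
```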

    A guaranteed-convergence framework for passivity enforcement of linear macromodels

    Passivity enforcement is a key step in the extraction of linear macromodels of electrical interconnects and packages for Signal and Power Integrity applications. Most state-of-the-art techniques for passivity enforcement are based on suboptimal or approximate formulations that do not guarantee convergence. In this paper we introduce a new rigorous framework that casts passivity enforcement as a convex non-smooth optimization problem. Thanks to convexity, we are able to prove convergence to the optimal solution within a finite number of steps. The effectiveness of this approach is demonstrated through various numerical examples.

    Subgradient Techniques for Passivity Enforcement of Linear Device and Interconnect Macromodels

    This paper presents a class of nonsmooth convex optimization methods for the passivity enforcement of reduced-order macromodels of electrical interconnects, packages, and linear passive devices. Model passivity can be lost during model extraction or identification from numerical field solutions or direct measurements. Nonpassive models may cause instabilities in transient system-level simulation; therefore, suitable postprocessing is necessary to eliminate any passivity violations. Unlike leading numerical schemes on the subject, passivity enforcement is formulated here as a direct frequency-domain $\mathcal{H}_\infty$ norm minimization through perturbation of the model state-space parameters. Since the dependence of this norm on the parameters is nonsmooth, but continuous and convex, we resort to the use of subdifferentials and subgradients, which are used to devise two different algorithms. We provide a theoretical proof of global optimality for the solutions computed via both schemes. Numerical results confirm that these algorithms achieve the global optimum in a finite number of iterations within a prescribed accuracy level.
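
    The paper's $\mathcal{H}_\infty$-norm objective is not reproduced here. As a generic illustration of the subgradient machinery it relies on, the sketch below runs a plain subgradient iteration with diminishing step size on the nonsmooth convex objective f(x) = ||Ax - b||_1; the data and step-size rule are arbitrary choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def f_and_subgradient(x):
    # Nonsmooth convex objective f(x) = ||Ax - b||_1 and one of its subgradients.
    r = A @ x - b
    return float(np.abs(r).sum()), A.T @ np.sign(r)

x = np.zeros(5)
best = np.inf
for k in range(1, 5001):
    val, g = f_and_subgradient(x)
    best = min(best, val)                   # subgradient steps are not monotone, so track the best
    x = x - (1.0 / k) * g                   # diminishing step size
print("best objective value found:", round(best, 4))
```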

    Distributed Random Convex Programming via Constraints Consensus

    This paper discusses distributed approaches for the solution of random convex programs (RCPs). RCPs are convex optimization problems with a (usually large) number N of randomly extracted constraints; they arise in several application areas, especially in the context of decision making under uncertainty, see [2],[3]. We consider here a setup in which instances of the random constraints (the scenario) are not held by a single centralized processing unit, but are distributed among different nodes of a network. Each node "sees" only a small subset of the constraints and may communicate with its neighbors. The objective is to make all nodes converge to the solution of the centralized RCP problem. To this end, we develop two distributed algorithms that are variants of the constraints consensus algorithm [4],[5]: the active constraints consensus (ACC) algorithm and the vertex constraints consensus (VCC) algorithm. We show that the ACC algorithm computes the overall optimal solution in finite time, and with almost surely bounded communication at each iteration. The VCC algorithm is instead tailored to the special case in which the constraint functions are convex also with respect to the uncertain parameters, and it computes the solution in a number of iterations bounded by the diameter of the communication graph. We further devise a variant of the VCC algorithm, namely quantized vertex constraints consensus (qVCC), to cope with the case in which the communication bandwidth among processors is bounded. We discuss several applications of the proposed distributed techniques, including estimation, classification, and random model predictive control, and we present a numerical analysis of the performance of the proposed methods. As a complementary numerical result, we show that the parallel computation of the scenario solution using the ACC algorithm significantly outperforms its centralized counterpart.
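
    To make the constraints-consensus idea concrete, the sketch below runs a much-simplified active-constraints exchange on a ring of nodes, each holding a subset of the constraints of a small 2-D LP: every node repeatedly merges its own constraints with the active constraints received from its neighbors, re-solves, and keeps the new active set. The graph, the LP, the tolerance, and the stopping rule are all illustrative choices; this is not the paper's exact ACC (or VCC/qVCC) algorithm.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_nodes, per_node = 5, 8
A_all = rng.normal(size=(n_nodes * per_node, 2))      # random constraints a_j^T x <= 1
b_all = np.ones(len(A_all))
c = np.array([-1.0, -1.0])                            # maximize x1 + x2
box = [(-10, 10), (-10, 10)]                          # keeps every local LP bounded

local = [list(range(i * per_node, (i + 1) * per_node)) for i in range(n_nodes)]
candidate = [set(idx) for idx in local]               # each node's working constraint set

def solve_and_actives(indices):
    idx = sorted(indices)
    res = linprog(c, A_ub=A_all[idx], b_ub=b_all[idx], bounds=box)
    slack = b_all[idx] - A_all[idx] @ res.x
    return res.fun, {j for j, s in zip(idx, slack) if s < 1e-7}

for _ in range(20):                                   # consensus rounds on a ring graph
    new_candidate = []
    for i in range(n_nodes):
        received = candidate[(i - 1) % n_nodes] | candidate[(i + 1) % n_nodes]
        _, active = solve_and_actives(set(local[i]) | candidate[i] | received)
        new_candidate.append(active)
    if new_candidate == candidate:                    # no node changed its active set
        break
    candidate = new_candidate

central_val, _ = solve_and_actives(range(len(A_all)))
node0_val, _ = solve_and_actives(set(local[0]) | candidate[0])
print("centralized optimum:", round(central_val, 6), "| node 0 value:", round(node0_val, 6))
```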