
    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. Recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages
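As a concrete illustration of the convexification idea (a toy example, not taken from the paper), the following sketch applies Shor's semidefinite relaxation to a small bilinear program. The relaxation drops the nonconvex rank-one constraint on the lifted matrix, and on this instance it happens to be tight:

```python
import numpy as np

# Nonconvex bilinear program: minimize x*y subject to x^2 + y^2 <= 1.
# Shor's relaxation lifts X = [x, y][x, y]^T and solves
#   min X[0,1]  s.t.  trace(X) <= 1,  X PSD,
# dropping the nonconvex rank-one constraint on X.

# True optimum by parameterizing the boundary x = cos(t), y = sin(t):
t = np.linspace(0, 2 * np.pi, 100001)
true_opt = np.min(np.cos(t) * np.sin(t))  # = -1/2 at t = 3*pi/4

# Optimal relaxation solution, written down by hand (an SDP solver would
# find it): X = [[1/2, -1/2], [-1/2, 1/2]].
X = np.array([[0.5, -0.5], [-0.5, 0.5]])
assert np.trace(X) <= 1 + 1e-12
assert np.min(np.linalg.eigvalsh(X)) >= -1e-12  # PSD feasibility
relax_val = X[0, 1]

# Tightness: X is rank one, X = vv^T with v = (1, -1)/sqrt(2), so the
# relaxed optimum is attained by a feasible point of the original problem.
```

Here the relaxation value equals the true optimum, -1/2; the hierarchies discussed in the paper systematically tighten such relaxations when the first level has a gap.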

    Motion Planning of Uncertain Ordinary Differential Equation Systems

    This work presents a novel motion planning framework, rooted in nonlinear programming theory, that treats uncertain fully- and under-actuated dynamical systems described by ordinary differential equations. Uncertainty in multibody dynamical systems comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance because all real-life systems are affected by it, and poor robustness and suboptimal performance result if it is not accounted for in a given design. In this work, uncertainties are modeled using Generalized Polynomial Chaos and are quantified using a least-squares collocation method. The computational efficiency of this approach enables the inclusion of uncertainty statistics in the nonlinear programming optimization process. As such, the proposed framework allows the user to pose, and answer, new design questions related to uncertain dynamical systems. Specifically, the new framework is explained in the context of forward, inverse, and hybrid dynamics formulations. The forward dynamics formulation, applicable to both fully- and under-actuated systems, prescribes deterministic actuator inputs which yield uncertain state trajectories. The inverse dynamics formulation is the dual of the forward dynamics formulation and is only applicable to fully-actuated systems; deterministic state trajectories are prescribed and yield uncertain actuator inputs. The inverse dynamics formulation is more computationally efficient, as it requires only algebraic evaluations and completely avoids numerical integration. Finally, the hybrid dynamics formulation is applicable to under-actuated systems, where it leverages the benefits of inverse dynamics for actuated joints and forward dynamics for unactuated joints; it prescribes actuated state and unactuated input trajectories which yield uncertain unactuated states and actuated inputs.
The benefits of the ability to quantify uncertainty when planning the motion of multibody dynamic systems are illustrated through several case studies. The resulting designs determine optimal motion plans, subject to deterministic and statistical constraints, for all possible systems within the probability space.
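The core propagation step can be sketched in a few lines (the model and names below are assumptions for illustration, not the framework's code): a Gaussian parameter is pushed through a nonlinear map via a generalized Polynomial Chaos expansion in probabilists' Hermite polynomials, with the coefficients fitted by least-squares collocation.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def model(xi):
    # toy output depending on an uncertain parameter xi ~ N(0, 1)
    return np.exp(0.3 * xi)

def hermite_basis(xi):
    # probabilists' Hermite polynomials He_0..He_3,
    # orthogonal under the standard normal weight
    return np.stack([np.ones_like(xi), xi, xi**2 - 1, xi**3 - 3 * xi], axis=1)

# collocation: evaluate the model at random nodes, fit by least squares
nodes = rng.standard_normal(200)
coeffs, *_ = np.linalg.lstsq(hermite_basis(nodes), model(nodes), rcond=None)

# gPC statistics come for free: mean = c_0, variance = sum_k c_k^2 * k!
gpc_mean = coeffs[0]
gpc_var = sum(coeffs[k] ** 2 * math.factorial(k) for k in range(1, 4))
```

For this map the exact values are mean e^0.045 ≈ 1.046 and variance e^0.09(e^0.09 - 1) ≈ 0.103, which the truncated expansion reproduces closely; in the motion-planning setting such statistics enter the nonlinear program as objective terms or constraints.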

    Inverse Optimization with Noisy Data

    Inverse optimization refers to the inference of unknown parameters of an optimization problem based on knowledge of its optimal solutions. This paper considers inverse optimization in the setting where measurements of the optimal solutions of a convex optimization problem are corrupted by noise. We first provide a formulation for inverse optimization and prove it to be NP-hard. In contrast to existing methods, we show that the parameter estimates produced by our formulation are statistically consistent. Our approach involves combining a new duality-based reformulation for bilevel programs with a regularization scheme that smooths discontinuities in the formulation. Using epi-convergence theory, we show the regularization parameter can be adjusted to approximate the original inverse optimization problem to arbitrary accuracy, which we use to prove our consistency results. Next, we propose two solution algorithms based on our duality-based formulation. The first is an enumeration algorithm that is applicable to settings where the dimensionality of the parameter space is modest, and the second is a semiparametric approach that combines nonparametric statistics with a modified version of our formulation. These numerical algorithms are shown to maintain the statistical consistency of the underlying formulation. Lastly, using both synthetic and real data, we demonstrate that our approach performs competitively when compared with existing heuristics.
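The enumeration idea can be sketched on a one-dimensional toy problem (the forward model and names here are assumptions, not the paper's formulation): the forward problem min_x x^2 - theta*x over [0, 1] has the closed-form solution x*(theta) = clip(theta/2, 0, 1), and theta is recovered from noisy observed optima by scoring each candidate against the observations.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_opt(theta):
    # closed-form solution of the forward problem
    # min_x x^2 - theta*x over [0, 1]
    return np.clip(theta / 2.0, 0.0, 1.0)

theta_true = 1.2
# noisy measurements of the optimal solution
obs = forward_opt(theta_true) + 0.05 * rng.standard_normal(500)

# enumeration over a parameter grid: pick the candidate whose induced
# optimal solution best explains the observations
candidates = np.linspace(0.0, 2.0, 2001)
theta_hat = min(candidates,
                key=lambda th: np.mean((obs - forward_opt(th)) ** 2))
```

Statistical consistency shows up here as the estimate tightening around theta_true as the number of observations grows; the paper's formulation handles the general convex, multidimensional case.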

    Computationally Efficient Trajectory Optimization for Linear Control Systems with Input and State Constraints

    This paper presents a trajectory generation method that optimizes a quadratic cost functional subject to linear system dynamics and linear input and state constraints. The method is based on continuous-time flatness-based trajectory generation, and the outputs are parameterized using a polynomial basis. A method to parameterize the constraints is introduced using a result on polynomial nonpositivity. The resulting parameterized problem remains linear-quadratic and can be solved using quadratic programming. The problem can be further simplified to a linear programming problem by linearization around the unconstrained optimum. The method promises to be computationally efficient for constrained systems with a long optimization horizon. As an application, a predictive torque controller for a permanent magnet synchronous motor based on real-time optimization is presented. Comment: Proceedings of the American Control Conference (ACC), pp. 1904-1909, San Francisco, USA, June 29 - July 1, 201
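The polynomial-parameterization step can be sketched as follows (a simplified, hypothetical setup: a double integrator with flat output y, a degree-5 monomial basis, and boundary equality constraints only, solved through the KKT linear system rather than a QP solver):

```python
import numpy as np

deg = 5  # flat output y(t) = sum_k c_k t^k

# cost J = int_0^1 y''(t)^2 dt; for a double integrator u = y'' this is
# the input energy. Hessian in monomial coefficients:
# H[i, j] = int_0^1 (t^i)'' (t^j)'' dt = i(i-1) j(j-1) / (i + j - 3)
H = np.zeros((deg + 1, deg + 1))
for i in range(2, deg + 1):
    for j in range(2, deg + 1):
        H[i, j] = i * (i - 1) * j * (j - 1) / (i + j - 3)

def deriv_row(t, order):
    # row r such that r @ c = y^(order)(t)
    row = np.zeros(deg + 1)
    for k in range(order, deg + 1):
        fact = 1.0
        for m in range(order):
            fact *= k - m
        row[k] = fact * t ** (k - order)
    return row

# boundary conditions: y(0) = 0, y'(0) = 0, y(1) = 1, y'(1) = 0
A = np.stack([deriv_row(0.0, 0), deriv_row(0.0, 1),
              deriv_row(1.0, 0), deriv_row(1.0, 1)])
b = np.array([0.0, 0.0, 1.0, 0.0])

# equality-constrained QP: min (1/2) c' H c  s.t.  A c = b, via KKT system
K = np.block([[H, A.T], [A, np.zeros((4, 4))]])
c = np.linalg.solve(K, np.concatenate([np.zeros(deg + 1), b]))[: deg + 1]
```

This recovers the minimum-energy rest-to-rest trajectory y(t) = 3t^2 - 2t^3; the paper's inequality (input and state) constraints would add the polynomial-nonpositivity parameterization on top of this quadratic program.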

    Backstepping PDE Design: A Convex Optimization Approach

    Backstepping design for boundary control of linear PDEs is formulated as a convex optimization problem. Some classes of parabolic PDEs and a first-order hyperbolic PDE are studied, with particular attention to non-strict feedback structures. Based on the compactness of the Volterra and Fredholm-type operators involved, their kernels are approximated via polynomial functions. The resulting kernel-PDEs are optimized using Sum-of-Squares (SOS) decomposition and solved via semidefinite programming, with sufficient precision to guarantee the stability of the system in the L2 norm. This formulation allows optimizing extra degrees of freedom where the kernel-PDEs are included as constraints. Uniqueness and invertibility of the Fredholm-type transformation are proved for polynomial kernels in the space of continuous functions. The effectiveness and limitations of the proposed approach are illustrated by numerical solutions of some kernel-PDEs.
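The SOS machinery behind this reduction can be seen on a tiny hand-made example (not one of the paper's kernel-PDEs): a polynomial is a sum of squares exactly when it admits a positive semidefinite Gram matrix in a monomial basis, and it is that Gram matrix the semidefinite program searches for. Here the Gram matrix is written down by hand rather than found by a solver.

```python
import numpy as np

# p(x) = x^4 + 2x^2 + 1 is SOS: p(x) = z(x)' Q z(x) with z = [1, x, x^2]
# and a PSD Gram matrix Q (here Q = diag(1, 2, 1), found by inspection).
Q = np.diag([1.0, 2.0, 1.0])

def p(x):
    return x**4 + 2 * x**2 + 1

def gram_form(x):
    z = np.array([1.0, x, x**2])
    return z @ Q @ z

# PSD check certifies p >= 0 everywhere; equality check certifies the
# Gram representation is exact
assert np.min(np.linalg.eigvalsh(Q)) >= 0
xs = np.linspace(-3, 3, 101)
assert np.allclose([gram_form(x) for x in xs], p(xs))
```

In the paper's setting the kernel-PDE residuals play the role of p, with the Gram matrices as decision variables of the semidefinite program.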

    Sampling from a system-theoretic viewpoint: Part II - Noncausal solutions

    This paper puts to use concepts and tools introduced in Part I to address a wide spectrum of noncausal sampling and reconstruction problems. In particular, we follow the system-theoretic paradigm by using systems as signal generators to account for available information and system norms (L2 and L∞) as performance measures. The proposed optimization-based approach recovers many known solutions, derived hitherto by different methods, as special cases under different assumptions about acquisition or reconstruction devices (e.g., polynomial and exponential cardinal splines for fixed samplers, and the Sampling Theorem and its modifications in the case when both sampler and interpolator are design parameters). We also derive new results, such as versions of the Sampling Theorem for downsampling and for reconstruction from noisy measurements, and the continuous-time invariance of a wide class of optimal sampling-and-reconstruction circuits, among others.
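For reference, the classical special case recovered above, Shannon reconstruction from uniform samples, can be sketched numerically (the test signal and rates below are arbitrary choices for illustration):

```python
import numpy as np

T = 0.5                    # sampling period; Nyquist band is |f| < 1 Hz
n = np.arange(-500, 501)   # truncated sample grid

def signal(t):
    # bandlimited test signal: components at 0.6 Hz and 0.3 Hz, below Nyquist
    return np.sinc(1.2 * t) + 0.5 * np.cos(2 * np.pi * 0.3 * t)

samples = signal(n * T)

def reconstruct(t):
    # cardinal-series (sinc) interpolation of the samples
    return np.sum(samples * np.sinc((t - n * T) / T))

err = max(abs(reconstruct(t) - signal(t)) for t in np.linspace(-5, 5, 50))
```

The residual err is only the truncation error of the cardinal series; the noisy-measurement and downsampling variants in the paper replace the ideal sinc interpolator with optimized reconstructors.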

    Inverse polynomial optimization

    We consider the inverse optimization problem associated with the polynomial program $f^* = \min\{f(x) : x \in K\}$ and a given current feasible solution $y \in K$. We provide a systematic numerical scheme to compute an inverse optimal solution. That is, we compute a polynomial $\tilde{f}$ (which may be of the same degree as $f$ if desired) with the following properties: (a) $y$ is a global minimizer of $\tilde{f}$ on $K$ with a Putinar certificate with an a priori degree bound $d$ fixed, and (b) $\tilde{f}$ minimizes $\Vert f - \tilde{f} \Vert$ (which can be the $\ell_1$-, $\ell_2$- or $\ell_\infty$-norm of the coefficients) over all polynomials with such properties. Computing $\tilde{f}_d$ reduces to solving a semidefinite program whose optimal value also provides a bound on how far $f(y)$ is from the unknown optimal value $f^*$. The size of the semidefinite program can be adapted to the computational capabilities available. Moreover, if one uses the $\ell_1$-norm, then $\tilde{f}$ takes a simple and explicit canonical form. Some variations are also discussed. Comment: 25 pages; to appear in Math. Oper. Res.; Rapport LAAS no. 1114
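A one-line instance of the inverse problem (a hand-made convex example, not the paper's canonical form or its SDP scheme): for a convex f, tilting by the gradient at y makes y a global minimizer while perturbing only the linear coefficient.

```python
import numpy as np

# f(x) = x^2 on K = [0, 1]; the given feasible y = 0.5 is not its minimizer.
# The tilted polynomial f~(x) = f(x) - f'(y)(x - y) stays convex and has
# f~'(y) = 0, so y becomes a global minimizer of f~ on K; the coefficient
# change is exactly |f'(y)|.
y = 0.5

def f(x):
    return x ** 2

def f_tilde(x):
    return f(x) - 2 * y * (x - y)   # f'(y) = 2y

xs = np.linspace(0.0, 1.0, 10001)
minimizer = xs[np.argmin(f_tilde(xs))]
```

The paper's semidefinite scheme handles the general nonconvex case via Putinar certificates and additionally finds the norm-minimal such $\tilde{f}$; this example only shows the inverse problem is feasible.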