
    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum-of-squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost is within a pointwise bound of the cost achieved by the optimal controller. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, this paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods and bounds can be applied to a more general class of nonlinear systems that do not obey the constraint on the stochastic forcing. Simulated examples illustrate the methodology. Comment: Published in the SIAM Journal on Control and Optimization.
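    To make the linearizing transformation concrete, the following is a minimal sketch of the standard change of variables used in linearly solvable stochastic control; the dynamics, cost, and symbols below are generic illustrations rather than the paper's exact formulation. Consider dynamics $dx = f(x)\,dt + G(x)u\,dt + B(x)\,d\omega$ with cost rate $q(x) + \tfrac{1}{2}u^\top R u$, and suppose the stochastic forcing satisfies $\lambda\, G R^{-1} G^\top = B B^\top =: \Sigma$ for some $\lambda > 0$. Writing the value function as $V = -\lambda \log \Psi$ (with $\Psi$ the so-called desirability), the stationary Hamilton-Jacobi-Bellman equation
    \[ 0 = \min_u \Big[ q + \tfrac{1}{2} u^\top R u + (f + Gu)^\top \nabla V + \tfrac{1}{2}\,\mathrm{tr}\big(\Sigma \nabla^2 V\big) \Big] \]
    becomes linear in $\Psi$,
    \[ \frac{q}{\lambda}\,\Psi = f^\top \nabla \Psi + \tfrac{1}{2}\,\mathrm{tr}\big(\Sigma \nabla^2 \Psi\big), \]
    with the optimal control recovered as $u^* = -R^{-1} G^\top \nabla V = \lambda R^{-1} G^\top \nabla \Psi / \Psi$. Replacing the equality by one-sided inequalities is the kind of relaxation to a linear differential inclusion that the abstract's sum-of-squares construction exploits.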

    Natural preconditioners for saddle point systems

    The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or discrete setting, so saddle point systems arising from discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as does, for example, the widely used sequential quadratic programming approach to nonlinear optimization. This article concerns iterative solution methods for these problems and in particular shows how the problem formulation leads to natural preconditioners which guarantee rapid convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem, and their effectiveness -- in terms of rapidity of convergence -- is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
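    For orientation, a standard concrete instance of this setup (stated in generic notation, not as a summary of the article's specific bounds) is the block two-by-two system
    \[ \begin{pmatrix} A & B^\top \\ B & 0 \end{pmatrix}\begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}, \]
    where $A$ is symmetric positive definite (coming from the quadratic energy) and $B$ carries the linearized constraints. The natural block-diagonal preconditioner is $\mathcal{P} = \mathrm{diag}(A, S)$ with Schur complement $S = B A^{-1} B^\top$; with this ideal choice the preconditioned matrix has only the three eigenvalues $1$ and $\tfrac{1}{2}(1 \pm \sqrt{5})$, so a minimum-residual Krylov method such as MINRES converges in at most three iterations. In practice $A$ and $S$ are replaced by spectrally equivalent, cheaply invertible approximations, and eigenvalue bounds of the kind proved in the article guarantee that convergence remains fast.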

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research. Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958-2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274
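    Concretely, the problem class with a nonlinear objective and purely linear constraints can be written (in notation chosen here only for illustration) as
    \[ \max \; f(x) \quad \text{subject to} \quad A x \le b, \;\; x \in \mathbb{Z}^n, \]
    with $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$, and a nonlinear objective $f : \mathbb{R}^n \to \mathbb{R}$, for instance a polynomial. The special case $f(x) = c^\top x$ recovers classical linear integer programming; allowing a general, possibly nonconvex $f$ over the same lattice points in a polyhedron is what produces the wide range of complexity outcomes discussed in the chapter.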

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been reshaping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages.
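    As a small, standard illustration of the convexification step (our notation; not specific to the tutorial's examples), consider lifting a nonconvex quadratically constrained quadratic program
    \[ \min_{x} \; x^\top Q_0 x + c_0^\top x \quad \text{s.t.} \quad x^\top Q_i x + c_i^\top x \le b_i, \quad i = 1, \dots, m. \]
    Since $x^\top Q x = \mathrm{tr}(Q\, x x^\top)$, one introduces $X = x x^\top$, rewrites every quadratic form linearly in $(x, X)$, and relaxes the rank-one coupling to the convex constraint
    \[ \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0, \]
    yielding a semidefinite program whose optimal value lower-bounds that of the original problem. Hierarchies of relaxations of the kind surveyed in the paper tighten this bound by adding further valid conic constraints (for example, moment and localizing constraints), at the cost of larger semidefinite blocks.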

    Best-fit quasi-equilibrium ensembles: a general approach to statistical closure of underresolved Hamiltonian dynamics

    A new method of deriving reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a set of resolved variables that define a model reduction, the quasi-equilibrium ensembles associated with the resolved variables are employed as a family of trial probability densities on phase space. The residual that results from submitting these trial densities to the Liouville equation is quantified by an ensemble-averaged cost function related to the information loss rate of the reduction. From an initial nonequilibrium state, the statistical state of the system at any later time is estimated by minimizing the time integral of the cost function over paths of trial densities. Statistical closure of the underresolved dynamics is obtained at the level of the value function, which equals the optimal cost of reduction with respect to the resolved variables, and the evolution of the estimated statistical state is deduced from the Hamilton-Jacobi equation satisfied by the value function. In the near-equilibrium regime, or under a local quadratic approximation in the far-from-equilibrium regime, this best-fit closure is governed by a differential equation for the estimated state vector coupled to a Riccati differential equation for the Hessian matrix of the value function. Since memory effects are not explicitly included in the trial densities, a single adjustable parameter is introduced into the cost function to capture a time-scale ratio between resolved and unresolved motions. Apart from this parameter, the closed equations for the resolved variables are completely determined by the underlying deterministic dynamics.
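    In the quadratic-approximation regime described above, the closure amounts to integrating a matrix Riccati differential equation for the value function's Hessian alongside the estimated state. The sketch below integrates a generic Riccati ODE of that structural form; the coefficient matrices are arbitrary placeholders chosen for illustration, not the closure-specific operators derived in the paper.

```python
# Illustrative sketch: forward integration of a generic matrix Riccati ODE,
#   dH/dt = Q + A^T H + H A - H B H,
# the structural form taken by a value-function Hessian under a quadratic ansatz.
# A, B, Q are placeholder matrices, not the paper's closure operators.
import numpy as np
from scipy.integrate import solve_ivp

n = 3
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((n, n))   # placeholder linearized drift
B = np.eye(n)                           # placeholder weighting matrix
Q = np.eye(n)                           # placeholder cost curvature

def riccati_rhs(t, h_flat):
    H = h_flat.reshape(n, n)
    dH = Q + A.T @ H + H @ A - H @ B @ H
    return dH.ravel()

H0 = np.eye(n)                          # initial Hessian estimate
sol = solve_ivp(riccati_rhs, (0.0, 10.0), H0.ravel(), rtol=1e-8, atol=1e-10)
H_final = sol.y[:, -1].reshape(n, n)
print(np.round(H_final, 4))             # settles toward an algebraic Riccati solution
```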