    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum of squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions, and by the maximum principle are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost is pointwise bounded relative to that of the optimal controller. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, this paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods, and bounds, can be applied to a more general class of nonlinear systems not obeying the constraint on the stochastic forcing. Simulated examples illustrate the methodology.
    Comment: Published in the SIAM Journal on Control and Optimization.
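    The transformation the abstract refers to can be illustrated in a discrete-state setting: for linearly solvable problems, the substitution z = exp(-V) (the desirability, or Cole-Hopf, transform) turns the Bellman equation into a linear fixed-point equation in z. The sketch below is a minimal first-exit example with an absorbing goal state; the transition matrix and costs are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Passive dynamics over 4 states; state 3 is an absorbing goal.
    P = np.array([
        [0.5, 0.3, 0.2, 0.0],
        [0.2, 0.5, 0.2, 0.1],
        [0.0, 0.3, 0.4, 0.3],
        [0.0, 0.0, 0.0, 1.0],
    ])
    q = np.array([1.0, 1.0, 0.5, 0.0])   # state costs; zero at the goal

    # Linearized Bellman equation: z = exp(-q) * (P @ z), with z = exp(-V).
    # Since exp(-q) < 1 on interior states, the iteration is a contraction.
    z = np.ones(4)
    for _ in range(200):
        z_new = np.exp(-q) * (P @ z)
        z_new[3] = 1.0                   # boundary condition: V = 0 at the goal
        z = z_new

    V = -np.log(z)                       # recover the value function
    print(V)
    ```

    The point of the linearity is exactly what the abstract exploits: linear equations admit relaxations (here, to an inclusion) whose solutions bound the true value function from above and below.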

    Suboptimal stabilizing controllers for linearly solvable systems

    This paper presents a novel method to synthesize stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. In this work, the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation is transformed into a linear partial differential equation for a class of systems with a particular constraint on the stochastic disturbance. It is shown that this linear partial differential equation can be relaxed to a linear differential inclusion, allowing approximating polynomial solutions to be generated using sum of squares programming. The resulting solutions are shown to be stochastic control Lyapunov functions with a number of compelling properties; in particular, a priori bounds on trajectory suboptimality hold for these approximate value functions. The result is a technique whereby approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems.
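    The sum of squares machinery both abstracts rely on reduces to semidefinite programming: a polynomial p is a sum of squares iff p(x) = m(x)^T Q m(x) for a monomial basis m and some positive semidefinite Gram matrix Q, which an SDP solver searches for. As a minimal illustration (with a hand-constructed Q rather than a solver), the certificate for p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2 can be verified as follows; the example polynomial is not from either paper.

    ```python
    import numpy as np

    # SOS certificate for p(x) = x^4 + 2x^2 + 1 with basis m(x) = [1, x, x^2]:
    # p(x) = m(x)^T Q m(x) with Q positive semidefinite.
    Q = np.array([
        [1.0, 0.0, 1.0],
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 1.0],
    ])

    eigs = np.linalg.eigvalsh(Q)         # PSD check: all eigenvalues >= 0

    def p(x):
        return x**4 + 2 * x**2 + 1

    def gram_form(x):
        m = np.array([1.0, x, x**2])
        return m @ Q @ m                 # should agree with p for every x
    ```

    SOS programming automates the search for Q subject to the linear matching constraints between Q and the coefficients of p, which is what makes a hierarchy of such problems tractable as semidefinite programs.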

    Dynamic Programming for Positive Linear Systems with Linear Costs

    Recent work by Rantzer [Ran22] formulated a class of optimal control problems involving positive linear systems, linear stage costs, and linear constraints. It was shown that the associated Bellman's equation can be characterized by a finite-dimensional nonlinear equation, which can be solved by linear programming. In this work, we report complementary theories for the same class of problems. In particular, we provide conditions under which the solution is unique, investigate properties of the optimal policy, study the convergence of value iteration, policy iteration, and optimistic policy iteration applied to such problems, and analyze the boundedness of the solution to the associated linear program. Apart from a form of the Perron-Frobenius theorem, the majority of our results are built upon generic dynamic programming theory applicable to problems involving nonnegative stage costs.
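    A feature of this problem class is that the value function is linear, V(x) = p^T x. In the single-policy special case (no minimization over controls, a deliberate simplification of the setting above), Bellman's equation reduces to the linear fixed point p = c + A^T p, and value iteration converges geometrically when A is nonnegative and Schur stable. The numbers below are illustrative.

    ```python
    import numpy as np

    # Positive linear system x_{k+1} = A x_k with linear stage cost c^T x_k.
    A = np.array([[0.5, 0.2],
                  [0.1, 0.4]])          # nonnegative, spectral radius < 1
    c = np.array([1.0, 2.0])            # nonnegative stage-cost vector

    # Value iteration: p_{k+1} = c + A^T p_k, starting from p_0 = 0.
    p = np.zeros(2)
    for _ in range(100):
        p = c + A.T @ p

    # Closed-form fixed point for comparison: p* = (I - A^T)^{-1} c.
    p_star = np.linalg.solve(np.eye(2) - A.T, c)
    ```

    With the minimization over controls restored, the fixed point becomes nonlinear in p, which is the finite-dimensional equation the abstract says is handled by linear programming.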

    Computation of local ISS Lyapunov functions via linear programming

    In this paper, we present a numerical algorithm for computing a local ISS Lyapunov function for systems which are locally input-to-state stable (ISS) on compact subsets of the state space. The algorithm relies on a linear programming problem and computes a continuous, piecewise affine ISS Lyapunov function on a simplicial grid covering the given compact set, excluding a small neighborhood of the origin. We show that the ISS Lyapunov function delivered by the algorithm is a viscosity subsolution of a partial differential equation.
    Index Terms--Nonlinear systems, local input-to-state stability, local ISS Lyapunov function, linear programming, viscosity subsolution
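    The core idea, casting the Lyapunov decrease conditions as linear constraints on the function's coefficients, can be shown in a much simpler setting than the paper's piecewise affine construction on a simplicial grid. The sketch below (an assumption-simplified illustration, not the paper's algorithm) finds a linear Lyapunov function V(x) = v^T x for a discrete-time positive linear system via an LP.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[0.5, 0.2],
                  [0.1, 0.6]])          # nonnegative, Schur stable

    # Find v with v >= 1 and A^T v - v <= -1, so that V(x) = v^T x is
    # positive and strictly decreasing along trajectories on the positive
    # orthant; minimize 1^T v as an arbitrary objective.
    n = 2
    res = linprog(
        c=np.ones(n),
        A_ub=np.vstack([A.T - np.eye(n), -np.eye(n)]),
        b_ub=np.concatenate([-np.ones(n), -np.ones(n)]),
        bounds=[(None, None)] * n,
    )
    v = res.x
    ```

    The paper's algorithm imposes analogous linear inequalities at the vertices of each simplex, plus ISS gain terms for the input, which keeps the whole construction inside a single linear program.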

    Strong Metric (Sub)regularity of KKT Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization

    This work concerns the local convergence theory of Newton and quasi-Newton methods for convex-composite optimization: minimize f(x) := h(c(x)), where h is an infinite-valued proper convex function and c is C^2-smooth. We focus on the case where h is infinite-valued, piecewise linear-quadratic, and convex. Such problems include nonlinear programming, mini-max optimization, estimation of nonlinear dynamics with non-Gaussian noise, as well as many modern approaches to large-scale data analysis and machine learning. Our approach embeds the optimality conditions for convex-composite optimization problems into a generalized equation. We establish conditions for strong metric subregularity and strong metric regularity of the corresponding set-valued mappings. This allows us to extend classical convergence of Newton and quasi-Newton methods to the broader class of non-finite-valued piecewise linear-quadratic convex-composite optimization problems. In particular, we establish local quadratic convergence of the Newton method under conditions that parallel those in nonlinear programming when h is non-finite-valued piecewise linear.
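    The smooth instance h(r) = 0.5 * ||r||^2 already shows the local quadratic convergence phenomenon the abstract describes: for a zero-residual problem, the Gauss-Newton iteration (here equivalent to a Newton step on the square system c(x) = 0) converges quadratically near a root where the Jacobian is nonsingular. The example problem, intersecting the unit circle with the line x0 = x1, is illustrative and not from the paper; the general piecewise linear-quadratic case requires the set-valued machinery above.

    ```python
    import numpy as np

    def c(x):
        # Residual map: unit circle and the line x0 = x1.
        return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

    def jac(x):
        return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

    x = np.array([1.0, 0.0])            # starting point near a root
    for _ in range(20):
        J = jac(x)
        # Gauss-Newton step: minimize ||c(x) + J d||^2 over d.
        d = np.linalg.solve(J.T @ J, -J.T @ c(x))
        x = x + d                        # converges to (1/sqrt(2), 1/sqrt(2))
    ```

    Replacing the smooth h with a nonsmooth piecewise linear-quadratic one turns each step into a structured subproblem, and the strong metric (sub)regularity conditions of the paper are what guarantee those steps retain the fast local convergence.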