
    Al'brekht's Method in Infinite Dimensions

    In 1961 E. G. Al'brekht presented a method for the optimal stabilization of smooth, nonlinear, finite dimensional, continuous time control systems. This method has been extended to similar systems in discrete time and to some stochastic systems in continuous and discrete time. In this paper we extend Al'brekht's method to the optimal stabilization of some smooth, nonlinear, infinite dimensional, continuous time control systems whose nonlinearities are described by Fredholm integral operators.
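The starting point of Al'brekht's expansion is the quadratic term of the optimal cost, which solves a continuous-time algebraic Riccati equation. The matrices below are toy illustrative data, not from the paper; this sketch only shows the degree-2 step of the expansion.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linear-quadratic data (illustrative assumption, not from the paper).
# The degree-2 term x'Px of Al'brekht's series solves the CARE
#   A'P + PA - P B R^{-1} B' P + Q = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)    # quadratic cost term: x'Px
K = np.linalg.solve(R, B.T @ P)         # linear feedback term: u = -Kx

# The closed-loop matrix A - BK is Hurwitz (eigenvalues in the open
# left half plane), so the linear part of the feedback stabilizes.
eigs = np.linalg.eigvals(A - B @ K)
```

Higher-degree terms of the cost and feedback would then be computed degree by degree from this base.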

    Series Solution of Discrete Time Stochastic Optimal Control Problems

    In this paper we consider discrete time stochastic optimal control problems over infinite and finite time horizons. We show that for a large class of such problems the Taylor polynomials of the solutions to the associated Dynamic Programming Equations can be computed degree by degree.
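In the discrete-time setting, the degree-2 Taylor term of the value function solves a discrete algebraic Riccati equation, which anchors the degree-by-degree computation. The data below are a toy assumption for illustration only.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time LQ data (an assumption, not from the paper).
# The quadratic Taylor term x'Px of the value function solves the DARE
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -Kx

# Closed-loop spectral radius < 1 means the linear feedback stabilizes.
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))
```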

    Stochastic HJB Equations and Regular Singular Points

    In this paper we show that some HJB equations arising from both finite and infinite horizon stochastic optimal control problems have a regular singular point at the origin. This makes them amenable to solution by power series techniques. This extends the work of Al'brekht, who showed that the HJB equations of an infinite horizon deterministic optimal control problem can have a regular singular point at the origin; Al'brekht solved the HJB equations by power series, degree by degree. In particular, we show that the infinite horizon stochastic optimal control problem with linear dynamics, quadratic cost and bilinear noise leads to a new type of algebraic Riccati equation which we call the Stochastic Algebraic Riccati Equation (SARE). If SARE can be solved then one has a complete solution to this infinite horizon stochastic optimal control problem. We also show that a finite horizon stochastic optimal control problem with linear dynamics, quadratic cost and bilinear noise leads to a Stochastic Differential Riccati Equation (SDRE) that is well known. If these problems are the linear-quadratic-bilinear part of a nonlinear finite horizon stochastic optimal control problem, then we show how the higher degree terms of the solutions can be computed degree by degree. To our knowledge this computation is new.
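One standard Riccati-type equation for LQ control with multiplicative ("bilinear") state noise dx = (Ax + Bu) dt + Cx dw has the form A'P + PA + C'PC + Q - P B R^{-1} B' P = 0; whether this matches the paper's SARE exactly is an assumption. The sketch below solves that equation by a fixed-point iteration, folding the C'PC term into the state cost of an ordinary CARE at each step; all matrices are toy data.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed SARE form for multiplicative-noise LQ control (not verified
# against the paper): A'P + PA + C'PC + Q - P B R^{-1} B' P = 0.
# Fixed-point iteration: treat C'PC as extra state cost in a CARE.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = 0.2 * np.eye(2)                  # small bilinear noise coefficient
Q, R = np.eye(2), np.array([[1.0]])

P = np.zeros((2, 2))
for _ in range(50):                  # contraction for small C
    P = solve_continuous_are(A, B, Q + C.T @ P @ C, R)

residual = (A.T @ P + P @ A + C.T @ P @ C + Q
            - P @ B @ np.linalg.solve(R, B.T @ P))
```

For larger noise coefficients this naive iteration need not converge; it is only meant to show the structure of the equation.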

    The Patchy Method for the Infinite Horizon Hamilton-Jacobi-Bellman Equation and Its Accuracy

    We introduce a modification to the patchy method of Navasca and Krener for solving the stationary Hamilton-Jacobi-Bellman equation. The numerical solution that we generate is a set of polynomials that approximate the optimal cost and optimal control on a partition of the state space. We derive an error bound for our numerical method under the assumption that the optimal cost is a smooth strict Lyapunov function. The error bound is valid when the number of subsets in the partition is not too large.
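The output object of a patchy scheme is a family of local polynomials, one per subset of the partition. The toy sketch below is not the authors' algorithm; it only illustrates that per-patch low-degree polynomial fits of a smooth function achieve small uniform error, with a stand-in function playing the role of the optimal cost.

```python
import numpy as np

# Toy illustration (not the paper's method): fit a cubic polynomial on
# each patch of a partition of [0, 2] to a smooth stand-in "cost".
f = lambda x: x**2 + 0.25 * np.sin(3.0 * x)   # assumed stand-in cost

patch_edges = np.linspace(0.0, 2.0, 5)        # 4 patches
max_err = 0.0
for a, b in zip(patch_edges[:-1], patch_edges[1:]):
    xs = np.linspace(a, b, 50)
    coeffs = np.polyfit(xs, f(xs), 3)         # cubic fit on this patch
    err = np.max(np.abs(np.polyval(coeffs, xs) - f(xs)))
    max_err = max(max_err, err)
```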

    Simplicial Nonlinear Principal Component Analysis

    We present a new manifold learning algorithm that takes a set of data points lying on or near a lower dimensional manifold as input, possibly with noise, and outputs a simplicial complex that fits the data and the manifold. We have implemented the algorithm in the case where the input data can be triangulated. We provide triangulations of data sets that fall on the surface of a torus, sphere, Swiss roll, and creased sheet embedded in a fifty dimensional space. We also discuss the theoretical justification of our algorithm.
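A simplicial complex fitted to manifold data can be pictured with a toy construction: sample a Swiss roll, triangulate in its 2-D parameter space, and carry the triangles to the embedded points. This is not the paper's algorithm, which works directly on the embedded data; it only illustrates the kind of output object.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy sketch (not the authors' algorithm): triangulate Swiss-roll samples
# in parameter space, then lift the simplices to the 3-D embedding.
rng = np.random.default_rng(0)
t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, 300)   # roll angle
h = rng.uniform(0.0, 10.0, 300)                  # height along the roll
params = np.column_stack([t, h])                 # 2-D parameter points
points3d = np.column_stack([t * np.cos(t), h, t * np.sin(t)])

tri = Delaunay(params)        # simplicial complex: triangles of indices
# tri.simplices indexes triples of sample points; the same index triples
# define a triangulated surface through points3d.
```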

    Model Predictive Regulation

    We show how optimal nonlinear regulation can be achieved in a model predictive control fashion.
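The model predictive pattern can be sketched in its simplest linear-quadratic form: at each step solve a finite-horizon problem by a backward Riccati sweep, apply only the first control, and repeat. The system matrices are toy assumptions, not from the paper.

```python
import numpy as np

# Minimal receding-horizon sketch (illustrative, not the paper's scheme).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed toy dynamics
B = np.array([[0.0], [0.1]])
Q, R, N = np.eye(2), np.array([[1.0]]), 20

def mpc_gain(A, B, Q, R, N):
    """First-step feedback gain of the horizon-N LQ problem."""
    P = Q.copy()
    for _ in range(N):                   # backward Riccati sweep
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K                             # gain for the current step

x = np.array([1.0, 0.0])
for _ in range(100):                     # receding-horizon loop
    K = mpc_gain(A, B, Q, R, N)          # re-solve, apply first control
    x = (A - B @ K) @ x
```

In a nonlinear regulation problem the same loop would re-solve a nonlinear finite-horizon problem around the current state at each step.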

    Control bifurcations

    A parametrized nonlinear differential equation can have multiple equilibria as the parameter is varied. A local bifurcation of a parametrized differential equation occurs at an equilibrium where there is a change in the topological character of the nearby solution curves. This typically happens because some eigenvalues of the parametrized linear approximating differential equation cross the imaginary axis and there is a change in stability of the equilibrium. The topological nature of the solutions is unchanged by smooth changes of state coordinates, so these may be used to bring the differential equation into Poincaré normal form. From this normal form, the type of the bifurcation can be determined. For differential equations depending on a single parameter, the typical ways that the system can bifurcate are fully understood, e.g., the fold (or saddle node), the transcritical and the Hopf bifurcation. A nonlinear control system has multiple equilibria typically parametrized by the set value of the control. A control bifurcation of a nonlinear system typically occurs when its linear approximation loses stabilizability. The ways in which this can happen are understood through the appropriate normal forms. We present the quadratic and cubic normal forms of a scalar input nonlinear control system around an equilibrium point. These are the normal forms under quadratic and cubic change of state coordinates and invertible state feedback. The system need not be linearly controllable. We study some important control bifurcations, the analogues of the classical fold, transcritical and Hopf bifurcations.
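The linear signature of a local bifurcation described above can be illustrated numerically: track the eigenvalues of a parametrized linearization and locate the parameter value where the leading eigenvalue reaches the imaginary axis. The family A(mu) below is a standard Hopf-like toy example, not one from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): detect where the eigenvalues
# of a parametrized linearization cross the imaginary axis.
def A(mu):
    # Hopf-like family: eigenvalues are mu +/- i, crossing at mu = 0.
    return np.array([[mu, -1.0], [1.0, mu]])

mus = np.linspace(-0.5, 0.5, 101)
max_real = np.array([np.linalg.eigvals(A(m)).real.max() for m in mus])

# First parameter value where the leading real part becomes non-negative.
idx = int(np.argmax(max_real >= -1e-9))
mu_crossing = mus[idx]
```

At the detected parameter value a pair of eigenvalues sits on the imaginary axis, which is where normal-form analysis would classify the bifurcation.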