15 research outputs found

    A Method for Construction of Suboptimal Nonlinear Regulators

    A method is proposed for the approximate construction of the optimal state regulator for an autonomous nonlinear system with a quadratic performance index. The method is based on the instantaneous linearization technique developed by Pearson: the nonlinear system is approximated by a state-dependent linear system. First, a theorem is established giving a necessary condition for Pearson's control law to be optimal. Secondly, using this condition, a systematic procedure is presented for determining a suboptimal feedback control for a second-order system. A minimax algorithm is used to design a linearized model that closely approximates the original system over the range of state variables considered. The validity of the method is shown on typical examples.
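    As a rough, runnable illustration of the state-dependent linearization idea the abstract describes (a scalar sketch under assumed dynamics and cost weights, not the paper's own procedure, which targets a second-order system): the system is frozen at each state and the Riccati equation of the frozen linear model is solved there.

```python
import math

# Illustrative sketch of state-dependent ("instantaneous") linearization:
# the scalar nonlinear system  x' = a(x)*x + B*u  is frozen at each state x
# and the scalar algebraic Riccati equation of the frozen linear system is
# solved in closed form.  Plant, weights, and initial state are assumptions
# made for this example, not values from the paper.

B, Q, R = 1.0, 1.0, 1.0          # input gain and quadratic cost weights

def a(x):
    # drift of x' = (-1 + x**2) * x, written in state-dependent linear form;
    # the uncontrolled system is unstable for |x| > 1
    return -1.0 + x**2

def sdre_gain(x):
    # positive root of the scalar ARE: 2*a*p - (B**2/R)*p**2 + Q = 0
    ax = a(x)
    p = (ax + math.sqrt(ax**2 + Q * B**2 / R)) * R / B**2
    return B * p / R             # feedback gain k(x); control u = -k(x)*x

def simulate(x0, dt=0.01, steps=2000):
    # forward-Euler rollout of the closed loop x' = a(x)*x - B*k(x)*x
    x = x0
    for _ in range(steps):
        x += dt * (a(x) * x - B * sdre_gain(x) * x)
    return x

final = simulate(1.2)            # starts in the unstable region |x| > 1
```

    With this gain the frozen closed-loop rate is -sqrt(a(x)**2 + 1) at every state, so the trajectory is driven to the origin even from the region where the open loop diverges.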

    Analytical Approximation Methods for the Stabilizing Solution of the Hamilton–Jacobi Equation

    In this paper, two methods for approximating the stabilizing solution of the Hamilton–Jacobi equation are proposed using symplectic geometry and a Hamiltonian perturbation technique as well as stable manifold theory. The first method uses the fact that the Hamiltonian lifted system of an integrable system is also integrable and regards the corresponding Hamiltonian system of the Hamilton–Jacobi equation as an integrable Hamiltonian system with a perturbation caused by control. The second method directly approximates the stable flow of the Hamiltonian systems using a modification of stable manifold theory. Both methods provide analytical approximations of the stable Lagrangian submanifold from which the stabilizing solution is derived. Two examples illustrate the effectiveness of the methods.
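    As background to the abstract above (a standard input-affine formulation, assumed here rather than reproduced from the paper), the stabilizing solution can be read as the generating function of the stable Lagrangian submanifold of the associated Hamiltonian system:

```latex
% Assumed setting: \dot{x} = f(x) + g(x)u with cost \int_0^\infty (q(x) + |u|^2)\,dt.
\[
  H\!\left(x, \frac{\partial V}{\partial x}\right)
  = \frac{\partial V}{\partial x} f(x)
    - \frac{1}{4}\,\frac{\partial V}{\partial x}\, g(x) g(x)^{\top}
      \frac{\partial V}{\partial x}^{\!\top} + q(x) = 0 .
\]
% Corresponding Hamiltonian system, with costate p = (\partial V/\partial x)^{\top}:
\[
  \dot{x} = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial x} .
\]
% The stable manifold through the equilibrium (0,0), when Lagrangian and
% parametrized as p = (\partial V/\partial x)^{\top}, yields the stabilizing
% solution V and the optimal feedback u = -\tfrac{1}{2} g(x)^{\top} p.
```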

    Suboptimal compensation of gyroscopic coupling for inertia-wheel attitude control

    Mathematical techniques for suboptimal compensation of gyroscopic coupling in inertia-wheel attitude control.

    Markov Perfect Nash Equilibrium in stochastic differential games as solution of a generalized Euler Equations System

    This paper gives a new method for characterizing Markov Perfect Nash Equilibria in stochastic differential games by means of a set of generalized Euler equations. Necessary and sufficient conditions are given.

    Keywords: stochastic differential games, dynamic programming, Hamilton–Jacobi–Bellman equation, semilinear parabolic equation, stochastic productive assets
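    For context (a generic two-player formulation in standard notation, not the paper's exact system): with state dynamics $dx = f(x, u_1, u_2)\,dt + \sigma(x)\,dW$, a Markov Perfect Nash Equilibrium $(\phi_1, \phi_2)$ is characterized by the coupled Hamilton–Jacobi–Bellman equations

```latex
\[
  \rho V_i(x) = \max_{u_i}\Big\{ F_i(x, u_1, u_2)
    + V_i'(x)\, f(x, u_1, u_2)
    + \tfrac{1}{2}\,\sigma(x)^2\, V_i''(x) \Big\},
  \qquad i = 1, 2,
\]
% each evaluated at the opponent's equilibrium strategy u_j = \phi_j(x), j \neq i.
```

    Eliminating the value functions $V_i$ from these equations and their derivatives yields a system of PDEs in the strategies $\phi_i$ alone, which is the role played by the generalized Euler equations of the abstract.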

    Suboptimal Design of a Nonlinear Feedback System

    A method is developed for the approximate design of an optimal state regulator for a nonlinear system with a quadratic performance index. The nonlinearity is treated as a perturbation to the system. By means of a power-series expansion in a small parameter, matrix equations are derived for the stepwise determination of a suboptimal feedback law. For a polynomial nonlinearity of arbitrary form, explicit solutions of these matrix equations have been obtained. A necessary and sufficient condition for the existence and uniqueness of the solution is also given. Further, the performance analysis shows that the l-th order approximation in the feedback law yields the (2l+1)-th order approximation to the optimal performance index. The method can be used effectively in computer-programmed computation.
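    The expansion scheme described above can be sketched as follows (generic notation assumed here, not copied from the paper):

```latex
\[
  \dot{x} = A x + B u + \varepsilon f(x), \qquad
  J = \int_0^\infty \big( x^{\top} Q x + u^{\top} R u \big)\, dt,
\]
% feedback law and performance index expanded in the small parameter:
\[
  u = u^{(0)} + \varepsilon u^{(1)} + \cdots + \varepsilon^{l} u^{(l)} + \cdots,
  \qquad
  J = J^{(0)} + \varepsilon J^{(1)} + \cdots .
\]
```

    Here $u^{(0)} = -R^{-1} B^{\top} P x$ is the LQR law for the linear part, with $P$ solving the algebraic Riccati equation; each higher-order term follows from a linear matrix equation, and truncating the feedback at order $l$ leaves an error of order $\varepsilon^{2l+2}$ in $J$, i.e. the $(2l+1)$-th order approximation to the optimal performance index claimed in the abstract.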

    On the synthesis of suboptimal, inertia-wheel attitude control systems

    Suboptimal system synthesis using motor-driven inertia wheels for attitude control.

    A direct characterization of optimal control in stochastic control problems

    The classical method for solving continuous-time stochastic optimal control problems is based on the Hamilton–Jacobi–Bellman equation, which characterizes the optimal value function. In this work we prove that, in problems where the diffusion parameter is independent of the control variables, the controls are characterized directly by a semilinear system of partial differential equations. Using this new approach, we solve the one-dimensional homogeneous problem explicitly and apply the results to the study of an optimal management problem for a non-renewable natural resource under uncertainty.
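    A short sketch of the idea in standard notation (assumed for illustration, not quoted from the paper): with dynamics $dx = f(x,u)\,dt + \sigma(x)\,dW$ and a diffusion $\sigma$ that does not depend on $u$, the HJB equation reads

```latex
\[
  \rho V(x) = \max_{u}\Big\{ F(x,u) + V'(x)\, f(x,u)
    + \tfrac{1}{2}\,\sigma(x)^2\, V''(x) \Big\},
\]
% with first-order condition at the optimum:
\[
  F_u(x,u) + V'(x)\, f_u(x,u) = 0 .
\]
```

    Because $\sigma$ is free of $u$, differentiating the HJB equation and eliminating the value function $V$ turns the first-order condition into a semilinear PDE satisfied by the optimal control $u(x)$ directly, which is the direct characterization the abstract refers to.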