64 research outputs found

    Neural network control of nonstrict feedback and nonaffine nonlinear discrete-time systems with application to engine control

    In this dissertation, neural networks (NN) approximate unknown nonlinear functions in the system equations, unknown control inputs, and cost functions for two different classes of nonlinear discrete-time systems. Employing NN in closed-loop feedback systems requires that weight update algorithms be stable... Controllers are developed and applied to a nonlinear, discrete-time system of equations for a spark ignition engine model to reduce the cyclic dispersion of heat release --Abstract, page iv
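
    The weight-tuning idea this abstract alludes to can be illustrated with a minimal, hedged sketch: a single-layer NN approximating an unknown scalar nonlinearity, with a normalized gradient update plus sigma-modification damping (a standard device for keeping the weights bounded). The basis placement, learning rate, and damping below are illustrative assumptions, not parameters from the dissertation.

    ```python
    import numpy as np

    # Minimal single-layer NN approximator: f_hat(x) = W^T phi(x), where phi is a
    # fixed radial-basis feature vector and only the output weights W are tuned.
    # All hyperparameters here are illustrative assumptions.
    class OneLayerNN:
        def __init__(self, centers, width=1.0, alpha=0.05, sigma=1e-3):
            self.centers = np.asarray(centers, dtype=float)  # (n_basis, n_in)
            self.width = width
            self.alpha = alpha        # learning rate
            self.sigma = sigma        # sigma-modification damping
            self.W = np.zeros(len(self.centers))

        def phi(self, x):
            # Gaussian radial-basis features of the input x
            d = np.linalg.norm(self.centers - np.asarray(x, dtype=float), axis=1)
            return np.exp(-(d / self.width) ** 2)

        def predict(self, x):
            return float(self.W @ self.phi(x))

        def update(self, x, y):
            # Normalized gradient step on the approximation error e = f_hat - y,
            # with a damping term that keeps the weight estimates bounded.
            p = self.phi(x)
            e = self.W @ p - y
            self.W -= self.alpha * (e * p / (1.0 + p @ p) + self.sigma * self.W)
            return e

    # Usage: learn a stand-in "unknown" nonlinearity from sampled data.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        nn = OneLayerNN(centers=np.linspace(-2, 2, 25).reshape(-1, 1), width=0.4)
        for _ in range(5000):
            x = rng.uniform(-2, 2, size=1)
            y = np.sin(2 * x[0]) + 0.3 * x[0] ** 2
            nn.update(x, y)
        print(nn.predict([0.5]), np.sin(1.0) + 0.3 * 0.25)
    ```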

    Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach

    IEEE Transactions on Neural Networks, vol. 19, no. 11, pp. 1873-1886. DOI: 10.1109/TNN.2008.2003290

    A brief review of neural networks based learning and control and their applications for robots

    As an imitation of biological nervous systems, neural networks (NN), which are characterized by powerful learning ability, have been employed in a wide range of applications such as control of complex nonlinear systems, optimization, system identification, and pattern recognition. This article aims to provide a brief review of the state of the art in NN methods for complex nonlinear systems. Recent progress of NNs in both theoretical developments and practical applications is investigated and surveyed. Specifically, NN-based robot learning and control applications are further reviewed, including NN-based robot manipulator control, NN-based human-robot interaction, and NN-based behavior recognition and generation

    Lyapunov based optimal control of a class of nonlinear systems

    Optimal control of nonlinear systems is difficult since it requires the solution of the Hamilton-Jacobi-Bellman (HJB) equation, which generally has no closed-form solution. In contrast to offline and/or online iterative schemes for optimal control, this dissertation, in the form of five papers, focuses on the design of iteration-free, online optimal adaptive controllers for nonlinear discrete- and continuous-time systems whose dynamics are completely or partially unknown, even when the states are not measurable. Thus, in Paper I, motivated by homogeneous charge compression ignition (HCCI) engine dynamics, a neural network-based infinite horizon robust optimal controller is introduced for uncertain nonaffine nonlinear discrete-time systems. First, the nonaffine system is transformed into an affine-like representation while the resulting higher order terms are mitigated by using a robust term. The optimal adaptive controller for the affine-like system solves the HJB equation and identifies the system dynamics, provided a target set point is given. Since it is difficult to define the set point a priori, in Paper II an extremum seeking control loop is designed while maximizing an uncertain output function. On the other hand, Paper III focuses on the infinite horizon online optimal tracking control of known nonlinear continuous-time systems in strict feedback form by using state and output feedback while relaxing the initial admissible controller requirement. Paper IV applies the optimal controller from Paper III to an underactuated helicopter attitude and position tracking problem. In Paper V, the optimal control of nonlinear continuous-time systems in strict feedback form from Paper III is revisited by using state and output feedback when the internal dynamics are unknown. Closed-loop stability is demonstrated for all the controller designs developed in this dissertation by using Lyapunov analysis --Abstract, page iv
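
    For orientation, a generic sketch of the setup the abstract refers to: for a discrete-time affine-like system x_{k+1} = f(x_k) + g(x_k) u_k, the infinite horizon cost and the resulting HJB (Bellman) equation take roughly the form below. The quadratic stage cost and the expression for the optimal control are standard textbook forms, not the exact formulation used in these papers.

    ```latex
    % Generic discrete-time infinite-horizon optimal control setup (illustrative)
    % for x_{k+1} = f(x_k) + g(x_k) u_k with stage cost Q(x_k) + u_k^T R u_k.
    \begin{align*}
      V^{*}(x_k) &= \min_{u_k, u_{k+1}, \dots} \sum_{i=k}^{\infty} \bigl( Q(x_i) + u_i^{\top} R\, u_i \bigr), \\
      V^{*}(x_k) &= \min_{u_k} \bigl[\, Q(x_k) + u_k^{\top} R\, u_k + V^{*}(x_{k+1}) \,\bigr], \\
      u^{*}(x_k) &= -\tfrac{1}{2} R^{-1} g(x_k)^{\top} \frac{\partial V^{*}(x_{k+1})}{\partial x_{k+1}}.
    \end{align*}
    ```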

    Output-feedback online optimal control for a class of nonlinear systems

    In this paper, an output-feedback model-based reinforcement learning (MBRL) method for a class of second-order nonlinear systems is developed. The control technique uses exact model knowledge and integrates a dynamic state estimator within the model-based reinforcement learning framework to achieve output-feedback MBRL. Simulation results demonstrate the efficacy of the developed method.
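
    The overall structure (a model-based critic plus a state estimator so that only the output is measured) can be sketched in a heavily simplified, hedged form. The double-integrator model, the observer gain, and the use of model-based value iteration for the critic below are illustrative assumptions, not the algorithm from the paper.

    ```python
    import numpy as np

    # Illustrative sketch only: a model-based critic (value iteration on a
    # quadratic value function) combined with a Luenberger observer, so the
    # controller sees only the measured output y = position.
    dt = 0.05
    A = np.array([[1.0, dt], [0.0, 1.0]])     # assumed discretized double integrator
    B = np.array([[0.0], [dt]])
    C = np.array([[1.0, 0.0]])                # only position is measured
    Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

    # Critic learning: Bellman backups on V(x) = x^T P x using the known model.
    P = np.zeros((2, 2))
    for _ in range(500):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy policy
        Acl = A - B @ K
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl              # Bellman backup

    L = np.array([[0.6], [1.5]])              # observer gain (hand-tuned guess)

    # Closed-loop simulation with output feedback through the observer.
    x = np.array([[1.0], [0.0]])              # true state (unknown to controller)
    xhat = np.zeros((2, 1))                   # estimated state
    for k in range(200):
        y = C @ x                             # measured output only
        u = -K @ xhat                         # control uses the estimate
        x = A @ x + B @ u
        xhat = A @ xhat + B @ u + L @ (y - C @ xhat)
    print("final position, velocity:", x.ravel())
    ```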