175 research outputs found

    A brief review of neural networks based learning and control and their applications for robots

    Neural networks (NN), inspired by biological nervous systems and characterized by powerful learning ability, have been employed in a wide range of applications, such as control of complex nonlinear systems, optimization, system identification, and pattern recognition. This article provides a brief review of state-of-the-art NN techniques for complex nonlinear systems. Recent progress in both theoretical developments and practical applications of NNs is investigated and surveyed. Specifically, NN-based robot learning and control applications are reviewed, including NN-based robot manipulator control, NN-based human-robot interaction, and NN-based behavior recognition and generation.
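    RBFNNs recur throughout this literature because the network output is linear in its weights, which makes both batch fitting and online adaptive laws tractable. A minimal sketch (illustrative, not drawn from any surveyed paper) of a Gaussian RBF network fitted to a stand-in nonlinearity:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis features phi_i(x) = exp(-(x - c_i)^2 / width^2)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / width ** 2)

# Stand-in for an unknown smooth nonlinearity f(x), e.g. a robot dynamics term.
xs = np.linspace(-2.0, 2.0, 200)
f = np.sin(2.0 * xs) + 0.5 * xs

# Because the network is linear in its output weights w, a batch fit is an
# ordinary least-squares problem; online adaptive laws exploit the same structure.
centers = np.linspace(-2.0, 2.0, 15)   # illustrative grid of centers
Phi = rbf_features(xs, centers, width=0.4)
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)

approx_error = np.max(np.abs(Phi @ w - f))
print(f"max approximation error: {approx_error:.4f}")
```

The same weight-linearity is what lets the adaptive-control papers below update `w` online with gradient laws and still prove stability via Lyapunov arguments.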

    Enhancing the performance of intelligent control systems in the face of higher levels of complexity and uncertainty

    Modern advances in technology have led to more complex manufacturing processes whose success centres on the ability to control them with a very high level of accuracy. Plant complexity inevitably leads to poor models that exhibit a high degree of parametric or functional uncertainty. The situation becomes even more complex if the plant to be controlled is characterised by a multivalued function, or if it exhibits several modes of behaviour during its operation. Since an intelligent controller is expected to operate and guarantee the best performance where complexity and uncertainty coexist and interact, control engineers and theorists have recently developed new techniques under the framework of intelligent control to enhance controller performance for more complex and uncertain plants. These techniques are based on incorporating model uncertainty into the control design, and the resulting algorithms are shown to give more accurate control under uncertain conditions. In this paper, we survey some approaches that appear promising for enhancing the performance of intelligent control systems in the face of higher levels of complexity and uncertainty.

    Discrete-time Optimal Adaptive RBFNN Control for Robot Manipulators with Uncertain Dynamics

    In this paper, a novel optimal adaptive radial basis function neural network (RBFNN) control scheme is investigated for a class of multiple-input-multiple-output (MIMO) nonlinear robot manipulators with uncertain dynamics in discrete time. To facilitate digital implementation of the robot controller, a discrete-time robot model is employed. The high-order uncertain robot model is transformed into a predictor form, and a feedback control system is then developed without a noncausality problem in discrete time. The controller is designed by an adaptive neural network (NN) based on the feedback system. The adaptive RBFNN control system uses an actor RBFNN and a critic RBFNN to approximate the desired control input and a strategic utility function, respectively. Rigorous Lyapunov analysis establishes uniform ultimate boundedness (UUB) of the closed-loop signals, and high-quality dynamic performance against uncertainties and disturbances is obtained by appropriately selecting the controller parameters. Simulation studies validate that the proposed control scheme outperforms other currently available methods for robot manipulators.
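    The core adaptive-RBFNN idea can be illustrated outside the paper's actor-critic setting. Below is a minimal single-network sketch (plant, gains, and centers are all illustrative, not the paper's scheme): Gaussian RBF weights are adapted online from the one-step prediction error to cancel an unknown nonlinearity in a scalar discrete-time plant.

```python
import numpy as np

def phi(x, centers, width=0.5):
    """Gaussian RBF feature vector at scalar state x."""
    return np.exp(-((x - centers) ** 2) / width ** 2)

centers = np.linspace(-2.0, 2.0, 11)
w = np.zeros_like(centers)          # adaptive RBFNN weights, start at zero

def f_true(x):                      # "unknown" plant nonlinearity (illustrative)
    return 0.5 * np.sin(x)

x, gamma, k_fb = 1.5, 0.2, 0.5      # initial state, adaptation gain, feedback gain
for _ in range(300):
    fhat = w @ phi(x, centers)      # NN estimate of the unknown term
    u = -fhat - k_fb * x            # cancel the estimate, add stabilizing feedback
    x_next = x + f_true(x) + u      # scalar discrete-time plant
    # Gradient adaptation driven by the one-step prediction error
    # e = x_next - (x + fhat + u), which equals f_true(x) - fhat here.
    e = x_next - (x + fhat + u)
    w += gamma * e * phi(x, centers)
    x = x_next

print(f"final state magnitude: {abs(x):.4f}")
```

As the weights converge near the visited states, the estimate cancels the nonlinearity and the state is driven toward zero by the linear feedback term.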

    Stable Adaptive Control Using New Critic Designs

    Classical adaptive control proves total-system stability for control of linear plants, but only for plants meeting very restrictive assumptions. Approximate Dynamic Programming (ADP) has the potential, in principle, to ensure stability without such tight restrictions. It also offers nonlinear and neural extensions for optimal control, with empirically supported links to what is seen in the brain. However, the relevant ADP methods in use today -- TD, HDP, DHP, GDHP -- and the Galerkin-based versions of these all have serious limitations when used here as parallel distributed real-time learning systems; either they do not possess quadratic unconditional stability (to be defined) or they lead to incorrect results in the stochastic case. (ADAC or Q-learning designs do not help.) After explaining these conclusions, this paper describes new ADP designs which overcome these limitations. It also addresses the Generalized Moving Target problem, a common family of static optimization problems, and describes a way to stabilize large-scale economic equilibrium models, such as the old long-term energy model of DOE. Comment: Includes general reviews of alternative control technologies and reinforcement learning. 4 figs, >70p., >200 eqs. Implementation details, stability analysis. Included in 9/24/98 patent disclosure. PDF version uploaded 2012, based on direct conversion of the original word/html file, because of issues of format compatibility.
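    The TD recursion underlying HDP-style critics can be shown in its simplest tabular form. A toy sketch (illustrative two-state chain, not from the paper) of TD(0) learning the value function from sampled transitions:

```python
import numpy as np

# TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# Toy chain: state 0 -> state 1 with reward 1; state 1 -> state 1 with reward 0.
rng = np.random.default_rng(0)
gamma, alpha = 0.9, 0.1
V = np.zeros(2)
for _ in range(2000):
    s = rng.integers(0, 2)                       # sample a starting state
    s_next, r = (1, 1.0) if s == 0 else (1, 0.0)
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

# True values: V(1) = 0 + 0.9 * V(1) => V(1) = 0;  V(0) = 1 + 0.9 * V(1) = 1.
print(V)
```

The paper's concern is what happens when this recursion is carried out not on a table but through a parallel distributed function approximator in real time, where such convergence guarantees break down.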

    Learning-based Predictive Control for Nonlinear Systems with Unknown Dynamics Subject to Safety Constraints

    Model predictive control (MPC) has been widely employed as an effective method for model-based constrained control. For systems with unknown dynamics, reinforcement learning (RL) and adaptive dynamic programming (ADP) have received notable attention for solving adaptive optimal control problems. Recently, works on the use of RL in the framework of MPC have emerged, which can enhance the ability of MPC for data-driven control. However, safety under state constraints and closed-loop robustness are difficult to verify due to the approximation errors of RL with function approximation structures. To address this problem, we propose a data-driven robust MPC solution based on incremental RL, called data-driven robust learning-based predictive control (dr-LPC), for perturbed unknown nonlinear systems subject to safety constraints. A data-driven robust MPC (dr-MPC) is first formulated with a learned predictor. The incremental Dual Heuristic Programming (DHP) algorithm using an actor-critic architecture is then utilized to solve the online optimization problem of the dr-MPC. In each prediction horizon, the actor and critic learn time-varying laws for approximating the optimal control policy and costate respectively, which differs from classical MPCs. The state and control constraints are enforced in the learning process by building a Hamilton-Jacobi-Bellman (HJB) equation and a regularized actor-critic learning structure using logarithmic barrier functions. The closed-loop robustness and safety of the dr-LPC are proven under function approximation errors. Simulation results on two control examples show that the dr-LPC outperforms the DHP and dr-MPC in terms of state regulation, and its average computational time is much smaller than that of the dr-MPC in both examples. Comment: The paper has been submitted to an IEEE journal for possible publication.
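    The logarithmic-barrier device used here to enforce constraints can be illustrated on a toy one-step problem (not the dr-LPC algorithm itself; dynamics and numbers are illustrative). The barrier term grows without bound as the control approaches its limit, so a descent method that stays in the barrier's domain remains strictly feasible:

```python
import numpy as np

# One-step constrained problem for x_next = x0 + u with cost u^2 + 5 * x_next^2,
# subject to u < u_max, enforced by the logarithmic barrier -mu * log(u_max - u).
x0, u_max, mu = -3.0, 1.0, 0.05

def cost(u):
    x1 = x0 + u
    return u ** 2 + 5.0 * x1 ** 2 - mu * np.log(u_max - u)  # -> +inf as u -> u_max

def grad(u):
    x1 = x0 + u
    return 2.0 * u + 10.0 * x1 + mu / (u_max - u)

u = 0.0                                 # strictly feasible starting point
for _ in range(200):
    g = grad(u)
    step = 0.01
    # Backtrack so the iterate stays inside the barrier's domain and descends.
    while u - step * g >= u_max or cost(u - step * g) > cost(u):
        step *= 0.5
        if step < 1e-12:
            break
    u -= step * g

print(f"u* = {u:.3f}")
```

The unconstrained minimizer here would be u = 2.5, which violates the limit; the barrier pushes the solution just inside u_max = 1, mimicking how barrier terms keep the learned policy within the state and control constraints.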

    Lyapunov based optimal control of a class of nonlinear systems

    Optimal control of nonlinear systems is difficult since it requires the solution of the Hamilton-Jacobi-Bellman (HJB) equation, which has no closed-form solution. In contrast to offline and/or online iterative schemes for optimal control, this dissertation, in the form of five papers, focuses on the design of iteration-free, online optimal adaptive controllers for nonlinear discrete- and continuous-time systems whose dynamics are completely or partially unknown, even when the states are not measurable. In Paper I, motivated by homogeneous charge compression ignition (HCCI) engine dynamics, a neural network-based infinite-horizon robust optimal controller is introduced for uncertain nonaffine nonlinear discrete-time systems. First, the nonaffine system is transformed into an affine-like representation, while the resulting higher-order terms are mitigated by using a robust term. The optimal adaptive controller for the affine-like system solves the HJB equation and identifies the system dynamics, provided a target set point is given. Since it is difficult to define the set point a priori, Paper II designs an extremum seeking control loop that maximizes an uncertain output function. Paper III focuses on the infinite-horizon online optimal tracking control of known nonlinear continuous-time systems in strict feedback form by using state and output feedback, relaxing the initial admissible controller requirement. Paper IV applies the optimal controller from Paper III to an underactuated helicopter attitude and position tracking problem. In Paper V, the optimal control of nonlinear continuous-time systems in strict feedback form from Paper III is revisited by using state and output feedback when the internal dynamics are unknown. Closed-loop stability is demonstrated for all the controller designs developed in this dissertation by using Lyapunov analysis --Abstract, page iv
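    For context on why the HJB equation is hard in general: in the linear-quadratic special case it collapses to the Riccati equation, which simple value iteration solves directly. A scalar discrete-time sketch (all numbers illustrative):

```python
# Discrete-time LQR: the special case where the Bellman/HJB equation has a
# closed-form value function V(x) = P x^2. Value iteration runs the scalar
# Riccati recursion  P <- Q + A^2 P - (A B P)^2 / (R + B^2 P).
A, B, Q, R = 1.2, 1.0, 1.0, 1.0     # unstable open-loop plant x+ = A x + B u

P = 0.0
for _ in range(100):
    P = Q + A ** 2 * P - (A * B * P) ** 2 / (R + B ** 2 * P)

K = A * B * P / (R + B ** 2 * P)    # optimal feedback gain, u = -K x
print(f"P = {P:.4f}, K = {K:.4f}, closed-loop pole A - B*K = {A - B * K:.4f}")
```

For general nonlinear dynamics no such closed form exists, which is what motivates the NN-based approximate solutions developed across the five papers.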