Simultaneous identification, tracking control and disturbance rejection of uncertain nonlinear dynamics systems: A unified neural approach
Previous work on traditional zeroing neural networks (also termed Zhang neural networks, ZNN) has shown great success in solving specific time-variant problems for known systems in an ideal environment. However, it remains challenging for the ZNN to effectively solve time-variant problems for uncertain systems without prior knowledge of the system. Moreover, the involvement of external disturbances in the neural network model makes time-variant problem solving even harder, owing to the intensive computational burden and low accuracy they induce. In this paper, a unified neural approach to simultaneous identification, tracking control and disturbance rejection in the framework of the ZNN is proposed to address the time-variant tracking control of uncertain nonlinear dynamics systems (UNDS). The neural network model derived by the proposed approach captures hidden relations between the inputs and outputs of the UNDS. The proposed model shows outstanding tracking performance even under uncertainties and disturbances. The continuous-time model is then discretized via the Euler forward formula (EFF). The corresponding discrete algorithm and block diagram are also presented for ease of implementation. Theoretical analyses of the convergence property and discretization accuracy are presented to verify the performance of the neural network model. Finally, numerical studies, robot applications, performance comparisons and tests demonstrate the effectiveness and advantages of the proposed neural network model for the time-variant tracking control of UNDS.
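To make the ZNN idea concrete, here is a minimal sketch of the core design formula and its EFF discretization on a toy tracking task. The paper's unified identification and disturbance-rejection scheme is not specified in the abstract, so this sketch assumes a known scalar integrator plant and an illustrative gain; it shows the mechanics only, not the proposed model.

```python
import numpy as np

# Minimal ZNN sketch (illustrative, not the paper's full model):
# drive the tracking error e(t) = y(t) - y_d(t) to zero by imposing
# the ZNN design formula  de/dt = -gamma * e,  then discretize with
# the Euler forward formula (EFF).

gamma = 10.0        # convergence gain (assumed value)
h = 1e-3            # EFF sampling step (assumed value)
T = 2.0             # simulation horizon

y_d = lambda t: np.sin(2 * t)          # time-variant reference (example)
y_d_dot = lambda t: 2 * np.cos(2 * t)  # its known time derivative

# Toy first-order plant  y_dot = u  (the paper treats uncertain
# nonlinear dynamics; a known integrator keeps the sketch short).
y = 0.0
for k in range(int(T / h)):
    t = k * h
    e = y - y_d(t)
    # Design formula solved for the input:
    # y_dot - y_d_dot = -gamma * e  =>  u = y_d_dot - gamma * e
    u = y_d_dot(t) - gamma * e
    y = y + h * u   # EFF step: y_{k+1} = y_k + h * y_dot_k

print(f"final tracking error: {y - y_d(T):.2e}")
```

With the design formula enforced, the error decays like exp(-gamma * t), which is the exponential convergence property the ZNN literature relies on.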
Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays
In this letter, the global asymptotic stability analysis problem is considered for a class of stochastic Cohen-Grossberg neural networks with mixed time delays, which involve both discrete and distributed time delays. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, a linear matrix inequality (LMI) approach is developed to derive several sufficient conditions guaranteeing global asymptotic convergence of the equilibrium point in the mean square. It is shown that the addressed stochastic Cohen-Grossberg neural networks with mixed delays are globally asymptotically stable in the mean square if two LMIs are feasible, and the feasibility of the LMIs can be readily checked with the Matlab LMI toolbox. It is also pointed out that the main results comprise some existing results as special cases. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
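The abstract does not state the paper's two delay-dependent LMIs, but the feasibility-checking step it describes is mechanical. Below is a hedged stand-in in Python using cvxpy (rather than the Matlab LMI toolbox): it checks the classic Lyapunov LMI, P > 0 with AᵀP + PA < 0, to show how such a feasibility test is posed and solved. The matrix A and the margin eps are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

# Feasibility check for a Lyapunov-type LMI, in the spirit of the
# paper's approach (its two delay-dependent LMIs are more involved).
# We seek a symmetric P > 0 with  A^T P + P A < 0,  which certifies
# global asymptotic stability of  x_dot = A x.

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])   # example system matrix (assumed)

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                     # margin to enforce strict inequalities

constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

# Pure feasibility problem: any feasible P certifies stability.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("LMI feasible:", prob.status == cp.OPTIMAL)
```

If the solver returns a feasible P, the corresponding stability condition holds; infeasibility of the test LMIs means the criterion is inconclusive, not that the network is unstable.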
Deep Limits of Residual Neural Networks
Neural networks have been very successful in many applications; we often, however, lack a theoretical understanding of what the neural networks are actually learning. This problem emerges when trying to generalise to new data sets. The contribution of this paper is to show that, for the residual neural network model, the deep layer limit coincides with a parameter estimation problem for a nonlinear ordinary differential equation. In particular, whilst it is known that the residual neural network model is a discretisation of an ordinary differential equation, we show convergence in a variational sense. This implies that optimal parameters converge in the deep layer limit. This is a stronger statement than saying that, for a fixed parameter, the residual neural network model converges (the latter does not in general imply the former). Our variational analysis provides a discrete-to-continuum Γ-convergence result for the objective function of the residual neural network training step to a variational problem constrained by a system of ordinary differential equations; this rigorously connects the discrete setting to a continuum problem.
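The known discretisation fact the paper starts from is easy to see numerically. The sketch below (a toy, with an assumed single-tanh residual block and shared weights) iterates the residual update x_{k+1} = x_k + h f(x_k, θ), i.e. the explicit Euler scheme for ẋ = f(x, θ), and shows the output stabilising as depth grows. The paper's Γ-convergence result about the training objectives is a stronger variational statement not reproduced here.

```python
import numpy as np

# A residual network  x_{k+1} = x_k + h * f(x_k, theta)  is the
# explicit Euler scheme for the ODE  x_dot(t) = f(x(t), theta).
# As depth N grows (step h = T / N shrinks), the network trajectory
# approaches the ODE solution.

def f(x, theta):
    # one residual block: a single tanh layer (assumed form)
    return np.tanh(theta @ x)

rng = np.random.default_rng(0)
d, T = 4, 1.0
theta = rng.normal(scale=0.5, size=(d, d))  # shared weights (assumption)
x0 = rng.normal(size=d)

for N in (8, 64, 512):                      # increasing depth
    h, x = T / N, x0.copy()
    for _ in range(N):
        x = x + h * f(x, theta)             # residual block = Euler step
    print(f"depth {N:4d}: x[0] = {x[0]:+.6f}")
```

The printed values converge as N increases, illustrating pointwise convergence for a fixed parameter; the paper's point is precisely that this alone does not yield convergence of the *optimal* parameters, which is what the Γ-convergence argument supplies.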
- …