    Simultaneous identification, tracking control and disturbance rejection of uncertain nonlinear dynamics systems: A unified neural approach

    Previous works on traditional zeroing neural networks (also termed Zhang neural networks, ZNN) have shown great success in solving specific time-variant problems for known systems in an ideal environment. However, it remains challenging for the ZNN to effectively solve time-variant problems for uncertain systems without prior knowledge. Moreover, the involvement of external disturbances in the neural network model makes time-variant problem solving even harder, due to the intensive computational burden and low accuracy. In this paper, a unified neural approach to simultaneous identification, tracking control and disturbance rejection within the ZNN framework is proposed to address the time-variant tracking control of uncertain nonlinear dynamics systems (UNDS). The neural network model derived by the proposed approach captures hidden relations between the inputs and outputs of the UNDS. The proposed model shows outstanding tracking performance even under the influence of uncertainties and disturbances. The continuous-time model is then discretized via the Euler forward formula (EFF). The corresponding discrete algorithm and block diagram are also presented for convenience of implementation. Theoretical analyses of the convergence property and discretization accuracy are presented to verify the performance of the neural network model. Finally, numerical studies, robot applications, performance comparisons and tests demonstrate the effectiveness and advantages of the proposed neural network model for the time-variant tracking control of UNDS.
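The abstract's discretization step can be illustrated with a minimal sketch. The problem below (tracking x(t) = b(t)/a(t) for a scalar time-variant equation a(t)x(t) = b(t)), the design gain lambda, and the step size tau are all illustrative assumptions, not the paper's actual UNDS model: a continuous ZNN is built by forcing the error e(t) = a(t)x(t) - b(t) to decay as de/dt = -lambda*e, and the resulting ODE is then discretized with the Euler forward formula, as the abstract describes.

```python
import math

def znn_euler(a, b, da, db, lam=10.0, tau=1e-3, T=2.0, x0=0.0):
    """Euler-forward discretization x_{k+1} = x_k + tau * dx_k of the
    continuous ZNN model: imposing d/dt (a*x - b) = -lam*(a*x - b)
    gives dx = (db - da*x - lam*(a*x - b)) / a."""
    x, t = x0, 0.0
    while t < T:
        dx = (db(t) - da(t) * x - lam * (a(t) * x - b(t))) / a(t)
        x += tau * dx  # Euler forward formula (EFF) step
        t += tau
    return x

# Illustrative time-variant coefficients (a(t) >= 1, so division is safe)
a  = lambda t: 2.0 + math.sin(t)
b  = lambda t: math.cos(t)
da = lambda t: math.cos(t)   # derivative of a
db = lambda t: -math.sin(t)  # derivative of b

x_final = znn_euler(a, b, da, db)
exact = b(2.0) / a(2.0)      # the time-variant solution at t = T
print(abs(x_final - exact))  # residual tracking error, small for tau = 1e-3
```

With the exponential decay rate lambda, the initial error is suppressed quickly, and the remaining steady-state error is governed by the discretization step tau, which is the kind of discretization-accuracy trade-off the abstract's theoretical analysis addresses.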

    Deep Limits of Residual Neural Networks

    Neural networks have been very successful in many applications; we often, however, lack a theoretical understanding of what the neural networks are actually learning. This problem emerges when trying to generalise to new data sets. The contribution of this paper is to show that, for the residual neural network model, the deep layer limit coincides with a parameter estimation problem for a nonlinear ordinary differential equation. In particular, whilst it is known that the residual neural network model is a discretisation of an ordinary differential equation, we show convergence in a variational sense. This implies that optimal parameters converge in the deep layer limit. This is a stronger statement than saying that, for a fixed parameter, the residual neural network model converges (the latter does not in general imply the former). Our variational analysis provides a discrete-to-continuum Γ-convergence result for the objective function of the residual neural network training step to a variational problem constrained by a system of ordinary differential equations; this rigorously connects the discrete setting to a continuum problem.
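The premise that a residual network is a discretisation of an ODE can be sketched numerically. The toy residual map f(x) = x with weights shared across layers is an illustrative assumption, not the paper's model: an N-layer residual update x_{n+1} = x_n + (1/N) f(x_n) is forward Euler for dx/dt = f(x) on [0, 1], so with f(x) = x the network output (1 + 1/N)^N x0 approaches the ODE solution e·x0 as the depth N grows.

```python
import math

def resnet_forward(x0, f, depth):
    """Run `depth` residual layers with step size h = 1/depth:
    x <- x + h * f(x), i.e. forward Euler for dx/dt = f(x) on [0, 1]."""
    h = 1.0 / depth
    x = x0
    for _ in range(depth):
        x = x + h * f(x)  # residual connection + scaled layer output
    return x

f = lambda x: x                      # toy shared-weight residual block
shallow = resnet_forward(1.0, f, 10)
deep    = resnet_forward(1.0, f, 10000)
print(shallow, deep, math.e)         # deeper network approaches e ≈ 2.71828
```

This only shows convergence of the forward pass for a fixed residual map; the paper's stronger Γ-convergence result concerns the training objective, i.e. that the optimal parameters themselves converge in the deep layer limit.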