
    New Noise-Tolerant ZNN Models With Predefined-Time Convergence for Time-Variant Sylvester Equation Solving

    The Sylvester equation is applied in various fields, such as mathematics and control systems, owing to its importance. The zeroing neural network (ZNN), a systematic design method for time-variant problems, has proved effective in solving the Sylvester equation under ideal conditions. In this paper, to achieve predefined-time convergence of the ZNN model and improve its robustness, two new noise-tolerant ZNNs (NNTZNNs) are established by devising two newly constructed nonlinear activation functions (AFs) to find the accurate solution of the time-variant Sylvester equation in the presence of various noises. Unlike the original ZNN models activated by known AFs, the two proposed NNTZNN models are activated by two novel AFs and therefore possess excellent predefined-time convergence and strong robustness even in the presence of various noises. In addition, detailed theoretical analyses of the predefined-time convergence and robustness of the NNTZNN models are given under different kinds of noises. Comparative simulation results further verify the excellent performance of the proposed NNTZNN models when applied to the online solution of the time-variant Sylvester equation.
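The underlying ZNN design can be sketched in a few lines. The sketch below assumes, for illustration, the formulation A(t)X + XB(t) + C(t) = 0, the error function E(t) = A(t)X + XB(t) + C(t), and the basic linear design formula dE/dt = -gamma*E; the paper's predefined-time activation functions would replace that linear term, and all names here are illustrative, not the authors' code.

```python
import numpy as np

def znn_sylvester(A, B, C, dA, dB, dC, T=2.0, dt=1e-3, gamma=10.0):
    """Euler-discretized ZNN for the time-variant Sylvester equation
    A(t) X + X B(t) + C(t) = 0.

    A, B, C and their time derivatives dA, dB, dC are callables of t.
    Design formula: dE/dt = -gamma * E with E = A X + X B + C, which is
    solved for dX/dt at each step via Kronecker vectorization.
    """
    n, m = A(0.0).shape[0], B(0.0).shape[0]
    X = np.zeros((n, m))                          # initial state
    I_n, I_m = np.eye(n), np.eye(m)
    for k in range(int(T / dt)):
        t = k * dt
        E = A(t) @ X + X @ B(t) + C(t)
        # Solve A Xdot + Xdot B = -(dA X + X dB + dC) - gamma*E for Xdot:
        rhs = -(dA(t) @ X + X @ dB(t) + dC(t)) - gamma * E
        M = np.kron(I_m, A(t)) + np.kron(B(t).T, I_n)   # vec-form operator
        vec = np.linalg.solve(M, rhs.reshape(-1, order="F"))
        X = X + dt * vec.reshape((n, m), order="F")
    return X
```

With constant coefficient matrices the error decays as exp(-gamma*t), so after T = 2 s with gamma = 10 the residual A X + X B + C is negligible.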

    Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization

    The zeroing neural network (ZNN) is a powerful tool for the mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN design. However, no existing ZNN for time-dependent nonlinear minimization simultaneously achieves limited-time convergence and inherent noise suppression. In this article, to satisfy these two requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Different from previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics simultaneously. In addition, rigorous theoretical analyses prove the superior performance of the LTRNN model when adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of the LTRNN via a time-dependent nonlinear minimization problem.
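The general idea of combining gradient zeroing with noise suppression can be sketched as follows. This is an illustrative simplification, not the paper's LTRNN design: it zeroes E(t) = grad f(x, t) with an added integral feedback term, which rejects constant disturbances; all function names and gains are assumptions.

```python
import numpy as np

def noise_tolerant_minimizer(grad, hess, dgrad_dt, x0, T=3.0, dt=1e-3,
                             gamma=50.0, lam=600.0):
    """Sketch of a noise-suppressing ZNN for time-dependent minimization
    of f(x, t). Zeroes E(t) = grad f(x, t) via the integral-augmented
    design dE/dt = -gamma*E - lam*integral(E); the integral term gives
    the constant-disturbance rejection that plain ZNN lacks.
    """
    x = np.array(x0, dtype=float)
    acc = np.zeros_like(x)                 # running integral of the error
    for k in range(int(T / dt)):
        t = k * dt
        E = grad(x, t)
        acc += dt * E
        # hess @ xdot + dgrad_dt = -gamma*E - lam*acc  =>  solve for xdot
        rhs = -gamma * E - lam * acc - dgrad_dt(x, t)
        x = x + dt * np.linalg.solve(hess(x, t), rhs)
    return x
```

For example, minimizing f(x, t) = ||x - r(t)||^2 for a moving target r(t) drives x to track r(t) after a short transient.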

    Design and analysis of three nonlinearly activated ZNN models for solving time-varying linear matrix inequalities in finite time

    To solve time-varying linear matrix inequalities (LMIs) with superior performance, three novel finite-time convergent zeroing neural network (FTCZNN) models are designed and analyzed in this paper. First, to make computation with the Matlab toolbox more convenient, the matrix vectorization technique is used to transform the matrix-valued FTCZNN models into vector-valued ones. Then, given the importance of nonlinear activation functions in the conventional zeroing neural network (ZNN), the sign-bi-power activation function (AF), an improved sign-bi-power AF, and a tunable sign-bi-power AF are explored to establish the FTCZNN models. Theoretical analysis shows that the FTCZNN models not only accelerate convergence but also achieve finite-time convergence. Numerical results further confirm the effectiveness and advantages of the FTCZNN models in finding the solution set of time-varying LMIs.
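The baseline sign-bi-power AF, and the finite-time behavior it induces on the scalar error dynamics de/dt = -phi(e), can be illustrated as follows (the exponent p and time step are illustrative; the improved and tunable variants in the paper add further terms to this function).

```python
import numpy as np

def sign_bi_power(e, p=0.5):
    """Sign-bi-power activation: phi(e) = sgn(e)|e|^p + sgn(e)|e|^(1/p),
    with 0 < p < 1. The |e|^p term dominates near zero and makes
    de/dt = -phi(e) reach zero in finite time, unlike a linear AF."""
    return np.sign(e) * (np.abs(e) ** p + np.abs(e) ** (1.0 / p))

def settle(e0=1.0, dt=1e-3, T=3.0):
    """Euler-integrate the scalar ZNN error dynamics de/dt = -phi(e)."""
    e = e0
    for _ in range(int(T / dt)):
        e = e - dt * sign_bi_power(e)
    return e
```

By T = 3 s the sign-bi-power error has effectively reached zero, whereas a linear AF from the same initial error would still be at about exp(-3).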

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving problems from matrix algebra, with particular reference to the computation of generalized inverses and the solution of matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and the integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished; in this way, computational advantages over existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. We also present a Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and comparison with the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
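The hyperpower iterations of orders 2 and 3 referenced above have a compact general form, sketched here for the ordinary matrix inverse (the dissertation's scaled variants and generalized-inverse cases differ in details; the starting value used below is a standard choice, not necessarily the one used there).

```python
import numpy as np

def hyperpower_inverse(A, order=2, iters=30):
    """Hyperpower iteration X <- X (I + R + ... + R^(order-1)), where
    R = I - A X is the residual. Order 2 is the Newton-Schulz iteration;
    orders 2 and 3 are the cases connected to discretized ZNN models.
    Convergence is of order `order` once the spectral radius of R < 1."""
    n = A.shape[0]
    I = np.eye(n)
    # standard starting value ensuring the initial residual contracts
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        R = I - A @ X
        S, P = I.copy(), I.copy()          # S accumulates I + R + R^2 + ...
        for _ in range(order - 1):
            P = P @ R
            S = S + P
        X = X @ S
    return X
```

Higher-order members trade more matrix multiplications per step for fewer steps, which is the trade-off the SHPI/ZNN analogy organizes.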

    Neural Network Model-Based Control for Manipulator: An Autoencoder Perspective

    Recently, neural network model-based control has received wide interest in the kinematic control of manipulators. To enhance the learning ability of neural network models, the autoencoder method is used as a powerful tool for deep learning and has gained success in recent years. However, the performance of existing autoencoder approaches for manipulator control may still depend largely on the quality of the data, and in extreme cases with noisy data they may even fail. How to incorporate model knowledge into the autoencoder controller design so as to increase robustness and reliability remains a challenging problem. In this work, a sparse autoencoder controller for the kinematic control of manipulators, with weights obtained directly from the robot model rather than from training data, is proposed for the first time. By encoding and decoding the control target through a new dynamic recurrent neural network architecture, the control input can be solved through a new sparse optimization formulation. Input saturation, which holds for almost all practical systems but is usually ignored for analytical simplicity, is also considered in the controller construction. Theoretical analysis and extensive simulations demonstrate that the proposed sparse autoencoder controller with input saturation can make the end-effector of the manipulator system track the desired path efficiently. Further performance comparison and evaluation against additive noise and parameter uncertainty substantiate the robustness of the proposed sparse autoencoder manipulator controller.
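The control setting can be illustrated with a much simpler, non-autoencoder sketch: velocity-level kinematic control of a planar two-link arm using a pseudoinverse feedback law with joint-velocity saturation. The arm model, gains, and saturation limit are all illustrative assumptions, not the paper's controller.

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm (illustrative model)."""
    return np.array([l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1]),
                     l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])])

def jacobian(theta, l1=1.0, l2=1.0):
    """End-effector Jacobian d(fk)/d(theta) of the two-link arm."""
    s1, s12 = np.sin(theta[0]), np.sin(theta[0] + theta[1])
    c1, c12 = np.cos(theta[0]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def track(theta0, target, T=5.0, dt=1e-3, k=5.0, vmax=1.0):
    """Velocity-level kinematic control under input saturation:
    theta_dot = clip(pinv(J) @ (k * position_error), -vmax, vmax)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(int(T / dt)):
        err = target - fk(theta)
        v = np.linalg.pinv(jacobian(theta)) @ (k * err)
        v = np.clip(v, -vmax, vmax)        # joint-velocity saturation
        theta = theta + dt * v
    return theta
```

The clip models actuator limits: while saturated the commanded direction is distorted, and the paper's point is precisely that such limits should be handled in the controller design rather than ignored.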