
    A Noise-Tolerant Zeroing Neural Network for Time-Dependent Complex Matrix Inversion Under Various Kinds of Noises

    Complex-valued time-dependent matrix inversion (TDMI) is extensively exploited in practical industrial and engineering fields. Many existing neural models find the inverse of a matrix only in an ideal, noise-free environment. In practice, however, external interference is ubiquitous and unavoidable, and applying these models to complex-valued TDMI in a noisy environment requires costly preprocessing to suppress the disturbances. A noise-suppressing model is therefore needed. In this article, a complex-valued noise-tolerant zeroing neural network (CVNTZNN) based on an integral-type design formula is established and investigated for complex-valued TDMI under a wide variety of noises. Both the convergence and the robustness of the CVNTZNN model are carefully analyzed and rigorously proved. For comparison and verification, the existing zeroing neural network (ZNN) and gradient neural network (GNN) are applied to the same problem under the same conditions. Numerical simulation results demonstrate the effectiveness and superiority of the proposed CVNTZNN model for complex-valued TDMI under various kinds of noises, in comparison with the existing ZNN and GNN models.
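
    To make the integral-type design concrete, here is a minimal, hedged sketch of a noise-tolerant ZNN iteration for time-dependent complex matrix inversion. It Euler-discretizes the error dynamics E_dot = -gamma*E - lam*int(E) with E(t) = A(t)X(t) - I; the matrix A(t), the gains, the noise level, and the step size are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Hedged sketch: Euler-discretized noise-tolerant ZNN for time-dependent
# complex matrix inversion, built from the integral-type design formula
#   E_dot = -gamma*E - lam*integral(E),  with  E(t) = A(t) X(t) - I.
# A(t), gains, noise, and step size are illustrative assumptions.

gamma, lam, dt, T = 10.0, 10.0, 1e-3, 2.0
n = 2
I = np.eye(n, dtype=complex)

def A(t):  # an invertible time-dependent complex matrix (illustrative)
    return np.array([[2 + np.sin(t), 0.5j * np.cos(t)],
                     [-0.5j * np.cos(t), 2 - np.sin(t)]])

def A_dot(t):  # its time derivative
    return np.array([[np.cos(t), -0.5j * np.sin(t)],
                     [0.5j * np.sin(t), -np.cos(t)]])

X = np.zeros((n, n), dtype=complex)       # state, to converge to A(t)^{-1}
E_int = np.zeros((n, n), dtype=complex)   # running integral of the error

for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - I
    E_int += E * dt
    noise = (0.5 + 0.5j) * np.ones((n, n))   # constant additive disturbance
    # Implicit dynamics: A X_dot = -A_dot X - gamma*E - lam*int(E) + noise
    rhs = -A_dot(t) @ X - gamma * E - lam * E_int + noise
    X = X + np.linalg.solve(A(t), rhs) * dt

print("final residual ||A(T) X - I||_F:", np.linalg.norm(A(T) @ X - I))
```

    The integral term is what absorbs the constant disturbance; dropping it recovers the original ZNN, whose residual would plateau at a noise-dependent level.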

    Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization

    Zeroing neural networks (ZNNs) are a powerful tool for the mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN design; however, no existing ZNN for time-dependent nonlinear minimization achieves limited-time convergence and inherent noise suppression simultaneously. In this article, to satisfy both requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Unlike previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics simultaneously. Rigorous theoretical analyses prove the superior performance of the LTRNN model when it is adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results further substantiate the effectiveness and advantages of LTRNN on a time-dependent nonlinear minimization problem.
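
    A minimal sketch of the general idea follows: combining a finite-time ("sign-bi-power") activation with an integral feedback term while zeroing the gradient e(t) = grad f(x, t) of a time-dependent objective. The objective f(x, t) = ||x - (sin t, cos t)||^2 / 2, the gains, the activation exponent, and the disturbance are illustrative assumptions, not the LTRNN model itself.

```python
import numpy as np

# Hedged sketch: finite-time activation + integral term, zeroing the
# gradient e(t) = grad f(x, t) of a time-dependent quadratic objective.
# Objective, gains, activation, and disturbance are illustrative.

gamma, lam, p, dt, T = 20.0, 20.0, 0.5, 1e-4, 3.0

def grad(x, t):    # gradient of f(x,t) = ||x - (sin t, cos t)||^2 / 2
    return x - np.array([np.sin(t), np.cos(t)])

def grad_t(x, t):  # partial derivative of the gradient with respect to t
    return np.array([-np.cos(t), np.sin(t)])

def phi(e):        # sign-bi-power activation, common in finite-time ZNNs
    return (np.sign(e) * np.abs(e) ** p + np.sign(e) * np.abs(e) ** (1 / p)) / 2

x = np.array([2.0, -1.0])   # initial state, away from the minimizer
e_int = np.zeros(2)

for k in range(int(T / dt)):
    t = k * dt
    e = grad(x, t)
    e_int += e * dt
    noise = 0.3 * np.ones(2)  # constant external disturbance
    # Hessian of this quadratic f is the identity, so its inverse is too:
    # x_dot = -grad_t - gamma*phi(e) - lam*int(e) + noise
    x = x + (-grad_t(x, t) - gamma * phi(e) - lam * e_int + noise) * dt

print("x(T):", x, " minimizer:", np.array([np.sin(T), np.cos(T)]))
```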

    Complex Noise-Resistant Zeroing Neural Network for Computing Complex Time-Dependent Lyapunov Equation

    Complex time-dependent Lyapunov equation (CTDLE), as an important means for stability analysis of control systems, has been extensively employed in mathematics and engineering. Recurrent neural networks (RNNs) have been reported as an effective method for solving CTDLE. In previous work, zeroing neural networks (ZNNs) were established to find accurate solutions of the time-dependent Lyapunov equation (TDLE) under noise-free conditions. However, noise is inevitable in actual implementations. To suppress the interference of various noises in practical applications, this paper proposes a complex noise-resistant ZNN (CNRZNN) model and employs it to solve CTDLE. The convergence and robustness of the CNRZNN model are analyzed and proved theoretically. For verification and comparison, three experiments and the existing noise-tolerant ZNN (NTZNN) model are introduced to investigate the effectiveness, convergence, and robustness of the CNRZNN model. Compared with the NTZNN model, the CNRZNN model is more general and more robust. Specifically, the NTZNN model is a special case of the CNRZNN model, and when solving CTDLE under complex linear noises the residual error of CNRZNN converges rapidly and stably to order 10^{-5}, far below the order 10^{-1} of the NTZNN model. Analogously, under complex quadratic noises, the residual error of the CNRZNN model converges quickly and stably to 2∥A∥_F/ζ^3, while the residual error of the NTZNN model diverges.
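
    The sketch below illustrates the kind of iteration involved: an integral-enhanced ZNN for a complex time-dependent Lyapunov equation A(t)^H X + X A(t) + Q = 0, Euler-discretized, with the implicit dynamics solved as a Sylvester system at each step. A(t), Q, the gains, and the noise are illustrative stand-ins, not the paper's CNRZNN experiment.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hedged sketch: integral-enhanced ZNN for a complex time-dependent
# Lyapunov equation A(t)^H X + X A(t) + Q = 0. All data are illustrative.

gamma, lam, dt, T = 10.0, 10.0, 1e-3, 2.0
Q = np.eye(2, dtype=complex)   # constant Q here, so its derivative vanishes

def A(t):      # a stable time-dependent complex matrix (illustrative)
    return np.array([[-2 + 0.5j * np.sin(t), 0.3],
                     [0.3, -2 - 0.5j * np.sin(t)]])

def A_dot(t):  # its time derivative
    return np.array([[0.5j * np.cos(t), 0.0],
                     [0.0, -0.5j * np.cos(t)]])

X = np.zeros((2, 2), dtype=complex)
E_int = np.zeros((2, 2), dtype=complex)

for k in range(int(T / dt)):
    t = k * dt
    At, Ad = A(t), A_dot(t)
    E = At.conj().T @ X + X @ At + Q
    E_int += E * dt
    noise = (0.1 + 0.1j) * np.ones((2, 2))   # constant complex disturbance
    # A^H X_dot + X_dot A = -A_dot^H X - X A_dot - gamma*E - lam*int(E) + noise
    rhs = -Ad.conj().T @ X - X @ Ad - gamma * E - lam * E_int + noise
    X = X + solve_sylvester(At.conj().T, At, rhs) * dt

print("residual:", np.linalg.norm(A(T).conj().T @ X + X @ A(T) + Q))
```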

    A Novel Zeroing Neural Network for Solving Time-Varying Quadratic Matrix Equations against Linear Noises

    Solving quadratic matrix equations is a fundamental problem in the optimal control domain. However, noise exerted on the coefficients of a quadratic matrix equation may degrade the accuracy of its solution. To solve the time-varying quadratic matrix equation problem under linear noise, a new error-processing design formula is proposed and a resultant novel zeroing neural network model is developed. The new design formula incorporates second-order error processing, yielding the double-integration-enhanced zeroing neural network (DIEZNN) model for solving time-varying quadratic matrix equations subject to linear noises. Compared with the original zeroing neural network (OZNN) model, the finite-time zeroing neural network (FTZNN) model, and the integration-enhanced zeroing neural network (IEZNN) model, the DIEZNN model is superior under linear noise: the residual errors of the existing models remain large under the influence of linear noise, which eventually causes their solutions to fail, whereas the proposed DIEZNN model guarantees a correct solution of the time-varying quadratic matrix equation regardless of the magnitude of the linear noise. In addition, theoretical analysis proves that the neural state of the DIEZNN model converges to the theoretical solution even under linear noise. Computer simulation results further substantiate the superiority of the DIEZNN model in solving time-varying quadratic matrix equations under linear noise.
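
    A scalar demo makes the second-order error-processing idea tangible: with e_dot = -g1*e - g2*int(e) - g3*double_int(e) + noise, a linear (ramp) noise a + b*t is fully rejected, whereas the single-integral design leaves a steady residual of b/g2. The gains and noise below are illustrative choices, not the paper's.

```python
import numpy as np

# Hedged scalar demo of the double-integral ("second-order error-processing")
# design idea. Gains and noise are illustrative.

dt, T = 1e-4, 10.0
g1, g2, g3 = 10.0, 30.0, 30.0   # stable: g1*g2 > g3 (Routh criterion)

def run(double_integral):
    e, i1, i2 = 1.0, 0.0, 0.0   # error and its first/second integrals
    for k in range(int(T / dt)):
        t = k * dt
        noise = 2.0 + 0.5 * t   # linear (ramp) noise, b = 0.5
        e_dot = -g1 * e - g2 * i1 + noise
        if double_integral:
            e_dot -= g3 * i2
        i2 += i1 * dt
        i1 += e * dt
        e += e_dot * dt
    return abs(e)

print("single-integral residual:", run(False))  # stalls near b/g2 ~ 0.017
print("double-integral residual:", run(True))   # driven toward zero
```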

    The Eight Epochs of Math as Regards Past and Future Matrix Computations

    This survey paper gives a personal assessment of epoch-making advances in matrix computations, from antiquity and with an eye toward tomorrow. It traces the development of number systems and elementary algebra, and the uses of Gaussian elimination methods from around 2000 BC on to current real-time neural network computations that solve time-varying matrix equations. The paper includes relevant advances from China from the third century AD on and from India and Persia in the ninth and later centuries. Then it discusses the conceptual genesis of vectors and matrices in Central Europe and in Japan in the fourteenth through seventeenth centuries AD, followed by the 150-year cul-de-sac of polynomial root-finder research for matrix eigenvalues, as well as the superbly useful matrix iterative methods and Francis’ matrix eigenvalue algorithm of the last century. Finally, we explain the recent use of initial value problem solvers and high-order 1-step-ahead discretization formulas to master time-varying linear and nonlinear matrix equations via Zhang neural networks. The paper ends with a short outlook upon new hardware schemes with multilevel processors that go beyond the 0–1 base-2 framework which all of our past and current electronic computers have used.

    Design and analysis of recurrent neural network models with non‐linear activation functions for solving time‐varying quadratic programming problems

    A special recurrent neural network (RNN), namely the zeroing neural network (ZNN), is adopted to find solutions to time‐varying quadratic programming (TVQP) problems with equality and inequality constraints. However, the activation functions of traditional ZNN models have some weaknesses, including convex restriction and redundant formulation. With the aid of different activation functions, modified ZNN models are obtained that overcome these drawbacks for solving TVQP problems. Theoretical and experimental research indicates that the proposed models are more effective at solving such TVQP problems.
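
    The sketch below shows why the choice of activation matters, comparing standard ZNN activations on the scalar error dynamics e_dot = -gamma * phi(e): a linear activation converges exponentially, a pure power activation crawls near zero, and the sign-bi-power activation reaches zero in finite time. The gains and initial error are illustrative assumptions.

```python
import numpy as np

# Hedged sketch comparing standard ZNN activation functions on the scalar
# error dynamics e_dot = -gamma * phi(e). Gains and tolerances illustrative.

gamma, dt, T, tol = 5.0, 1e-4, 2.0, 1e-4

activations = {
    "linear":        lambda e: e,
    "power (p=3)":   lambda e: e ** 3,
    "sign-bi-power": lambda e: (np.sign(e) * np.abs(e) ** 0.5
                                + np.sign(e) * np.abs(e) ** 2) / 2,
}

for name, phi in activations.items():
    e, t_hit = 1.0, None
    for k in range(int(T / dt)):
        e += -gamma * phi(e) * dt           # forward-Euler step
        if t_hit is None and abs(e) < tol:  # record first crossing of tol
            t_hit = k * dt
    print(f"{name:14s} |e(T)| = {abs(e):.2e}, first below {tol} at t = {t_hit}")
```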

    Decentralized Constrained Optimization, Double Averaging and Gradient Projection

    We consider a generic decentralized constrained optimization problem over static, directed communication networks, where each agent has exclusive access to only one convex, differentiable local objective term and one convex constraint set. For this setup, we propose a novel decentralized algorithm, called DAGP (Double Averaging and Gradient Projection), based on local gradients, projection onto local constraints, and local averaging. We achieve global optimality through a novel distributed tracking technique we call distributed null projection. Further, we show that DAGP can also be used to solve unconstrained problems with non-differentiable objective terms by employing the so-called epigraph projection operators (EPOs); in this regard, we introduce a new fast algorithm for evaluating EPOs. We study the convergence of DAGP and establish O(1/√K) convergence in terms of feasibility, consensus, and optimality. To avoid the difficulties of selecting Lyapunov functions, we propose a new methodology of convergence analysis for optimization problems, which we refer to as aggregate lower-bounding. To demonstrate the generality of this method, we also provide an alternative convergence proof for the gradient descent algorithm on smooth functions. Finally, we present numerical results demonstrating the effectiveness of the proposed method in both constrained and unconstrained problems.
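
    For orientation only, here is a much-simplified sketch of the problem setting: a plain decentralized projected-gradient loop with consensus averaging over a directed ring, explicitly not the DAGP algorithm itself. The objectives, mixing weights, and box constraint are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the decentralized setting (NOT DAGP): agents on a
# directed ring mix estimates through a row-stochastic weight matrix,
# step along local gradients, and project onto a shared box constraint.

np.random.seed(0)
n_agents, dim, steps, lr = 4, 2, 2000, 0.05

targets = np.random.randn(n_agents, dim)   # local f_i(x) = ||x - t_i||^2 / 2
low, high = -0.5, 0.5                      # box constraint (shared here)

# Row-stochastic mixing for a directed ring: agent i averages itself with
# its in-neighbor (i+1) mod n_agents.
W = 0.5 * (np.eye(n_agents) + np.roll(np.eye(n_agents), 1, axis=1))

x = np.random.randn(n_agents, dim)         # one row per agent
for _ in range(steps):
    x = W @ x                    # consensus averaging
    x = x - lr * (x - targets)   # local gradient step
    x = np.clip(x, low, high)    # projection onto the constraint set

print("agent estimates (near-consensus, feasible):")
print(x)
```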