132 research outputs found

    Some Results on the Symmetric Representation of the Generalized Drazin Inverse in a Banach Algebra

    [EN] Based on the conditions ab^2 = 0 and b^π(ab) ∈ A^d, we derive that (ab)^n, (ba)^n, and ab + ba are all generalized Drazin invertible in a Banach algebra A, where n ∈ ℕ and a and b are elements of A. Using these results, we give some symmetric representations for the generalized Drazin inverse of ab + ba. We also consider additive properties for the generalized Drazin inverse of the sum a + b.

    This work was supported by the National Natural Science Foundation of China (grant numbers 11361009, 61772006, 11561015), the Special Fund for Science and Technological Bases and Talents of Guangxi (grant numbers 2016AD05050, 2018AD19051), the Special Fund for Bagui Scholars of Guangxi (grant number 2016A17), the High Level Innovation Teams and Distinguished Scholars in Guangxi Universities (grant number GUIJIAOREN201642HAO), the Natural Science Foundation of Guangxi (grant numbers 2017GXNSFBA198053, 2018JJD110003), and the Open Fund of the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis (grant number HCIC201607).

    Qin, Y.; Liu, X.; Benítez López, J. (2019). Some Results on the Symmetric Representation of the Generalized Drazin Inverse in a Banach Algebra. Symmetry (Basel), 11(1), 1-9. https://doi.org/10.3390/sym11010105
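    For intuition in the finite-dimensional case, the ordinary Drazin inverse of a square matrix can be computed from the classical formula A^D = A^k (A^(2k+1))^+ A^k, valid for any k ≥ ind(A); taking k = n (the matrix size) is always safe since ind(A) ≤ n. A minimal NumPy sketch (the function name and test matrices are our own illustration, not from the paper):

```python
import numpy as np

def drazin_inverse(A):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k with k = n >= ind(A)."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    middle = np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1))
    return Ak @ middle @ Ak

# idempotent example: M^2 = M, so M is its own Drazin inverse
M = np.array([[1., 1.], [0., 0.]])
MD = drazin_inverse(M)
assert np.allclose(MD, M)
# nilpotent example: N^2 = 0, so N^D = 0
N = np.array([[0., 1.], [0., 0.]])
assert np.allclose(drazin_inverse(N), np.zeros((2, 2)))
```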

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving some problems from matrix algebra, with particular reference to the computation of generalized inverses as well as the solution of matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and the integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for scaled hyperpower methods of orders 2 and 3. We present the Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and comparison to the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
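    As a concrete illustration, the order-2 member of the hyperpower family is the classical Newton-Schulz iteration X ← X(2I − AX), which converges quadratically when the initial residual has spectral radius below 1; the starting value X₀ = Aᵀ/(‖A‖₁‖A‖∞) is a standard convergent choice. A minimal NumPy sketch (the function name and test matrix are illustrative, not from the dissertation):

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Order-2 hyperpower iteration X <- X(2I - AX) for the matrix inverse."""
    n = A.shape[0]
    # standard starting value guaranteeing ||I - A X0|| < 1 for invertible A
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4., 1.], [2., 3.]])
X = newton_schulz_inverse(A)
```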

    MATLAB SIMULATION OF THE HYBRID OF RECURSIVE NEURAL DYNAMICS FOR ONLINE MATRIX INVERSION

    A novel kind of hybrid recursive neural implicit dynamics for real-time matrix inversion has recently been proposed and investigated. Our goal is to compare the hybrid recursive neural implicit dynamics on the one hand with conventional explicit neural dynamics on the other. Simulation results show that the hybrid model coincides better with systems in practice and has a higher ability to represent dynamic systems. More importantly, the hybrid model achieves superior convergence performance in comparison with existing dynamic systems, specifically the recently proposed Zhang dynamics. This paper presents the Simulink model of a hybrid recursive neural implicit dynamics and gives a simulation and comparison to the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
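    For context, the simplest explicit neural dynamics for inverting a constant matrix drives the error E = AX − I to zero along the gradient flow dX/dt = −γAᵀE. A minimal Euler-discretized sketch of this baseline (not the hybrid model itself; γ and the step size dt are illustrative choices):

```python
import numpy as np

A = np.array([[4., 1.], [2., 3.]])
gamma, dt, steps = 10.0, 1e-3, 20000
X = np.zeros_like(A)
for _ in range(steps):
    E = A @ X - np.eye(2)          # error to be driven to zero
    X = X - dt * gamma * A.T @ E   # explicit gradient neural dynamics
```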

    A Noise-Tolerant Zeroing Neural Network for Time-Dependent Complex Matrix Inversion Under Various Kinds of Noises

    Complex-valued time-dependent matrix inversion (TDMI) is extensively exploited in practical industrial and engineering fields. Many current neural models find the inverse of a matrix in an ideal noise-free environment. However, outer interferences are normally ubiquitous and unavoidable in practice. If these neural models are applied to complex-valued TDMI in a noisy environment, they need to spend a lot of precious time dealing with outer noise disturbances in advance. Thus, a noise-suppression model is urgently needed to address this problem. In this article, a complex-valued noise-tolerant zeroing neural network (CVNTZNN) based on an integral-type design formula is established and investigated for finding complex-valued TDMI under a wide variety of noises. Furthermore, both the convergence and the robustness of the CVNTZNN model are carefully analyzed and rigorously proved. For comparison and verification purposes, the existing zeroing neural network (ZNN) and gradient neural network (GNN) are applied to the same problem under the same conditions. Numerical simulation results demonstrate the effectiveness and superiority of the proposed CVNTZNN model for complex-valued TDMI under various kinds of noises, in comparison with the existing ZNN and GNN models.
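    The idea behind an integral-type design formula can be seen on a scalar error e(t): adding integral feedback, ė = −γ₁e − γ₂∫e dτ + d, cancels a constant disturbance d at steady state, much as in PI control (the integral term settles at d/γ₂, leaving e → 0). A hedged scalar sketch with illustrative gains and noise level:

```python
# integral-enhanced zeroing dynamics on a scalar error e(t):
#   e' = -g1*e - g2*integral(e) + d,  with d a constant noise term
g1, g2, d = 10.0, 25.0, 5.0
dt, steps = 1e-3, 20000
e, s = 1.0, 0.0            # error and its running integral
for _ in range(steps):
    de = -g1 * e - g2 * s + d
    e += dt * de           # Euler step for the error
    s += dt * e            # accumulate the integral term
```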

    Essays on the economics of networks

    Networks (collections of nodes or vertices and graphs capturing their linkages) are a common object of study across a range of fields including economics, statistics and computer science. Network analysis is often based on capturing the overall structure of the network by some reduced set of parameters. Canonically, this has focused on the notion of centrality. There are many measures of centrality, mostly based on statistical analysis of the linkages between nodes on the network. However, another common approach has been through eigenfunction analysis of the centrality matrix. My thesis focuses on eigencentrality as a property, paying particular attention to equilibrium behaviour when the network structure is fixed. This occurs when nodes are either passive, such as in web searches or queueing models, or when they represent active optimizing agents in network games. The major contribution of my thesis is the application of relatively recent innovations in matrix derivatives to centrality measurements and to equilibria within games that are functions of those measurements. I present a series of new results on the stability of eigencentrality measures and provide applications to a number of real-world examples.
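    Eigencentrality assigns each node the corresponding entry of the leading eigenvector of the adjacency matrix, which can be computed by power iteration. A minimal sketch on a 4-node toy graph (the graph is our own illustration):

```python
import numpy as np

# adjacency matrix of an undirected graph with edges 0-1, 0-2, 1-2, 2-3
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
x = np.ones(4)
for _ in range(200):        # power iteration toward the leading eigenvector
    x = A @ x
    x = x / np.linalg.norm(x)
# node 2 is linked to every other node, so it gets the highest centrality
```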

    New Noise-Tolerant ZNN Models With Predefined-Time Convergence for Time-Variant Sylvester Equation Solving

    The Sylvester equation is applied in various fields, such as mathematics and control systems, owing to its importance. The zeroing neural network (ZNN), as a systematic design method for time-variant problems, has been proved effective for solving the Sylvester equation under ideal conditions. In this paper, in order to achieve predefined-time convergence of the ZNN model and improve its robustness, two new noise-tolerant ZNNs (NNTZNNs) are established by devising two newly constructed nonlinear activation functions (AFs) to find the accurate solution of the time-variant Sylvester equation in the presence of various noises. Unlike the original ZNN models activated by known AFs, the proposed two NNTZNN models are activated by two novel AFs and therefore possess excellent predefined-time convergence and strong robustness even in the presence of various noises. Besides, detailed theoretical analyses of the predefined-time convergence and the robustness of the NNTZNN models are given by considering different kinds of noises. Comparative simulation results further verify the excellent performance of the proposed NNTZNN models when applied to the online solution of the time-variant Sylvester equation.
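    To fix ideas in the time-invariant case, the error E = AX + XB − C of a Sylvester equation can be driven to zero by the gradient flow Ẋ = −γ(AᵀE + EBᵀ), the gradient of ½‖E‖² with respect to X. A minimal Euler-discretized sketch (matrices and gain are illustrative; the activation functions of the paper are not modeled):

```python
import numpy as np

A = np.array([[3., 0.], [1., 2.]])
B = np.array([[2., 1.], [0., 3.]])
C = np.array([[1., 0.], [0., 1.]])
gamma, dt, steps = 5.0, 1e-3, 20000
X = np.zeros((2, 2))
for _ in range(steps):
    E = A @ X + X @ B - C                    # Sylvester residual
    X = X - dt * gamma * (A.T @ E + E @ B.T) # gradient flow on ||E||^2
```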

    Discrete-time zeroing neural network for solving time-varying Sylvester-transpose matrix inequation via exp-aided conversion

    Time-varying linear matrix equations and inequations have been widely studied in recent years. The time-varying Sylvester-transpose matrix inequation, an important variant, has not been fully investigated, and solving the time-varying problem in a constructive manner remains a challenge. This study considers an exp-aided conversion from time-varying linear matrix inequations to equations to solve this intractable problem. On the basis of the zeroing neural network (ZNN) method, a continuous-time zeroing neural network (CTZNN) model is derived with the help of the Kronecker product and the vectorization technique. The convergence property of the model is analyzed. Two discrete-time ZNN models are obtained, with theoretical analyses of the truncation error, by using two Zhang et al. discretization (ZeaD) formulas of different precision to discretize the CTZNN model. Comparative numerical experiments are conducted for the two discrete-time ZNN models, and the corresponding numerical results substantiate the convergence and effectiveness of the two discrete-time models.
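    The Kronecker-product/vectorization step mentioned above rests on the identity vec(AXB) = (Bᵀ ⊗ A)vec(X), which turns a linear matrix equation into an ordinary linear system. A quick NumPy check (column-major flattening implements vec; the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A, X, B = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))
lhs = (A @ X @ B).flatten(order='F')           # vec(AXB), column-major
rhs = np.kron(B.T, A) @ X.flatten(order='F')   # (B^T kron A) vec(X)
assert np.allclose(lhs, rhs)
```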