
    A general class of arbitrary order iterative methods for computing generalized inverses

    [EN] A family of iterative schemes for approximating the inverse and generalized inverses of a complex matrix is designed, with arbitrary order of convergence p. For each p, a class of iterative schemes appears, and we analyze which of its members are able to converge from initial estimates far from the solution. This class generalizes many known iterative methods, which are recovered for particular values of the parameters. The order of convergence is stated in each case, depending on the first non-zero parameter. For several examples, the accessibility of the schemes, that is, the set of initial estimates leading to convergence, is analyzed in order to select those with the widest sets; this width is related to the value of the first non-zero parameter defining the method. Finally, numerical examples (academic and from signal processing) are provided to confirm the theoretical results and to show the feasibility and effectiveness of the new methods. (C) 2021 The Authors. Published by Elsevier Inc. This research was supported in part by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE) and in part by VIE from Instituto Tecnologico de Costa Rica (Research #1440037). Cordero Barbero, A.; Soto-Quiros, P.; Torregrosa Sánchez, JR. (2021). A general class of arbitrary order iterative methods for computing generalized inverses. Applied Mathematics and Computation. 409:1-18. https://doi.org/10.1016/j.amc.2021.126381
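    A minimal sketch of the kind of scheme this family generalizes: the classical hyperpower iteration of order p for the Moore-Penrose inverse, X_{k+1} = X_k (I + R_k + ... + R_k^{p-1}) with R_k = I - A X_k. This is an illustration, not the authors' parametrized class; the function name and tolerances are illustrative.

```python
import numpy as np

def hyperpower_pinv(A, p=3, tol=1e-12, max_iter=200):
    """Order-p hyperpower iteration for the Moore-Penrose inverse of A."""
    m, n = A.shape
    # Safe initial estimate X_0 = A^* / (||A||_1 ||A||_inf)
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(m)
    for _ in range(max_iter):
        R = I - A @ X                 # residual R_k = I - A X_k
        S, P = np.eye(m), np.eye(m)
        for _ in range(p - 1):        # accumulate S = I + R + ... + R^{p-1}
            P = P @ R
            S = S + P
        X_new = X @ S
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new
        X = X_new
    return X

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
X = hyperpower_pinv(A, p=4)
print(np.allclose(X, np.linalg.pinv(A)))  # True up to the tolerance
```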

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving some problems from matrix algebra, with particular reference to the computation of generalized inverses and the solution of matrix equations with constant (time-invariant) matrices. We exploit the correspondence between the dynamic state equations of recurrent neural networks for computing generalized inverses and the integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished; in this way, computational advantages over existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of this analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. We also present the Matlab Simulink model of a hybrid recurrent neural network with implicit dynamics and compare it by simulation with the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm a superior convergence of the hybrid model compared to the Zhang model.
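    For intuition only, a minimal sketch of a simpler neural dynamics of this general kind: a gradient-type recurrent model dX/dt = gamma * A^T (I - A X) for a constant matrix A, integrated by forward Euler. This is not the dissertation's ZNN or Simulink models (which use implicit Zhang dynamics); gamma, the step size, and the integration horizon are illustrative.

```python
import numpy as np

def gnn_inverse(A, gamma=10.0, dt=1e-3, T=5.0):
    """Integrate dX/dt = gamma * A^T (I - A X) with forward Euler up to time T.
    For nonsingular A, the equilibrium A^T A X = A^T gives X = A^{-1}."""
    n = A.shape[0]
    X = np.zeros((n, n))              # neutral initial state X(0) = 0
    I = np.eye(n)
    for _ in range(int(T / dt)):
        X = X + dt * gamma * (A.T @ (I - A @ X))   # one Euler step of the ODE
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = gnn_inverse(A)
print(np.allclose(X, np.linalg.inv(A), atol=1e-6))
```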

    Generalized inverses estimations by means of iterative methods with memory

    [EN] A secant-type method is designed for approximating the inverse and some generalized inverses of a complex matrix A. For a nonsingular matrix, the proposed method gives an approximation of the inverse; when the matrix is singular, approximations of the Moore-Penrose inverse and the Drazin inverse are obtained. The convergence and the order of convergence are established in each case. Some numerical tests confirm the theoretical results and compare the performance of our method with other known ones. With these results, iterative methods with memory appear for the first time for estimating the solution of nonlinear matrix equations. This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), Generalitat Valenciana PROMETEO/2016/089, and FONDOCYT 029-2018 Republica Dominicana. Artidiello, S.; Cordero Barbero, A.; Torregrosa Sánchez, JR.; Vassileva, MP. (2020). Generalized inverses estimations by means of iterative methods with memory. Mathematics. 8(1):1-13. https://doi.org/10.3390/math8010002
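    Numerical tests of this kind typically measure how well an approximate inverse X satisfies the four Penrose conditions that define A^+. A minimal sketch of such a check (illustrative only, not the authors' secant-type scheme; the stand-in estimate below is simply NumPy's pinv):

```python
import numpy as np

def penrose_residuals(A, X):
    """Frobenius norms of the four Penrose-condition residuals:
    (1) A X A - A, (2) X A X - X, (3) (A X)^* - A X, (4) (X A)^* - X A."""
    AX, XA = A @ X, X @ A
    return (np.linalg.norm(A @ X @ A - A),
            np.linalg.norm(X @ A @ X - X),
            np.linalg.norm(AX.conj().T - AX),
            np.linalg.norm(XA.conj().T - XA))

A = np.random.default_rng(0).standard_normal((5, 3))
X = np.linalg.pinv(A)      # stand-in for an iteratively computed estimate
print([f"{r:.2e}" for r in penrose_residuals(A, X)])
```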

    Computing generalized inverses using LU factorization of matrix product

    An algorithm for computing {2,3}-, {2,4}-, {1,2,3}-, and {1,2,4}-inverses and the Moore-Penrose inverse of a given rational matrix A is established. The classes A{2,3}s and A{2,4}s are characterized in terms of the matrix products (R*A)+R* and T*(AT*)+, where R and T are rational matrices with appropriate dimensions and corresponding ranks. The proposed algorithm is based on these general representations and the Cholesky factorization of symmetric positive matrices. The algorithm is implemented in the programming languages MATHEMATICA and DELPHI and illustrated via examples. Numerical results of the algorithm, corresponding to the Moore-Penrose inverse, are compared with results obtained by several known methods for computing the Moore-Penrose inverse.
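    A small numerical illustration of the representation X = (R*A)+R* (floating-point only, not the paper's rational-arithmetic MATHEMATICA/DELPHI implementation): choosing R = A yields the well-known identity A+ = (A*A)+A*, so the representation recovers the Moore-Penrose inverse.

```python
import numpy as np

def inverse_from_representation(A, R):
    """Compute X = (R^* A)^+ R^* for A (m x n) and R (m x k)."""
    Rh = R.conj().T
    return np.linalg.pinv(Rh @ A) @ Rh

A = np.random.default_rng(1).standard_normal((4, 3))
X = inverse_from_representation(A, R=A)   # R = A gives X = (A^* A)^+ A^* = A^+
print(np.allclose(X, np.linalg.pinv(A)))
```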