4 research outputs found

    Accurate computation of the Moore-Penrose inverse of strictly totally positive matrices

    The computation of the Moore-Penrose inverse of structured strictly totally positive matrices is addressed. Since these matrices are usually very ill-conditioned, standard algorithms fail to provide accurate results. An algorithm that is based on a factorization of the matrix and takes advantage of its special structure and totally positive character is presented. The first stage of the algorithm consists of the accurate computation of the bidiagonal decomposition of the matrix. Numerical experiments illustrating the good behavior of our approach are included.
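    For contrast, the minimal sketch below (not the paper's bidiagonal-decomposition algorithm) computes the Moore-Penrose inverse of a full-column-rank matrix through a generic QR route, with a Vandermonde matrix on increasing positive nodes as an example of a strictly totally positive, ill-conditioned input; the function name pinv_via_qr and the test sizes are illustrative assumptions.

```python
import numpy as np

def pinv_via_qr(A):
    """Moore-Penrose inverse of a full-column-rank matrix via QR.

    Generic route shown for contrast only: on ill-conditioned totally
    positive matrices it loses accuracy, which is what the paper's
    bidiagonal-decomposition-based algorithm is designed to avoid.
    """
    Q, R = np.linalg.qr(A)          # reduced QR: A = Q R
    return np.linalg.solve(R, Q.T)  # A^+ = R^{-1} Q^T

# A strictly totally positive (and ill-conditioned) example: a
# Vandermonde matrix with increasing positive nodes.
nodes = np.linspace(1.0, 2.0, 8)
A = np.vander(nodes, N=5, increasing=True)   # 8 x 5, full column rank
print(np.linalg.norm(pinv_via_qr(A) @ A - np.eye(5)))
```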

    Computation of Moore-Penrose generalized inverses of matrices with meromorphic function entries

    In this paper, given a field with an involutory automorphism, we introduce the notion of a Moore-Penrose field by requiring that all matrices over the field have a Moore-Penrose inverse. We prove that only fields of characteristic zero can be Moore-Penrose, and that the field of rational functions over a Moore-Penrose field is also Moore-Penrose. In addition, for a matrix with rational function entries with coefficients in a field K, we find sufficient conditions on the elements of K to ensure that the specialization of the Moore-Penrose inverse is the Moore-Penrose inverse of the specialization of the matrix. As a consequence, we provide a symbolic algorithm that, given a matrix whose entries are rational expressions over C in finitely many meromorphic functions invariant under the involutory automorphism, computes its Moore-Penrose inverse by replacing the functions with new variables, thereby reducing the problem to the case of matrices with complex rational function entries.
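    A toy sketch of the replace-functions-by-variables idea, using SymPy; the 2x2 matrix and the symbols s, c standing in for sin(z), cos(z) are invented for illustration and do not reproduce the paper's algorithm.

```python
import sympy as sp

# Treat the meromorphic entries sin(z) and cos(z) as fresh real
# variables s and c, compute the Moore-Penrose inverse over the
# rational-function field, and only then substitute the functions back.
s, c, z = sp.symbols('s c z', real=True)

A = sp.Matrix([[s, c], [2*s, 2*c]])   # rank-deficient by construction
A_pinv = A.pinv()                     # symbolic Moore-Penrose inverse

# Specialize s -> sin(z), c -> cos(z); s**2 + c**2 simplifies to 1,
# leaving (1/5) * Matrix([[sin(z), 2*sin(z)], [cos(z), 2*cos(z)]]).
print(sp.simplify(A_pinv.subs({s: sp.sin(z), c: sp.cos(z)})))
```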

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving problems from matrix algebra, with particular reference to the computation of generalized inverses and the solution of matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. We present the Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and comparison to the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
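    For orientation, here is a minimal NumPy sketch of the (unscaled) hyperpower iteration for the matrix inverse, whose order-2 case is the classical Newton-Schulz iteration; the dissertation's ZNN models arise from scaled variants of this family, and the function name and test matrix below are illustrative assumptions.

```python
import numpy as np

def hyperpower_inverse(A, order=3, iters=50):
    """Hyperpower iteration of a given order for the matrix inverse.

    Order 2 is the classical Newton-Schulz iteration; scaled variants
    of this family are what the dissertation relates to discretized
    Zhang Neural Network (ZNN) models.
    """
    n = A.shape[0]
    I = np.eye(n)
    # A standard convergent starting guess for nonsingular A:
    # A^T scaled by the product of the 1-norm and infinity-norm of A.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        R = I - A @ X                 # residual matrix
        S = np.eye(n)
        P = np.eye(n)
        for _ in range(order - 1):    # S = I + R + ... + R^(order-1)
            P = P @ R
            S = S + P
        X = X @ S                     # X_{k+1} = X_k S
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.linalg.norm(hyperpower_inverse(A, order=3) @ A - np.eye(2)))
```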

    Distance measures and whitening procedures for high dimensional data

    The need to effectively analyse high dimensional data is increasingly crucial to many fields as data collection and storage capabilities continue to grow. Working with high dimensional data is fraught with difficulties, making many data analysis methods inadvisable, unstable or entirely unavailable. The Mahalanobis distance and data whitening are two methods that are integral to multivariate data analysis. These methods rely on the inverse of the covariance matrix, which is often non-existent or unstable in high dimensions. The methods currently used to circumvent singularity in the covariance matrix often impose structural assumptions on the data, which are not always appropriate or known. In this thesis, three novel methods are proposed. Two of these methods are distance measures which measure the proximity of a point x to a set of points X. The simplicial distances find the average volume of all k-dimensional simplices formed by x and vertices of X. The minimal-variance distances aim to minimize the variance of the distances produced, while adhering to a constraint ensuring similar behaviour to the Mahalanobis distance. Finally, the minimal-variance whitening method is detailed. This is a method of data whitening, and is constructed by minimizing the total variation of the transformed data subject to a constraint. All of these novel methods are shown to behave similarly to the Mahalanobis distance and the data-whitening methods used for full-rank data. Furthermore, unlike the methods that rely on the inverse covariance matrix, these new methods are well-defined for degenerate data and do not impose structural assumptions. This thesis explores the aims, constructions and limitations of these new methods, and offers many empirical examples and comparisons of their performances when used with high dimensional data.
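    To make the baseline concrete, the sketch below implements the classical Mahalanobis distance and a standard ZCA-style whitening transform in NumPy, with pseudoinverse/eigenvalue regularization so it runs on degenerate data; it illustrates the inverse-covariance constructions the thesis improves upon, not the proposed simplicial or minimal-variance methods, and the function names are assumptions.

```python
import numpy as np

def mahalanobis(x, X):
    """Classical Mahalanobis distance of a point x from a sample X.

    np.linalg.pinv keeps the code running on degenerate data, but the
    underlying distance is exactly what becomes unstable or undefined
    in high dimensions, motivating the thesis's alternatives.
    """
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    d = x - mu
    return float(np.sqrt(d @ np.linalg.pinv(S) @ d))

def zca_whiten(X, eps=1e-12):
    """Classical ZCA whitening: transform X toward unit covariance."""
    mu = X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))  # S = V diag(w) V^T
    W = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T   # regularized S^(-1/2)
    return (X - mu) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
print(mahalanobis(np.zeros(5), X))
print(np.cov(zca_whiten(X), rowvar=False).round(2))  # near-identity
```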