Computing generalized inverses using LU factorization of matrix product
An algorithm for computing {2,3}-, {2,4}-, {1,2,3}-, and {1,2,4}-inverses and
the Moore-Penrose inverse of a given rational matrix A is established. The classes
A(2,3)_s and A(2,4)_s are characterized in terms of the matrix products (R*A)+R*
and T*(AT*)+, where R and T are rational matrices with appropriate dimensions
and corresponding rank. The proposed algorithm is based on these general
representations and the Cholesky factorization of symmetric positive definite matrices.
The algorithm is implemented in the programming languages MATHEMATICA and DELPHI,
and illustrated via examples. Numerical results of the algorithm for the
Moore-Penrose inverse are compared with those obtained by several known methods
for computing the Moore-Penrose inverse.
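A minimal numeric sketch of the Cholesky-based route (an illustration only, assuming full column rank; the paper's algorithm handles general rational matrices and all the listed inverse classes symbolically):

```python
import numpy as np
from scipy.linalg import solve_triangular

# For a full-column-rank A, the Moore-Penrose inverse is
# A+ = (A^T A)^{-1} A^T, and A^T A is symmetric positive definite,
# so its Cholesky factorization A^T A = L L^T can carry the solve.

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3x2, full column rank

G = A.T @ A                          # symmetric positive definite
L = np.linalg.cholesky(G)            # G = L L^T

# Solve (L L^T) X = A^T by two triangular solves.
Y = solve_triangular(L, A.T, lower=True)
A_pinv = solve_triangular(L.T, Y, lower=False)

assert np.allclose(A_pinv, np.linalg.pinv(A))
```

The two triangular solves replace an explicit inversion of A^T A, which is the usual reason to route the computation through a factorization.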
A Computational Framework for the Mixing Times in the QBD Processes with Infinitely-Many Levels
In this paper, we develop matrix Poisson equations satisfied by the
mean and variance of the mixing time in an irreducible positive-recurrent
discrete-time Markov chain with infinitely-many levels, and provide a
computational framework for solving the matrix Poisson equations by
means of the UL-type RG-factorization as well as generalized inverses.
For an important special case, the level-dependent QBD processes, we provide a
detailed computation of the mean and variance of the mixing time. Based on
this, we shed new light on the computation of the mixing time in
block-structured Markov chains with infinitely-many levels through the
matrix-analytic method.
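A hedged finite-state analogue (the paper's framework operates at the level of QBD blocks with infinitely many levels; this scalar example only illustrates the Poisson-equation mechanics behind mean passage-time computations):

```python
import numpy as np

# Mean time to hit state 0 in a small irreducible discrete-time Markov
# chain: h satisfies the Poisson-type equation (I - P) h = 1 on the
# non-target states, with h fixed to 0 at the target state.

P = np.array([[0.5,  0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0,  0.5, 0.5]])

n = P.shape[0]
Q = P[1:, 1:]                                 # drop target state 0
h_rest = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
h = np.concatenate(([0.0], h_rest))           # mean steps to reach state 0
```

Each h[i] then satisfies the one-step recursion h[i] = 1 + sum_j P[i,j] h[j] for i != 0, which is the scalar shadow of the matrix equations the paper solves blockwise.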
Sparse approximate inverse preconditioners on high performance GPU platforms
Simulation with models based on partial differential equations often requires the solution of (sequences of) large and sparse algebraic linear systems. In multidimensional domains, preconditioned Krylov iterative solvers are often appropriate for these tasks. Therefore, the search for efficient preconditioners for Krylov subspace methods is a crucial theme. Recent developments, especially in computing hardware, have renewed the interest in approximate inverse preconditioners in factorized form, because their application during the solution process can be more efficient. We present here some experiences focused on the approximate inverse preconditioners proposed by Benzi and Tůma in 1996 and the sparsification and inversion proposed by van Duin in 1999. Computational costs, reorderings and implementation issues are considered both on conventional and innovative computing architectures like Graphics Processing Units (GPUs).
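A minimal sketch of why approximate inverses suit this setting (not the AINV/SPAI algorithms of Benzi–Tůma or van Duin themselves): the preconditioner is applied purely through matrix-vector products, the operation that parallelizes well on GPUs. Here the approximate inverse is the simplest possible one, diag(A)^{-1}:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# 1D Laplacian test system.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Explicit sparse approximate inverse (Jacobi); applying it is a
# sparse matvec, with no triangular solve in the iteration.
Minv = diags(1.0 / A.diagonal())
M = LinearOperator((n, n), matvec=lambda x: Minv @ x)

x, info = cg(A, b, M=M)
assert info == 0                     # converged
```

Factorized approximate inverses keep this matvec-only application pattern while approximating A^{-1} far better than a diagonal.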
Minimizing Communication for Eigenproblems and the Singular Value Decomposition
Algorithms have two costs: arithmetic and communication. The latter
represents the cost of moving data, either between levels of a memory
hierarchy, or between processors over a network. Communication often dominates
arithmetic and represents a rapidly increasing proportion of the total cost, so
we seek algorithms that minimize communication. In \cite{BDHS10} lower bounds
were presented on the amount of communication required for essentially all
$O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and
the SVD. Conventional algorithms, including those currently implemented in
(Sca)LAPACK, perform asymptotically more communication than these lower bounds
require. In this paper we present parallel and sequential eigenvalue algorithms
(for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms
that do attain these lower bounds, and analyze their convergence and
communication costs.
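For context, the lower bounds from \cite{BDHS10} referred to here take the following well-known form (restated from general knowledge of that line of work, not quoted from this abstract; $G$ denotes the number of flops performed and $M$ the fast-memory or local-memory size):

```latex
\#\text{words moved} \;=\; \Omega\!\left(\frac{G}{\sqrt{M}}\right),
\qquad
\#\text{messages} \;=\; \Omega\!\left(\frac{G}{M^{3/2}}\right).
```

Attaining these bounds is what "communication-optimal" means for the eigenvalue and SVD algorithms described above.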
Fast Algorithms for Displacement and Low-Rank Structured Matrices
This tutorial provides an introduction to the development of fast matrix
algorithms based on the notions of displacement and various low-rank
structures.
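One classic fast algorithm in this family, sketched concretely: a Toeplitz matrix (constant along diagonals, the prototypical displacement-structured matrix) admits an O(n log n) matrix-vector product via circulant embedding and the FFT, instead of the dense O(n^2) product.

```python
import numpy as np
from scipy.linalg import toeplitz   # dense reference, for the check only

def toeplitz_matvec(c, r, x):
    """y = T x for the Toeplitz T with first column c and first row r."""
    n = len(c)
    # Embed T in a 2n x 2n circulant; its first column is
    # [c_0, ..., c_{n-1}, 0, r_{n-1}, ..., r_1].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    # A circulant is diagonalized by the FFT, so its matvec is a
    # circular convolution; keep the first n entries.
    fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))
    y = np.fft.ifft(np.fft.fft(col) * fx)[:n]
    return y.real

c = np.array([4.0, 1.0, 0.5])        # first column
r = np.array([4.0, 2.0, 0.25])       # first row (r[0] == c[0])
x = np.array([1.0, 2.0, 3.0])

assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```

Fast solvers for displacement-structured systems build on exactly this kind of FFT-based primitive.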
Application-tailored Linear Algebra Algorithms: A Search-Based Approach
In this paper, we tackle the problem of automatically generating algorithms
for linear algebra operations by taking advantage of problem-specific
knowledge. In most situations, users possess much more information about the
problem at hand than what current libraries and computing environments accept;
evidence shows that, if properly exploited, such information leads to
unexpected speedups. We introduce a knowledge-aware linear algebra
compiler that allows users to input matrix equations together with properties
about the operands and the problem itself; for instance, they can specify that
the equation is part of a sequence, and how successive instances are related to
one another. The compiler exploits all this information to guide the generation
of algorithms, to limit the size of the search space, and to avoid redundant
computations. We applied the compiler to equations arising as part of
sensitivity and genome studies; the algorithms produced exhibit, respectively,
100- and 1000-fold speedups.
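A hand-written illustration of the kind of operand knowledge such a compiler exploits (a sketch, not the tool itself): declaring a matrix symmetric positive definite lets a Cholesky-based solve replace the generic LU-based one, roughly halving the factorization cost and improving stability.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
A = X.T @ X + 50 * np.eye(50)        # SPD by construction
b = rng.standard_normal(50)

x_generic = solve(A, b)              # property-oblivious path: LU
x_spd = cho_solve(cho_factor(A), b)  # property-aware path: Cholesky

assert np.allclose(x_generic, x_spd)
```

Knowledge about sequences of related instances goes further still, e.g. factoring once and reusing the factors across successive right-hand sides.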
Computation of generalized inverses by using the LDL∗ decomposition
An efficient algorithm, based on the LDL∗ factorization, for computing {1,2,3}- and {1,2,4}-inverses and the Moore–Penrose inverse of a given rational matrix A is developed. We consider the matrix products A∗A and AA∗ and their LDL∗ factorizations in order to compute the generalized inverses of A. By considering the matrix products (R∗A)†R∗ and T∗(AT∗)†, where R and T are arbitrary rational matrices with appropriate dimensions and ranks, we characterize the classes A{1,2,3} and A{1,2,4}. Evaluation times for our algorithm are compared with corresponding times for several known algorithms for computing the Moore–Penrose inverse.
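A hedged numeric sketch of the LDL∗ route (real full-column-rank case only; the paper's algorithm works symbolically on rational matrices). Unlike Cholesky, the LDL∗ factorization of A∗A needs no square roots, which is what makes it attractive for exact rational arithmetic:

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])           # full column rank

G = A.T @ A                           # symmetric positive definite
L, D, perm = ldl(G)                   # G = L D L^T; rows of L permuted
Lp = L[perm]                          # lower triangular after permutation

# Solve G X = A^T through the factors (D is diagonal when G is SPD,
# since Bunch-Kaufman then uses only 1x1 pivots).
Y = solve_triangular(Lp, A.T[perm], lower=True)
Z = Y / np.diag(D)[:, None]
W = solve_triangular(Lp.T, Z, lower=False)
A_pinv = np.empty_like(W)
A_pinv[perm] = W                      # undo the permutation

assert np.allclose(A_pinv, np.linalg.pinv(A))
```

For full column rank this gives A+ = (A∗A)^{-1}A∗; the AA∗ product plays the symmetric role when A has full row rank.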