Accurate and Efficient Expression Evaluation and Linear Algebra
We survey and unify recent results on the existence of accurate algorithms
for evaluating multivariate polynomials, and more generally for accurate
numerical linear algebra with structured matrices. By "accurate" we mean that
the computed answer has relative error less than 1, i.e., has some correct
leading digits. We also address efficiency, by which we mean algorithms that
run in polynomial time in the size of the input. Our results will depend
strongly on the model of arithmetic: Most of our results will use the so-called
Traditional Model (TM). We give a set of necessary and sufficient conditions to
decide whether a high accuracy algorithm exists in the TM, and describe
progress toward a decision procedure that will take any problem and provide
either a high accuracy algorithm or a proof that none exists. When no accurate
algorithm exists in the TM, it is natural to extend the set of available
accurate operations by a library of additional operations, such as dot
products, or indeed any enumerable set, which could then be used to build
further accurate algorithms. We show how our accurate algorithms and decision
procedure for finding them extend to this case. Finally, we address other
models of arithmetic, and the relationship between (im)possibility in the TM
and (in)efficient algorithms operating on numbers represented as bit strings.
Comment: 49 pages, 6 figures, 1 table
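As a generic illustration of the notion of "accurate" used above (not an algorithm from the paper), compensated summation computes a sum whose relative error is tiny rather than potentially catastrophic, so the result always has correct leading digits; such primitives are examples of the accurate building-block operations a library could supply.

```python
# Hypothetical illustration (not from the paper): Kahan compensated summation
# as an example of an "accurate" building-block operation. The computed sum
# has relative error near machine epsilon, hence some correct leading digits.

def kahan_sum(xs):
    """Sum floats, carrying a compensation term for lost low-order bits."""
    total = 0.0
    comp = 0.0  # accumulates the rounding error of each addition
    for x in xs:
        y = x - comp
        t = total + y           # low-order bits of y are lost here...
        comp = (t - total) - y  # ...and recovered here
        total = t
    return total

# Naively adding many small terms to one huge term loses them entirely;
# the compensated sum retains them.
data = [1e16] + [1.0] * 1000
naive = sum(data)        # the 1.0 terms vanish in rounding
accurate = kahan_sum(data)
```

Here `naive` stays at 1e16 while `accurate` recovers the full sum 1e16 + 1000.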
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
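To make the flavor of a Jacobi-type iteration concrete, here is a minimal sketch for the classical *symmetric* eigenvalue problem (not Eberlein's norm-reducing variant for general complex matrices that the abstract describes): each sweep applies plane rotations that annihilate one off-diagonal pair at a time, driving the matrix toward diagonal form with ultimately quadratic convergence.

```python
# Hedged sketch: cyclic Jacobi sweeps for a real symmetric matrix.
# This illustrates the Jacobi-rotation mechanism only; the paper's algorithm
# is a parallel norm-reducing method for general complex matrices.
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle chosen to zero out A[p, q]
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J  # similarity transform preserves eigenvalues
    return np.sort(np.diag(A))
```

After a few sweeps the off-diagonal norm is negligible and the diagonal holds the eigenvalues.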
The Anderson model of localization: a challenge for modern eigenvalue methods
We present a comparative study of the application of modern eigenvalue
algorithms to an eigenvalue problem arising in quantum physics, namely, the
computation of a few interior eigenvalues and their associated eigenvectors for
the large, sparse, real, symmetric, and indefinite matrices of the Anderson
model of localization. We compare the Lanczos algorithm in the 1987
implementation of Cullum and Willoughby with the implicitly restarted Arnoldi
method coupled with polynomial and several shift-and-invert convergence
accelerators as well as with a sparse hybrid tridiagonalization method. We
demonstrate that for our problem the Lanczos implementation is faster and more
memory efficient than the other approaches. This seemingly innocuous problem
presents a major challenge for all modern eigenvalue algorithms.
Comment: 16 LaTeX pages with 3 figures included
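A small, hedged illustration of the problem class described above: the 1D Anderson Hamiltonian is a real, symmetric, indefinite matrix with hopping terms on the off-diagonals and random on-site disorder on the diagonal, and the task is to find a few eigenvalues in the *interior* of the spectrum. The toy code below uses a dense solver and selects eigenvalues nearest a target; for the large sparse matrices in the study one would use Lanczos or shift-and-invert Arnoldi instead.

```python
# Toy sketch (dense, small n) of the interior-eigenvalue task for the
# Anderson model of localization. Matrix sizes and solver choice here are
# illustrative assumptions, not the paper's setup.
import numpy as np

def anderson_1d(n, disorder, seed=0):
    """1D Anderson Hamiltonian: random diagonal, -1 hopping off-diagonals."""
    rng = np.random.default_rng(seed)
    H = np.diag(rng.uniform(-disorder / 2, disorder / 2, size=n))
    off = -np.ones(n - 1)
    return H + np.diag(off, 1) + np.diag(off, -1)

def interior_eigs(H, target=0.0, k=4):
    """Return the k eigenvalues closest to an interior target."""
    vals = np.linalg.eigvalsh(H)
    return vals[np.argsort(np.abs(vals - target))[:k]]

H = anderson_1d(200, disorder=2.0)
```

The indefiniteness (eigenvalues on both sides of the target) is exactly what makes interior eigenvalue extraction hard for iterative methods.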
Minimizing Communication for Eigenproblems and the Singular Value Decomposition
Algorithms have two costs: arithmetic and communication. The latter
represents the cost of moving data, either between levels of a memory
hierarchy, or between processors over a network. Communication often dominates
arithmetic and represents a rapidly increasing proportion of the total cost, so
we seek algorithms that minimize communication. In \cite{BDHS10} lower bounds
were presented on the amount of communication required for essentially all
$O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and
the SVD. Conventional algorithms, including those currently implemented in
(Sca)LAPACK, perform asymptotically more communication than these lower bounds
require. In this paper we present parallel and sequential eigenvalue algorithms
(for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms
that do attain these lower bounds, and analyze their convergence and
communication costs.
Comment: 43 pages, 11 figures
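As a generic illustration of the communication-minimizing idea (not one of the paper's eigenvalue or SVD algorithms), consider tiled matrix multiplication: loading b x b blocks and reusing each one b times reduces the data moved between memory levels from O(n^3) words to O(n^3 / b), which is the basic mechanism behind communication-optimal blocked algorithms.

```python
# Hedged sketch: blocked matrix multiplication. The block size b is a tuning
# parameter chosen to fit the fast-memory level; this is an illustration of
# communication avoidance in general, not the paper's algorithms.
import numpy as np

def blocked_matmul(A, B, b=2):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # Each b x b block of A and B is loaded once per innermost
                # step and reused across b rows/columns of the product.
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C
```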
Tensor and Matrix Inversions with Applications
Higher order tensor inversion is possible for even order. We have shown that
a tensor group endowed with the Einstein (contracted) product is isomorphic to
a general linear group of appropriate degree. With the isomorphic group structures,
we derived new tensor decompositions which we have shown to be related to the
well-known canonical polyadic decomposition and multilinear SVD. Moreover,
within this group structure framework, multilinear systems are derived,
specifically, for solving high dimensional PDEs and large discrete quantum
models. We also address multilinear systems which do not fit the framework in
the least-squares sense, that is, when the tensor has an odd number of modes or
when the tensor has distinct dimensions in each mode. With the notion of
tensor inversion, multilinear systems are solvable. Numerically we solve
multilinear systems using iterative techniques, namely biconjugate gradient and
Jacobi methods in tensor format.
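The isomorphism idea above can be sketched concretely: an even-order tensor A of shape (m, n, m, n) under the Einstein (contracted) product corresponds to an (m n) x (m n) matrix, so a multilinear system A *_2 X = B can be solved by ordinary matrix inversion and the solution folded back into tensor form. The shapes and helper names below are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of the tensor-matrix isomorphism under the Einstein product.
import numpy as np

def einstein_product(A, X):
    # (A *_2 X)[i, j] = sum_{k, l} A[i, j, k, l] * X[k, l]
    return np.einsum('ijkl,kl->ij', A, X)

def solve_multilinear(A, B):
    """Solve A *_2 X = B by unfolding A to a matrix (the isomorphism)."""
    m, n = B.shape
    M = A.reshape(m * n, m * n)        # tensor -> matrix unfolding
    x = np.linalg.solve(M, B.ravel())  # ordinary linear solve
    return x.reshape(m, n)             # fold the solution back to a tensor

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 3, 4))
X = rng.standard_normal((3, 4))
B = einstein_product(A, X)
X_rec = solve_multilinear(A, B)
```

For large problems one would instead apply iterative methods (e.g. biconjugate gradient) directly in tensor format, as the abstract describes, rather than forming the unfolded matrix.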