Rectangular Full Packed Format for Cholesky's Algorithm: Factorization, Solution and Inversion
We describe a new data format for storing triangular, symmetric, and
Hermitian matrices called RFPF (Rectangular Full Packed Format). The standard
two-dimensional arrays of Fortran and C (also known as full format) that are
used to represent triangular and symmetric matrices waste nearly half of the
storage space but provide high performance via the use of Level 3 BLAS.
Standard packed format arrays fully utilize storage (array space) but provide
low performance because there are no Level 3 packed BLAS. RFPF combines the good
features of packed and full storage: since RFPF is a standard full format
representation, it achieves high performance through Level 3 BLAS, while
requiring exactly the same minimal storage as packed format. Each LAPACK full
and/or packed triangular, symmetric, and Hermitian routine becomes a single new
RFPF routine based on eight possible data layouts of RFPF. This new RFPF
routine usually consists of two calls to the corresponding LAPACK full format
routine and two calls to Level 3 BLAS routines, so no new
software is required. As examples, we present LAPACK routines for Cholesky
factorization, Cholesky solution and Cholesky inverse computation in RFPF to
illustrate this new work and to describe its performance on several commonly
used computer platforms. LAPACK full routines using RFPF perform about the same
as LAPACK full routines using standard format, for both serial and SMP parallel
processing, while using half the storage. Compared with vendor and/or reference
packed routines, vendor LAPACK full routines using RFPF run from roughly the
same speed up to a factor of 43 faster in serial and up to a factor of 97
faster in SMP parallel execution.
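As a hedged illustration of how RFPF is used from C, the sketch below calls the standard LAPACK RFP routines through the LAPACKE interface: DTRTTF converts the lower triangle of a full-format symmetric positive definite matrix to RFP storage, and DPFTRF then computes the Cholesky factorization in place on the RFP array. The sample matrix and the minimal error handling are assumptions made for this example.

/* Cholesky factorization in Rectangular Full Packed (RFP) format.
 * Requires a LAPACK installation that provides the LAPACKE interface. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    const lapack_int n = 4;
    /* Symmetric positive definite matrix in full (column-major) storage. */
    double a[16] = {
        4, 1, 1, 1,
        1, 4, 1, 1,
        1, 1, 4, 1,
        1, 1, 1, 4
    };
    double arf[10];  /* n*(n+1)/2 entries: the same minimal storage as packed */

    /* Convert the lower triangle of A to RFP ('N' = not transposed). */
    lapack_int info = LAPACKE_dtrttf(LAPACK_COL_MAJOR, 'N', 'L', n, a, n, arf);
    if (info != 0) { fprintf(stderr, "dtrttf: info=%d\n", (int)info); return 1; }

    /* Factor A = L * L^T directly in RFP storage. */
    info = LAPACKE_dpftrf(LAPACK_COL_MAJOR, 'N', 'L', n, arf);
    if (info != 0) { fprintf(stderr, "dpftrf: info=%d\n", (int)info); return 1; }

    printf("RFP Cholesky used %d doubles instead of %d.\n",
           (int)(n * (n + 1) / 2), (int)(n * n));
    return 0;
}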
On the Efficacy and High-Performance Implementation of Quaternion Matrix Multiplication
Quaternion symmetry is ubiquitous in the physical sciences. As such, much
effort has been devoted over the years to developing efficient schemes
that exploit this symmetry using real and complex linear algebra. Recent years
have also seen many advances in the formal theoretical development of
explicitly quaternion linear algebra with promising applications in image
processing and machine learning. Despite these advances, there do not currently
exist optimized software implementations of quaternion linear algebra.
Leveraging optimized linear algebra software is crucial for achieving
high performance on modern computing architectures, and it is therefore
a central tool in the development of high-performance scientific software. In
this work, a case will be made for the efficacy of high-performance quaternion
linear algebra software for appropriate problems. In this pursuit, an optimized
software implementation of quaternion matrix multiplication will be presented
and will be shown to outperform a vendor-tuned implementation of the analogous
complex matrix operation. The results of this work pave the way for further
development of high-performance quaternion linear algebra software, which will
improve the performance of the next generation of applicable scientific
applications.
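As a hedged reference point, not the optimized implementation the paper presents, the C sketch below spells out the quaternion matrix product via the Hamilton product; the type and routine names are illustrative assumptions.

/* Reference quaternion GEMM, C += A * B (row-major n x n).  Illustrative
 * sketch only: a tuned implementation, as in the paper, would block for
 * cache and vectorize, but the arithmetic is the same. */
#include <stddef.h>

typedef struct { double w, x, y, z; } quat;

/* Hamilton product r = p * q; quaternion multiplication is
 * non-commutative, so operand order matters. */
static quat qmul(quat p, quat q) {
    return (quat){
        p.w*q.w - p.x*q.x - p.y*q.y - p.z*q.z,
        p.w*q.x + p.x*q.w + p.y*q.z - p.z*q.y,
        p.w*q.y - p.x*q.z + p.y*q.w + p.z*q.x,
        p.w*q.z + p.x*q.y - p.y*q.x + p.z*q.w
    };
}

void qgemm_ref(size_t n, const quat *A, const quat *B, quat *C) {
    /* Caller supplies C initialized (e.g., to zero); this accumulates. */
    for (size_t i = 0; i < n; ++i)
        for (size_t k = 0; k < n; ++k)
            for (size_t j = 0; j < n; ++j) {
                quat t = qmul(A[i*n + k], B[k*n + j]);
                C[i*n + j].w += t.w;
                C[i*n + j].x += t.x;
                C[i*n + j].y += t.y;
                C[i*n + j].z += t.z;
            }
}

Each quaternion multiply costs 16 real multiplies versus 4 for a complex multiply; the paper's case is that a tuned quaternion kernel can nonetheless outperform the analogous complex formulation of the same problem.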
Towards an Accurate Performance Modeling of Parallel Sparse Factorization
We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, the memory system performance, and the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
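As a hedged illustration of the kind of model the abstract describes, the C sketch below predicts factorization time as a sum of computation, memory, and communication terms driven by processor speed, memory system performance, and interconnect speed. The additive form and all parameter names are assumptions made for this example, not the paper's actual model.

/* Illustrative component-wise performance model (not the paper's model):
 * predicted time = compute term + memory term + communication term. */
typedef struct {
    double flops;          /* floating-point operations performed          */
    double flop_rate;      /* sustained per-process compute rate (flop/s)  */
    double mem_bytes;      /* traffic through the memory hierarchy (bytes) */
    double mem_bandwidth;  /* sustained memory bandwidth (bytes/s)         */
    double messages;       /* number of interconnect messages              */
    double latency;        /* per-message latency (s)                      */
    double comm_bytes;     /* total communication volume (bytes)           */
    double net_bandwidth;  /* interconnect bandwidth (bytes/s)             */
} model_params;

double predicted_time(const model_params *p) {
    double t_comp = p->flops / p->flop_rate;
    double t_mem  = p->mem_bytes / p->mem_bandwidth;
    double t_comm = p->messages * p->latency + p->comm_bytes / p->net_bandwidth;
    return t_comp + t_mem + t_comm;
}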
Research in Applied Mathematics, Fluid Mechanics and Computer Science
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1998 through March 31, 1999.
A framework for efficient execution of matrix computations
Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields of science, engineering, surveying, physics, and others. Other matrix operations occur frequently in many other fields, such as pattern recognition and classification or multimedia applications. It is therefore important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution, on commodity processors, of matrix operations which arise frequently in different fields.

We study some important operations which appear in the solution of real-world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization, dense matrix multiplication, dense Cholesky factorization, and Nearest Neighbor classification.

A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm ultimately depends on the performance obtained from the computations performed on each node. The work presented in this thesis focuses on sequential execution on a single processor.

A number of data structures exist for sparse computations which can be used to avoid storing, and computing on, zero elements. We work with a hierarchical data structure known as a hypermatrix: a matrix is subdivided recursively an arbitrary number of times, several pointer matrices store the location of the submatrices at each level, and the last level consists of data submatrices which are dealt with as dense submatrices (a minimal sketch of this layout follows the abstract). When the block size of these dense submatrices is small, the number of stored zeros can be greatly reduced, but the performance obtained from BLAS3 routines drops heavily. Consequently, there is a trade-off in the size of the data submatrices used for a sparse Cholesky factorization with the hypermatrix scheme. Our goal is to reduce the overhead introduced by unnecessary operations on zeros when a hypermatrix data structure is used to produce a sparse Cholesky factorization, and in this work we study several techniques for reducing such overhead in order to obtain high performance.

One of our goals is the creation of codes which work efficiently on different platforms when operating on dense matrices. To obtain high performance, the resources offered by the CPU must be properly utilized, and at the same time the memory hierarchy must be exploited to tolerate increasing memory latencies. To achieve the former, we produce inner kernels which use the CPU very efficiently; to achieve the latter, we investigate nonlinear data layouts, which can contribute to the effective use of the memory system.

The use of highly optimized inner kernels is of paramount importance for obtaining efficient numerical algorithms. Often, such kernels are created by hand. We, however, want to create efficient inner kernels for a variety of processors using a general approach that avoids hand coding in assembly language. In this work, we present an alternative way to produce efficient kernels automatically, based on a set of simple codes written in a high-level language which can be parameterized at compilation time (see the kernel sketch after the abstract). The advantage of our method lies in the ability to generate very efficient inner kernels by means of a good compiler. Working on regular codes for small matrices, most of the compilers we used on different platforms created very efficient inner kernels for matrix multiplication. Using the resulting kernels, we have been able to produce high-performance sparse and dense linear algebra codes on a variety of platforms.

In this work we also show that techniques used in linear algebra codes can be useful in other fields: we present the work we have done on optimizing Nearest Neighbor classification, focusing on the speed of the classification process.

Tuning several codes for different problems and machines can become a heavy and unbearable task. For this reason, we have developed an environment for the development and automatic benchmarking of codes, which is presented in this thesis.

As a practical result of this work, we have been able to create efficient codes for several matrix operations on a variety of platforms. Our codes are highly competitive with other state-of-the-art codes for some problems.
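Two of the ideas above can be made concrete with short, hedged sketches in C; the names, block sizes, and structure are illustrative assumptions, not the thesis code.

First, a minimal two-level hypermatrix: a pointer matrix whose non-NULL entries point to dense leaf submatrices, so all-zero blocks cost neither storage nor arithmetic.

/* Minimal two-level hypermatrix sketch (illustrative, not the thesis code).
 * A pointer matrix indexes dense BS x BS leaf blocks; a NULL pointer
 * represents an all-zero block that is neither stored nor computed on. */
#include <stdlib.h>

#define BS 32                        /* leaf submatrix size (BS x BS)     */

typedef struct {
    size_t nb;                       /* matrix is (nb*BS) x (nb*BS)       */
    double **blocks;                 /* nb*nb pointers; NULL = zero block */
} hypermatrix;

hypermatrix *hm_alloc(size_t nb) {
    hypermatrix *h = malloc(sizeof *h);
    h->nb = nb;
    h->blocks = calloc(nb * nb, sizeof *h->blocks);  /* all blocks start zero */
    return h;
}

/* Return the dense block (bi, bj), allocating it on first touch so that
 * storage is only spent on blocks that actually hold nonzeros. */
double *hm_block(hypermatrix *h, size_t bi, size_t bj) {
    double **slot = &h->blocks[bi * h->nb + bj];
    if (!*slot)
        *slot = calloc((size_t)BS * BS, sizeof **slot);
    return *slot;
}

Second, an inner kernel parameterized at compilation time: the kernel dimensions are preprocessor macros, so a good compiler sees fixed loop bounds and can fully unroll and vectorize the loop nest without any assembly coding.

/* Compile-time-parameterized matrix-multiply micro-kernel (illustrative).
 * C (MR x NR) += A (MR x KB) * B (KB x NR), all row-major.  Fixed bounds
 * let the compiler unroll and vectorize into a register-blocked kernel. */
#ifndef MR
#define MR 4
#endif
#ifndef NR
#define NR 4
#endif
#ifndef KB
#define KB 64
#endif

void micro_kernel(const double *A, const double *B, double *C) {
    for (int i = 0; i < MR; ++i)
        for (int k = 0; k < KB; ++k)
            for (int j = 0; j < NR; ++j)
                C[i * NR + j] += A[i * KB + k] * B[k * NR + j];
}

Building, for example, with cc -O3 -DMR=8 -DNR=4 -DKB=128 produces a specialized kernel for each parameter choice, which is the spirit of the generation scheme described in the abstract.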