
    On diagonally structured matrix computation

    In this thesis, we propose efficient implementations of linear algebra kernels, such as matrix-vector and matrix-matrix multiplication, by formulating the arithmetic in terms of diagonals, which yields an orientation-neutral (column-/row-major) computational scheme. Matrix elements are accessed with stride 1, and no indirect referencing is involved. Access to the transposed matrix requires no additional effort. The proposed storage scheme handles dense matrices and matrices with special structure, such as banded and symmetric matrices, in a uniform manner. Test results from numerical experiments with an OpenMP implementation are promising. We also show that, using our diagonal framework, Java native arrays can yield superior computational performance. We present two alternative implementations of the matrix-matrix multiplication operation in Java. The results from numerical testing demonstrate the advantage of our proposed methods.
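    The diagonal-oriented matrix-vector product described above can be sketched as follows. This is a minimal illustration, not the thesis code; the layout chosen here (each diagonal stored as its own stride-1 array, indexed from the lowest subdiagonal up) is an assumption made for the example.

```java
// Sketch of a diagonal-wise dense matrix-vector product y = A*x.
// Assumed layout (not necessarily the thesis's): diags[k] holds
// diagonal d = k - (n-1), so diags[n-1] is the main diagonal.
// Every inner loop touches diags[k], x, and y with stride 1.
public class DiagMatVec {
    static double[] multiply(double[][] diags, double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int k = 0; k < 2 * n - 1; k++) {
            int d = k - (n - 1);            // diagonal offset
            int len = n - Math.abs(d);      // number of elements on it
            if (d >= 0) {                   // superdiagonal: element A[i][i+d]
                for (int i = 0; i < len; i++) y[i] += diags[k][i] * x[i + d];
            } else {                        // subdiagonal: element A[i+|d|][i]
                for (int i = 0; i < len; i++) y[i - d] += diags[k][i] * x[i];
            }
        }
        return y;
    }

    public static void main(String[] args) {
        // A = [[1,2],[3,4]] stored as sub-, main, superdiagonal; x = [1,1]
        double[][] diags = { {3}, {1, 4}, {2} };
        double[] y = multiply(diags, new double[]{1, 1});
        System.out.println(y[0] + " " + y[1]);   // prints "3.0 7.0"
    }
}
```

    Note that swapping the roles of the two branches (reading superdiagonals as subdiagonals and vice versa) computes the transposed product, which illustrates why the transpose costs no extra effort in a diagonal layout.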

    A Computational study of sparse or structured matrix operations

    Matrix computation is an important area in high-performance scientific computing. Major computer manufacturers and vendors typically provide architecture-aware implementation libraries such as the Basic Linear Algebra Subroutines (BLAS). In this thesis, we perform an experimental study, in Java, of a subset of matrix operations on dense, sparse, and structured matrices. We implement a subset of the BLAS operations in Java and compare their performance against the standard data structures Compressed Row Storage (CRS) and Java Sparse Array (JSA) for dense and sparse structured matrices. The diagonal storage format is shown to be a viable alternative for dense and structured matrices.
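    For reference, the CRS format mentioned above stores only the nonzeros of a sparse matrix in three arrays: the values, their column indices, and a row-pointer array marking where each row starts. A minimal sketch of the standard CRS matrix-vector product (not the thesis implementation; names are illustrative):

```java
// Sketch of sparse matrix-vector product y = A*x with A in CRS format.
// val[k]    : k-th stored nonzero, rows concatenated in order
// colIdx[k] : column of val[k]
// rowPtr[i] : index in val/colIdx where row i begins (length n+1)
public class CrsMatVec {
    static double[] multiply(double[] val, int[] colIdx, int[] rowPtr, double[] x) {
        int n = rowPtr.length - 1;
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                s += val[k] * x[colIdx[k]];   // indirect access into x
            }
            y[i] = s;
        }
        return y;
    }

    public static void main(String[] args) {
        // A = [[10,0,2],[0,5,0],[0,0,7]], x = [1,1,1]
        double[] val = {10, 2, 5, 7};
        int[] colIdx = {0, 2, 1, 2};
        int[] rowPtr = {0, 2, 3, 4};
        double[] y = multiply(val, colIdx, rowPtr, new double[]{1, 1, 1});
        System.out.println(y[0] + " " + y[1] + " " + y[2]);   // prints "12.0 5.0 7.0"
    }
}
```

    The indirect access through colIdx is exactly what the diagonal format avoids, which is one reason the diagonal scheme can be competitive for dense and structured matrices.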

    Supporting multidimensional arrays in Java
