12,750 research outputs found

    Matrix Shanks Transformations

    Get PDF
    Shanks' transformation is a well-known sequence transformation for accelerating the convergence of scalar sequences. It has been extended to the case of sequences of vectors and sequences of square matrices satisfying a linear difference equation with scalar coefficients. In this paper, a more general extension to the matrix case, where the matrices can be rectangular and satisfy a difference equation with matrix coefficients, is proposed and studied. In the particular case of square matrices, the new transformation can be recursively implemented by the matrix ε-algorithm of Wynn. The transformation is then related to matrix Padé-type and Padé approximants. Numerical experiments showing the interest of this transformation end the paper.
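    In the scalar case the transformation has a closed form: it maps a sequence (S_n) to e(S)_n = (S_{n+1} S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2 S_n). A minimal Python sketch of this classical scalar transform (an illustration only, not the rectangular-matrix extension proposed in the paper):

```python
def shanks(seq):
    """Apply one Shanks transformation to a scalar sequence.

    For a sequence whose error is dominated by a single geometric term,
    the transform eliminates that term and returns the limit (almost)
    exactly; more generally it often accelerates convergence.
    """
    out = []
    for k in range(1, len(seq) - 1):
        den = seq[k + 1] + seq[k - 1] - 2 * seq[k]
        if den == 0:  # sequence is (locally) already converged
            out.append(seq[k + 1])
        else:
            out.append((seq[k + 1] * seq[k - 1] - seq[k] ** 2) / den)
    return out
```

    For the partial sums 1, 1.5, 1.75 of the geometric series with ratio 1/2, a single application already returns the limit 2 exactly, which is the behavior the matrix extensions generalize.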

    Quantum line bundles on noncommutative sphere

    Full text link
    The noncommutative (NC) sphere is introduced as a quotient of the enveloping algebra of the Lie algebra su(2). Using the Cayley-Hamilton identities, we introduce projective modules which are analogues of line bundles on the usual sphere (we call them quantum line bundles) and define a multiplicative structure on their family. We also compute a pairing between certain quantum line bundles and finite-dimensional representations of the NC sphere in the spirit of the NC index theorem. A new approach to constructing the differential calculus on a NC sphere is suggested. The approach makes use of the projective modules in question and gives rise to a NC de Rham complex which is a deformation of the classical one.
    Comment: LaTeX file, 15 pp, no figures. Some clarifying remarks are added at the beginning of section 2 and into section
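    For orientation, the quotient construction can be written out explicitly; the following is a standard presentation with assumed notation (the generator names x, y, z and the radius r are not taken from the paper itself):

```latex
% su(2) commutation relations for generators x, y, z,
% and the Casimir element C:
[x,y] = z, \qquad [y,z] = x, \qquad [z,x] = y,
\qquad C = x^2 + y^2 + z^2 .
% The NC sphere is the quotient of the enveloping algebra
% obtained by fixing the value of the Casimir:
S^2_{\mathrm{NC}} \;=\; U(\mathfrak{su}(2)) \,/\, \langle\, C - r^2 \,\rangle .
```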

    Communication-Avoiding Optimization Methods for Distributed Massive-Scale Sparse Inverse Covariance Estimation

    Full text link
    Across a variety of scientific disciplines, sparse inverse covariance estimation is a popular tool for capturing the underlying dependency relationships in multivariate data. Unfortunately, most estimators are not scalable enough to handle the sizes of modern high-dimensional data sets (often on the order of terabytes), and assume Gaussian samples. To address these deficiencies, we introduce HP-CONCORD, a highly scalable optimization method for estimating a sparse inverse covariance matrix based on a regularized pseudolikelihood framework, without assuming Gaussianity. Our parallel proximal gradient method uses a novel communication-avoiding linear algebra algorithm and runs across a multi-node cluster with up to 1k nodes (24k cores), achieving parallel scalability on problems with up to ~819 billion parameters (1.28 million dimensions); even on a single node, HP-CONCORD demonstrates scalability, outperforming a state-of-the-art method. We also use HP-CONCORD to estimate the underlying dependency structure of the brain from fMRI data, and use the result to identify functional regions automatically. The results show good agreement with a clustering from the neuroscience literature.
    Comment: Main paper: 15 pages, appendix: 24 pages
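    HP-CONCORD itself combines a pseudolikelihood objective with communication-avoiding distributed linear algebra; neither is reproduced here. As a hypothetical single-node sketch of the underlying pattern, the following implements plain proximal gradient descent (ISTA) on a toy ℓ1-regularized least-squares problem, where the ℓ1 proximal operator (soft-thresholding) is what produces the sparsity:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (element-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    """Proximal gradient for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

    With A the identity, the minimizer is soft_threshold(b, lam), so entries of b smaller than lam in magnitude are driven exactly to zero; sparse inverse covariance estimators exploit the same mechanism at matrix scale.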

    Memory architecture

    Get PDF
    A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components of an address vector. The second width is at most half of the first width. The first memory and the second memory are coupled selectively, and said first memory and second memory are addressable by an address space. The invention further provides a method for transposing a matrix using the memory architecture, comprising the following steps. In the first step, the matrix elements are moved from the first memory to the second memory. In the second step, a set of elements arranged along a warped diagonal of the matrix is loaded into a register. In the third step, the set of elements stored in the register is rotated until the element originating from the first row of the matrix is in the first location of the register. In the fourth step, the rotated set of elements is stored in the second memory to obtain a transposed warped diagonal. The second to fourth steps are repeated with the subsequent warped diagonals until the matrix transposition is complete.
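    The diagonal-cycling idea behind the method can be sketched in software. The function below (hypothetical name; the register rotation and the second memory's banks are simulated with plain Python lists) transposes a square n×n matrix by loading each warped (wrapped) diagonal, rotating it, and storing it back as a warped diagonal of the result:

```python
def transpose_via_warped_diagonals(a):
    """Transpose a square matrix by cycling its warped (wrapped) diagonals."""
    n = len(a)
    t = [[None] * n for _ in range(n)]
    for d in range(n):
        # Load: element i of warped diagonal d sits at column (i + d) % n.
        diag = [a[i][(i + d) % n] for i in range(n)]
        # Rotate: shift right by d so each element lands on the row it
        # occupies in the transpose.
        rot = [diag[(r - d) % n] for r in range(n)]
        # Store: the rotated set becomes warped diagonal (n - d) % n of
        # the transposed matrix.
        dt = (n - d) % n
        for r in range(n):
            t[r][(r + dt) % n] = rot[r]
    return t
```

    Because every warped diagonal touches each row and each column exactly once, each load and store hits all banks of the second memory in parallel, which is the point of the banked layout described above.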

    Phase matrix induced symmetries for multiple scattering using the matrix operator method

    Get PDF
    Entirely rigorous proofs of the symmetries induced by the phase matrix in the reflection and transmission operators used in the matrix operator theory are given. Results are obtained for multiple scattering in both homogeneous and inhomogeneous atmospheres. These results will be useful to researchers using the method, since large savings in computer time and storage are obtainable.