
    A Quasi-Random Approach to Matrix Spectral Analysis

    Inspired by the quantum computing algorithms for linear algebra problems [HHL, TaShma], we study how a classical-computer simulation of this type of "phase estimation algorithm" performs when applied to the Eigen-Problem of Hermitian matrices. The result is a completely new, efficient, and stable parallel algorithm that computes an approximate spectral decomposition of any Hermitian matrix. The algorithm can be implemented by Boolean circuits in $O(\log^2 n)$ parallel time with a total cost of $O(n^{\omega+1})$ Boolean operations. This Boolean complexity matches the best known rigorous $O(\log^2 n)$ parallel-time algorithms, but unlike those algorithms ours is (logarithmically) stable, so further improvements may lead to practical implementations. All previous efficient and rigorous approaches to the Eigen-Problem use randomization to avoid bad conditioning, as do we. Our algorithm makes further use of randomization in a completely new way, taking random powers of a unitary matrix to randomize the phases of its eigenvalues. Proving that a tiny Gaussian perturbation and a random polynomial power suffice to ensure almost pairwise independence of the phases $(\bmod\ 2\pi)$ is the main technical contribution of this work. This randomization enables us, given a Hermitian matrix with well-separated eigenvalues, to sample a random eigenvalue and produce an approximate eigenvector in $O(\log^2 n)$ parallel time and $O(n^{\omega})$ Boolean complexity. We conjecture that further improvements of our method can provide a stable solution to the full approximate spectral decomposition problem with complexity similar (up to a logarithmic factor) to that of sampling a single eigenvector.
    Comment: Replacing previous version: the parallel algorithm runs in total complexity $n^{\omega+1}$ and not $n^{\omega}$. However, the depth of the implementing circuit is $\log^2(n)$, hence comparable to the fastest known eigen-decomposition algorithms.
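
    To make the phase-randomization idea above concrete, here is a minimal NumPy/SciPy sketch (an illustration under simple assumptions, not the paper's circuit construction): it rescales a Hermitian matrix so its eigenvalues lie in [0, 1), adds a tiny Gaussian perturbation, exponentiates to a unitary U, and raises U to a random power. The perturbation scale and the poly(n) range of the power are illustrative choices, not values from the paper.

        # Illustration of randomizing eigenphases via a tiny Gaussian perturbation
        # and a random power of U = exp(2*pi*i*H); not the paper's algorithm.
        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        n = 5

        # Hermitian test matrix, eigenvalues rescaled into [0, 1).
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        H = (A + A.conj().T) / 2
        w = np.linalg.eigvalsh(H)
        H = (H - w.min() * np.eye(n)) / (w.max() - w.min() + 1.0)

        # Tiny Gaussian (Hermitian) perturbation; the 1e-6 scale is an assumption.
        E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        H = H + 1e-6 * (E + E.conj().T) / 2

        # Unitary with eigenphases 2*pi*lambda_j, raised to a random poly(n) power.
        U = expm(2j * np.pi * H)
        k = int(rng.integers(1, n ** 3))
        Uk = np.linalg.matrix_power(U, k)

        # The eigenphases of U^k are k * 2*pi*lambda_j reduced mod 2*pi; after the
        # random power they behave like nearly pairwise-independent phases.
        phases = np.mod(np.angle(np.linalg.eigvals(Uk)), 2 * np.pi)
        print("random power k =", k)
        print("eigenphases of U^k (mod 2*pi):", np.round(np.sort(phases), 3))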

    Parallel Complexity of Numerically Accurate Linear System Solvers

    We prove a number of negative results about practical (i.e., work-efficient and numerically accurate) algorithms for computing the main matrix factorizations. In particular, we prove that the popular Householder and Givens methods for computing the QR decomposition are P-complete, and hence presumably inherently sequential, under both the real and the floating-point number models. We also prove that Gaussian elimination (GE) with a weak form of pivoting, which aims only at making the resulting algorithm nondegenerate, is likely to be inherently sequential as well. Finally, we prove that GE with partial pivoting is P-complete over GF(2) or when restricted to symmetric positive definite matrices, for which it is known that even standard GE (no pivoting) does not fail. Altogether, the results of this paper give further formal support to the widespread belief that there is a tradeoff between parallelism and accuracy in numerical algorithms.
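
    As a reminder of where the sequential structure comes from (this is an illustration only, not the paper's P-completeness reduction), the sketch below runs GE with partial pivoting: the pivot row chosen at step k depends on the numerical outcome of every earlier step, which is the kind of data-dependent decision chain the paper argues is hard to parallelize without sacrificing accuracy.

        # Gaussian elimination with partial pivoting (dense, unblocked).
        # The argmax pivot choice at step k depends on all previous updates.
        import numpy as np

        def ge_partial_pivoting(A):
            """Return P, L, U with P @ A = L @ U."""
            A = np.array(A, dtype=float)
            n = A.shape[0]
            piv = np.arange(n)
            L = np.eye(n)
            for k in range(n - 1):
                # Data-dependent pivot: largest entry in column k on or below the diagonal.
                p = k + int(np.argmax(np.abs(A[k:, k])))
                if p != k:
                    A[[k, p], :] = A[[p, k], :]
                    L[[k, p], :k] = L[[p, k], :k]
                    piv[[k, p]] = piv[[p, k]]
                for i in range(k + 1, n):
                    L[i, k] = A[i, k] / A[k, k]
                    A[i, k:] -= L[i, k] * A[k, k:]
            return np.eye(n)[piv], L, np.triu(A)

        M = np.random.default_rng(1).standard_normal((4, 4))
        P, L, U = ge_partial_pivoting(M)
        print(np.allclose(P @ M, L @ U))  # expected: True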

    Parallel Complexity of Numerically Accurate Linear System Solvers

    Mauro Leoncini, Giovanni Manzini, Luciano Margara. August 8, 1997. This work merges preliminary results presented at ESA '96 and SPAA '97. Affiliations: Dipartimento di Informatica, Università di Pisa, Corso Italia 40, 56125 Pisa, Italy, and IMC-CNR, via S. Maria 46, 56126 Pisa, Italy (email: [email protected]; supported by Murst 40% funds); Dipartimento di Scienze e Tecnologie Avanzate, Università di Torino, Via Cavour 84, 15100 Alessandria, Italy (email: [email protected]); Dipartimento Scienze dell'Informazione, Università di Bologna, Piazza Porta S. Donato 5, 40127 Bologna, Italy (email: [email protected]). From the introduction: "Matrix factorization algorithms form the backbone of state-of-the-art numerical libraries and packages, such as LAPACK and MATLAB [2, 14]. Indeed, factoring a matrix is almost always the first step of many scientific computations, and usually the one which places the heavie.."
