
    Expander $\ell_0$-Decoding

    We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for solving a large underdetermined linear system of equations $y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at most $k < m$ nonzero entries and that $A$ is the adjacency matrix of an unbalanced left $d$-regular expander graph. The matrices in this class are sparse and allow a highly efficient implementation. A number of algorithms have been designed to work exclusively in this setting, composing the branch of combinatorial compressed sensing (CCS). Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise $\|y - A\hat{x}\|_0$ by successfully combining two desirable features of previous CCS algorithms: the information-preserving strategy of ER and the parallel updating mechanism of SMP. We are able to link these elements and guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that the signal is dissociated, meaning that all of the $2^k$ subset sums of the support of $x$ are pairwise different. However, we observe empirically that the signal need not be exactly dissociated in practice. Moreover, we observe that Serial-$\ell_0$ and Parallel-$\ell_0$ are able to solve large-scale problems with a larger fraction of nonzeros than other algorithms when the number of measurements is substantially less than the signal length; in particular, they are able to reliably solve for a $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ expander measurements with $n/m = 10^3$ and $k/m$ up to four times greater than what is achievable by $\ell_1$-regularization from dense Gaussian measurements. Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to solve large problem sizes in substantially less time than other algorithms for compressed sensing. In particular, Parallel-$\ell_0$ is structured to take advantage of massively parallel architectures.
    Comment: 14 pages, 10 figures
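
    The sketch below is our own simplified illustration of the flavour of a parallel CCS update, not the paper's Serial-$\ell_0$/Parallel-$\ell_0$ pseudocode: every node inspects the residual values on its $d$ measurements and, when a clear majority of them agree on a single nonzero value, adopts it, which reduces $\|y - A\hat{x}\|_0$. The helper names (build_regular_matrix, decode_l0) and the majority threshold are our choices.

        # Simplified illustration of a parallel CCS update (our own code, not the
        # paper's Serial-l0/Parallel-l0 pseudocode).  Each node looks at the
        # residual on its d measurements and adopts the value a clear majority
        # of them agree on, which reduces ||y - A x_hat||_0.
        import numpy as np
        from collections import Counter

        def build_regular_matrix(m, n, d, rng):
            """Random binary matrix with exactly d ones per column (expander-like)."""
            A = np.zeros((m, n))
            for j in range(n):
                A[rng.choice(m, size=d, replace=False), j] = 1.0
            return A

        def decode_l0(A, y, d, max_iter=100):
            """Greedy parallel majority-vote updates on the l0 residual."""
            m, n = A.shape
            x_hat = np.zeros(n)
            r = y.copy()
            for _ in range(max_iter):
                if not np.any(r):
                    break                              # exact solution found
                updates = np.zeros(n)
                for j in range(n):
                    vals = [v for v in r[A[:, j] > 0] if v != 0.0]
                    if not vals:
                        continue
                    w, c = Counter(vals).most_common(1)[0]
                    if c > d // 2:                     # clear majority among d neighbours
                        updates[j] = w
                if not np.any(updates):
                    break                              # stalled (signal too dense)
                x_hat += updates
                r = y - A @ x_hat
            return x_hat

        rng = np.random.default_rng(1)
        m, n, d, k = 100, 400, 7, 6
        A = build_regular_matrix(m, n, d, rng)
        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        y = A @ x
        x_hat = decode_l0(A, y, d)
        print("residual l0 norm:", np.count_nonzero(y - A @ x_hat))

    For modest $k$ relative to $m$ the residual typically reaches zero within a few sweeps; the point of the expander structure is that each sweep touches only the $d$ nonzeros per column rather than a dense matrix.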

    Fast Computation of Smith Forms of Sparse Matrices Over Local Rings

    We present algorithms to compute the Smith normal form of matrices over two families of local rings. The algorithms use the \emph{black-box} model, which is suitable for sparse and structured matrices. The algorithms depend on a number of tools, such as matrix rank computation over finite fields, for which the best-known time- and memory-efficient algorithms are probabilistic. For an $n \times n$ matrix $A$ over the ring $\mathbb{F}[z]/(f^e)$, where $f^e$ is a power of an irreducible polynomial $f \in \mathbb{F}[z]$ of degree $d$, our algorithm requires $\mathcal{O}(\eta d e^2 n)$ operations in $\mathbb{F}$, where our black box is assumed to require $\mathcal{O}(\eta)$ operations in $\mathbb{F}$ to compute a matrix-vector product by a vector over $\mathbb{F}[z]/(f^e)$ (and $\eta$ is assumed greater than $nde$). The algorithm only requires additional storage for $\mathcal{O}(nde)$ elements of $\mathbb{F}$. In particular, if $\eta = \tilde{\mathcal{O}}(nde)$, then our algorithm requires only $\tilde{\mathcal{O}}(n^2 d^2 e^3)$ operations in $\mathbb{F}$, which is an improvement on known dense methods for small $d$ and $e$. For the ring $\mathbb{Z}/p^e\mathbb{Z}$, where $p$ is a prime, we give an algorithm which is time- and memory-efficient when the number of nontrivial invariant factors is small. We describe a method for dimension reduction while preserving the invariant factors. The time complexity is essentially linear in $\mu n r e \log p$, where $\mu$ is the number of operations in $\mathbb{Z}/p\mathbb{Z}$ to evaluate the black box (assumed greater than $n$) and $r$ is the total number of nonzero invariant factors. To avoid the practical cost of conditioning, we give a Monte Carlo certificate which, at low cost, provides either a high probability of success or a proof of failure. The quest for a time- and memory-efficient solution without restrictions on the number of nontrivial invariant factors remains open. We offer a conjecture which may contribute toward that end.
    Comment: Preliminary version to appear at ISSAC 201
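
    As a point of reference for the invariant factors the paper targets, the snippet below is a dense SymPy computation (illustration only; the paper's black-box algorithm never forms or reduces a dense matrix, and we assume sympy.matrices.normalforms.smith_normal_form is available in your SymPy version). It computes a Smith form over $\mathbb{Z}$ and then reads off the local form over $\mathbb{Z}/p^e\mathbb{Z}$ by keeping the $p$-part of each invariant factor, capped at $p^e$.

        # Dense reference computation (illustration only; the paper's black-box
        # algorithm never builds a dense matrix).  Assumes SymPy's
        # smith_normal_form is available.
        from sympy import Matrix, ZZ, gcd
        from sympy.matrices.normalforms import smith_normal_form

        A = Matrix([[ 2, 4,  4],
                    [-6, 6, 12],
                    [10, 4, 16]])

        S = smith_normal_form(A, domain=ZZ)      # diagonal of invariant factors over ZZ
        print(S)

        # Local Smith form over Z/p^e Z: keep the p-part of each invariant
        # factor, capped at p^e (here p = 2, e = 3).
        p, e = 2, 3
        print([gcd(S[i, i], p**e) for i in range(A.rows)])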

    A local construction of the Smith normal form of a matrix polynomial

    We present an algorithm for computing a Smith form with multipliers of a regular matrix polynomial over a field. This algorithm differs from previous ones in that it computes a local Smith form for each irreducible factor in the determinant separately and then combines them into a global Smith form, whereas other algorithms apply a sequence of unimodular row and column operations to the original matrix. The performance of the algorithm in exact arithmetic is reported for several test cases.
    Comment: 26 pages, 6 figures; introduction expanded, 10 references added, two additional tests performed
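
    A toy worked example (ours, not one of the paper's test cases) of the local-to-global idea for $A(\lambda) = \operatorname{diag}(\lambda,\, \lambda - 1)$, whose determinant is $\lambda(\lambda - 1)$: the local Smith forms $S_\lambda$ and $S_{\lambda-1}$ at the two irreducible factors are computed separately, and multiplying their diagonal entries positionwise recovers the global Smith form $S$.

        \[
          A(\lambda) = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda - 1 \end{pmatrix},
          \qquad
          S_\lambda = \begin{pmatrix} 1 & 0 \\ 0 & \lambda \end{pmatrix},
          \quad
          S_{\lambda-1} = \begin{pmatrix} 1 & 0 \\ 0 & \lambda - 1 \end{pmatrix},
          \qquad
          S = \begin{pmatrix} 1 & 0 \\ 0 & \lambda(\lambda - 1) \end{pmatrix}.
        \]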

    Generic design of Chinese remaindering schemes

    We propose a generic design for Chinese remainder algorithms. A Chinese remainder computation consists in reconstructing an integer value from its residues modulo non-coprime integers. We also propose an efficient linear data structure, a radix ladder, for the intermediate storage and computations. Our design is structured into three main modules: a black-box residue computation in charge of computing each residue; a Chinese remaindering controller in charge of launching the computation and of the termination decision; and an integer builder in charge of the reconstruction computation. We then show that this design enables many different forms of Chinese remaindering (e.g. deterministic, early terminated, distributed), easy comparisons between these forms, and, for instance, user-transparent parallelism at different parallel grains.
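
    The following is a minimal Python sketch of the three-module split, assuming pairwise coprime prime moduli (the paper's radix ladder, which also handles non-coprime moduli, is not reproduced here); the class and function names are ours, not the paper's interfaces.

        # Minimal sketch of the three modules (our own naming, not the paper's
        # interfaces), assuming pairwise coprime prime moduli.
        from sympy import nextprime

        class IntegerBuilder:
            """Reconstruction module: incremental CRT combination of residues."""
            def __init__(self):
                self.value, self.modulus = 0, 1
            def add(self, residue, m):
                # Solve x = value (mod modulus) and x = residue (mod m).
                t = (residue - self.value) * pow(self.modulus, -1, m) % m
                self.value += self.modulus * t
                self.modulus *= m
            def result(self):
                # Symmetric (signed) representative modulo the current modulus.
                v = self.value % self.modulus
                return v - self.modulus if 2 * v > self.modulus else v

        def chinese_remainder(blackbox, early_stop=3):
            """Controller: feed primes to the black box and stop early once the
            reconstruction is stable for `early_stop` consecutive primes."""
            builder, p, stable, last = IntegerBuilder(), 2**20, 0, None
            while stable < early_stop:
                p = nextprime(p)
                builder.add(blackbox(p) % p, p)
                current = builder.result()
                stable = stable + 1 if current == last else 0
                last = current
            return last

        # Black-box residue computation: here just a stand-in that reduces a
        # fixed integer; in practice it would run a modular algorithm mod p.
        secret = -123456789123456789123456789
        print(chinese_remainder(lambda p: secret % p))   # recovers `secret`

    The controller above uses early termination (stop when the reconstruction is stable over a few consecutive primes); a deterministic variant would instead iterate until the product of the moduli exceeds a known bound on the result.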

    An introspective algorithm for the integer determinant

    We present an algorithm computing the determinant of an integer matrix $A$. The algorithm is introspective in the sense that it uses several distinct algorithms that run in a concurrent manner. During the course of the algorithm, partial results coming from distinct methods can be combined. Then, depending on the current running time of each method, the algorithm can emphasize a particular variant. With the use of very fast modular routines for linear algebra, our implementation is an order of magnitude faster than other existing implementations. Moreover, we prove that the expected complexity of our algorithm is only $\mathcal{O}(n^3 \log^{2.5}(n \|A\|))$ bit operations in the dense case and $\mathcal{O}(\Omega n^{1.5} \log^2(n \|A\|) + n^{2.5} \log^3(n \|A\|))$ in the sparse case, where $\|A\|$ is the largest entry in absolute value of the matrix and $\Omega$ is the cost of a matrix-vector multiplication in the case of a sparse matrix.
    Comment: Published in Transgressive Computing 2006, Granada, Spain (2006)
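
    The paper's introspective scheme runs several methods concurrently; the hedged sketch below shows only the modular building block such methods share: determinants modulo word-size primes (here via plain Gaussian elimination over $\mathbb{Z}/p\mathbb{Z}$) combined by Chinese remaindering until the prime product exceeds twice a Hadamard-type bound. All names are ours.

        # Sketch of the modular building block only (not the paper's introspective
        # scheme, which runs several methods concurrently): det(A) mod p by
        # Gaussian elimination over Z/pZ, combined by CRT until the prime product
        # exceeds twice a Hadamard-type bound on |det(A)|.
        import math
        from sympy import nextprime

        def det_mod_p(A, p):
            """Determinant of an integer matrix modulo a prime p."""
            M = [[a % p for a in row] for row in A]
            n, det = len(M), 1
            for i in range(n):
                pivot = next((r for r in range(i, n) if M[r][i]), None)
                if pivot is None:
                    return 0
                if pivot != i:
                    M[i], M[pivot] = M[pivot], M[i]
                    det = -det
                det = det * M[i][i] % p
                inv = pow(M[i][i], -1, p)
                for r in range(i + 1, n):
                    f = M[r][i] * inv % p
                    M[r] = [(M[r][c] - f * M[i][c]) % p for c in range(n)]
            return det % p

        def det_crt(A):
            # Product of rounded-up row norms bounds |det(A)| from above.
            bound = math.prod(math.isqrt(sum(a * a for a in row)) + 1 for row in A)
            value, modulus, p = 0, 1, 2**20
            while modulus <= 2 * bound:               # deterministic stopping rule
                p = nextprime(p)
                t = (det_mod_p(A, p) - value) * pow(modulus, -1, p) % p
                value, modulus = value + modulus * t, modulus * p
            return value - modulus if 2 * value > modulus else value

        A = [[3, -1, 2], [5, 4, -2], [1, 0, 7]]
        print(det_crt(A))   # 113, the exact integer determinant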