    Sparse recovery using sparse matrices

    We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x* such that Ax = Ax* and ||x*||_1 is minimal. It is known that this approach "works" if A is a random *dense* matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and *very sparse*. We show that, both in theory and in practice, sparse matrices are essentially as "good" as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
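
    The decoding step described above is exactly a linear program: minimize ||x*||_1 subject to Ax* = Ax. The snippet below is only a rough illustration of this idea, not the authors' implementation; the sparse binary matrix construction, the toy dimensions, and the use of an off-the-shelf LP solver are assumptions made for the example.

        import numpy as np
        from scipy.optimize import linprog

        def sparse_binary_matrix(m, n, d, rng):
            # Illustrative construction: d ones placed uniformly at random in each column.
            A = np.zeros((m, n))
            for j in range(n):
                A[rng.choice(m, size=d, replace=False), j] = 1.0
            return A

        def l1_recover(A, b):
            # Solve min ||x||_1 subject to A x = b via the standard LP reformulation
            # with auxiliary variables t_i >= |x_i|: minimize sum(t).
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(n)])
            A_eq = np.hstack([A, np.zeros((m, n))])
            A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
            b_ub = np.zeros(2 * n)
            bounds = [(None, None)] * n + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                          bounds=bounds, method="highs")
            return res.x[:n]

        rng = np.random.default_rng(0)
        n, m, k, d = 200, 60, 5, 8          # assumed toy dimensions
        A = sparse_binary_matrix(m, n, d, rng)
        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        x_star = l1_recover(A, A @ x)
        print(np.linalg.norm(x - x_star, 1))  # small when x is k-sparse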

    Advances in sparse signal recovery methods

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 96-101). The general problem of obtaining a useful succinct representation (sketch) of some piece of data is ubiquitous; it has applications in signal acquisition, data compression, sub-linear space algorithms, etc. In this thesis we focus on sparse recovery, where the goal is to recover sparse vectors exactly, and to approximately recover nearly-sparse vectors. More precisely, from the short representation of a vector x, we want to recover a vector x* such that the approximation error ||x - x*|| is comparable to the "tail" min_{x'} ||x - x'||, where x' ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years, notably in areas such as data stream computing and compressed sensing. We consider two types of sketches: linear and non-linear. For the linear sketching case, where the compressed representation of x is Ax for a measurement matrix A, we introduce a class of binary sparse matrices as valid measurement matrices. We show that they can be used with the popular geometric "ℓ1 minimization" recovery procedure. We also present two iterative recovery algorithms, Sparse Matching Pursuit and Sequential Sparse Matching Pursuit, that can be used with the same matrices. Thanks to the sparsity of the matrices, the resulting algorithms are much more efficient than the ones previously known, while maintaining high quality of recovery. We also show experiments which establish the practicality of these algorithms. For the non-linear case, we present a better analysis of a class of counter algorithms which process large streams of items and maintain enough data to approximately recover the item frequencies. The class includes the popular FREQUENT and SPACESAVING algorithms. We show that the errors in the approximations generated by these algorithms do not grow with the frequencies of the most frequent elements, but only depend on the remaining "tail" of the frequency vector. Therefore, they provide a non-linear sparse recovery scheme, achieving compression rates that are an order of magnitude better than their linear counterparts. by Radu Berinde. M.Eng.
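
    As a concrete illustration of the counter algorithms mentioned above, here is a minimal sketch of the SPACESAVING update rule; the dictionary-based implementation, the toy stream, and the choice of k are assumptions made for the example, not code from the thesis.

        def space_saving(stream, k):
            # Maintain at most k counters; when a new item arrives and the table
            # is full, evict the item with the smallest counter and let the
            # newcomer inherit (and increment) the evicted count.
            counters = {}
            for item in stream:
                if item in counters:
                    counters[item] += 1
                elif len(counters) < k:
                    counters[item] = 1
                else:
                    victim = min(counters, key=counters.get)
                    counters[item] = counters.pop(victim) + 1
            return counters

        print(space_saving(list("abracadabra") * 10 + list("xyz"), k=4))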

    The Study, Design and Testing of a Linear Oscillating Generator with Moving Permanent Magnets

    This paper presents the study, design, and testing of a linear oscillating generator. It describes the main steps of the magnetic and electric calculations for a permanent-magnet linear alternator of the fixed-coil, moving-magnet type. Finally, it presents a comparative analysis of the linear oscillating generator with moving permanent magnets under no-load and load operation.

    Space-optimal Heavy Hitters with Strong Error Bounds

    The problem of finding heavy hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of "counter-based algorithms" (including the popular and very space-efficient FREQUENT and SPACESAVING algorithms) provide much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining "tail." This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the "sparse recovery" problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. We prove that, using space O(k), the algorithms construct an approximation f* to the frequency vector f so that the L1 error ||f - f*||_1 is close to the best possible error min_{f'} ||f' - f||_1, where f' ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions (where n is the size of the domain from which stream elements are drawn). Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams. David & Lucile Packard Foundation (Fellowship); Center for Massive Data Algorithmics (MADALGO); National Science Foundation (U.S.) (Grant number CCF-0728645).
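
    For comparison, a minimal sketch of the FREQUENT (Misra-Gries) update rule, the other counter-based algorithm named above, might look as follows; the implementation details are illustrative assumptions, and each reported count under-estimates the true frequency by at most the total number of decrement rounds.

        def frequent(stream, k):
            # Maintain at most k counters; when a new item arrives and the table
            # is full, decrement every counter and drop those that reach zero.
            counters = {}
            for item in stream:
                if item in counters:
                    counters[item] += 1
                elif len(counters) < k:
                    counters[item] = 1
                else:
                    for key in list(counters):
                        counters[key] -= 1
                        if counters[key] == 0:
                            del counters[key]
            return counters

        print(frequent(list("abracadabra") * 10 + list("xyz"), k=4))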

    Sequential sparse matching pursuit

    We propose a new algorithm, called sequential sparse matching pursuit (SSMP), for solving sparse recovery problems. The algorithm provably recovers a k-sparse approximation to an arbitrary n-dimensional signal vector x from only O(k log(n/k)) linear measurements of x. The recovery process takes time that is only near-linear in n. Preliminary experiments indicate that the algorithm works well on synthetic and image data, with the recovery quality often outperforming that of more complex algorithms, such as ℓ1 minimization.
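
    A heavily simplified sketch of the coordinate-update idea behind SSMP appears below. It is not the published algorithm, which organizes updates into phases over a sparse expander matrix; the dense search over all coordinates, the update budget, and the final sparsification step are assumptions made only for illustration.

        import numpy as np

        def ssmp_sketch(A, b, k, num_updates=None):
            # Repeatedly apply the single-coordinate change that most reduces the
            # L1 norm of the residual b - A x, then keep the k largest entries.
            m, n = A.shape
            x = np.zeros(n)
            r = b.copy()
            for _ in range(num_updates or 10 * k):
                best_gain, best_i, best_z = 0.0, None, 0.0
                for i in range(n):
                    rows = A[:, i] != 0
                    if not rows.any():
                        continue
                    z = np.median(r[rows])   # optimal step for a binary column
                    gain = np.abs(r[rows]).sum() - np.abs(r[rows] - z * A[rows, i]).sum()
                    if gain > best_gain:
                        best_gain, best_i, best_z = gain, i, z
                if best_i is None:
                    break
                x[best_i] += best_z
                r -= best_z * A[:, best_i]
            keep = np.argsort(np.abs(x))[-k:]   # sparsify to the k largest entries
            x_sparse = np.zeros(n)
            x_sparse[keep] = x[keep]
            return x_sparse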