
    GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the GPU's parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated recovery of sparse signals. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
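    The key enabler here is that a circulant operator can be applied in O(n log n) via the FFT instead of being stored as a dense matrix. Below is a minimal CPU sketch (NumPy, not the authors' GPU code) of sparse recovery with a circulant blur operator using plain ISTA; the Gaussian kernel, step size, and regularisation weight are illustrative assumptions.

```python
import numpy as np

def circ_matvec(kernel_fft, x):
    """Apply a circulant operator A (given by its kernel spectrum) to x via FFT."""
    return np.real(np.fft.ifft(kernel_fft * np.fft.fft(x)))

def circ_rmatvec(kernel_fft, x):
    """Apply A^T (adjoint of a real circulant operator) via the conjugate spectrum."""
    return np.real(np.fft.ifft(np.conj(kernel_fft) * np.fft.fft(x)))

def ista_circulant(y, kernel, lam=0.05, n_iter=200):
    """Recover a sparse x from y = A x + noise, A circulant, with ISTA iterations."""
    kernel_fft = np.fft.fft(kernel)
    # Lipschitz constant of A^T A is the largest squared spectral magnitude of A.
    step = 1.0 / (np.max(np.abs(kernel_fft)) ** 2)
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = circ_rmatvec(kernel_fft, circ_matvec(kernel_fft, x) - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy usage: blur a sparse spike train with a circulant Gaussian kernel, then recover it.
rng = np.random.default_rng(0)
n = 512
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(0, 1, 10)
dist = np.minimum(np.arange(n), n - np.arange(n))          # wrap-around distance on a ring
kernel = np.exp(-0.5 * (dist / 3.0) ** 2)
kernel /= kernel.sum()
y = circ_matvec(np.fft.fft(kernel), x_true) + 0.001 * rng.normal(size=n)
x_hat = ista_circulant(y, kernel, lam=0.001, n_iter=500)
```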

    Scalable iterative methods for sampling from massive Gaussian random vectors

    Sampling from Gaussian Markov random fields (GMRFs), that is, multivariate Gaussian random vectors that are parameterised by the inverse of their covariance matrix, is a fundamental problem in computational statistics. In this paper, we show how we can exploit arbitrarily accurate approximations to a GMRF to speed up Krylov subspace sampling methods. We also show that these methods can be used when computing the normalising constant of a large multivariate Gaussian distribution, which is needed for likelihood-based inference methods. The method we derive is also applicable to other structured Gaussian random vectors and, in particular, we show that when the precision matrix is a perturbation of a (block) circulant matrix, it is still possible to derive O(n log n) sampling schemes. (17 pages, 4 figures)
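    For the special case mentioned at the end of the abstract, an exactly circulant, symmetric positive-definite precision matrix is diagonalised by the DFT, so both sampling and the log-determinant needed for the normalising constant cost O(n log n). The sketch below illustrates that exact circulant case only (it is not the authors' Krylov method for perturbed matrices); the toy precision on a ring is an assumption.

```python
import numpy as np

def sample_circulant_precision(q_first_col, rng=None):
    """Draw one sample from N(0, Q^{-1}) where Q is a symmetric positive-definite
    circulant matrix given by its first column, in O(n log n) via the FFT."""
    rng = np.random.default_rng() if rng is None else rng
    lam = np.real(np.fft.fft(q_first_col))   # eigenvalues of Q (real for a symmetric circulant)
    if np.any(lam <= 0):
        raise ValueError("precision matrix is not positive definite")
    z = rng.standard_normal(len(q_first_col))
    # x = Q^{-1/2} z, applied through the DFT that diagonalises every circulant matrix.
    return np.real(np.fft.ifft(np.fft.fft(z) / np.sqrt(lam)))

def log_det_circulant(q_first_col):
    """Log-determinant of Q (the normalising-constant ingredient) from its eigenvalues."""
    return float(np.sum(np.log(np.real(np.fft.fft(q_first_col)))))

# Toy usage: a circulant second-difference-plus-ridge precision on a ring.
n = 1024
q = np.zeros(n)
q[0], q[1], q[-1] = 2.5, -1.0, -1.0
x = sample_circulant_precision(q)
```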

    Computation- and Space-Efficient Implementation of SSA

    The computational complexity of the different steps of basic SSA is discussed. It is shown that the use of general-purpose "black-box" routines (e.g. those found in packages like LAPACK) leads to a huge waste of time and resources, since the special Hankel structure of the trajectory matrix is not taken into account. We outline several state-of-the-art algorithms (for example, Lanczos-based truncated SVD) which can be modified to exploit the structure of the trajectory matrix. The key components here are Hankel matrix-vector multiplication and the hankelization operator. We show that both can be computed efficiently by means of the Fast Fourier Transform. The use of these methods yields a reduction of the worst-case computational complexity from O(N^3) to O(k N log(N)), where N is the series length and k is the number of eigentriples desired. (27 pages, 8 figures)
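    The FFT speedup rests on the fact that a Hankel matrix-vector product is a correlation of the series with the vector, i.e. a convolution with the vector reversed. A minimal NumPy sketch of that single building block follows (not the reference SSA implementation); the window length and test series are illustrative.

```python
import numpy as np

def hankel_matvec(series, v):
    """Compute H @ v in O(N log N), where H[i, j] = series[i + j] is the L x K
    trajectory (Hankel) matrix with K = len(v) and L = len(series) - K + 1."""
    n, k = len(series), len(v)
    rows = n - k + 1
    m = 1 << (n + k - 1).bit_length()          # FFT length >= full convolution size
    conv = np.fft.irfft(np.fft.rfft(series, m) * np.fft.rfft(v[::-1], m), m)
    return conv[k - 1:k - 1 + rows]            # (H v)_i = sum_j series[i + j] * v_j

# Sanity check against the dense O(N^2) product on a small example.
rng = np.random.default_rng(1)
s = rng.normal(size=200)
L = 60
K = len(s) - L + 1
v = rng.normal(size=K)
H = np.array([[s[i + j] for j in range(K)] for i in range(L)])
assert np.allclose(H @ v, hankel_matvec(s, v))
```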