
    Compressive Sensing Using Iterative Hard Thresholding with Low Precision Data Representation: Theory and Applications

    Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of data is an intriguing solution, but comes with its own drawbacks, such as potential signal loss and the need for careful optimization of the compression ratio. In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance? Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector, are quantized aggressively. We present a variant of low-precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations, achieving up to a 9x speed-up with negligible loss of recovery quality.
    Comment: 19 pages, 5 figures, 1 table, in IEEE Transactions on Signal Processing
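
    The key update in this abstract is normalized iterative hard thresholding run on quantized inputs. A minimal sketch of that idea, assuming a simple uniform quantizer and omitting the step-size safeguards analysed in the paper, might look like this:

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform symmetric quantizer; an illustrative stand-in for the
    paper's low-precision data representation, not its exact scheme."""
    scale = float(np.max(np.abs(x))) or 1.0
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def normalized_iht(A, y, s, iters=100):
    """Normalized IHT: an adaptive step size computed from the current
    support, followed by hard thresholding to s entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (y - A @ x)
        support = np.nonzero(x)[0] if np.any(x) else np.argsort(np.abs(g))[-s:]
        gs = g[support]
        denom = np.linalg.norm(A[:, support] @ gs) ** 2
        mu = (gs @ gs) / denom if denom > 0 else 1.0
        x = hard_threshold(x + mu * g, s)
    return x

# Toy example: recover a 10-sparse signal from 4-bit quantized data.
rng = np.random.default_rng(0)
n, m, s = 256, 128, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
x_hat = normalized_iht(quantize(A), quantize(y), s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```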

    How to find real-world applications for compressive sensing

    The potential of compressive sensing (CS) has spurred great interest in the research community and is a fast-growing area of research. However, research translating CS theory into practical hardware and demonstrating clear and significant benefits of this hardware over current, conventional imaging techniques has been limited. This article helps researchers find those niche applications where the CS approach provides substantial gain over conventional approaches by articulating lessons learned in finding one such application: sea-skimming missile detection. As a proof of concept, it is demonstrated that a simplified CS missile detection architecture and algorithm provides results comparable to the conventional imaging approach while using a smaller focal plane array (FPA). The primary message is that the excitement surrounding CS is necessary and appropriate for encouraging our creativity, but we must also take off our "rose-colored glasses" and critically judge our ideas, methods and results relative to conventional imaging approaches.
    Comment: 10 pages

    The application of compressive sampling to radio astronomy I: Deconvolution

    Compressive sampling is a new paradigm for sampling based on the sparseness of signals or of signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources (using the isotropic undecimated wavelet transform as a basis for the reconstruction step). We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. The new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
    Comment: Published by A&A; Matlab code can be found at http://code.google.com/p/csra/download
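
    As an illustration of the sparse-deconvolution idea described above, the sketch below runs plain ISTA on a dirty image and PSF, assuming sparsity in the pixel basis rather than the isotropic undecimated wavelet transform actually used in the paper:

```python
import numpy as np

def ista_deconvolve(dirty, psf, lam=0.01, iters=200):
    """Plain ISTA for min_x 0.5*||psf * x - dirty||^2 + lam*||x||_1 under
    circular convolution, assuming sparsity in the pixel basis (the paper
    instead uses the isotropic undecimated wavelet transform)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))       # PSF assumed centred
    step = 1.0 / np.max(np.abs(H)) ** 2          # 1 / Lipschitz constant
    x = np.zeros_like(dirty)
    for _ in range(iters):
        residual = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - dirty
        grad = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(residual)))
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```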

    On sparsity averaging

    Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend the associated simulations, establishing the superiority of SARA over regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.
    Comment: 4 pages, 3 figures, Proceedings of the 10th International Conference on Sampling Theory and Applications (SampTA); code available at https://github.com/basp-group/sopt; full journal letter available at http://arxiv.org/abs/arXiv:1208.233
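
    The reweighted $\ell_1$ idea at the heart of SARA can be sketched as an outer loop that repeatedly re-solves a weighted $\ell_1$ problem. The sketch below uses plain ISTA on a synthesis formulation purely for illustration; SARA itself uses an analysis prior over a concatenation of wavelet frames and a dedicated convex solver, which is not reproduced here:

```python
import numpy as np

def soft(z, t):
    """Elementwise soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, outer=5, inner=200, eps=1e-3):
    """Reweighted-l1 outer loop: each pass solves a weighted l1 problem
    (here with plain ISTA); small coefficients receive large weights,
    pushing the solution towards sparsity."""
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(outer):
        for _ in range(inner):
            x = soft(x - step * (A.T @ (A @ x - y)), step * lam * w)
        w = 1.0 / (np.abs(x) + eps)   # reweighting step
    return x
```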

    Robust sparse image reconstruction of radio interferometric observations with purify

    Next-generation radio interferometers, such as the Square Kilometre Array (SKA), will revolutionise our understanding of the universe through their unprecedented sensitivity and resolution. However, to realise these goals, significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and scalability for big data. In this work we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers (P-ADMM) algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions, while providing a computational saving and an analytic form. Second, we apply PURIFY to real interferometric observations from the Very Large Array (VLA) and the Australia Telescope Compact Array (ATCA) and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Third, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with the developments presented in this work, is made publicly available.
    Comment: 22 pages, 10 figures, PURIFY code available at http://basp-group.github.io/purif
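
    For context on the gridding comparison, a Kaiser-Bessel interpolation kernel can be evaluated directly from the zeroth-order modified Bessel function. The default shape parameter below is an assumed NUFFT-style rule of thumb, not PURIFY's exact parameterisation:

```python
import numpy as np
from scipy.special import i0

def kaiser_bessel(u, width=6, beta=None):
    """Kaiser-Bessel interpolation kernel with support `width` (in grid
    cells), evaluated at offsets u from the grid point. The default shape
    parameter is an assumed rule of thumb, for illustration only."""
    u = np.asarray(u, dtype=float)
    if beta is None:
        beta = np.pi * (width - 0.5)
    s = 2.0 * u / width
    k = np.zeros_like(u)
    inside = np.abs(s) <= 1.0
    k[inside] = i0(beta * np.sqrt(1.0 - s[inside] ** 2)) / i0(beta)
    return k

# Weights used to grid a visibility falling 0.3 cells from the nearest
# grid point, with a support of 6 cells.
offsets = np.arange(-3, 3) + 0.3
print(kaiser_bessel(offsets))
```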

    Compressive Wavefront Sensing with Weak Values

    We demonstrate a wavefront sensor based on the compressive-sensing single-pixel camera. Using a high-resolution spatial light modulator (SLM) as a variable waveplate, we weakly couple an optical field's transverse-position and polarization degrees of freedom. By placing random, binary patterns on the SLM, polarization serves as a meter for directly measuring random projections of the real and imaginary components of the wavefront. Compressive sensing techniques can then recover the wavefront. We acquire high-quality, 256x256-pixel images of the wavefront from only 10,000 projections. Photon-counting detectors give sub-picowatt sensitivity.
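
    A loose one-dimensional analogue of the measurement model, random binary SLM patterns projecting the real and imaginary parts of the field followed by a generic CS solver, is sketched below; the solver, the DCT sparsity basis, and the test wavefront are all illustrative assumptions rather than details taken from the paper:

```python
import numpy as np
from scipy.fft import dct, idct

def ista_dct(Phi, y, lam=0.05, iters=300):
    """Generic CS solver: recover a signal assumed sparse in the DCT basis
    from random projections y = Phi @ x."""
    a = np.zeros(Phi.shape[1])                 # DCT coefficients
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    for _ in range(iters):
        x = idct(a, norm='ortho')
        grad = dct(Phi.T @ (Phi @ x - y), norm='ortho')
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return idct(a, norm='ortho')

# Toy 1-D analogue: random binary SLM patterns measure projections of the
# real and imaginary parts of a wavefront, which are recovered separately.
rng = np.random.default_rng(1)
n, m = 256, 64                                 # 64 projections of 256 samples
Phi = rng.choice([0.0, 1.0], size=(m, n)) / np.sqrt(m)
field = np.exp(1j * 2 * np.pi * np.linspace(0, 1, n))   # smooth test wavefront
wavefront = ista_dct(Phi, Phi @ field.real) + 1j * ista_dct(Phi, Phi @ field.imag)
```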

    PURIFY: a new algorithmic framework for next-generation radio-interferometric imaging

    In recent works, compressed sensing (CS) and convex optimization techniques have been applied to radio-interferometric imaging, showing the potential to outperform state-of-the-art imaging algorithms in the field. We review our latest contributions [1, 2, 3], which leverage the versatility of convex optimization both to handle realistic continuous visibilities and to offer a highly parallelizable structure, paving the way to significant acceleration of the reconstruction and to high-dimensional data scalability. The new algorithmic structure, promoted in new software PURIFY (beta version), relies on the simultaneous-direction method of multipliers (SDMM). The performance of various sparsity priors is evaluated through simulations in the continuous visibility setting, confirming the superiority of our recent average sparsity approach, SARA.
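
    To illustrate the algorithmic structure named here, the following sketch applies a generic SDMM iteration to a simple sparse recovery problem; the splitting and proximal steps are illustrative and do not reproduce PURIFY's actual measurement and wavelet operators:

```python
import numpy as np

def sdmm_lasso(A, y, lam=0.1, iters=100):
    """Generic SDMM iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    split as f1(Ax) + f2(x). The per-term proximal updates are independent
    of one another, which is the parallelizable structure referred to above."""
    m, n = A.shape
    Q = np.linalg.inv(A.T @ A + np.eye(n))   # the only coupled step
    z1, s1 = np.zeros(m), np.zeros(m)        # splitting variables for f1 o A
    z2, s2 = np.zeros(n), np.zeros(n)        # splitting variables for f2
    for _ in range(iters):
        x = Q @ (A.T @ (z1 - s1) + (z2 - s2))
        # prox of f1(v) = 0.5*||v - y||^2, evaluated at A x + s1
        v1 = A @ x + s1
        z1 = (v1 + y) / 2.0
        s1 = v1 - z1
        # prox of f2(v) = lam*||v||_1 (soft thresholding), evaluated at x + s2
        v2 = x + s2
        z2 = np.sign(v2) * np.maximum(np.abs(v2) - lam, 0.0)
        s2 = v2 - z2
    return x
```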