
    PRISM: Sparse Recovery of the Primordial Power Spectrum

    The primordial power spectrum describes the initial perturbations in the Universe which eventually grew into the large-scale structure we observe today, and thereby provides an indirect probe of inflation or other structure-formation mechanisms. Here, we introduce a new method to estimate this spectrum from the empirical power spectrum of cosmic microwave background (CMB) maps. A sparsity-based linear inversion method, coined PRISM, is presented. This technique leverages a sparsity prior on features in the primordial power spectrum in a wavelet basis to regularise the inverse problem. This non-parametric approach does not assume a strong prior on the shape of the primordial power spectrum, yet is able to correctly reconstruct its global shape as well as localised features. These advantages make the method robust for detecting deviations from the currently favoured scale-invariant spectrum. We investigate the strength of this method on a set of WMAP 9-year simulated data for three types of primordial power spectra: a nearly scale-invariant spectrum, a spectrum with a small running of the spectral index, and a spectrum with a localised feature. The technique readily detects deviations from a pure scale-invariant power spectrum and is suitable for distinguishing between simple models of inflation. We process the WMAP 9-year data and find no significant departure from a nearly scale-invariant power spectrum with spectral index n_s = 0.972. A high-resolution primordial power spectrum can be reconstructed with this technique, in which any strong local deviations or small global deviations from a pure scale-invariant spectrum can easily be detected.
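
    As a rough illustration of the kind of sparsity-regularised linear inversion described above, the sketch below recovers a toy spectrum from smoothed, noisy measurements by iterative soft thresholding of wavelet coefficients. The toy transfer matrix, the one-level Haar transform standing in for the wavelet dictionary, and all parameter values are illustrative assumptions, not the PRISM pipeline itself.

        import numpy as np

        def haar_synthesis(c):
            """Inverse one-level Haar transform: coefficients -> signal."""
            n = c.size // 2
            approx, detail = c[:n], c[n:]
            x = np.empty(2 * n)
            x[0::2] = (approx + detail) / np.sqrt(2)
            x[1::2] = (approx - detail) / np.sqrt(2)
            return x

        def haar_analysis(x):
            """Forward one-level Haar transform: signal -> coefficients."""
            even, odd = x[0::2], x[1::2]
            return np.concatenate([even + odd, even - odd]) / np.sqrt(2)

        def ista_wavelet(y, A, lam=0.05, n_iter=500):
            """Solve min_c 0.5*||y - A W^T c||^2 + lam*||c||_1 by ISTA and
            return the reconstruction x = W^T c (W = one-level Haar)."""
            c = np.zeros(A.shape[1])
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth term
            for _ in range(n_iter):
                grad = haar_analysis(A.T @ (A @ haar_synthesis(c) - y))
                c = c - grad / L
                c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # soft threshold
            return haar_synthesis(c)

        # Toy setup: a Gaussian smoothing matrix stands in for the CMB transfer
        # operator, and the "primordial spectrum" is flat with one localised feature.
        rng = np.random.default_rng(0)
        n = 128
        k = np.arange(n)
        A = np.exp(-0.5 * ((k[:, None] - k[None, :]) / 4.0) ** 2)   # assumed toy operator
        x_true = np.ones(n)
        x_true[60:68] += 0.5                                        # localised feature
        y = A @ x_true + 0.01 * rng.standard_normal(n)
        x_hat = ista_wavelet(y, A)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))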

    Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch between the manifold described by the parameters and its polar approximation. To quantify the improvements afforded by the proposed extensions, we evaluate six algorithms for estimation of parameters in sparse translation-invariant signals, exemplified by the time delay estimation problem. The evaluation is based on three performance metrics: estimator precision, sampling rate, and computational complexity. We use compressive sensing with all the algorithms to lower the necessary sampling rate and show that it is still possible to attain good estimation precision while keeping the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super-resolution algorithm. The algorithms studied here provide various tradeoffs between computational complexity, estimation precision, and necessary sampling rate. The work shows that compressive sensing for the class of sparse translation-invariant signals allows for a decrease in sampling rate, and that the use of polar interpolation increases the estimation precision.
    Comment: 13 pages, 5 figures, to appear in IEEE Transactions on Signal Processing; minor edits and corrections
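
    At the heart of polar interpolation is the observation that a short segment of a translation manifold is well approximated by a circular arc through three neighbouring dictionary translates, so an off-grid parameter can be read off that arc. The sketch below illustrates this for a single delayed pulse; the Gaussian pulse shape, the grid spacing, and the arc-fitting steps are assumptions chosen for illustration and greatly simplify the full compressive estimators evaluated in the paper.

        import numpy as np

        def pulse(t, delay, width=0.05):
            """Gaussian test pulse (assumed waveform) centred at `delay`."""
            return np.exp(-0.5 * ((t - delay) / width) ** 2)

        t = np.linspace(0, 1, 400)
        delta = 0.02                      # dictionary grid spacing
        t0 = 0.50                         # centre of the local grid cell
        P = np.column_stack([pulse(t, t0 - delta), pulse(t, t0), pulse(t, t0 + delta)])

        # Fit the circular arc through the three translates: radius r and half-angle theta.
        a = np.linalg.norm(P[:, 0] - P[:, 1])
        b = np.linalg.norm(P[:, 1] - P[:, 2])
        chord = np.linalg.norm(P[:, 0] - P[:, 2])
        s = (a + b + chord) / 2.0
        area = np.sqrt(s * (s - a) * (s - b) * (s - chord))
        r = a * b * chord / (4.0 * area)             # circumradius of the triangle
        theta = np.arcsin(chord / (2.0 * r))         # half-angle subtended by the arc

        # Recover the arc centre c and in-plane axes u, v from  P = [c u v] @ M.
        M = np.array([[1.0, 1.0, 1.0],
                      [r * np.cos(theta), r, r * np.cos(theta)],
                      [-r * np.sin(theta), 0.0, r * np.sin(theta)]])
        C = P @ np.linalg.inv(M)                     # columns: c, u, v

        # Estimate an off-grid delay from a noisy observation.
        true_delay = t0 + 0.007                      # deliberately off the grid
        rng = np.random.default_rng(1)
        x = pulse(t, true_delay) + 0.01 * rng.standard_normal(t.size)
        w, *_ = np.linalg.lstsq(C, x, rcond=None)    # w ~ [1, r*cos(phi), r*sin(phi)]
        phi = np.arctan2(w[2], w[1])
        print("estimated delay:", t0 + (phi / theta) * delta)   # close to 0.507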

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only delicate, finely-tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting new frontier.
    Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
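
    As a toy illustration of the sub-Nyquist premise, the sketch below keeps only a small random subset of the Nyquist-rate samples of a frequency-sparse signal and recovers the active tones greedily. The signal model, rates, and the simple greedy recovery are assumptions for illustration; they stand in for the much richer hardware-oriented acquisition schemes surveyed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 512                                    # Nyquist-rate grid length
        k_true = np.array([37, 151, 310])          # active frequency bins (sparse support)
        amps = np.array([1.0, 0.8, 0.9])
        x = np.exp(2j * np.pi * np.outer(np.arange(N), k_true) / N) @ amps

        m = 128                                    # keep only 128 of 512 Nyquist samples
        rows = np.sort(rng.choice(N, m, replace=False))
        F = np.exp(2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(N)
        y = x[rows]                                # the sub-Nyquist measurements

        # Greedy recovery (orthogonal matching pursuit) of the three active tones.
        support, residual = [], y.copy()
        for _ in range(3):
            support.append(int(np.argmax(np.abs(F.conj().T @ residual))))
            coef, *_ = np.linalg.lstsq(F[:, support], y, rcond=None)
            residual = y - F[:, support] @ coef
        print("recovered bins:", sorted(support))  # should match [37, 151, 310]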

    Compressive Sensing Using Iterative Hard Thresholding with Low Precision Data Representation: Theory and Applications

    Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of data is an intriguing solution, but comes with its own drawbacks, such as potential signal loss and the need for careful optimization of the compression ratio. In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance? Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector, are quantized aggressively. We present a variant of low-precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations, achieving up to a 9x speed-up with negligible loss of recovery quality.
    Comment: 19 pages, 5 figures, 1 table, in IEEE Transactions on Signal Processing
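
    The sketch below mimics the setting described above in miniature: both the measurement matrix and the observation vector are quantized to a few bits before running a normalised IHT recovery, and the result is compared against full-precision recovery. The uniform quantizer, bit width, problem sizes, and the simplified step-size rule are assumptions for illustration, not the authors' CPU/FPGA implementation.

        import numpy as np

        def quantize(v, bits=4):
            """Uniform symmetric quantizer to `bits` bits (an assumed scheme)."""
            scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
            return np.round(v / scale) * scale

        def hard_threshold(v, s):
            """Keep the s largest-magnitude entries of v, zero the rest."""
            out = np.zeros_like(v)
            idx = np.argsort(np.abs(v))[-s:]
            out[idx] = v[idx]
            return out

        def niht(y, A, s, n_iter=100):
            """Normalised IHT (simplified: no step-size backtracking safeguard)."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = A.T @ (y - A @ x)
                supp = np.abs(hard_threshold(x + g, s)) > 0          # working support
                mu = (g[supp] @ g[supp]) / max(np.linalg.norm(A[:, supp] @ g[supp]) ** 2, 1e-12)
                x = hard_threshold(x + mu * g, s)
            return x

        rng = np.random.default_rng(0)
        m, n, s = 80, 256, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        y = A @ x_true

        err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print("full precision error:", err(niht(y, A, s)))
        print("4-bit inputs error:  ", err(niht(quantize(y), quantize(A), s)))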

    Image registration with sparse approximations in parametric dictionaries

    In this paper, we examine the problem of image registration from a new perspective, where images are given by sparse approximations in parametric dictionaries of geometric functions. We propose a registration algorithm that looks for an estimate of the global transformation between sparse images by examining the set of relative geometrical transformations between the respective features. We provide a theoretical analysis of our registration algorithm and derive performance guarantees based on two novel properties of redundant dictionaries, namely robust linear independence and transformation inconsistency. We give several illustrations and insights into the importance of these dictionary properties, and show that common properties such as coherence or the restricted isometry property fail to provide sufficient information in registration problems. We finally show, with illustrative experiments on simple visual objects and handwritten digit images, that our algorithm outperforms baseline competitor methods in terms of transformation-invariant distance computation and classification.
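
    The sketch below illustrates the underlying idea in a heavily simplified form: each image is reduced to a few parametric (here, Gaussian) atoms by a greedy decomposition, and a global translation is then estimated from the relative displacements of matched atoms. The atom family, the decomposition, and the matching rule are illustrative assumptions, not the registration algorithm analysed in the paper.

        import numpy as np

        def atom(shape, cx, cy, sigma=3.0):
            """Unit-norm isotropic Gaussian atom centred at (cx, cy) (assumed atom family)."""
            y, x = np.mgrid[:shape[0], :shape[1]]
            g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            return g / np.linalg.norm(g)

        def sparse_approx(img, n_atoms=3):
            """Matching-pursuit style decomposition over a grid of candidate atom centres."""
            residual, params = img.astype(float).copy(), []
            centres = [(cx, cy) for cx in range(img.shape[1]) for cy in range(img.shape[0])]
            for _ in range(n_atoms):
                best = max(centres, key=lambda c: abs(np.sum(residual * atom(img.shape, *c))))
                a = atom(img.shape, *best)
                residual -= np.sum(residual * a) * a
                params.append(best)
            return params

        # Two toy "images": the second is the first translated by (+4, +2) pixels.
        centres_true = [(12, 20), (30, 15), (25, 35)]
        base = sum(atom((48, 48), cx, cy) for cx, cy in centres_true)
        moved = sum(atom((48, 48), cx + 4, cy + 2) for cx, cy in centres_true)

        # Estimate the global translation from relative displacements of matched atoms
        # (atoms matched greedily by nearest centre).
        p1, p2 = sparse_approx(base), sparse_approx(moved)
        shifts = []
        for cx, cy in p1:
            nearest = min(p2, key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
            shifts.append((nearest[0] - cx, nearest[1] - cy))
        print("estimated translation:", np.median(np.array(shifts), axis=0))  # expect ~[4, 2]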

    CoSaMP: Iterative signal recovery from incomplete and inaccurate samples

    Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For many cases of interest, the running time is just O(N*log^2(N)), where N is the length of the signal.
    Comment: 30 pages. Revised. Presented at Information Theory and Applications, 31 January 2008, San Diego
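
    A compact sketch of the CoSaMP iteration described above (form a signal proxy from the residual, identify the 2s largest proxy entries, merge with the current support, solve a least-squares problem on the merged support, and prune to s terms). The problem sizes and fixed iteration count are assumptions for illustration rather than the authors' reference implementation.

        import numpy as np

        def cosamp(A, y, s, n_iter=30):
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                proxy = A.T @ (y - A @ x)                    # signal proxy from the residual
                omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest proxy entries
                T = np.union1d(omega, np.flatnonzero(x))     # merge with current support
                b = np.zeros(A.shape[1])
                b[T], *_ = np.linalg.lstsq(A[:, T], y, rcond=None)   # least squares on T
                keep = np.argsort(np.abs(b))[-s:]            # prune to the s largest entries
                x = np.zeros(A.shape[1])
                x[keep] = b[keep]
            return x

        rng = np.random.default_rng(0)
        m, n, s = 90, 300, 6
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        y = A @ x_true + 0.001 * rng.standard_normal(m)
        x_hat = cosamp(A, y, s)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))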