    On the effect of image denoising on galaxy shape measurements

    Weak gravitational lensing is a very sensitive way of measuring cosmological parameters, including those describing dark energy, and of testing current theories of gravitation. In practice, this requires exquisite measurement of the shapes of billions of galaxies over large areas of the sky, as may be obtained with the EUCLID and WFIRST satellites. For a given survey depth, applying image denoising to the data both improves the accuracy of the shape measurements and increases the number density of galaxies with a measurable shape. We perform simple tests of three different denoising techniques, using synthetic data. We propose a new and simple denoising method, based on wavelet decomposition of the data and Wiener filtering of the resulting wavelet coefficients. When applied to the GREAT08 challenge dataset, this technique allows us to improve the quality factor of the measurement (Q, as defined by GREAT08) by up to a factor of two. We demonstrate that the typical pixel size of the EUCLID optical channel will allow us to use image denoising. (Comment: accepted for publication in A&A; 8 pages, 5 figures.)
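    The wavelet-domain Wiener filtering this abstract describes can be sketched in a few lines with PyWavelets. This is a minimal illustration, not the paper's method: the 'db2' wavelet, the three decomposition levels, and the per-coefficient empirical Wiener shrinkage with a MAD noise estimate are all assumptions made here.

```python
# Minimal sketch: Wiener-style shrinkage of 2-D wavelet coefficients.
# Assumptions (not from the paper): 'db2' wavelet, 3 levels, noise sigma
# estimated from the finest diagonal subband via the MAD estimator.
import numpy as np
import pywt

def wavelet_wiener_denoise(image, wavelet="db2", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]  # leave the coarse approximation untouched
    for detail in coeffs[1:]:
        filtered = []
        for band in detail:
            # Empirical Wiener gain: S / (S + N), with S = max(c^2 - sigma^2, 0).
            signal_var = np.maximum(band ** 2 - sigma ** 2, 0.0)
            filtered.append(band * signal_var / (signal_var + sigma ** 2 + 1e-12))
        out.append(tuple(filtered))
    return pywt.waverec2(out, wavelet)
```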

    The curvelet transform for image denoising

    We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
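    The "simple thresholding of the curvelet coefficients" the abstract reports can be illustrated with any overcomplete transform. Since no standard curvelet library is assumed here, the sketch below substitutes PyWavelets' stationary (undecimated) wavelet transform; the k = 3 threshold scale and the 'db2' wavelet are illustrative choices, not the paper's setup.

```python
# Sketch: hard thresholding in an overcomplete transform domain.
# Stand-in: an undecimated wavelet transform replaces the curvelet
# transform; k = 3 and 'db2' are illustrative assumptions.
import numpy as np
import pywt

def overcomplete_hard_threshold(image, wavelet="db2", level=3, k=3.0):
    # Noise estimate from the finest diagonal subband of one DWT step.
    cD = pywt.dwt2(image, wavelet)[1][2]
    sigma = np.median(np.abs(cD)) / 0.6745
    # Note: swt2 requires both image dimensions divisible by 2**level.
    coeffs = pywt.swt2(image, wavelet, level=level)
    thresholded = []
    for cA, (cH, cV, cD) in coeffs:
        # Zero every detail coefficient below k * sigma; keep the rest.
        bands = tuple(pywt.threshold(b, k * sigma, mode="hard")
                      for b in (cH, cV, cD))
        thresholded.append((cA, bands))
    return pywt.iswt2(thresholded, wavelet)
```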

    Wavelet-based denoising by customized thresholding

    The problem of estimating a signal corrupted by additive noise has long been of interest to researchers for practical as well as theoretical reasons. Many traditional denoising methods use linear techniques such as Wiener filtering. Recently, nonlinear methods, especially those based on wavelets, have become increasingly popular, owing to a number of advantages over linear methods. Wavelet thresholding has been shown to have near-optimal properties in the minimax sense and to guarantee a better rate of convergence, despite its simplicity. Although much work has been done on wavelet thresholding, most of it has focused on statistical modeling of the wavelet coefficients and the optimal choice of thresholds. In this paper, we propose a customized thresholding function that can significantly improve denoising results. Simulation results demonstrate the advantage of the new thresholding function.
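    The abstract does not give the functional form of the proposed customized threshold, so the sketch below shows only the two classical rules it is positioned against, plus the non-negative garrote as one well-known compromise between them; none of these is the paper's function.

```python
# Classical wavelet-coefficient thresholding rules. The paper's customized
# function is not specified in the abstract; the garrote is included only
# as a known compromise between soft and hard thresholding.
import numpy as np

def hard_threshold(x, t):
    # Keep coefficients with magnitude above t; zero the rest.
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    # Shrink surviving coefficients toward zero by t (adds bias).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def garrote_threshold(x, t):
    # Non-negative garrote: hard-like for large |x|, soft-like near t.
    safe_x = np.where(x == 0, 1.0, x)
    return np.where(np.abs(x) > t, x - t ** 2 / safe_x, 0.0)
```

    Soft thresholding is continuous but biases large coefficients; hard thresholding leaves large coefficients unbiased but is discontinuous at the threshold. Balancing that trade-off is exactly what a customized thresholding function aims at.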

    Compressive Imaging via Approximate Message Passing with Image Denoising

    We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reconstruct the original input signal well, a good denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the "amplitude-scale-invariant Bayes estimator" (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean square error (MSE) than existing compressive imaging algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter. (Comment: 15 pages, 2 tables, 7 figures; to appear in IEEE Trans. Signal Process.)
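    The AMP loop the abstract refers to, with a plug-in scalar denoiser, has a compact generic form. The sketch below uses soft thresholding in place of the paper's ABE and adaptive-Wiener denoisers; alpha and the iteration count are illustrative values, not tuned ones.

```python
# Generic AMP iteration with a plug-in scalar denoiser (soft threshold
# here; the paper plugs in ABE or an adaptive Wiener filter instead).
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_recover(y, A, iters=30, alpha=1.5):
    m, n = A.shape          # A assumed to have (roughly) unit-norm columns
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(m)   # effective noise level
        pseudo = x + A.T @ z                   # scalar-denoising input
        x = soft(pseudo, alpha * tau)          # plug-in denoiser
        # Onsager correction: residual scaled by the denoiser's mean
        # derivative (for soft thresholding, the fraction of nonzeros).
        z = y - A @ x + (z / m) * np.count_nonzero(x)
    return x
```

    The Onsager term is what keeps the denoiser input approximately Gaussian-distorted at every iteration, which is why a simple scalar denoiser suffices inside the loop.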