    On the effect of image denoising on galaxy shape measurements

    Weak gravitational lensing is a very sensitive way of measuring cosmological parameters, including dark energy, and of testing current theories of gravitation. In practice, this requires exquisite measurement of the shapes of billions of galaxies over large areas of the sky, as may be obtained with the EUCLID and WFIRST satellites. For a given survey depth, applying image denoising to the data both improves the accuracy of the shape measurements and increases the number density of galaxies with a measurable shape. We perform simple tests of three different denoising techniques, using synthetic data. We propose a new and simple denoising method, based on wavelet decomposition of the data and Wiener filtering of the resulting wavelet coefficients. When applied to the GREAT08 challenge dataset, this technique allows us to improve the quality factor of the measurement (Q; GREAT08 definition) by up to a factor of two. We demonstrate that the typical pixel size of the EUCLID optical channel will allow us to use image denoising. (Comment: Accepted for publication in A&A. 8 pages, 5 figures.)
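
    The wavelet-domain Wiener filtering the abstract describes can be sketched in a few lines of Python with the PyWavelets package. This is a minimal illustration, not the paper's exact method: each detail coefficient is shrunk by an empirical Wiener gain s^2 / (s^2 + sigma^2), and the wavelet choice, decomposition depth, and the assumption of a known noise level sigma are all illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_wiener_denoise(image, sigma, wavelet="db4", levels=3):
    """Denoise an image by Wiener-filtering its wavelet coefficients.

    sigma is the (assumed known) standard deviation of additive noise.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]  # keep the coarse approximation untouched
    for detail in coeffs[1:]:
        shrunk = []
        for d in detail:  # horizontal, vertical, diagonal sub-bands
            # Empirical signal power: observed power minus noise power,
            # floored at zero; this yields the classic Wiener gain.
            s2 = np.maximum(d**2 - sigma**2, 0.0)
            shrunk.append(d * s2 / (s2 + sigma**2))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```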

    Classification of human parasitic worm using microscopic image processing technique

    Human parasitic infections cause disease whether the parasites live inside the body (endoparasites) or on its exterior (ectoparasites). Human intestinal parasitic worms, transmitted through air, food, and water, are a cause of major diseases and health problems. In this study, a technique is proposed to identify two types of parasites in human fecal samples, namely the eggs of the worms. In this strategy, digital image processing methods such as noise reduction, contrast enhancement, and morphological operations are applied to extract the egg images based on their features. The technique suggested in this study enables us to classify two different parasite eggs from their microscopic images: roundworms (Ascaris lumbricoides ova, ALO) and whipworms (Trichuris trichiura ova, TTO). The proposed recognition method includes three stages. The first stage is a pre-processing sub-system, which is used to obtain unique features after performing noise reduction, contrast enhancement, edge enhancement, and edge detection. The next stage is an extraction mechanism based on five features drawn from three characteristics (shape, shell smoothness, and size). In the final stage, the Filtration with Determinations Thresholds System (F-DTS) classifier uses ranges of feature values as a database to identify and classify the two types of parasites. The overall success rates are 93% and 94% for Ascaris lumbricoides and Trichuris trichiura, respectively.
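
    A hedged Python/OpenCV sketch of the three-stage pipeline follows. The feature ranges (ALO_RANGE, TTO_RANGE) are hypothetical placeholders, and area plus circularity stand in for the paper's five features; the real F-DTS thresholds are derived from its training data.

```python
import cv2
import numpy as np

# Hypothetical feature ranges for illustration only; the paper's actual
# F-DTS thresholds differ from these made-up values.
ALO_RANGE = {"area": (2000, 8000), "circularity": (0.75, 1.00)}
TTO_RANGE = {"area": (1000, 5000), "circularity": (0.45, 0.75)}

def in_range(value, lo_hi):
    lo, hi = lo_hi
    return lo <= value <= hi

def classify_eggs(path):
    # Stage 1: pre-processing (noise reduction, contrast enhancement,
    # edge enhancement/detection).
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # noise reduction
    gray = cv2.equalizeHist(gray)              # contrast enhancement
    edges = cv2.Canny(gray, 50, 150)           # edge detection

    # Stage 2: feature extraction from candidate contours.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if perim == 0:
            continue
        circularity = 4 * np.pi * area / perim**2  # shape descriptor

        # Stage 3: threshold-based decision, in the spirit of F-DTS.
        if in_range(area, ALO_RANGE["area"]) and \
           in_range(circularity, ALO_RANGE["circularity"]):
            labels.append("ALO")
        elif in_range(area, TTO_RANGE["area"]) and \
             in_range(circularity, TTO_RANGE["circularity"]):
            labels.append("TTO")
    return labels
```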

    Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with Prior Information

    Commonly employed reconstruction algorithms in compressed sensing (CS) use the L2 norm as the metric for the residual error. However, it is well known that least squares (LS) based estimators are highly sensitive to outliers in the measurement vector, leading to poor performance when the noise no longer follows the Gaussian assumption but is instead better characterized by heavier-than-Gaussian-tailed distributions. In this paper, we propose a robust iterative hard thresholding (IHT) algorithm for reconstructing sparse signals in the presence of impulsive noise. To address this problem, we use a Lorentzian cost function instead of the L2 cost function employed by the traditional IHT algorithm. We also modify the algorithm to incorporate prior signal information in the recovery process. Specifically, we study the case of CS with partially known support. The proposed algorithm is a fast method with computational load comparable to the LS-based IHT, whilst having the advantage of robustness against heavy-tailed impulsive noise. Sufficient conditions for stability are studied and a reconstruction error bound is derived. We also derive sufficient conditions for stable sparse signal recovery with partially known support. Theoretical analysis shows that including prior support information relaxes the conditions for successful reconstruction. Simulation results demonstrate that the Lorentzian-based IHT algorithm significantly outperforms commonly employed sparse reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments. Numerical results also demonstrate that including partially known support improves the performance of the proposed algorithm, thereby requiring fewer samples to yield an approximate reconstruction. (Comment: 28 pages, 9 figures, accepted in IEEE Transactions on Signal Processing.)
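
    A minimal NumPy sketch of the Lorentzian IHT idea follows. It replaces the least-squares gradient of standard IHT with the gradient of the Lorentzian cost sum(log(1 + (r_i/gamma)^2)) and forces a user-supplied known support into every hard-thresholding step. The step size mu, scale gamma, and iteration count are illustrative assumptions; the paper's stability conditions on these parameters are not enforced here.

```python
import numpy as np

def lorentzian_iht(y, A, s, gamma, known_support=(), mu=1.0, iters=200):
    """Sketch of Lorentzian iterative hard thresholding with partially
    known support. y: measurements, A: sensing matrix, s: sparsity level."""
    m, n = A.shape
    x = np.zeros(n)
    known = np.asarray(known_support, dtype=int)
    for _ in range(iters):
        r = y - A @ x
        # Lorentzian score: large (impulsive) residuals are down-weighted,
        # unlike the LS gradient A.T @ r used by standard IHT.
        g = A.T @ (r / (gamma**2 + r**2))
        z = x + mu * g
        # Hard threshold: always keep the known support, then fill the
        # remaining slots with the largest other entries of z.
        keep = set(known.tolist())
        for i in np.argsort(-np.abs(z)):
            if len(keep) >= s:
                break
            keep.add(int(i))
        x = np.zeros(n)
        idx = np.fromiter(keep, dtype=int)
        x[idx] = z[idx]
    return x
```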

    An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement

    Unwanted spike noise in a digital signal is a common problem in digital filtering. Sometimes, however, the spikes are wanted and other, superimposed signals are unwanted; linear, time-invariant (LTI) filtering is then ineffective because the spikes are wideband, overlapping with independent noise in the frequency domain. No LTI filter can separate them, necessitating nonlinear filtering. Yet there are applications in which the noise also includes drift or smooth signals for which LTI filters are ideal. We describe a nonlinear filter, formulated as the solution to an elastic net regularization problem, which attenuates band-limited signals and independent noise while enhancing superimposed spikes. Making use of known analytic solutions, a novel, approximate path-following algorithm is given that provides a good filtered output with reduced computational effort compared to standard convex optimization methods. Accurate performance is shown on real, noisy electrophysiological recordings of neural spikes.
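
    To make the analytic-solution aspect concrete, the sketch below assumes the simplest case of an identity design, for which the elastic net has a known closed-form solution (soft thresholding followed by a shrinkage factor), and sweeps the L1 weight along a path. The paper's actual filter and its approximate path-following scheme are more elaborate than this.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_spike_path(y, lam1_path, lam2):
    """Closed-form elastic net solutions along a path of L1 weights,
    assuming an identity design:
      argmin_x 0.5*||y - x||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
        = soft(y, lam1) / (1 + lam2).
    Sweeping lam1 from large to small traces a path from an empty output
    toward y, with high-amplitude spikes entering the solution first."""
    return [soft(y, lam1) / (1.0 + lam2) for lam1 in lam1_path]
```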

    Estimation from quantized Gaussian measurements: when and how to use dither

    Subtractive dither is a powerful method for removing the signal dependence of quantization noise in coarsely quantized signals. However, estimation from dithered measurements often naively applies the sample mean or midrange, even when the total noise is not well described by a Gaussian or uniform distribution. We show that the generalized Gaussian distribution approximately describes subtractively dithered, quantized samples of a Gaussian signal. Furthermore, a generalized Gaussian fit leads to simple estimators based on order statistics that match the performance of more complicated maximum likelihood estimators requiring iterative solvers. The order statistics-based estimators outperform both the sample mean and midrange for nontrivial sums of Gaussian and uniform noise. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. Specifically, we find subtractive dither to be beneficial when the ratio between the Gaussian standard deviation and the quantization interval length is roughly less than one-third. When that ratio is also greater than 0.822/K^0.930 for a number of measurements K > 20, the estimators we present are more efficient than the midrange. (https://arxiv.org/abs/1811.06856; accepted manuscript.)
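
    The setup can be checked numerically. The sketch below simulates subtractively dithered, uniformly quantized Gaussian measurements and compares the sample mean with the midrange by Monte Carlo. The mid-riser quantizer and the specific parameter values are illustrative assumptions, and the paper's order-statistics estimators are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def compare_estimators(theta, sigma, delta, K, trials=5000):
    """Monte Carlo MSE of sample mean vs. midrange for subtractively
    dithered, quantized Gaussian measurements.

    theta: true value; sigma: Gaussian noise std; delta: quantizer step.
    """
    mse_mean = mse_mid = 0.0
    for _ in range(trials):
        noise = rng.normal(0.0, sigma, K)
        dither = rng.uniform(-delta / 2, delta / 2, K)
        # Mid-riser uniform quantizer, then subtract the known dither.
        q = delta * np.floor((theta + noise + dither) / delta) + delta / 2
        z = q - dither
        mse_mean += (z.mean() - theta) ** 2
        mse_mid += ((z.max() + z.min()) / 2 - theta) ** 2
    return mse_mean / trials, mse_mid / trials

# Example: sigma/delta = 0.1 < 1/3, so dither is expected to help; for
# K = 50 > 20, 0.822 / 50**0.930 ~= 0.022 < 0.1, the regime where the
# paper's order-statistics estimators would also beat the midrange.
print(compare_estimators(theta=0.3, sigma=0.1, delta=1.0, K=50))
```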