
    Optimum non linear binary image restoration through linear grey-scale operations

    Non-linear image processing operators give excellent results in a number of image processing tasks such as restoration and object recognition. However, they are frequently excluded from solutions because the system designer does not wish to introduce additional hardware or algorithms, and because their design can appear ad hoc. In practice, the median filter is often used even though it is rarely optimal. This paper explains how various non-linear image processing operators may be implemented on a basic linear image processing system using only convolution and thresholding operations. The paper is aimed at image processing system developers who wish to include some non-linear operators without introducing additional system capabilities such as extra hardware components or software toolboxes. It may also benefit the interested reader wishing to learn more about non-linear operators and alternative methods of design and implementation. The non-linear tools include various components of mathematical morphology, median and weighted median operators, and various order-statistic filters. As well as describing novel algorithms for implementation within a linear system, the paper explains how the optimum filter parameters may be estimated for a given image processing task. This novel approach is based on the weight monotonic property and is a direct rather than an iterated method.
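    To make the core idea concrete, the sketch below (my own illustration, not the paper's algorithms) shows how binary erosion, dilation, and the median filter can each be expressed as a convolution followed by a threshold. The 3x3 structuring element and test image are arbitrary, and a symmetric structuring element is assumed so that convolution and correlation coincide.

```python
import numpy as np
from scipy.ndimage import convolve

def erode(binary, selem):
    """Erosion: every 'on' pixel of the structuring element must land on a 1."""
    hits = convolve(binary.astype(int), selem, mode="constant", cval=0)
    return (hits >= selem.sum()).astype(int)

def dilate(binary, selem):
    """Dilation: at least one 'on' pixel of the structuring element lands on a 1."""
    hits = convolve(binary.astype(int), selem, mode="constant", cval=0)
    return (hits >= 1).astype(int)

def binary_median(binary, size=3):
    """Binary median filter = majority vote = box convolution + threshold."""
    window = np.ones((size, size), dtype=int)
    hits = convolve(binary.astype(int), window, mode="constant", cval=0)
    return (hits > (size * size) // 2).astype(int)

if __name__ == "__main__":
    img = (np.random.rand(64, 64) > 0.5).astype(int)   # toy binary image
    selem = np.ones((3, 3), dtype=int)                  # symmetric structuring element
    print(erode(img, selem).sum(), dilate(img, selem).sum(), binary_median(img).sum())
```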

    A proximal iteration for deconvolving Poisson noisy images using sparse representations

    We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are: first, we handle the Poisson noise properly by using the Anscombe variance-stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a non-smooth sparsity-promoting penalty over the image representation coefficients (e.g. the ℓ1-norm). Third, a fast iterative backward-forward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
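    The sketch below (a simplified illustration, not the authors' algorithm) shows the two building blocks the abstract names: the Anscombe transform that maps Poisson counts to approximately Gaussian data, and a generic forward-backward (proximal) iteration with an ℓ1 soft-threshold. For brevity it treats the stabilized problem as a plain linear least-squares model and uses the identity as the sparsifying dictionary, ignoring the non-linearity and the wavelet/curvelet dictionary of the actual paper; `blur` and `blur_adj` stand for a user-supplied blur operator and its adjoint.

```python
import numpy as np

def anscombe(y):
    """Anscombe transform: Poisson counts -> roughly unit-variance Gaussian data."""
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(y, blur, blur_adj, lam, step, n_iter=100):
    """ISTA-style iteration: gradient step on 0.5*||blur(x) - y||^2, then l1 prox.
    The sparsifying dictionary is the identity here; the paper thresholds
    wavelet/curvelet coefficients instead."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = blur_adj(blur(x) - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```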

    Electrocardiogram Baseline Wander Suppression Based on the Combination of Morphological and Wavelet Transformation Based Filtering

    One of the major noise components in the electrocardiogram (ECG) is baseline wander (BW). Effective methods for suppressing BW include the wavelet-based (WT) and the mathematical morphological filtering-based (MMF) algorithms. However, the T-wave distortions introduced by the WT and the rectangular/trapezoidal distortions introduced by MMF degrade the quality of the output signal. Hence, in this study, we introduce a method combining MMF and WT to overcome the shortcomings of both existing methods. To demonstrate the effectiveness of the proposed method, artificial ECG signals containing a clinical BW are used for numerical simulation, and we also create a realistic model of baseline wander to compare the proposed method with other state-of-the-art methods commonly used in the literature. The results show that the BW suppression effect of the proposed method is better than that of the others. Also, the new method is capable of preserving the outline of the BW and avoiding waveform distortions caused by the morphology filter, thereby obtaining an enhanced quality of ECG.
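    As a reference point for the morphological half of the combination, the sketch below (illustrative only; the window length and the exact opening/closing sequence are assumptions, not the paper's settings) shows the classic MMF baseline estimate: an opening followed by a closing with a long flat structuring element tracks the slow baseline, which is then subtracted from the ECG.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_baseline(ecg, fs, win_sec=0.6):
    """Estimate slow baseline wander with flat opening/closing operators.
    The structuring element is chosen longer than the QRS complex and T wave
    so that cardiac activity is removed while the baseline survives."""
    size = int(win_sec * fs)
    return grey_closing(grey_opening(ecg, size=size), size=size)

def remove_baseline(ecg, fs):
    """Baseline-corrected ECG: original signal minus the morphological estimate."""
    return ecg - morphological_baseline(ecg, fs)
```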

    Multiresolution signal decomposition schemes. Part 2: Morphological wavelets

    In its original form, the wavelet transform is a linear tool. However, it has been increasingly recognized that nonlinear extensions are possible. A major impulse to the development of nonlinear wavelet transforms has been given by the introduction of the lifting scheme by Sweldens. The aim of this report, which is a sequel to a previous report devoted exclusively to the pyramid transform, is to present an axiomatic framework encompassing most existing linear and nonlinear wavelet decompositions. Furthermore, it introduces some thus far unknown wavelets based on mathematical morphology, such as the morphological Haar wavelet, both in one and two dimensions. A general and flexible approach for the construction of nonlinear (morphological) wavelets is provided by the lifting scheme. This paper discusses one example in considerable detail, the max-lifting scheme, which has the intriguing property that it preserves local maxima in a signal over a range of scales, depending on how local or global these maxima are.
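    The sketch below is my own one-dimensional illustration of a morphological Haar-style decomposition in the spirit of the report (the report's exact operator definitions may differ): the approximation takes the minimum of each sample pair and the detail keeps their signed difference, so the analysis is nonlinear yet still exactly invertible.

```python
import numpy as np

def morph_haar_analysis(x):
    """x has even length; returns (approximation, detail)."""
    x0, x1 = x[0::2], x[1::2]
    approx = np.minimum(x0, x1)   # erosion-like, nonlinear approximation
    detail = x0 - x1              # signed detail retains what the min discards
    return approx, detail

def morph_haar_synthesis(approx, detail):
    """Exact inverse of the analysis step."""
    x0 = approx + np.maximum(detail, 0)
    x1 = approx - np.minimum(detail, 0)
    out = np.empty(2 * approx.size, dtype=approx.dtype)
    out[0::2], out[1::2] = x0, x1
    return out

if __name__ == "__main__":
    sig = np.array([3, 7, 2, 2, 9, 4, 5, 8])
    a, d = morph_haar_analysis(sig)
    assert np.array_equal(morph_haar_synthesis(a, d), sig)   # perfect reconstruction
```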

    Soft morphological filter optimization using a genetic algorithm for noise elimination

    Digital image quality is of importance in almost all image processing applications. Many different approaches have been proposed for restoring image quality, depending on the nature of the degradation. One of the most common problems causing such degradation is impulse noise. In general, the well-known median filters are preferred for eliminating different types of noise. Soft morphological filters were introduced more recently and have been used for many purposes. In this study, we present a Genetic Algorithm (GA) which combines different objectives as a weighted sum under a single evaluation function and generates a soft morphological filter to deal with impulse noise, after a training process with small images. The automatically generated filter performs better than the median filter and achieves comparable results to the best known filters from the literature over a set of benchmark instances that are larger than the training instances. Moreover, although the training process involves only images corrupted by impulse noise, the same evolved filter also performs better than the median filter for eliminating Gaussian noise.
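    The skeleton below sketches the weighted-sum GA idea described in the abstract; it is purely illustrative, and `apply_filter`, `mean_abs_error`, and `detail_loss` are hypothetical placeholders for the study's soft morphological filter encoding and its quality measures, which are not reproduced here.

```python
import random

def evaluate(chromosome, training_pairs, weights):
    """Weighted sum of objectives over (noisy, clean) training images.
    apply_filter / mean_abs_error / detail_loss are hypothetical helpers."""
    score = 0.0
    for noisy, clean in training_pairs:
        restored = apply_filter(chromosome, noisy)
        score += weights[0] * mean_abs_error(restored, clean) \
               + weights[1] * detail_loss(restored, clean)
    return score

def genetic_search(pop_size, n_genes, training_pairs, weights, n_generations=50):
    """Plain GA: truncation selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(n_generations):
        ranked = sorted(pop, key=lambda c: evaluate(c, training_pairs, weights))
        parents = ranked[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]
            if random.random() < 0.05:
                child[random.randrange(n_genes)] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda c: evaluate(c, training_pairs, weights))
```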

    Detection of dirt impairments from archived film sequences : survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction requires a high degree of complexity and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion compensation have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performances are compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among these algorithms and with promising directions for future research.
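    The simplest interframe idea such surveys evaluate can be sketched as follows (my own toy illustration, not taken from the paper): because dirt is temporally impulsive, a pixel that deviates strongly, and with the same sign, from both the previous and the next frame is a dirt candidate. Real detectors add motion compensation; the threshold below is an arbitrary assumption.

```python
import numpy as np

def dirt_mask(prev_frame, cur_frame, next_frame, threshold=30.0):
    """Spike-detection-style dirt mask on three consecutive grey-scale frames."""
    d_prev = cur_frame.astype(float) - prev_frame.astype(float)
    d_next = cur_frame.astype(float) - next_frame.astype(float)
    same_sign = np.sign(d_prev) == np.sign(d_next)
    impulsive = (np.abs(d_prev) > threshold) & (np.abs(d_next) > threshold)
    return same_sign & impulsive
```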

    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
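    A toy instance of convex demixing (my own illustration, not from the article): the observation is the sum of a spiky component that is sparse in the identity basis and an oscillatory component that is sparse in an orthonormal DCT basis, and a proximal-gradient iteration on the convex objective 0.5*||y - x1 - x2||^2 + lam1*||x1||_1 + lam2*||DCT(x2)||_1 pulls the two apart. All sizes, weights, and the step size are illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(x, t):
    """Soft threshold: proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def demix(y, lam1=0.5, lam2=0.5, step=0.5, n_iter=500):
    x1 = np.zeros_like(y)   # spiky component (sparse in identity basis)
    x2 = np.zeros_like(y)   # oscillatory component (sparse in DCT basis)
    for _ in range(n_iter):
        r = x1 + x2 - y                       # shared gradient of the quadratic term
        x1 = soft(x1 - step * r, step * lam1)
        c = dct(x2 - step * r, norm="ortho")  # prox of ||DCT(.)||_1 via the transform
        x2 = idct(soft(c, step * lam2), norm="ortho")
    return x1, x2

if __name__ == "__main__":
    n = 256
    t = np.arange(n)
    smooth = np.cos(2 * np.pi * 5 * t / n)              # sparse in the DCT basis
    spikes = np.zeros(n); spikes[[30, 100, 200]] = 3.0  # sparse in the identity basis
    x1, x2 = demix(smooth + spikes)
    print(np.sort(np.abs(x1).argsort()[-3:]))           # indices of recovered spikes
```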