
    Scale-space and edge detection using anisotropic diffusion

    The scale-space technique introduced by Witkin involves generating coarser resolution images by convolving the original image with a Gaussian kernel. This approach has a major drawback: it is difficult to obtain accurately the locations of the “semantically meaningful” edges at coarse scales. In this paper we suggest a new definition of scale-space, and introduce a class of algorithms that realize it using a diffusion process. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing in preference to interregion smoothing. It is shown that the “no new maxima should be generated at coarse scales” property of conventional scale-space is preserved. As the region boundaries in our approach remain sharp, we obtain a high quality edge detector which successfully exploits global information. Experimental results are shown on a number of images. The algorithm involves elementary, local operations replicated over the image, making parallel hardware implementations feasible.
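
    The diffusion scheme outlined above is the classical Perona-Malik formulation: each pixel is updated from its neighbours with a conductance that decays with the local gradient magnitude. A minimal NumPy sketch of one explicit variant is given below; the exponential conductance, step size and periodic boundary handling via np.roll are illustrative simplifications, not necessarily the choices used in the paper's experiments.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Explicit Perona-Malik-style diffusion on a 2D grayscale image.

    kappa controls how strongly large gradients (edges) inhibit smoothing;
    dt is the explicit time step (kept <= 0.25 for stability).
    """
    u = img.astype(float)
    for _ in range(n_iter):
        # Finite differences to the four nearest neighbours (periodic borders).
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        # Conductance decreases where the local gradient is large,
        # favouring intra-region over inter-region smoothing.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```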

    Enhancement and Restoration of Microscopic Images Corrupted with Poisson's Noise Using a Nonlinear Partial Differential Equation-based Filter

    An inherent characteristic of many imaging modalities, such as fluorescence microscopy and other microscopic techniques, is intrinsic Poisson noise, which can degrade the captured image during its formation. A nonlinear complex diffusion-based filter adapted to Poisson noise is proposed in this paper to restore and enhance degraded microscopic images captured by imaging devices with photon-limited light detectors. The proposed filter is based on a maximum a posteriori (MAP) approach to the image reconstruction problem. Formulating the filtering problem as maximisation of the posterior is useful because it allows the Poisson likelihood term to be incorporated as a data-attachment term added to an image prior model. Here, a Gibbs image prior based on an energy functional defined in terms of the gradient norm of the image is used. The performance of the proposed scheme has been compared with other standard techniques available in the literature, such as the Wiener filter, the regularised filter, the Lucy-Richardson filter and another proposed nonlinear anisotropic diffusion-based filter, in terms of mean square error, peak signal-to-noise ratio, correlation parameter and mean structure similarity index map. The results show that the proposed complex diffusion-based filter adapted to Poisson noise performs better than the other filters, is a better choice for reducing intrinsic Poisson noise in digital microscopic images, and is also capable of preserving edges and radiometric information such as luminance and contrast of the restored image. Defence Science Journal, 2011, 61(5), pp. 452-461, DOI: http://dx.doi.org/10.14429/dsj.61.118
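
    For orientation, one common way to write such a MAP objective combines the Poisson negative log-likelihood as the data-attachment term with a Gibbs prior built from the gradient norm; the notation below (imaging operator H, prior weight λ) is generic and not taken from the paper.

```latex
% Generic MAP objective for Poisson-corrupted data f with prior weight \lambda;
% H denotes the imaging/blur operator (illustrative notation, not the paper's).
\begin{aligned}
\hat{u} &= \arg\max_{u} \; p(u \mid f)
         = \arg\min_{u} \; \big[-\log p(f \mid u) - \log p(u)\big] \\
        &= \arg\min_{u} \; \sum_{i} \big[(Hu)_i - f_i \log (Hu)_i\big]
           \;+\; \lambda \sum_{i} \lVert (\nabla u)_i \rVert .
\end{aligned}
```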

    A Framework for Directional and Higher-Order Reconstruction in Photoacoustic Tomography

    Photoacoustic tomography is a hybrid imaging technique that combines high optical tissue contrast with high ultrasound resolution. Direct reconstruction methods such as filtered backprojection, time reversal and least squares suffer from curved line artefacts and blurring, especially in the case of limited angles or strong noise. In recent years, there has been great interest in regularised iterative methods. These methods employ prior knowledge on the image to provide higher quality reconstructions. However, easy comparisons between regularisers and their properties are limited, since many tomography implementations heavily rely on the specific regulariser chosen. To overcome this bottleneck, we present a modular reconstruction framework for photoacoustic tomography. It enables easy comparisons between regularisers with different properties, e.g. nonlinear, higher-order or directional. We solve the underlying minimisation problem with an efficient first-order primal-dual algorithm. Convergence rates are optimised by choosing an operator dependent preconditioning strategy. Our reconstruction methods are tested on challenging 2D synthetic and experimental data sets. They outperform direct reconstruction approaches for strong noise levels and limited angle measurements, offering immediate benefits in terms of acquisition time and quality. This work provides a basic platform for the investigation of future advanced regularisation methods in photoacoustic tomography. Comment: submitted to "Physics in Medicine and Biology". Changes from v1 to v2: regularisation with directional wavelets has been added; new experimental tests have been included
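
    For reference, the first-order primal-dual method referred to above is usually stated for a saddle-point problem of the form min_x max_y ⟨Kx, y⟩ - F*(y) + G(x); the standard (Chambolle-Pock) iteration is sketched below in generic notation, while the operator-dependent preconditioning used in the paper is omitted.

```latex
% Standard first-order primal-dual (Chambolle-Pock) iteration for
% \min_x \max_y \langle Kx, y\rangle - F^{*}(y) + G(x);
% the paper's operator-dependent preconditioning is not reproduced here.
\begin{aligned}
y^{n+1} &= \operatorname{prox}_{\sigma F^{*}}\!\big(y^{n} + \sigma K \bar{x}^{n}\big),\\
x^{n+1} &= \operatorname{prox}_{\tau G}\!\big(x^{n} - \tau K^{*} y^{n+1}\big),\\
\bar{x}^{n+1} &= x^{n+1} + \theta\,\big(x^{n+1} - x^{n}\big).
\end{aligned}
```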

    Robust Feature Detection and Local Classification for Surfaces Based on Moment Analysis

    The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions, respectively, is as difficult as it is indispensable for many surface processing applications. Usually, the feature detection is done via a local curvature analysis. When dealing with large triangular and irregular grids, e.g. generated via a marching cubes algorithm, the detectors are tedious to treat and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zero and first moments on the discrete surface. The corresponding integral quantities are stable to compute and give less noisy results than discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications are segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.
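
    As a rough illustration of the idea, the sketch below computes zeroth and first moments of ball neighbourhoods on a point-sampled surface and uses the offset between a vertex and its neighbourhood barycentre as a feature indicator; this is a simplified stand-in for the paper's integral moments, with the ball radius playing the role of the scale parameter.

```python
import numpy as np

def local_moments(vertices, radius):
    """Point-sampled approximation of local zeroth and first moments.

    vertices: (N, 3) array of surface sample positions.
    Returns the zeroth moment (a point-count proxy for the local measure of
    the ball neighbourhood) and the normalised offset between each vertex and
    its neighbourhood barycentre; the offset is small on flat regions and
    grows near edges and corners, so it can serve as a feature indicator.
    """
    n = len(vertices)
    m0 = np.empty(n)        # zeroth moment: size of the ball neighbourhood
    offset = np.empty(n)    # normalised barycentre offset (feature indicator)
    for i, v in enumerate(vertices):
        nbrs = vertices[np.linalg.norm(vertices - v, axis=1) <= radius]
        m0[i] = len(nbrs)                         # local measure of B_r(v)
        barycentre = nbrs.mean(axis=0)            # first moment / barycentre
        offset[i] = np.linalg.norm(barycentre - v) / radius
    return m0, offset
```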

    Multiscale Astronomical Image Processing Based on Nonlinear Partial Differential Equations

    Astronomical applications of recent advances in the field of nonastronomical image processing are presented. These innovative methods, applied to multiscale astronomical images, increase the signal-to-noise ratio and do not smear point sources or extended diffuse structures, and are thus a highly useful preliminary step for the detection of different features, including point sources, smoothing of clumpy data, and removal of contaminants from background maps. We show how the new methods, combined with other image processing algorithms, unveil fine diffuse structures while at the same time enhancing the detection of localized objects, thus facilitating interactive morphology studies and paving the way for the automated recognition and classification of different features. We have also developed a new application framework for astronomical image processing that implements some recent advances made in computer vision and modern image processing, along with original algorithms based on nonlinear partial differential equations. The framework enables the user to easily set up and customize an image-processing pipeline interactively; it has various common and new visualization features and provides access to many astronomy data archives. Altogether, the results presented here demonstrate the first implementation of a novel synergistic approach based on the integration of image processing, image visualization, and image quality assessment.

    A new Edge Detector Based on Parametric Surface Model: Regression Surface Descriptor

    In this paper we present a new methodology for edge detection in digital images. The first originality of the proposed method is to consider image content as a parametric surface. Then, an original parametric local model of this surface representing image content is proposed. The few parameters involved in the proposed model are shown to be very sensitive to discontinuities in the surface, which correspond to edges in the image content. This naturally leads to the design of an efficient edge detector. Moreover, a thorough analysis of the proposed model also allows us to explain how these parameters can be used to obtain edge descriptors such as orientations and curvatures. In practice, the proposed methodology offers two main advantages. First, it has high customization possibilities in order to be adjusted to a wide range of different problems, from coarse to fine scale edge detection. Second, it is very robust to blurring and additive noise. Numerical results are presented to emphasise these properties and to confirm the efficiency of the proposed method through a comparative study with other edge detectors. Comment: 21 pages, 13 figures and 2 tables
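
    The paper's specific regression surface model is not reproduced here; as a generic illustration of fitting a parametric local surface and reading edge strength off its parameters, the sketch below fits a plane to each pixel window by least squares and uses the slope magnitude as an edge response.

```python
import numpy as np

def local_plane_fit_response(img, half=2):
    """Illustrative local parametric fit, not the paper's descriptor:
    a plane z = a*x + b*y + c is least-squares fitted to each
    (2*half+1)^2 window, and the slope magnitude sqrt(a^2 + b^2) is
    returned as an edge response that spikes across discontinuities.
    """
    h, w = img.shape
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    A = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)], axis=1)
    pinv = np.linalg.pinv(A)                    # precomputed LS solver
    resp = np.zeros_like(img, dtype=float)
    for y in range(half, h - half):
        for x in range(half, w - half):
            z = img[y - half:y + half + 1, x - half:x + half + 1].ravel()
            a, b, _ = pinv @ z                  # fitted plane parameters
            resp[y, x] = np.hypot(a, b)         # slope magnitude as response
    return resp
```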

    Development and evaluation of perceptually adapted colour gradients

    In this study a set of colour gradients based on colour visual perception, which use the International Commission on Illumination (CIE) L*a*b* colour space, is presented. The main objective is to study how the colour difference equations developed by CIE affect the estimation of the gradients in terms of correlation with colour visual perception. To evaluate the gradients' performance, they are used as the basis of an edge detector based on level sets. A set of synthetic images was designed to evaluate which edge detector, and consequently which colour difference equation, is most correlated with human perception of colour. Both quantitative and qualitative measurements showed that the results obtained using CIE94 have a higher correlation with what the human eye can perceive.
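
    As an illustration of how a CIE L*a*b* colour difference equation can drive a gradient, the sketch below builds a gradient magnitude from CIE94 differences between adjacent pixels (graphic-arts weights); the exact gradient construction used in the study may differ.

```python
import numpy as np

def delta_e94(lab1, lab2, k1=0.045, k2=0.015):
    """CIE94 colour difference between two L*a*b* arrays of equal shape
    (graphic-arts weights, kL = kC = kH = 1)."""
    dL = lab1[..., 0] - lab2[..., 0]
    da = lab1[..., 1] - lab2[..., 1]
    db = lab1[..., 2] - lab2[..., 2]
    c1 = np.hypot(lab1[..., 1], lab1[..., 2])
    c2 = np.hypot(lab2[..., 1], lab2[..., 2])
    dC = c1 - c2
    dH2 = np.maximum(da**2 + db**2 - dC**2, 0.0)   # hue difference squared
    sC, sH = 1.0 + k1 * c1, 1.0 + k2 * c1
    return np.sqrt(dL**2 + (dC / sC)**2 + dH2 / sH**2)

def perceptual_gradient(lab):
    """Gradient magnitude of an (H, W, 3) L*a*b* image built from CIE94
    differences between horizontally and vertically adjacent pixels
    (a sketch of the idea, not the study's exact gradient definition)."""
    gx = delta_e94(lab[:, 1:], lab[:, :-1])[1:, :]
    gy = delta_e94(lab[1:, :], lab[:-1, :])[:, 1:]
    return np.hypot(gx, gy)
```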

    An adaptive noise removal approach for restoration of digital images corrupted by multimodal noise

    Data smoothing algorithms are commonly applied to reduce the level of noise and eliminate the weak textures contained in digital images. Anisotropic diffusion algorithms form a distinct category of noise removal approaches that implement the smoothing process locally in agreement with image features such as edges, which are typically determined by applying diverse partial differential equation (PDE) models. While this approach is opportune, since it allows the implementation of feature-preserving data smoothing strategies, the inclusion of the PDE models in the formulation of the data smoothing process compromises the performance of the anisotropic diffusion schemes when applied to data corrupted by non-Gaussian and multimodal image noise. In this paper we first evaluate the positive aspects of including a multi-scale edge detector, based on a generalisation of the Di Zenzo operator, in the formulation of the anisotropic diffusion process. Then, we introduce a new approach that embeds vector median filtering into the discrete implementation of the anisotropic diffusion in order to improve the performance of the noise removal algorithm when applied to multimodal noise suppression. To evaluate the performance of the proposed data smoothing strategy, a large number of experiments on various types of digital images corrupted by multimodal noise were conducted. Keywords: anisotropic diffusion, vector median filtering, feature preservation, multimodal noise, noise removal
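
    The vector median filter that is embedded into the discrete diffusion scheme selects, in each window, the colour vector with the smallest summed distance to all the other vectors in that window; a plain, stand-alone sketch of that filter is given below, with the window size and the Euclidean distance as illustrative choices, and without the interleaving with the diffusion updates described in the paper.

```python
import numpy as np

def vector_median_filter(img, half=1):
    """Vector median filter for a colour image of shape (H, W, 3): in each
    (2*half+1)^2 window the output pixel is the vector whose summed Euclidean
    distance to all other vectors in the window is smallest, which suppresses
    impulsive colour noise without averaging across edges.
    """
    h, w, _ = img.shape
    out = img.astype(float)
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = img[y - half:y + half + 1, x - half:x + half + 1]
            vecs = win.reshape(-1, 3).astype(float)
            # Pairwise distances; keep the vector with minimal total distance.
            d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=2)
            out[y, x] = vecs[d.sum(axis=1).argmin()]
    return out
```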