3,098 research outputs found

    Piecewise Linear Model-Based Image Enhancement

    A novel technique for sharpening noisy images is presented. The proposed enhancement system adopts a simple piecewise linear (PWL) function in order to sharpen image edges and reduce noise. Both effects can easily be controlled by varying only two parameters. The noise sensitivity of the operator is further decreased by an additional filtering step, which also relies on a nonlinear model. Results of computer simulations show that the proposed sharpening system is simple and effective. The application of the method to contrast enhancement of color images is also discussed.
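    The abstract does not give the exact form of the PWL function, so the sketch below assumes a common choice: a Laplacian-based correction passed through a piecewise linear map with a dead zone (threshold T, which suppresses small, noise-like corrections) and a linear gain G above it, so that the two parameters control noise reduction and edge sharpening respectively.

```python
# Minimal sketch, assuming a dead-zone-plus-gain PWL map; the paper's exact
# function and parameterization are not given in the abstract.
import numpy as np
from scipy.ndimage import laplace

def pwl(v, T, G):
    """Zero inside [-T, T]; linear with slope G outside (dead zone + gain)."""
    out = np.zeros_like(v)
    mask = np.abs(v) > T
    out[mask] = G * (v[mask] - T * np.sign(v[mask]))
    return out

def pwl_sharpen(img, T=5.0, G=1.5):
    """Sharpen a grayscale image by adding a PWL-shaped Laplacian correction."""
    correction = -laplace(img.astype(float))   # high-frequency edge signal
    return np.clip(img + pwl(correction, T, G), 0, 255)
```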

    Medical image enhancement using threshold decomposition driven adaptive morphological filter

    One of the most common degradations in medical images is their poor contrast. This suggests the use of contrast enhancement methods as an attempt to modify the intensity distribution of the image. In this paper, a new edge-detected morphological filter is proposed to sharpen digital medical images. This is done by detecting the positions of the edges and then applying a class of morphological filters. Motivated by the success of threshold decomposition, gradient-based operators are used to detect the locations of the edges. A morphological filter is then used to sharpen these detected edges. Experimental results demonstrate that the detected-edge deblurring filter improves the visibility and perceptibility of various embedded structures in digital medical images. Moreover, the performance of the proposed filter is superior to that of other sharpener-type filters.
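    As a rough illustration of the pipeline described above (gradient-based edge detection followed by morphological sharpening), the sketch below pairs a Sobel edge mask with a morphological toggle-contrast operator. The paper's threshold-decomposition machinery is not reproduced, and the edge threshold and 3x3 structuring element are assumptions.

```python
# Minimal sketch, not the paper's filter: Sobel-gradient edge mask plus a
# morphological toggle-contrast sharpener applied only at detected edges.
import numpy as np
from scipy.ndimage import sobel, grey_dilation, grey_erosion

def edge_morph_sharpen(img, edge_thresh=30.0):
    img = img.astype(float)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    edges = grad > edge_thresh                    # gradient-based edge positions

    dil = grey_dilation(img, size=(3, 3))
    ero = grey_erosion(img, size=(3, 3))
    # Toggle contrast: move each pixel to the closer of its local max/min.
    toggled = np.where(dil - img < img - ero, dil, ero)

    out = img.copy()
    out[edges] = toggled[edges]                   # sharpen only at detected edges
    return np.clip(out, 0, 255)
```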

    A Data-Driven Edge-Preserving D-bar Method for Electrical Impedance Tomography

    In Electrical Impedance Tomography (EIT), the internal conductivity of a body is recovered via current and voltage measurements taken at its surface. The reconstruction task is a highly ill-posed nonlinear inverse problem that is very sensitive to noise and requires regularized solution methods, of which D-bar is the only proven method. The resulting EIT images have low spatial resolution due to the smoothing caused by the low-pass filtering in the regularization. In many applications, such as medical imaging, it is known a priori that the target contains sharp features such as organ boundaries, as well as approximate ranges of realistic conductivity values. In this paper, we use this information in a new edge-preserving EIT algorithm based on the original D-bar method coupled with a deblurring flow stopped at a minimal data discrepancy. The method makes heavy use of a novel data fidelity term based on the so-called CGO sinogram. This nonlinear data step provides superior robustness over traditional EIT data formats, such as current-to-voltage matrices or Dirichlet-to-Neumann operators, for commonly used current patterns.
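    The D-bar reconstruction and the CGO-sinogram fidelity term are far too involved for a short example; the sketch below only illustrates the structural idea of the stopping rule, i.e. running a deblurring flow and keeping the iterate with the smallest data discrepancy. Both flow_step and discrepancy are hypothetical placeholders for the paper's components.

```python
# Structural sketch only: `flow_step` (one step of a deblurring/sharpening flow)
# and `discrepancy` (misfit of a candidate image to the measured data, e.g. via
# a CGO-sinogram-style fidelity term) are hypothetical stand-ins.
def deblur_until_min_discrepancy(recon, flow_step, discrepancy, max_steps=200):
    """Run the flow and return the iterate that minimizes the data discrepancy."""
    best, best_d = recon, discrepancy(recon)
    current = recon
    for _ in range(max_steps):
        current = flow_step(current)
        d = discrepancy(current)
        if d < best_d:
            best, best_d = current, d
    return best
```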

    Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution

    We propose two strategies to improve the quality of tractography results computed from diffusion-weighted magnetic resonance imaging (DW-MRI) data. Both methods are based on the same PDE framework, defined in the coupled space of positions and orientations and associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD) that is obtained on individual voxels from high angular resolution diffusion imaging (HARDI) data via constrained spherical deconvolution (CSD). Thereby we improve the FOD as input for subsequent tractography. Secondly, we introduce the fiber-to-bundle coherence (FBC), a measure for quantifying fiber alignment. The FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data acquired with different scanning protocols. On the phantom data we find that the PDE enhancements improve both local and global metrics of the tractography results compared to CSD without enhancements. On the human data we show that the enhancements allow for a better reconstruction of crossing fiber bundles and reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography. This is shown in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.
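    The FBC itself is defined through the PDE kernels on the coupled space of positions and orientations, which is beyond a short example. The toy sketch below conveys the idea with a much simpler proxy: each fiber is scored by how well nearby segments of other fibers align with it (Gaussian spatial weight times squared cosine of the tangent angle), and low-scoring fibers can be flagged as spurious. The kernel width and the scoring form are assumptions, not the paper's measure.

```python
# Toy proxy for the fiber-to-bundle coherence idea, not the paper's measure.
import numpy as np

def segments(fiber):
    """Midpoints and unit tangents of a fiber given as an (N, 3) point array."""
    d = np.diff(fiber, axis=0)
    t = d / np.linalg.norm(d, axis=1, keepdims=True)
    return 0.5 * (fiber[:-1] + fiber[1:]), t

def coherence_scores(fibers, sigma=2.0):
    mids, tans = zip(*(segments(f) for f in fibers))
    scores = []
    for i in range(len(fibers)):
        other_m = np.vstack([m for j, m in enumerate(mids) if j != i])
        other_t = np.vstack([t for j, t in enumerate(tans) if j != i])
        s = 0.0
        for p, v in zip(mids[i], tans[i]):
            w = np.exp(-np.sum((other_m - p) ** 2, axis=1) / (2 * sigma ** 2))
            s += np.sum(w * (other_t @ v) ** 2)   # spatial weight * alignment
        scores.append(s / len(mids[i]))
    return np.array(scores)                        # low score -> likely spurious
```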

    An Algorithm on Generalized Unsharp Masking for Sharpness and Contrast of an Exploratory Data Model

    In applications such as medical radiography, enhancing movie features, and observing the planets, it is necessary to enhance the contrast and sharpness of an image. This work proposes a generalized unsharp masking algorithm that uses the exploratory data model as a unified framework. The proposed algorithm is designed to simultaneously enhance contrast and sharpness by means of individual treatment of the model component and the residual, to reduce the halo effect by means of an edge-preserving filter, and to solve the out-of-range problem by means of log-ratio and tangent operations. A new system, called the tangent system, is introduced, based upon a specific Bregman divergence. Experimental results show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. Using this algorithm, the user can adjust the two parameters controlling contrast and sharpness to obtain the desired output.
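    The paper's exact log-ratio and tangent operations (and the Bregman-divergence formulation behind them) are not reproduced here; the sketch below only illustrates the general recipe: split the image into an edge-preserving model component and a residual, enhance each separately, and recombine in a log-ratio domain so the output always stays in (0, 1). The median filter, gamma correction, and gain value are stand-in assumptions.

```python
# Minimal sketch of the log-ratio recombination idea; not the paper's operators.
import numpy as np
from scipy.ndimage import median_filter

def logit(x, eps=1e-6):
    x = np.clip(x, eps, 1 - eps)
    return np.log(x / (1 - x))

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def generalized_unsharp(img01, gamma=0.8, gain=3.0, size=5):
    """img01: grayscale image scaled to (0, 1)."""
    base = median_filter(img01, size=size)      # stand-in edge-preserving filter
    residual = logit(img01) - logit(base)       # detail, in the log-ratio domain
    enhanced_base = base ** gamma               # simple contrast enhancement
    return expit(logit(enhanced_base) + gain * residual)   # stays in (0, 1)
```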

    Image enhancement via adaptive unsharp masking

    This paper presents a new method for unsharp masking for contrast enhancement of images. Our approach employs an adaptive filter that controls the contribution of the sharpening path in such a way that contrast enhancement occurs in high-detail areas and little or no image sharpening occurs in smooth areas.
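    The abstract does not spell out the adaptive filter, so the sketch below uses local variance as a proxy for detail: the unsharp-masking gain ramps from zero in smooth regions up to a maximum in high-detail regions. The window sizes and variance thresholds are assumed example values, not the paper's.

```python
# Minimal sketch of variance-adapted unsharp masking; not the paper's filter.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def adaptive_unsharp(img, sigma=1.5, max_gain=2.0, var_lo=20.0, var_hi=400.0):
    img = img.astype(float)
    detail = img - gaussian_filter(img, sigma)                # sharpening path

    mean = uniform_filter(img, size=5)
    var = uniform_filter(img ** 2, size=5) - mean ** 2        # local variance
    gain = max_gain * np.clip((var - var_lo) / (var_hi - var_lo), 0.0, 1.0)

    return np.clip(img + gain * detail, 0, 255)               # adaptive sharpening
```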

    Graph-based methods for simultaneous smoothing and sharpening of color images

    In this work we introduce an image characterization of pixels based on local graphs that allows us to distinguish different local regions around a pixel. This separation also permits us to determine the role of each pixel in the neighborhood of any other, either for smoothing or for sharpening. Two methods for simultaneously conducting both processes are provided. Our solution overcomes the drawbacks of the classic two-step sequential smoothing and sharpening process: it enhances details while reducing noise and without losing critical information. The parameters of the methods are adjusted in two different ways: through observers' visual quality optimization and with an objective optimization criterion. The results show that our methods outperform other recent state-of-the-art ones.
    We thank F. Russo for providing the implementation of the Fuzzy method and V. Ratmer and Y.Y. Zeevi for providing the implementation of the FAB method. Cristina Jordan acknowledges the support of grant TEC2016-79884-C2-2-R. Samuel Morillas acknowledges the support of grant MTM2015-64373-P (MINECO/FEDER, Spain, UE).
    Pérez-Benito, C.; Jordan-Lluch, C.; Conejero, J. A.; Morillas, S. (2019). Graph-based methods for simultaneous smoothing and sharpening of color images. Journal of Computational and Applied Mathematics, 350:380-395. https://doi.org/10.1016/j.cam.2018.10.031
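    The local-graph characterization used in the paper is not reproduced here; the sketch below is a much simpler proxy for the same intuition: neighbors of a pixel are split into "same region" and "other region" by a color-distance threshold, and the pixel is pulled toward the same-region mean (smoothing) while being pushed away from the other-region mean (sharpening). The threshold and the two mixing weights are assumptions.

```python
# Simplified proxy, not the paper's graph-based method: region split by color
# distance, smoothing within the region, sharpening across the region boundary.
import numpy as np

def smooth_and_sharpen(img, thresh=30.0, alpha=0.5, beta=0.3):
    """img: (H, W, 3) RGB image as floats in [0, 255]."""
    img = img.astype(float)
    out = img.copy()
    H, W, _ = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            c = img[y, x]
            nbrs = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3)
            dist = np.linalg.norm(nbrs - c, axis=1)
            same, other = nbrs[dist <= thresh], nbrs[dist > thresh]
            new = c + alpha * (same.mean(axis=0) - c)        # smooth within region
            if len(other):
                new += beta * (c - other.mean(axis=0))       # sharpen across regions
            out[y, x] = new
    return np.clip(out, 0, 255)
```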