
    Gradient-adaptive Nonlinear Sharpening for Dental Radiographs

    Unsharp Masking is a popular image processing technique for improving the sharpness of structures on dental radiographs. However, it produces overshoot artefacts and intolerably amplifies noise. On radiographs, the overshoot artefact often resembles the indications of prosthesis misfit, pathosis, and pathological features associated with restorations. A noise-robust alternative to the Unsharp Masking algorithm, termed Gradient-adaptive Nonlinear Sharpening (GNS), which is free from overshoot and discontinuity artefacts, is proposed in this paper. In GNS, the difference between the output of the Adaptive Edge Smoothing Filter (AESF) and the input image is weighted by the normalized gradient magnitude, multiplied by an arbitrary scalar termed the 'scale', and added to the input image. AESF is a locally-adaptive 2D Gaussian smoothing kernel whose variance is directly proportional to the local value of the gradient magnitude. The dataset employed in this paper, downloaded from the Mendeley data repository, contains annotated panoramic dental radiographs of 116 patients. On the 116 dental radiographs, the values of the Saturation Evaluation Index (SEI), Sharpness of Ridges (SOR), Edge Model Based Contrast Metric (EMBCM), and Visual Information Fidelity (VIF) exhibited by Unsharp Masking are 0.0048 ± 0.0021, 4.4 × 10¹³ ± 3.8 × 10¹³, 0.2634 ± 0.2732, and 0.9898 ± 0.0122. The values of these quality metrics for GNS are 0.0042 ± 0.0017, 2.2 × 10¹³ ± 1.8 × 10¹³, 0.5224 ± 0.1825, and 1.0094 ± 0.0094. GNS exhibited lower values of SEI and SOR and higher values of EMBCM and VIF than Unsharp Masking. The lower SEI and SOR values indicate, respectively, that GNS is free from overshoot artefacts and saturation, and that the quality of edges in its output images is less affected by noise. The higher EMBCM and VIF values confirm, respectively, that GNS is free from haloes, as it produces thin and sharp edges, and that the sharpened images have good information fidelity.
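    A minimal sketch of the GNS update described above, assuming a quantized bank of Gaussian blurs as a stand-in for the AESF and the usual unsharp-masking sign for the detail term (the abstract's wording is ambiguous on the sign); all names and parameter values are illustrative, not the authors' implementation:

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def gns(image, scale=1.5, sigma_max=3.0, n_levels=8):
            """Gradient-adaptive Nonlinear Sharpening of `image` (float in [0, 1])."""
            # Normalized gradient magnitude serves as the per-pixel weight.
            g = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
            w = g / (g.max() + 1e-12)

            # Approximate the AESF: a Gaussian whose variance grows with the
            # local gradient magnitude, quantized here into a few sigma levels.
            sigmas = np.linspace(0.1, sigma_max, n_levels)
            blurred = np.stack([gaussian_filter(image, s) for s in sigmas])
            idx = np.clip((w * (n_levels - 1)).astype(int), 0, n_levels - 1)
            aesf = np.take_along_axis(blurred, idx[None], axis=0)[0]

            # Add the scaled, gradient-weighted detail signal to the input.
            return np.clip(image + scale * w * (image - aesf), 0.0, 1.0)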

    Image enhancement by non-linear extrapolation in frequency space

    An input image is enhanced to include spatial frequency components with frequencies higher than those present in the input. To this end, an edge map is generated from the input image using a high band-pass filtering technique. An enhanced map is subsequently generated from the edge map, with spatial frequencies exceeding the initial maximum spatial frequency of the input image. The enhanced map is produced by applying a non-linear operator to the edge map in a manner that preserves the phase transitions of the edges of the input image. The enhanced map is then added to the input image to yield a resulting image with spatial frequencies greater than those in the input. Simplicity of computation and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
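    A toy sketch of this scheme, assuming a Laplacian as the high band-pass filter and hard clipping as the non-linear operator; both are plausible choices for this technique rather than the exact ones used:

        import numpy as np
        from scipy.ndimage import laplace

        def extrapolate_high_freq(image, gain=2.0, clip=0.05):
            """Add synthetic high-frequency content to `image` (float in [0, 1])."""
            # Edge map from a high band-pass (Laplacian) filter.
            edge_map = -laplace(image)

            # Clipping is non-linear, so it spreads energy to spatial
            # frequencies above the input's original maximum while keeping
            # the sign (phase) of each edge transition intact.
            enhanced_map = gain * np.clip(edge_map, -clip, clip)

            # Superimpose the enhanced map on the input image.
            return np.clip(image + enhanced_map, 0.0, 1.0)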

    An Algorithm on Generalized Unsharp Masking for Sharpness and Contrast of an Exploratory Data Model

    In applications such as medical radiography, enhancement of movie features, and planetary observation, it is necessary to enhance the contrast and sharpness of an image. This work proposes a generalized unsharp masking algorithm using the exploratory data model as a unified framework. The proposed algorithm is designed to simultaneously enhance contrast and sharpness by means of individual treatment of the model component and the residual, to reduce the halo effect by means of an edge-preserving filter, and to solve the out-of-range problem by means of log-ratio and tangent operations. A new system, called the tangent system, is introduced, based upon a specific Bregman divergence. Experimental results show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. Using this algorithm, the user can adjust the two parameters controlling contrast and sharpness to obtain the desired output.
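    A brief sketch of the out-of-range-safe combination step, assuming the log-ratio operations are realized through the logit map and stubbing the edge-preserving filter with a median filter; the parameter names and values are illustrative only:

        import numpy as np
        from scipy.ndimage import median_filter

        def logit(x):
            return np.log(x / (1.0 - x))

        def logistic(x):
            return 1.0 / (1.0 + np.exp(-x))

        def generalized_um(image, gamma=0.7, alpha=3.0, size=5):
            """Enhance contrast and sharpness of `image` (float in (0, 1))."""
            x = np.clip(image, 1e-4, 1.0 - 1e-4)
            # Edge-preserving decomposition into model component and residual.
            base = np.clip(median_filter(x, size=size), 1e-4, 1.0 - 1e-4)
            detail = logit(x) - logit(base)   # residual in log-ratio space
            # Treat the components individually: gamma stretches the model
            # component (contrast), alpha amplifies the residual (sharpness).
            # The final logistic map keeps every pixel inside (0, 1), which
            # is how the log-ratio formulation avoids out-of-range values.
            return logistic(gamma * logit(base) + alpha * detail)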

    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution is an area of great interest in recent years and is extensively used in applications like video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity. Its most common application is to provide a better visual effect after resizing a digital image for display or printing. One method of improving image resolution is through the employment of 2-D interpolation. An up-scaled image should retain all the image details with a minimal degree of blurring for better visual quality. In the literature, many efficient 2-D interpolation schemes are found that preserve the image details well in the up-scaled images, particularly at regions with edges and fine details. Nevertheless, these existing interpolation schemes still produce a blurring effect in the up-scaled images due to high-frequency (HF) degradation during the up-sampling process. Hence, there is scope to further improve their performance through the incorporation of various spatial-domain pre-processing, post-processing, and composite algorithms, and it is felt that there is sufficient scope to develop efficient but simple schemes of this kind to effectively restore the HF contents in up-scaled images for various online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken for further performance improvement through the incorporation of the various proposed algorithms.

    The various pre-processing algorithms developed in this thesis are summarized here; the term pre-processing refers to processing the low-resolution input image prior to image up-scaling. They are: the Laplacian of Laplacian based global pre-processing (LLGP) scheme; hybrid global pre-processing (HGP); iterative Laplacian of Laplacian based global pre-processing (ILLGP); unsharp masking based pre-processing (UMP); iterative unsharp masking (IUM); and the error based up-sampling (EU) scheme. LLGP, HGP and ILLGP are three spatial-domain pre-processing algorithms based on 4th, 6th and 8th order derivatives to alleviate non-uniform blurring in up-scaled images. These algorithms obtain the HF extracts from an image by employing higher-order derivatives and perform precise sharpening on a low-resolution image to alleviate the blurring in its 2-D up-sampled counterpart. In the UMP scheme, the blurred version of a low-resolution image is used for HF extraction from the original version through image subtraction; the weighted HF extracts are superimposed on the original image to produce a sharpened image prior to up-scaling, countering blurring effectively (a sketch of this step follows below). IUM uses many iterations to generate an unsharp mask that contains very high frequency (VHF) components; the VHF extract is the result of signal decomposition into sub-bands using the concept of an analysis filter bank. Since the degradation of VHF components is maximal, restoration of such components produces much better restoration performance. EU is another pre-processing scheme in which the HF degradation due to image up-scaling is extracted as a prediction error containing the lost high-frequency components; when this error is superimposed on the low-resolution image prior to up-sampling, blurring is considerably reduced in the up-scaled images.
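    A minimal sketch of the UMP step followed by Lanczos up-scaling, assuming a Gaussian blur for the mask; the weight and sigma are illustrative rather than the thesis's tuned values:

        import numpy as np
        from PIL import Image
        from scipy.ndimage import gaussian_filter

        def ump_then_upscale(path, factor=2, weight=0.8, sigma=1.0):
            low = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
            # HF extract: original minus its blurred version.
            hf = low - gaussian_filter(low, sigma)
            # Sharpen the low-resolution image *before* up-scaling so the
            # interpolation has stronger HF content to preserve.
            sharp = np.clip(low + weight * hf, 0, 255).astype(np.uint8)
            h, w = sharp.shape
            return Image.fromarray(sharp).resize((w * factor, h * factor),
                                                 Image.LANCZOS)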
    The various post-processing algorithms developed in this thesis are summarized in the following; the term post-processing refers to processing the high-resolution up-scaled image. They are: the local adaptive Laplacian (LAL); the fuzzy weighted Laplacian (FWL); and the Legendre functional link artificial neural network (LFLANN). LAL is a non-fuzzy, locally based scheme: the local regions of an up-scaled image with high variance are sharpened more than regions with moderate or low variance by employing a local adaptive Laplacian kernel, whose weights vary with the normalized local variance so as to give a greater degree of HF enhancement to high-variance regions than to low-variance ones, effectively countering the non-uniform blurring (see the sketch below). Furthermore, the FWL post-processing scheme, with a higher degree of non-linearity, is proposed to further improve the performance of LAL; being a fuzzy mapping scheme, FWL is highly non-linear and resolves the blurring problem more effectively than LAL, which employs a linear mapping. An LFLANN based post-processing scheme is also proposed to minimize a cost function so as to reduce the blurring in a 2-D up-scaled image. Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of non-linearity; therefore, the requirement of multiple layers can be replaced by a single-layer LFLANN architecture that reduces the cost function effectively for better restoration performance. With its single-layer architecture, it has reduced computational complexity and hence is suitable for various real-time applications.

    There is scope for further improvement of the stand-alone pre-processing and post-processing schemes by combining them into composite schemes. Two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in an up-scaled image. CS-I combines the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme. Another, highly non-linear composite scheme, CS-II, combines the ILLGP scheme with the fuzzy weighted Laplacian post-processing scheme for improved performance over the stand-alone schemes. Finally, it is observed that the proposed algorithms ILLGP, IUM, FWL, LFLANN and CS-II are the better algorithms in their respective categories for effectively reducing blurring in the up-scaled images.
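    A sketch of the LAL idea on an up-scaled image, with assumed window size and gain: high-variance regions receive stronger Laplacian sharpening than low-variance ones.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def lal_postprocess(up, gain=1.2, win=7):
            """Adaptively sharpen `up` (float array in [0, 255])."""
            mean = uniform_filter(up, win)
            var = np.maximum(uniform_filter(up * up, win) - mean * mean, 0.0)
            w = var / (var.max() + 1e-12)      # normalized local variance
            # Subtracting the Laplacian sharpens; the weight w scales the
            # enhancement with the local variance to counter the
            # non-uniform blurring left by interpolation.
            return np.clip(up - gain * w * laplace(up), 0, 255)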

    A Hybrid Segmentation and D-bar Method for Electrical Impedance Tomography

    The Regularized D-bar method for Electrical Impedance Tomography provides a rigorous mathematical approach for solving the full nonlinear inverse problem directly, i.e. without iterations. It is based on low-pass filtering in the (nonlinear) frequency domain. However, the resulting D-bar reconstructions are inherently smoothed, leading to a loss of edge distinction. In this paper, a novel approach that combines the rigor of the D-bar approach with the edge-preserving nature of Total Variation regularization is presented. The method also includes a data-driven contrast adjustment technique guided by the key functions (CGO solutions) of the D-bar method. The new TV-Enhanced D-bar Method produces reconstructions with sharper edges and improved contrast while still solving the full nonlinear problem. This is achieved by using the TV-induced edges to increase the truncation radius of the scattering data in the nonlinear frequency domain, thereby increasing the radius of the low-pass filter. The algorithm is tested on numerically simulated noisy EIT data and demonstrates significant improvements in edge preservation and contrast, which can be highly valuable for absolute EIT imaging.
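    A toy illustration of the low-pass step that the TV enhancement relaxes, assuming the scattering transform t(k) is given on a grid; a full D-bar solver is well beyond this sketch:

        import numpy as np

        def truncate_scattering(t, kx, ky, radius):
            """Zero the scattering data outside the disc |k| <= radius."""
            K1, K2 = np.meshgrid(kx, ky, indexing="ij")
            return np.where(np.hypot(K1, K2) <= radius, t, 0.0)

        # The TV-induced edges justify a larger truncation radius, so more
        # high-frequency scattering data survives the low-pass filter and
        # the reconstruction keeps sharper edges:
        #   t_lp = truncate_scattering(t, kx, ky, radius=R_tv)  # R_tv > R_0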

    Comparing Adobe’s Unsharp Masks and High-Pass Filters in Photoshop Using the Visual Information Fidelity Metric

    The present study examines image sharpening techniques quantitatively. A technique known as unsharp masking has been the preferred image sharpening technique of imaging professionals for many years. More recently, another professional-level sharpening solution has been introduced, namely the high-pass filter technique of image sharpening. An extensive review of the literature revealed no purely quantitative studies comparing these techniques. The present research compares unsharp masking (USM) and high-pass filter (HPF) sharpening using an image quality metric known as Visual Information Fidelity (VIF). Prior researchers have used VIF data in research aimed at improving the USM sharpening technique; the present study aims to add to this branch of the literature through a comparison of the USM and HPF sharpening techniques. The objective is to determine which sharpening technique, USM or HPF, yields the highest VIF scores for two categories of images: macro images and architectural images. Each set of images was further analyzed to compare the VIF scores of subjects with high- and low-severity depth-of-field defects. Finally, the researcher proposed rules for choosing USM and HPF parameters that result in optimal VIF scores. For each category, the researcher captured 24 images (12 with high-severity defects and 12 with low-severity defects). Each image was sharpened using an iterative process of choosing USM and HPF sharpening parameters, applying sharpening filters with the chosen parameters, and assessing the resulting images using the VIF metric; the process was repeated until the VIF scores could no longer be improved. The highest USM and HPF VIF scores for each image were compared using a paired t-test for statistical significance. The t-test results demonstrated that:

    • The USM VIF scores for macro images (M = 1.86, SD = 0.59) outperformed those for HPF (M = 1.34, SD = 0.18), a statistically significant mean increase of 0.52, t(23) = 5.57, p = 0.0000115. Similar results were obtained for both the high-severity and low-severity subsets of macro images.

    • The USM VIF scores for architectural images (M = 1.40, SD = 0.24) outperformed those for HPF (M = 1.26, SD = 0.15), a statistically significant mean increase of 0.14, t(23) = 5.21, p = 0.0000276. Similar results were obtained for both the high-severity and low-severity subsets of architectural images.

    The researcher found that the optimal sharpening parameters for USM and HPF depend on the content of the image. The optimal choice of parameters for USM depends on whether the most important features are edges or objects, and specific rules for choosing USM parameters were developed for each class of images. HPF is simpler in that it uses only one parameter, Radius; specific rules for choosing the HPF Radius were likewise developed for each class of images. Based on these results, the researcher concluded that USM outperformed HPF in sharpening macro and architectural images. The superior performance of USM could be due to the fact that it provides more parameters for users to control the sharpening process than HPF does.
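    An illustrative reimplementation of the two techniques, assuming the usual Photoshop HPF workflow (a high-pass layer centred on mid-grey, blended in Overlay mode); the parameters mimic Amount and Radius but are not Adobe's internals:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def usm(image, amount=1.0, radius=2.0):
            """Unsharp mask on `image` (float in [0, 1])."""
            mask = image - gaussian_filter(image, radius)
            return np.clip(image + amount * mask, 0.0, 1.0)

        def hpf_overlay(image, radius=2.0):
            """High-pass layer combined with the image via Overlay blending."""
            top = np.clip(image - gaussian_filter(image, radius) + 0.5, 0.0, 1.0)
            return np.where(image < 0.5,
                            2.0 * image * top,
                            1.0 - 2.0 * (1.0 - image) * (1.0 - top))

    The VIF scores of the two outputs can then be computed with any Visual Information Fidelity implementation and compared, as in the study.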

    3D sunken relief generation from a single image by feature line enhancement

    Sunken relief is an art form whereby the depicted shapes are sunk into a given flat plane with a shallow overall depth. In this paper, we propose an efficient sunken relief generation algorithm based on a single image, using the technique of feature line enhancement. Our method starts from a single image. First, we smooth the image with morphological operations such as opening and closing and extract the feature lines by comparing the values of adjacent pixels. Then we apply unsharp masking to sharpen the feature lines. After that, we enhance and smooth the local information to obtain an image with fewer burrs and jaggies. Differential operations are applied to produce perceptive relief-like images. Finally, we construct the sunken relief surface by triangulation, which transforms the two-dimensional information into a three-dimensional model. The experimental results demonstrate that our method is simple and efficient.
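    A sketch of the 2-D stages of this pipeline (the final triangulation into a 3-D mesh is omitted); the structuring-element size, sigma, and sharpening amount are assumptions rather than the paper's settings:

        import numpy as np
        from scipy.ndimage import (gaussian_filter, grey_closing,
                                   grey_opening, sobel)

        def relief_image(image, amount=1.5, sigma=1.0, size=3):
            """Return a relief-like height field from `image` (float in [0, 1])."""
            # 1. Smooth with morphological opening and closing.
            smooth = grey_closing(grey_opening(image, size=size), size=size)
            # 2. Feature lines from differences between adjacent pixels,
            #    approximated here with Sobel gradients.
            lines = np.hypot(sobel(smooth, axis=0), sobel(smooth, axis=1))
            lines /= lines.max() + 1e-12
            # 3. Unsharp masking to sharpen the feature lines.
            lines = np.clip(lines + amount * (lines - gaussian_filter(lines, sigma)),
                            0.0, 1.0)
            # 4. Negative heights sink the lines into the flat plane,
            #    giving the shallow sunken-relief depth field.
            return -lines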