
    An Algorithm on Generalized Unsharp Masking for Sharpness and Contrast of an Exploratory Data Model

    In applications such as medical radiography, enhancing movie features, and observing planets, it is necessary to enhance the contrast and sharpness of an image. The model proposes a generalized unsharp masking algorithm that uses the exploratory data model as a unified framework. The proposed algorithm is designed to simultaneously enhance contrast and sharpness by treating the model component and the residual individually, to reduce the halo effect by means of an edge-preserving filter, and to solve the out-of-range problem by means of log-ratio and tangent operations. A new system, called the tangent system, is introduced based upon a specific Bregman divergence. Experimental results show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. Using this algorithm, the user can adjust the two parameters, contrast and sharpness, to obtain the desired output.
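The classical unsharp masking operation that this work generalizes adds an amplified high-frequency residual back to the image. A minimal NumPy sketch (using a simple box blur rather than the paper's edge-preserving filter; the clipping step illustrates the out-of-range problem that the log-ratio and tangent operations are designed to avoid):

```python
import numpy as np

def box_blur(x, r=1):
    """Mean filter over a (2r+1) x (2r+1) window with edge padding."""
    p = np.pad(x, r, mode="edge")
    h, w = x.shape
    k = 2 * r + 1
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / k**2

def unsharp_mask(x, amount=1.0, r=1):
    """Classical unsharp mask: sharpened = image + amount * (image - blurred).

    Clipping to [0, 1] is exactly the out-of-range problem the paper's
    log-ratio and tangent operations avoid by construction.
    """
    detail = x - box_blur(x, r)          # high-frequency residual
    return np.clip(x + amount * detail, 0.0, 1.0)
```

A flat region has zero residual and passes through unchanged, while intensity steps are steepened, which is the sharpening effect.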

    Implementation of Adaptive Unsharp Masking as a pre-filtering method for watermark detection and extraction

    Digital watermarking has been one of the focal points of research interest in providing multimedia security over the last decade. Watermark data belonging to the user are embedded in an original work such as text, audio, image, or video, so that product ownership can be proved. Various robust watermarking algorithms have been developed to extract/detect the watermark in the presence of attacks. Although watermarking algorithms in the transform domain differ from one another in their combinations of transform techniques, it is difficult to decide on an algorithm for a specific application. Therefore, instead of developing a new watermarking algorithm with yet another combination of transform techniques, we propose a novel and effective watermark extraction and detection method based on pre-filtering, namely Adaptive Unsharp Masking (AUM). Although Unsharp Masking (UM)-based pre-filtering has been used for watermark extraction/detection in the literature, because it makes the details of the watermarked image more manifest, its effectiveness may decrease under some attacks. In this study, AUM is proposed as a pre-filtering solution to the disadvantages of UM. Experimental results show that AUM performs up to 11% better in objective quality metrics than when pre-filtering is not used. Moreover, AUM as a pre-filter in transform-domain image watermarking is as effective as it is in image enhancement, and it can be applied in an algorithm-independent way.
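The abstract does not give AUM's exact adaptation rule; a common form of adaptive unsharp masking in the enhancement literature scales the detail gain by local activity, so flat regions are not noise-amplified. A sketch of that idea, with illustrative gain and threshold parameters that are assumptions, not the paper's:

```python
import numpy as np

def box_blur(x, r=1):
    """Mean filter over a (2r+1) x (2r+1) window with edge padding."""
    p = np.pad(x, r, mode="edge")
    h, w = x.shape
    k = 2 * r + 1
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / k**2

def adaptive_unsharp_mask(x, low_gain=0.2, high_gain=2.0, var_thresh=1e-3, r=1):
    """Sharpen strongly only where the image is locally active.

    Low local variance (flat areas) gets a small gain so noise is not
    amplified; edges and texture get a large gain, making embedded
    detail more manifest before watermark extraction/detection.
    """
    mean = box_blur(x, r)
    var = box_blur(x * x, r) - mean**2       # local variance
    gain = np.where(var > var_thresh, high_gain, low_gain)
    return np.clip(x + gain * (x - mean), 0.0, 1.0)
```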

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.

    A Local Technique Based on Type-2 Fuzzy Sets for Stain Image Enhancement

    The proposed approach in this paper falls under "Advanced Soft Computing Based Medical Image Processing Research", and the work was conducted by Dr. Dibya Jyoti Bora (Assistant Professor), School of Computing Sciences, The Assam Kaziranga University, Jorhat, Assam, in 2018-2019. Introduction: HE stain images, although considered the gold standard for medical image diagnosis, still suffer from poor contrast and degraded color quality. In this paper, a Type-2 fuzzy set-based enhancement technique is proposed for HE stain image enhancement, with special care given to color-based computations and measurements. Methods: This paper introduces a new approach based on Type-2 fuzzy sets for HE stain image enhancement, in which Bicubic Interpolation plays an important part. Unsharp Masking is also employed as a post-enhancement step. Results: The results show clearly that cell nuclei and other cell bodies are easily distinguishable from each other in the enhanced output produced by the proposed approach. This implies that vagueness in the edges surrounding the objects in the original image is removed to an acceptable level. Conclusions: The proposed approach is found, through both subjective and objective evaluations, to be an efficient preprocessing technique for better HE stain image analysis. Originality: The ideas involved in this paper are original; where work by other researchers is mentioned, it is cited properly. Limitation: The relatively high time complexity is the only limitation associated with the proposed approach.
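The paper's exact membership functions are not given in the abstract; a generic interval Type-2 fuzzy enhancement brackets a type-1 membership between an upper and a lower bound and then type-reduces. A sketch under those assumptions (the exponent pair and the averaging rule are illustrative, not the paper's formulation):

```python
import numpy as np

def type2_fuzzy_enhance(x, alpha=2.0):
    """Interval Type-2 fuzzy contrast enhancement (generic sketch).

    A type-1 membership mu is bracketed by an upper bound mu**(1/alpha)
    and a lower bound mu**alpha; averaging the two bounds is a simple
    type-reduction step before mapping back to the intensity range.
    """
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    mu = (x - lo) / (hi - lo + 1e-12)        # type-1 membership in [0, 1]
    upper = mu ** (1.0 / alpha)              # upper membership bound
    lower = mu ** alpha                      # lower membership bound
    enhanced = (upper + lower) / 2.0         # type reduction by averaging
    return lo + enhanced * (hi - lo)         # back to the original range
```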

    Analysis of Hardware Accelerated Deep Learning and the Effects of Degradation on Performance

    As convolutional neural networks become more prevalent in research and real-world applications, making them faster and more robust will be a constant battle. This thesis investigates the effect of degradation introduced to an image prior to object recognition with a convolutional neural network, as well as methods to reduce the degradation and improve performance. Gaussian smoothing and additive Gaussian noise are the degradation models analyzed in this thesis; they are reduced with Gaussian and Butterworth masks using unsharp masking and smoothing, respectively. The results show that each degradation is disruptive to the performance of YOLOv3, with Gaussian smoothing producing a mean average precision of less than 20% and Gaussian noise producing a mean average precision as low as 0%. Reduction methods applied to the data yield a 1%-21% increase in mean average precision over the baseline, varying with the degradation model. These methods are also applied to an 8-bit quantized implementation of YOLOv3 intended to run on a Xilinx ZCU104 FPGA, which proved to be as robust as the floating-point network, with results within 2% mean average precision of the floating-point network. With the ZCU104 able to process 416x416 images at 25 frames per second, comparable to an NVIDIA RTX 2080, FPGAs are a viable solution for object detection at the edge. In conclusion, this thesis shows that degradation causes a convolutional neural network (quantized or floating-point) to lose accuracy to the point that it can no longer accurately predict objects. However, the degradation can be reduced, and in most cases the network's performance can be raised by using computer vision techniques to reduce the noise within the image.
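As one illustration of the reduction step, additive Gaussian noise can be attenuated with a frequency-domain Butterworth low-pass mask. The cutoff and order below are assumed illustrative values, not the thesis's settings:

```python
import numpy as np

def butterworth_lowpass(x, cutoff=0.2, order=2):
    """Attenuate additive noise with a frequency-domain Butterworth mask.

    The mask passes low spatial frequencies (image structure, DC) and
    rolls off high frequencies, where white noise carries most of its
    power; cutoff is in cycles/pixel.
    """
    h, w = x.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    d = np.hypot(fy, fx)                          # distance from DC
    mask = 1.0 / (1.0 + (d / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(x) * mask))
```

On a noisy image this trades a little blur for a large drop in noise power, which is the pre-filtering effect measured against YOLOv3's mean average precision.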

    Comparing Adobe’s Unsharp Masks and High-Pass Filters in Photoshop Using the Visual Information Fidelity Metric

    The present study examines image sharpening techniques quantitatively. A technique known as unsharp masking has been the preferred image sharpening technique among imaging professionals for many years. More recently, another professional-level sharpening solution has been introduced, namely, the high-pass filter technique of image sharpening. An extensive review of the literature revealed no purely quantitative studies comparing these techniques. The present research compares unsharp masking (USM) and high-pass filter (HPF) sharpening using an image quality metric known as Visual Information Fidelity (VIF). Prior researchers have used VIF data in research aimed at improving the USM sharpening technique. The present study aims to add to this branch of the literature through the comparison of the USM and HPF sharpening techniques. The objective of the present research is to determine which sharpening technique, USM or HPF, yields the highest VIF scores for two categories of images, macro images and architectural images. Each set of images was further analyzed to compare the VIF scores of images with high- and low-severity depth-of-field defects. Finally, the researcher proposed rules for choosing USM and HPF parameters that resulted in optimal VIF scores. For each category, the researcher captured 24 images (12 with high-severity defects and 12 with low-severity defects). Each image was sharpened using an iterative process of choosing USM and HPF sharpening parameters, applying sharpening filters with the chosen parameters, and assessing the resulting images using the VIF metric. The process was repeated until the VIF scores could no longer be improved. The highest USM and HPF VIF scores for each image were compared using a paired t-test for statistical significance.
    The t-test results demonstrated that:
    • The USM VIF scores for macro images (M = 1.86, SD = 0.59) outperformed those for HPF (M = 1.34, SD = 0.18), a statistically significant mean increase of 0.52, t(23) = 5.57, p = 0.0000115. Similar results were obtained for both the high-severity and low-severity subsets of macro images.
    • The USM VIF scores for architectural images (M = 1.40, SD = 0.24) outperformed those for HPF (M = 1.26, SD = 0.15), a statistically significant mean increase of 0.14, t(23) = 5.21, p = 0.0000276. Similar results were obtained for both the high-severity and low-severity subsets of architectural images.
    The researcher found that the optimal sharpening parameters for USM and HPF depend on the content of the image. The optimal choice of parameters for USM depends on whether the most important features are edges or objects. Specific rules for choosing USM parameters were developed for each class of images. HPF is simpler in that it uses only one parameter, Radius. Specific rules for choosing the HPF Radius were also developed for each class of images. Based on these results, the researcher concluded that USM outperformed HPF in sharpening macro and architectural images. The superior performance of USM may be due to the fact that it gives users more parameters than HPF with which to control the sharpening process.
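For reference, the two techniques compared here can be sketched side by side: USM adds a scaled high-pass residual to the image, while the HPF technique, as typically performed in Photoshop, overlay-blends a mid-gray-biased high-pass layer onto the original. The box blur standing in for Gaussian blur and the parameter values are simplifying assumptions:

```python
import numpy as np

def box_blur(x, r=1):
    """Mean filter over a (2r+1) x (2r+1) window with edge padding."""
    p = np.pad(x, r, mode="edge")
    h, w = x.shape
    k = 2 * r + 1
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / k**2

def usm_sharpen(x, amount=1.0, r=1):
    """Unsharp mask: add the scaled high-pass residual back to the image."""
    return np.clip(x + amount * (x - box_blur(x, r)), 0.0, 1.0)

def hpf_sharpen(x, r=1):
    """HPF sharpening: overlay-blend a mid-gray-biased high-pass layer
    onto the original image (the usual Photoshop recipe)."""
    hp = np.clip((x - box_blur(x, r)) + 0.5, 0.0, 1.0)
    return np.where(x < 0.5, 2 * x * hp, 1 - 2 * (1 - x) * (1 - hp))
```

The extra degrees of freedom in USM (Amount plus Radius, versus Radius alone for HPF) are visible here as the `amount` parameter, consistent with the study's explanation of USM's advantage.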