
    Evaluating spatial and frequency domain enhancement techniques on dental images to assist dental implant therapy

    Dental imaging provides the patient's anatomical details for the dental implant, based on the maxillofacial structure and the two-dimensional geometric projection, helping clinical experts decide whether implant surgery is suitable for a particular patient. Dental images often suffer from random noise and low contrast, which call for effective preprocessing operations. However, each enhancement technique comes with its own advantages and limitations, so choosing a suitable image enhancement method is always a difficult task. In this paper, a universal framework is proposed that integrates the functionality of various enhancement mechanisms so that dentists can select a suitable method of their own choice to improve the quality of a dental image for the implant procedure. The proposed framework evaluates the effectiveness of both frequency-domain and spatial-domain enhancement techniques on dental images. The selection of the best enhancement method further depends on the output image perceptibility responses, peak signal-to-noise ratio (PSNR), and sharpness. The proposed framework offers the dental expert a flexible and scalable approach to enhancing a dental image according to visual image features and different enhancement requirements.
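    The abstract names PSNR and sharpness as the criteria for picking among candidate enhancements. A minimal sketch of such a ranking step is shown below; the variance-of-Laplacian sharpness proxy and the combined ranking rule are assumptions for illustration, not the paper's exact measures.

```python
# Rank candidate enhancement results by PSNR (against a reference) and a
# sharpness proxy. The variance-of-Laplacian measure is an assumption; the
# paper does not specify its exact sharpness metric.
import numpy as np
from scipy.ndimage import laplace

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def sharpness(image: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper edges."""
    return float(laplace(image.astype(np.float64)).var())

def select_best(reference, candidates):
    """candidates: dict mapping method name -> enhanced image.
    Returns the name of the top-ranked method plus the full ranking."""
    scores = [(name, psnr(reference, img), sharpness(img))
              for name, img in candidates.items()]
    # Simple lexicographic ranking; a real system would also weigh the
    # perceptibility responses the abstract mentions.
    ranked = sorted(scores, key=lambda s: (s[1], s[2]), reverse=True)
    return ranked[0][0], ranked
```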

    An Algorithm on Generalized Unsharp Masking for Sharpness and Contrast of an Exploratory Data Model

    In applications such as medical radiography, enhancing movie features, and observing the planets, it is necessary to enhance the contrast and sharpness of an image. The model proposes a generalized unsharp masking algorithm using the exploratory data model as a unified framework. The proposed algorithm is designed to simultaneously enhance contrast and sharpness by means of individual treatment of the model component and the residual, to reduce the halo effect by means of an edge-preserving filter, and to solve the out-of-range problem by means of log-ratio and tangent operations. A new system, called the tangent system, is introduced, based on a specific Bregman divergence. Experimental results show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. Using this algorithm, the user can adjust the two parameters, contrast and sharpness, to obtain the desired output.
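    As a rough illustration of the decomposition this abstract describes (a smooth model component plus a residual, with log-ratio operations keeping values in range), here is a hedged Python sketch. The median filter standing in for the edge-preserving smoother and the gains `gamma` and `beta` are illustrative assumptions; the paper's tangent system is omitted.

```python
# Sketch of generalized unsharp masking on a grayscale uint8 image: stretch
# contrast of the smooth component with log-ratio scaling (which cannot leave
# (0, 1)), and amplify the residual for sharpness.
import numpy as np
from scipy.ndimage import median_filter

def logratio_scale(x, gamma):
    """Log-ratio scalar multiplication: keeps x in (0, 1) for any gamma."""
    xg, cg = x ** gamma, (1.0 - x) ** gamma
    return xg / (xg + cg)

def generalized_unsharp(image, gamma=1.5, beta=3.0, size=5):
    x = np.clip(image.astype(np.float64) / 255.0, 1e-4, 1.0 - 1e-4)
    smooth = median_filter(x, size=size)   # edge-preserving model component
    residual = x - smooth                  # detail signal
    enhanced = logratio_scale(smooth, gamma) + beta * residual
    return (np.clip(enhanced, 0.0, 1.0) * 255.0).astype(np.uint8)
```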

    A Paradigm for color gamut mapping of pictorial images

    In this thesis, a paradigm was generated for color gamut mapping of pictorial images. This involved the development and testing of: 1) a hue-corrected version of the CIELAB color space, 2) an image-dependent sigmoidal-lightness-rescaling process, 3) an image-gamut-based chromatic-compression process, and 4) a gamut-expansion process. This gamut-mapping paradigm was tested against gamut-mapping strategies published in the literature. Reproductions generated by gamut mapping in the hue-corrected CIELAB color space preserved the perceived hue of the original scenes more accurately than reproductions generated using the standard CIELAB color space. The results of three gamut-mapping experiments showed that the contrast-preserving nature of the sigmoidal-lightness-rescaling strategy generated gamut-mapped reproductions that were better matches to the originals than reproductions generated using linear-lightness-compression functions. In addition, chromatic-scaling functions that compressed colors at a higher rate near the gamut surface and less near the achromatic axis produced better matches to the originals than algorithms that performed linear chroma compression throughout color space. A constrained gamut-expansion process, similar to the inverse of the best gamut-compression process found in these experiments, produced reproductions preferred over those from an unconstrained linear expansion process.
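    A minimal sketch of the image-dependent sigmoidal lightness rescaling idea follows, assuming CIELAB L* input in [0, 100]. In the thesis the curve parameters are derived from the image; the fixed midpoint `x0` and slope `k` below are illustrative assumptions.

```python
# S-shaped remapping of lightness into the destination gamut's L* range:
# endpoints are pinned to the destination range while mid-tone contrast is
# preserved better than with a linear compression.
import numpy as np

def sigmoidal_lightness(L, L_min_dst=5.0, L_max_dst=95.0, x0=50.0, k=0.08):
    """Map CIELAB L* in [0, 100] into [L_min_dst, L_max_dst] sigmoidally."""
    s = 1.0 / (1.0 + np.exp(-k * (L - x0)))        # raw sigmoid
    s0 = 1.0 / (1.0 + np.exp(-k * (0.0 - x0)))     # value at L* = 0
    s1 = 1.0 / (1.0 + np.exp(-k * (100.0 - x0)))   # value at L* = 100
    s = (s - s0) / (s1 - s0)                       # normalize endpoints
    return L_min_dst + s * (L_max_dst - L_min_dst)
```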

    Two Decades of Colorization and Decolorization for Images and Videos

    Colorization is a computer-aided process which aims to give color to a gray image or video. It can be used to enhance black-and-white images, including black-and-white photos, old films, and scientific imaging results. Conversely, decolorization converts a color image or video into a grayscale one, where a grayscale image or video carries only brightness information without color information. Decolorization is the basis of some downstream image processing applications such as pattern recognition, image segmentation, and image enhancement. Different from image decolorization, video decolorization must consider not only contrast preservation within each video frame but also the temporal and spatial consistency between frames, and researchers have worked to develop decolorization methods that balance spatial-temporal consistency and algorithmic efficiency. With the prevalence of digital cameras and mobile phones, image and video colorization and decolorization have received more and more attention from researchers. This paper gives an overview of the progress of image and video colorization and decolorization methods over the last two decades.
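    As a small illustration of the contrast-preservation goal discussed above, the following Python sketch searches a coarse grid of linear RGB weights and keeps the grayscale mapping whose pixel-pair differences correlate best with the color differences. The grid, sampling, and scoring are illustrative assumptions, far simpler than the surveyed methods.

```python
# Contrast-preserving decolorization via a brute-force search over linear
# RGB-to-gray weights that sum to 1.
import numpy as np

def decolorize(rgb, n_pairs=2000, seed=0):
    """rgb: float array in [0, 1], shape (H, W, 3). Returns grayscale (H, W)."""
    rng = np.random.default_rng(seed)
    flat = rgb.reshape(-1, 3)
    i = rng.integers(0, len(flat), n_pairs)
    j = rng.integers(0, len(flat), n_pairs)
    color_d = np.linalg.norm(flat[i] - flat[j], axis=1)  # color contrast
    best_w, best_score = None, -np.inf
    step = 0.1  # candidate weights on a 0.1 grid
    for wr in np.arange(0.0, 1.0 + 1e-9, step):
        for wg in np.arange(0.0, 1.0 - wr + 1e-9, step):
            w = np.array([wr, wg, 1.0 - wr - wg])
            gray_d = np.abs((flat[i] - flat[j]) @ w)
            score = np.corrcoef(gray_d, color_d)[0, 1]   # contrast fidelity
            if score > best_score:
                best_score, best_w = score, w
    return rgb @ best_w
```

    Video decolorization would additionally have to keep `best_w` stable (or smoothly varying) across frames to respect the temporal consistency the survey highlights.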

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today's state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen sizes, higher luminance, and higher resolution than ever before. However, from a color science perspective, there are clear opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical video processing chain for consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for the visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.

    Color Image Processing based on Graph Theory

    Computer vision is one of the fastest-growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image (pre-)processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing aims to reduce noise, while sharpening aims to improve or recover imprecise or damaged information of image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. Likewise, there are many sharpening techniques that try to combat the loss of information, but they rarely contemplate the existence of noise in the image they process: when dealing with a noisy image, any sharpening technique may also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this approach has proved not to be optimal: the filtering may remove information that, in turn, cannot be recovered in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile, potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. The model has been studied in depth as a function of the threshold, the key parameter that ensures the correct classification of the image pixels. To approach high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter that employs the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one for edge/detail regions. Furthermore, another application of the model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, thereby combining two operations that are opposed by definition and overcoming the drawbacks of the two-stage approach. We compare all the proposed image processing techniques with other state-of-the-art methods, showing that they are competitive from both an objective (numerical) and a visual evaluation point of view. Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
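    A hedged sketch of the core mechanism summarized above: a per-pixel graph over the 8-neighborhood whose edge count (neighbors within a color-distance threshold) classifies the pixel as flat or edge/detail, followed by soft switching between a strong and a mild smoother. The threshold `T` and the two stand-in smoothers are illustrative assumptions, not the dissertation's exact operators.

```python
# Soft-switching color image filter driven by per-pixel neighborhood graphs.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def neighbor_edge_count(img, T):
    """For each pixel, count 8-neighbors within color distance T
    (the 'edges' of that pixel's local graph)."""
    H, W, _ = img.shape
    count = np.zeros((H, W))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # np.roll wraps at image borders; acceptable for a sketch.
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += (np.linalg.norm(img - shifted, axis=2) < T)
    return count

def soft_switching_filter(img, T=20.0):
    """Blend a strong smoother (flat regions) and a mild one (edges/details)
    according to how 'flat' each pixel's local graph says it is."""
    img = img.astype(np.float64)
    flatness = neighbor_edge_count(img, T) / 8.0        # 1 = flat, 0 = edge
    strong = gaussian_filter(img, sigma=(1.5, 1.5, 0))  # heavy smoothing
    mild = uniform_filter(img, size=(3, 3, 1))          # gentle smoothing
    w = flatness[..., None]
    return w * strong + (1.0 - w) * mild
```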

    Contrast enhancement and exposure correction using a structure-aware distribution fitting

    Contrast enhancement and exposure correction are useful in domestic and technical applications, the latter as a preprocessing step for other techniques or as an aid to human observation. Often, a locally adaptive transformation is more suitable for the task than a global one. For example, objects and regions may have very different levels of illumination, physical phenomena may compromise the contrast in some regions but not in others, or high visibility of details may be desired in all parts of the image. For such cases, local image enhancement methods are preferable. Although many contrast enhancement and exposure correction methods are available in the literature, there is no definitive solution that provides a satisfactory result in all situations, and new methods emerge each year. In particular, traditional methods based on adaptive histogram equalization suffer from checkerboard and staircase effects and from over-enhancement. This dissertation proposes a method for contrast enhancement and exposure correction in images named Structure-Aware Distribution Stretching (SADS). The method fits a parametric probability distribution model to the image regionally while respecting the image structure and the edges between regions. This is done using regional versions of the classical expressions for estimating the parameters of the distribution, obtained by replacing the sample mean in the original expressions with an edge-preserving smoothing filter. After fitting the distribution, the cumulative distribution function (CDF) of the fitted model and the inverse CDF of the desired distribution are applied. A structure-aware heuristic that detects smooth regions is proposed and used to attenuate the transformations in flat regions. SADS was compared with other methods from the literature using objective no-reference and full-reference image quality assessment (IQA) metrics on the tasks of simultaneous contrast enhancement and exposure correction and of defogging/dehazing. The experiments indicate a superior overall performance of SADS with respect to the compared methods on the image sets used, according to the IQA metrics adopted.
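    The SADS recipe above (regional parameter estimation via a smoothing filter, then CDF matching) can be sketched as follows, assuming a regional Gaussian model and a uniform target distribution. A plain Gaussian filter stands in for the edge-preserving smoother, and the structure-aware attenuation heuristic is omitted.

```python
# Regional distribution stretching on a grayscale uint8 image: fit a Gaussian
# per pixel from regionally smoothed moments, then map each pixel through the
# fitted CDF (the inverse CDF of a uniform target is the identity).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import erf

def sads_like_stretch(img, sigma_s=15.0, eps=1e-6):
    x = img.astype(np.float64) / 255.0
    mu = gaussian_filter(x, sigma_s)                  # regional mean
    var = gaussian_filter(x * x, sigma_s) - mu * mu   # regional variance
    std = np.sqrt(np.maximum(var, eps))
    # CDF of the regionally fitted normal, evaluated at each pixel:
    u = 0.5 * (1.0 + erf((x - mu) / (std * np.sqrt(2.0))))
    return (np.clip(u, 0.0, 1.0) * 255.0).astype(np.uint8)
```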

    General Adaptive Neighborhood Image Processing. Part II: Practical Applications Issues

    The so-called General Adaptive Neighborhood Image Processing (GANIP) approach is presented in a two-part paper dealing respectively with its theoretical and practical aspects. The General Adaptive Neighborhood (GAN) paradigm, theoretically introduced in Part I [20], allows the building of new image processing transformations using context-dependent analysis. With the help of a specified analyzing criterion, such transformations perform a more significant spatial analysis, intrinsically taking into account the local radiometric, morphological, or geometrical characteristics of the image. Moreover, they are consistent with the physical and/or physiological settings of the image to be processed, using general linear image processing frameworks. In this paper, the GANIP approach is studied more particularly in the context of Mathematical Morphology (MM). The structuring elements required for MM are substituted by GAN-based structuring elements that fit the local contextual details of the studied image. The resulting morphological operators perform truly spatially-adaptive image processing and notably, in several important practical cases, are connected, which is a great advantage over the usual operators, which lack this property. Several GANIP-based results are exposed and discussed for image filtering, image segmentation, and image enhancement. In order to evaluate the proposed approach, a comparative study between the adaptive and usual morphological operators is proposed as far as possible. Moreover, the benefits of working with the Logarithmic Image Processing framework and with the 'contrast' criterion are shown through practical application examples.
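    As a rough illustration of the GAN idea described above, the sketch below replaces the fixed structuring element of a morphological dilation with, at each pixel, the set of window neighbors within a homogeneity tolerance of the center (the analyzing criterion). True GANs are connected sets, so this window-limited version is only an approximation; the tolerance `m` and radius are illustrative assumptions.

```python
# Adaptive-neighborhood dilation: the structuring element adapts to local
# image content instead of being a fixed disk. Slow reference loop, grayscale.
import numpy as np

def adaptive_dilation(img, m=20, radius=2):
    img = img.astype(np.int32)
    H, W = img.shape
    out = img.copy()
    padded = np.pad(img, radius, mode="edge")
    for y in range(H):
        for x in range(W):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gan = window[np.abs(window - img[y, x]) <= m]  # adaptive SE
            out[y, x] = gan.max()                          # dilate over GAN
    return out.astype(np.uint8)
```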