
    Assessment of sparse-based inpainting for retinal vessel removal

    Some important eye diseases, like macular degeneration or diabetic retinopathy, can induce changes visible on the retina, for example as lesions. Segmentation of lesions or extraction of textural features from fundus images are possible steps towards automatic detection of such diseases, which could facilitate screening as well as provide support for clinicians. For the task of detecting significant features, retinal blood vessels are considered interference on retinal images. If these blood vessel structures could be suppressed, it might lead to more accurate segmentation of retinal lesions as well as better extraction of textural features for pathology detection. This work proposes the use of sparse representations and dictionary learning techniques for retinal vessel inpainting. The performance of the algorithm is tested on greyscale and RGB images from the DRIVE and STARE public databases, employing different neighbourhoods and sparseness factors. Moreover, a comparison with the most common inpainting family, diffusion-based methods, is carried out. For this purpose, two different ways of assessing inpainting quality are presented and used to evaluate the results of non-artificial inpainting, i.e. where a reference image does not exist. The results suggest that sparse-based inpainting performs very well for retinal blood vessel removal, which will be useful for the future detection and classification of eye diseases. (C) 2017 Elsevier B.V. All rights reserved.
    This work was supported by the NILS Science and Sustainability Programme (014-ABEL-IM-2013) and by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA (TIN2013-46751-R). The work of Adrian Colomer has been supported by the Spanish Government under FPI Grant BES-2014-067889.
    Colomer, A.; Naranjo Ornedo, V.; Engan, K.; Skretting, K. (2017). Assessment of sparse-based inpainting for retinal vessel removal. Signal Processing: Image Communication, 59:73-82. https://doi.org/10.1016/j.image.2017.03.018
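
    To make the sparse-coding step concrete, below is a minimal illustrative sketch of dictionary-based patch inpainting: the sparse code of a patch is estimated from its observed (non-vessel) pixels with a greedy pursuit, and the dictionary reconstruction fills in the masked vessel pixels. The dictionary `D`, the simple OMP variant, and all function names are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def masked_omp(D, x, observed, sparsity):
    """Greedy sparse coding (OMP-style) using only the observed entries of x.

    D: (n_pixels, n_atoms) dictionary; x: flattened patch;
    observed: boolean mask of known (non-vessel) pixels.
    """
    Dm = D[observed]                                  # dictionary rows at known pixels
    residual = x[observed].astype(float)
    support, coef = [], np.zeros(0)
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(Dm.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(Dm[:, support], x[observed].astype(float), rcond=None)
        residual = x[observed] - Dm[:, support] @ coef
    z = np.zeros(D.shape[1])
    z[support] = coef
    return z

def inpaint_patch(D, patch, vessel_mask, sparsity=4):
    """Replace vessel pixels of a flattened patch with the sparse estimate."""
    z = masked_omp(D, patch, ~vessel_mask, sparsity)
    filled = patch.astype(float).copy()
    filled[vessel_mask] = (D @ z)[vessel_mask]        # overwrite vessels only
    return filled
```

    In practice the dictionary would typically be learned from vessel-free patches of the same image or database, and overlapping patch estimates averaged back into the fundus image.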

    DIGITAL INPAINTING ALGORITHMS AND EVALUATION

    Digital inpainting is the technique of filling in missing regions of an image or video using information from the surrounding area. It has found widespread use in applications such as restoration, error recovery, multimedia editing, and video privacy protection. This dissertation addresses three significant challenges associated with existing and emerging inpainting algorithms and applications. The three key areas of impact are: 1) structure completion for image inpainting algorithms, 2) a fast and efficient object-based video inpainting framework, and 3) perceptual evaluation of large area image inpainting algorithms. A common approach of existing image inpainting algorithms is a two-stage process: a structure completion step, which completes the boundaries of regions in the hole area, followed by texture completion using advanced texture synthesis methods. While the texture synthesis stage is important, it can be argued that structure completion is a vital component in improving perceptual image inpainting quality. To this end, we introduce a global structure completion algorithm for missing boundaries that uses symmetry as the key feature. While existing methods for symmetry completion require a priori information, our method takes a non-parametric approach by utilizing the invariant nature of curvature to complete missing boundaries. Turning our attention from image to video inpainting, we readily observe that existing video inpainting techniques have evolved as extensions of image inpainting techniques. As a result, they suffer from various shortcomings including, among others, an inability to handle large missing spatio-temporal regions, execution times far too slow for interactive use, and temporal and spatial artifacts. To address these major challenges, we propose a fundamentally different method based on an object-based framework for improving the performance of video inpainting algorithms. We introduce a modular inpainting scheme in which we first segment the video into constituent objects using acquired background models, and then inpaint static background regions and dynamic foreground regions. For static background regions, we use simple background replacement and occasional image inpainting. To inpaint dynamic moving foreground regions, we introduce a novel sliding-window based dissimilarity measure in a dynamic programming framework; a sketch of this idea follows this abstract. This technique can effectively inpaint large occluded regions, handle objects that are completely missing for several frames or that change in size and pose, and introduces minimal blurring and motion artifacts. Finally, we direct our focus to experimental studies on the perceptual quality evaluation of large area image inpainting algorithms. Judging the perceptual quality of large area inpainting is inherently subjective, yet no previous research had taken into account the subjective nature of the Human Visual System (HVS). We performed subjective experiments using an eye-tracking device with 24 subjects to analyze the effect of inpainting on human gaze. We show experimentally that the presence of inpainting artifacts directly impacts the gaze of an unbiased observer, and that this in turn has a direct bearing on the observer's subjective rating. Specifically, we show that the gaze energy in the hole regions of an inpainted image shows marked deviations from normal behavior when the inpainting artifacts are readily apparent.
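
    As a rough illustration of a sliding-window dissimilarity measure inside a dynamic programming framework (a generic Viterbi-style sketch, not the dissertation's exact formulation; the SSD measure and all names are assumptions), one can select a sequence of candidate object windows that bridges an occlusion gap while staying coherent with the visible frames on either side:

```python
import numpy as np

def window_dissimilarity(a, b):
    """Sum of squared differences between two equally sized object windows."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def fill_gap(candidates, prev_win, next_win, gap_len, smooth=1.0):
    """Choose gap_len candidate windows bridging an occlusion.

    Unary terms tie the first/last picks to the windows bordering the gap;
    pairwise terms penalize dissimilar consecutive picks (a real system
    would also discourage freezing on one window and model motion).
    """
    n = len(candidates)
    pair = np.array([[window_dissimilarity(candidates[i], candidates[j])
                      for j in range(n)] for i in range(n)])
    cost = np.array([window_dissimilarity(c, prev_win) for c in candidates])
    back = np.zeros((gap_len, n), dtype=int)
    for t in range(1, gap_len):
        step = cost[:, None] + smooth * pair          # step[i, j]: pick i then j
        back[t] = np.argmin(step, axis=0)
        cost = step[back[t], np.arange(n)]
    cost = cost + np.array([window_dissimilarity(c, next_win) for c in candidates])
    path = [int(np.argmin(cost))]
    for t in range(gap_len - 1, 0, -1):               # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return [candidates[i] for i in reversed(path)]
```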

    Example based texture synthesis and quantification of texture quality

    Textures have been used effectively to create realistic environments for virtual worlds by reproducing surface appearances. One widely used method for creating textures is example-based texture synthesis, in which an input image from the real world serves as the basis for generating textures of arbitrary size. Various methods based on the underlying pattern of the image have been used to create such textures; however, finding an algorithm that provides good output is still an open research issue. Moreover, determining the best of the outputs produced by existing methods is a subjective process and requires human intervention; no quantitative measure exists for a relative comparison between outputs. This dissertation addresses both problems using a novel approach, and also proposes an improved image inpainting algorithm that yields better results than existing methods. Firstly, this dissertation presents a methodology that uses an HSI (hue, saturation, intensity) color model in conjunction with a hybrid approach to improve the quality of the synthesized texture. Unlike the RGB (red, green, blue) color model, the HSI color model is more intuitive and closer to human perception: hue, saturation and intensity are better indicators than the three RGB color channels because they more closely represent the way the eye sees color in the real world. Secondly, this dissertation addresses the issue of quantifying the quality of textures generated by the various synthesis methods. A novel two-step method using statistical measures and a color autocorrelogram is proposed: in the first step, energy, entropy and similar statistical measures help determine the consistency of the output texture; in the second step, a color autocorrelogram is used to analyze and quantify color images (a sketch follows this abstract). Finally, this dissertation presents a method for improving image inpainting. In inpainting, small sections of an image missing due to noise or similar causes can be reproduced using example-based texture synthesis, with the region immediately surrounding the missing section treated as the sample input. Inpainting can also be used to alter images by removing large sections and filling them with image data from the rest of the image. For this, a maximum edge detector method is proposed to determine the correct order of section filling, which produces significantly better results.
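
    A color autocorrelogram records, for each quantized color c and distance d, how often a pixel at distance d from a c-colored pixel has the same color. Below is a minimal sketch (restricted to horizontal and vertical offsets, with a crude normalization that ignores image borders; all names are illustrative):

```python
import numpy as np

def color_autocorrelogram(img, n_bins=8, distances=(1, 3, 5, 7)):
    """P(same quantized color at offset d | color c), approximately.

    img: (H, W, 3) uint8 RGB image. Returns an (n_bins**3, len(distances))
    array; rows are quantized colors, columns are distances.
    """
    q = img.astype(int) * n_bins // 256               # per-channel quantization
    idx = (q[..., 0] * n_bins + q[..., 1]) * n_bins + q[..., 2]
    n_colors = n_bins ** 3
    hist = np.bincount(idx.ravel(), minlength=n_colors).astype(float)
    gram = np.zeros((n_colors, len(distances)))
    for di, d in enumerate(distances):
        matches = np.zeros(n_colors)
        # right and down neighbours at distance d (full rings omitted)
        for a, b in ((idx[:, :-d], idx[:, d:]), (idx[:-d, :], idx[d:, :])):
            same = a == b
            matches += np.bincount(a[same].ravel(), minlength=n_colors)
        gram[:, di] = matches / (2.0 * np.maximum(hist, 1))
    return gram
```

    Comparing the autocorrelograms of the input sample and a synthesized output (e.g., by L1 distance) could then give a relative quality score without human intervention.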

    Livrable D5.2 of the PERSEE project : 2D/3D Codec architecture

    Deliverable D5.2 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D5.2 of the project. Its title: 2D/3D Codec architecture.

    Video Outpainting using Conditional Generative Adversarial Networks

    Recent advancements in machine learning and neural networks have pushed the boundaries of what computers can achieve. Generative adversarial networks are a specific type of neural network that has proved wildly successful at content generation tasks. With this success, filling in missing sections of images or videos became a research topic of interest. Research in video inpainting, which fills missing content in the center of a frame, has made steady progress over the years, while research on video outpainting, which fills missing sections at the edges of the frame, has not. This thesis focuses on outpainting by using conditional generative adversarial networks (cGANs), which apply a condition, such as an input image, to a generative adversarial network (GAN), in order to reformat traditional 4:3 video into the modern 16:9 format. This is accomplished by taking a cGAN typically used for image-to-image translation and adapting it to generate the missing content of video frames. Although the generated frames cannot accurately reconstruct the missing content, the process produces context-aware video that often blends seamlessly with the original frame. The results of this research provide a glimpse of the possibility of using conditional generative adversarial networks for video outpainting.
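
    The 4:3-to-16:9 task can be illustrated by the preprocessing step alone: each frame is centered in a 16:9 canvas, and the empty side strips become the regions the conditional generator must fill, pix2pix-style. The sketch below covers only this data preparation (the resolution, the nearest-neighbour resize, and all names are assumptions, not the thesis pipeline):

```python
import numpy as np

def make_cgan_input(frame, out_h=720, out_w=1280):
    """Center a 4:3 frame in a 16:9 canvas and mark the strips to generate.

    Returns (canvas, mask); mask is True where the cGAN must synthesize.
    """
    h, w = frame.shape[:2]
    scale = out_h / h
    new_w = int(round(w * scale))                 # 4:3 content at target height
    # nearest-neighbour resize keeps the sketch dependency-free
    ys = (np.arange(out_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = frame[ys][:, xs]
    canvas = np.zeros((out_h, out_w, 3), dtype=frame.dtype)
    mask = np.ones((out_h, out_w), dtype=bool)
    left = (out_w - new_w) // 2
    canvas[:, left:left + new_w] = resized
    mask[:, left:left + new_w] = False            # original content is kept
    return canvas, mask
```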

    Design and implementation of efficient diminished reality mechanisms

    A description and study of the diminished reality system developed in this project. The thesis report is written in English.

    Image inpainting by global structure and texture propagation.

    Huang, Ting. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (p. 37-41). Abstracts in English and Chinese.
    Contents: Chapter 1, Introduction (1.1 Related Area; 1.2 Previous Work; 1.3 Proposed Framework; 1.4 Overview); Chapter 2, Markov Random Fields and Optimization Schemes (2.1 MRF Model; 2.1.1 MAP Understanding; 2.2 Belief Propagation Optimization Scheme; 2.2.1 Max-Product BP on MRFs; 2.2.2 Sum-Product BP on MRFs); Chapter 3, Our Formulation (3.1 An MRF Model; 3.2 Coarse-to-Fine Optimization by BP; 3.2.1 Coarse-Level Belief Propagation; 3.2.2 Fine-Level Belief Propagation; 3.2.3 Performance Enhancement); Chapter 4, Experiments (4.1 Comparison; 4.2 Failure Case); Chapter 5, Conclusion; Bibliography.
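
    For readers unfamiliar with the machinery in Chapter 2, the following is a generic loopy min-sum belief propagation sketch (the negative-log form of max-product) on a 4-connected grid MRF. The thesis's coarse-to-fine, patch-label formulation is more elaborate; everything here is illustrative:

```python
import numpy as np

def min_sum_bp(unary, pairwise, n_iters=10):
    """Loopy min-sum BP on a 4-connected grid MRF.

    unary: (H, W, L) data costs; pairwise: (L, L) smoothness costs.
    Returns the (H, W) labeling minimizing unary plus incoming messages.
    """
    H, W, L = unary.shape
    # msgs[d, y, x]: message node (y, x) receives from direction d
    # d: 0 = from left, 1 = from right, 2 = from above, 3 = from below
    msgs = np.zeros((4, H, W, L))
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # sender -> receiver offset
    opposite = [1, 0, 3, 2]
    for _ in range(n_iters):
        belief = unary + msgs.sum(axis=0)
        for d, (dy, dx) in enumerate(shifts):
            # sender's belief minus what it heard back from the receiver
            h = belief - msgs[opposite[d]]
            m = (h[..., :, None] + pairwise[None, None]).min(axis=2)
            m -= m.min(axis=-1, keepdims=True)    # normalize for stability
            out = np.zeros_like(m)
            if dx == 1:  out[:, 1:] = m[:, :-1]
            if dx == -1: out[:, :-1] = m[:, 1:]
            if dy == 1:  out[1:, :] = m[:-1, :]
            if dy == -1: out[:-1, :] = m[1:, :]
            msgs[d] = out
    belief = unary + msgs.sum(axis=0)
    return belief.argmin(axis=-1)
```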

    Blind Face Restoration for Under-Display Camera via Dictionary Guided Transformer

    By hiding the front-facing camera below the display panel, an Under-Display Camera (UDC) provides users with a full-screen experience. However, due to the characteristics of the display, images taken by a UDC suffer from significant quality degradation. Methods have been proposed to tackle UDC image restoration, and advances have been achieved; however, there are still no specialized methods and datasets for restoring UDC face images, which may be the most common case in the UDC scenario. To this end, considering the color filtering, brightness attenuation, and diffraction in the UDC imaging process, we propose a two-stage UDC Degradation Model Network, named UDC-DMNet, that synthesizes UDC images by modeling the UDC imaging process. We then use UDC-DMNet together with high-quality face images from FFHQ and CelebA-Test to create the UDC face training datasets FFHQ-P/T and testing datasets CelebA-Test-P/T for UDC face restoration. We further propose a novel dictionary-guided transformer network named DGFormer. Introducing a facial component dictionary and the characteristics of UDC images into the restoration makes DGFormer capable of addressing blind face restoration in UDC scenarios. Experiments show that DGFormer and UDC-DMNet achieve state-of-the-art performance.
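
    The imaging model can be pictured as a short pipeline: per-channel color filtering, global brightness attenuation, then diffraction as a point-spread-function convolution. The sketch below is a toy stand-in for the learned UDC-DMNet; every constant and name is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import convolve

def udc_degrade(img, transmit=(0.94, 0.96, 0.90), atten=0.65, psf=None):
    """Toy UDC degradation: color filtering, attenuation, diffraction blur.

    img: (H, W, 3) uint8. transmit/atten/psf are illustrative placeholders,
    not values from the paper.
    """
    x = img.astype(float) / 255.0
    x = x * np.asarray(transmit)                  # panel color filtering
    x = x * atten                                 # brightness attenuation
    if psf is None:                               # stand-in diffraction kernel
        g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
        psf = np.outer(g, g)
        psf /= psf.sum()
    out = np.stack([convolve(x[..., c], psf, mode="nearest")
                    for c in range(3)], axis=-1)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

    Pairing such synthesized degraded images with their clean FFHQ/CelebA sources is what yields the paired training and testing sets described above.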