595 research outputs found

    Perceptual Color Image Smoothing via a New Region-Based PDE Scheme

    Get PDF
    In this paper, we present a new color image regularization method using a rotating smoothing filter. This approach combines a pixel classification method, which roughly determines whether a pixel belongs to a homogeneous region or an edge, with an anisotropic perceptual edge detector capable of computing two precise diffusion directions. Using a now classical formulation, image regularization is treated here as a variational model in which successive iterations of the associated PDE (Partial Differential Equation) are equivalent to a diffusion process. Our model uses two kinds of diffusion: isotropic and anisotropic. Anisotropic diffusion is accurately controlled near edges and corners, while isotropic diffusion is applied to smooth regions that are either homogeneous or corrupted by noise. A comparison of our approach with other regularization methods applied to real images demonstrates that our model efficiently restores images and controls diffusion while preserving edges and corners well.
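
    Below is a minimal sketch, in Python/NumPy, of the kind of hybrid diffusion step the abstract describes: an explicit update that diffuses isotropically in pixels classified as homogeneous and uses an edge-stopping (Perona-Malik style) conductance elsewhere. The rotating smoothing filter, the perceptual edge detector, and the parameters kappa and tau below are not from the paper; they are illustrative stand-ins.

        # One explicit diffusion step blending isotropic smoothing in
        # (near-)homogeneous pixels with edge-stopping diffusion elsewhere.
        # Periodic boundaries via np.roll are used for brevity.
        import numpy as np

        def diffuse_step(img, dt=0.2, kappa=10.0, tau=5.0):
            """One step on a grayscale float image (apply per channel for color)."""
            dN = np.roll(img, -1, axis=0) - img   # differences toward the
            dS = np.roll(img,  1, axis=0) - img   # four neighbours
            dE = np.roll(img, -1, axis=1) - img
            dW = np.roll(img,  1, axis=1) - img

            g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping conductance
            gN, gS, gE, gW = g(np.abs(dN)), g(np.abs(dS)), g(np.abs(dE)), g(np.abs(dW))

            # Crude pixel classification: low-gradient pixels are treated as
            # homogeneous and diffused isotropically (conductance 1 everywhere).
            flat = np.sqrt(dN ** 2 + dE ** 2) < tau
            gN, gS, gE, gW = (np.where(flat, 1.0, gi) for gi in (gN, gS, gE, gW))

            return img + dt * (gN * dN + gS * dS + gE * dE + gW * dW)

    In practice a few tens of such iterations are run, with dt kept at or below 0.25 for stability of the explicit scheme.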

    Image Restoration using Automatic Damaged Regions Detection and Machine Learning-Based Inpainting Technique

    Get PDF
    In this dissertation we propose two novel image restoration schemes. The first pertains to automatic detection of damaged regions in old photographs and digital images of cracked paintings. In cases where inpainting mask generation cannot be completely automatic, our detection algorithm facilitates precise mask creation, which is particularly useful for images containing damage that is tedious to annotate or difficult to define geometrically. The main contribution of this dissertation is the development and use of a new inpainting technique, region hiding, which repairs a single image by training a convolutional neural network on various transformations of that image. Region hiding is also effective in object removal tasks. Lastly, we present a segmentation system for distinguishing glands, stroma, and cells in slide images, together with current results, as one component of an ongoing project to aid in colon cancer prognostication.
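
    The "region hiding" idea lends itself to a short sketch. The following hypothetical PyTorch snippet trains a toy convolutional network on a single image by repeatedly hiding random patches and penalizing reconstruction error on the hidden pixels; the architecture, loss, patch size, and training budget are illustrative assumptions, not the dissertation's actual model.

        # Hypothetical single-image "region hiding" training loop.
        import torch
        import torch.nn as nn

        def hide_random_region(img, size=32):
            """Zero out a random square patch; return the masked image and the mask."""
            _, _, h, w = img.shape
            y = torch.randint(0, h - size, (1,)).item()
            x = torch.randint(0, w - size, (1,)).item()
            mask = torch.ones_like(img)
            mask[..., y:y + size, x:x + size] = 0.0
            return img * mask, mask

        net = nn.Sequential(                          # toy encoder/decoder
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        image = torch.rand(1, 3, 128, 128)            # stand-in for the damaged photo
        for step in range(500):                       # transformations (flips, crops, etc.)
            masked, mask = hide_random_region(image)  # of the image would be added here
            pred = net(masked)
            loss = ((pred - image) * (1 - mask)).abs().mean()  # error on hidden pixels only
            opt.zero_grad()
            loss.backward()
            opt.step()

        # At inference time, the true damage mask is applied to the image and the
        # network's prediction is copied into the damaged pixels.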

    DIGITAL INPAINTING ALGORITHMS AND EVALUATION

    Get PDF
    Digital inpainting is the technique of filling in the missing regions of an image or a video using information from the surrounding area. This technique has found widespread use in applications such as restoration, error recovery, multimedia editing, and video privacy protection. This dissertation addresses three significant challenges associated with existing and emerging inpainting algorithms and applications. The three key areas of impact are 1) structure completion for image inpainting algorithms, 2) a fast and efficient object-based video inpainting framework, and 3) perceptual evaluation of large-area image inpainting algorithms. One of the main approaches of existing image inpainting algorithms for completing missing information is to follow a two-stage process: a structure completion step, which completes the boundaries of regions in the hole area, followed by a texture completion step using advanced texture synthesis methods. While the texture synthesis stage is important, it can be argued that the structure completion aspect is a vital component in improving perceptual inpainting quality. To this end, we introduce a global structure completion algorithm for completion of missing boundaries using symmetry as the key feature. While existing methods for symmetry completion require a priori information, our method takes a non-parametric approach by utilizing the invariant nature of curvature to complete missing boundaries. Turning our attention from image to video inpainting, we readily observe that existing video inpainting techniques have evolved as an extension of image inpainting techniques. As a result, they suffer from various shortcomings including, among others, an inability to handle large missing spatio-temporal regions, significantly slow execution times that make interactive use impractical, and the presence of temporal and spatial artifacts. To address these major challenges, we propose a fundamentally different method based on an object-based framework for improving the performance of video inpainting algorithms. We introduce a modular inpainting scheme in which we first segment the video into constituent objects using acquired background models, followed by inpainting of static background regions and dynamic foreground regions. For static background regions, we use simple background replacement and occasional image inpainting. To inpaint dynamic moving foreground regions, we introduce a novel sliding-window-based dissimilarity measure in a dynamic programming framework. This technique can effectively inpaint large regions of occlusion and objects that are completely missing for several frames or that change in size and pose, with minimal blurring and motion artifacts. Finally, we direct our focus to experimental studies of the perceptual quality evaluation of large-area image inpainting algorithms. The perceptual quality of large-area inpainting is inherently subjective, and yet no previous research has taken into account the subjective nature of the Human Visual System (HVS). We perform subjective experiments using an eye-tracking device with 24 subjects to analyze the effect of inpainting on human gaze. We show experimentally that the presence of inpainting artifacts directly impacts the gaze of an unbiased observer, which in turn has a direct bearing on the observer's subjective rating. Specifically, we show that the gaze energy in the hole regions of an inpainted image shows marked deviations from normal behavior when the inpainting artifacts are readily apparent.
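
    One plausible form of the gaze-energy measure discussed above is sketched below in Python: duration-weighted fixation energy is accumulated from eye-tracker data, and the share falling inside the inpainting hole mask is reported. The exact definition used in the dissertation is not reproduced; this formulation is an assumption.

        # Share of duration-weighted gaze energy that falls inside the hole mask.
        import numpy as np

        def gaze_energy_in_hole(fixations, hole_mask):
            """fixations: iterable of (x, y, duration); hole_mask: 2-D bool array."""
            h, w = hole_mask.shape
            energy = np.zeros((h, w), dtype=float)
            for x, y, dur in fixations:
                xi, yi = int(round(x)), int(round(y))
                if 0 <= yi < h and 0 <= xi < w:
                    energy[yi, xi] += dur          # accumulate fixation duration
            total = energy.sum()
            return energy[hole_mask].sum() / total if total > 0 else 0.0

    A markedly higher in-hole share for an inpainted image than for a reference image would indicate that the inpainting artifacts attract the observer's gaze.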

    Real-Time Anisotropic Diffusion using Space-Variant Vision

    Full text link
    Many computer and robot vision applications require multi-scale image analysis. Classically, this has been accomplished through the use of a linear scale-space, which is constructed by convolution of the visual input with Gaussian kernels of varying size (scale). This has been shown to be equivalent to the solution of a linear diffusion equation on an infinite domain, as the Gaussian is the Green's function of such a system (Koenderink, 1984). Recently, much work has focused on the use of a variable conductance function, resulting in anisotropic diffusion described by a nonlinear partial differential equation (PDE). The use of anisotropic diffusion with a conductance coefficient that is a decreasing function of the gradient magnitude has been shown to enhance edges while decreasing some types of noise (Perona and Malik, 1987). Unfortunately, the solution of the anisotropic diffusion equation requires the numerical integration of a nonlinear PDE, which is a costly process when carried out on a fixed mesh such as a typical image. In this paper we show that the complex log transformation, variants of which are universally used in mammalian retino-cortical systems, allows the nonlinear diffusion equation to be integrated at exponentially enhanced rates due to the non-uniform mesh spacing inherent in the log domain. The enhanced integration rates, coupled with the intrinsic compression of the complex log transformation, yield a speed increase of between two and three orders of magnitude, providing a means of performing real-time image enhancement using anisotropic diffusion. Office of Naval Research (N00014-95-I-0409)
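
    A minimal sketch of the space-variant idea, in Python/NumPy, is given below: the image is resampled onto a complex-log (log-polar) grid, on which the periphery is represented coarsely, so that the nonlinear diffusion runs on far fewer samples before being mapped back. The grid sizes and the foveal radius r_min are illustrative assumptions.

        # Nearest-neighbour resampling of a grayscale image onto a log-polar grid.
        import numpy as np

        def to_log_polar(img, n_rho=64, n_theta=128, r_min=2.0):
            h, w = img.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            r_max = min(cy, cx)
            rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rho))  # log-spaced radii
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            rr, tt = np.meshgrid(rho, theta, indexing="ij")
            ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
            xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
            return img[ys, xs]                     # (n_rho, n_theta) "cortical" image

    The diffusion update itself (e.g. an explicit Perona-Malik step) is unchanged; it simply operates on the much smaller log-polar array, which is where the reported speed-up originates.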

    Sub-Riemannian geometry and its applications to Image Processing

    Get PDF
    Master's Thesis in Mathematics (MAT399, MAMN-MA)