2,282 research outputs found

    Edge-enhancing image smoothing.

    Xu, Yi. Thesis (M.Phil.), Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 62-69). Abstracts in English and Chinese.
    Contents:
    Chapter 1  Introduction (p.1)
        1.1  Organization (p.4)
    Chapter 2  Background and Motivation (p.7)
        2.1  1D Mondrian Smoothing (p.9)
        2.2  2D Formulation (p.13)
    Chapter 3  Solver (p.16)
        3.1  More Analysis (p.20)
    Chapter 4  Edge Extraction (p.26)
        4.1  Related Work (p.26)
        4.2  Method and Results (p.28)
        4.3  Summary (p.32)
    Chapter 5  Image Abstraction and Pencil Sketching (p.35)
        5.1  Related Work (p.35)
        5.2  Method and Results (p.36)
        5.3  Summary (p.40)
    Chapter 6  Clip-Art Compression Artifact Removal (p.41)
        6.1  Related Work (p.41)
        6.2  Method and Results (p.43)
        6.3  Summary (p.46)
    Chapter 7  Layer-Based Contrast Manipulation (p.49)
        7.1  Related Work (p.49)
        7.2  Method and Results (p.50)
            7.2.1  Edge Adjustment (p.51)
            7.2.2  Detail Magnification (p.54)
            7.2.3  Tone Mapping (p.55)
        7.3  Summary (p.56)
    Chapter 8  Conclusion and Discussion (p.59)
    Bibliography (p.62)

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design for image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blur is a common drawback in imaging applications that suffer from optical imperfections. Although numerous deconvolution methods estimate the blur blindly, in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition, before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceptual quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple image denoising from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics under two models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments validate the efficiency of the proposed method on 2054 naturally blurred images across six imaging applications, against seven state-of-the-art deconvolution methods.

    Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
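    As a rough illustration of the idea (not the authors' released code), the sketch below builds such a kernel for a Gaussian PSF of known scale: the inverse filter exp(sigma^2 w^2 / 2) is truncated to its first three series terms, and each power of w^2 is realized as a discrete even-derivative FIR stencil, so deblurring reduces to a single convolution. The 5x5 support and series order are assumptions.

    ```python
    # A minimal sketch, assuming a Gaussian PSF with known scale sigma.
    # 1/G(w) = exp(sigma^2 w^2 / 2) ~ 1 + (sigma^2/2) w^2 + (sigma^4/8) w^4,
    # and w^2 corresponds to -Laplacian in the spatial domain, so the
    # deconvolution kernel is a linear combination of even-derivative filters.
    import numpy as np
    from scipy.ndimage import convolve

    def even_derivative_deblur_kernel(sigma):
        delta = np.zeros((5, 5)); delta[2, 2] = 1.0
        lap3 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
        lap = np.zeros((5, 5)); lap[1:4, 1:4] = lap3       # 2nd-derivative stencil
        lap2 = convolve(lap, lap3, mode='constant')        # 4th derivative (biharmonic)
        return delta - (sigma**2 / 2) * lap + (sigma**4 / 8) * lap2

    def deblur(blurry, sigma):
        # One-shot convolution: no iterative deconvolution loop is needed.
        return convolve(blurry.astype(float),
                        even_derivative_deblur_kernel(sigma), mode='nearest')
    ```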

    A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior

    Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model of the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
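    For concreteness, a minimal sketch of the pipeline the abstract describes is given below. The linear depth model d = theta0 + theta1*V + theta2*S (brightness and saturation) follows the color attenuation prior; the coefficient values, erosion window, and percentile rule for atmospheric light are illustrative stand-ins for the supervised-learning step in the paper.

    ```python
    # A minimal sketch of dehazing with a color attenuation prior.
    # theta values are placeholders; the paper learns them by supervised
    # regression rather than fixing them by hand.
    import numpy as np
    import cv2

    def dehaze_cap(img_bgr, theta=(0.12, 0.96, -0.78), beta=1.0, t_min=0.1):
        I = img_bgr.astype(np.float32) / 255.0
        hsv = cv2.cvtColor(I, cv2.COLOR_BGR2HSV)            # S, V in [0, 1] for float input
        s, v = hsv[..., 1], hsv[..., 2]
        depth = theta[0] + theta[1] * v + theta[2] * s      # linear scene-depth model
        depth = cv2.erode(depth, np.ones((15, 15), np.uint8))  # local min vs. outliers
        # Atmospheric light: mean color of the 0.1% most distant pixels.
        idx = np.argsort(depth.ravel())[-max(1, depth.size // 1000):]
        A = I.reshape(-1, 3)[idx].mean(axis=0)
        t = np.clip(np.exp(-beta * depth), t_min, 1.0)      # transmission from depth
        J = (I - A) / t[..., None] + A                      # invert the scattering model
        return np.clip(J, 0.0, 1.0)
    ```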

    Near-Infrared Fusion for Photorealistic Image Dehazing

    Scattering of light due to the presence of aerosol particles along the path of radiation causes atmospheric haze in images. This scattering is significantly less severe in longer wavelength bands than in shorter ones, hence the importance of near-infrared (NIR) information for dehazing color images. This paper first presents an adaptive hyperspectral algorithm that analyzes intensity inconsistencies across spectral bands. It then leverages the algorithm's results to preserve the photorealism of the visible color image during dehazing. The color images are dehazed through a hyperspectral fusion of color and NIR images, taking into account any inconsistencies that could affect photorealism. Our dehazing results on real images contain no halo or aliasing artifacts in hazy regions and successfully preserve the color image elsewhere.
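    The paper's adaptive hyperspectral algorithm is not spelled out in the abstract; the toy sketch below only illustrates the fusion principle: blend NIR into the luminance channel in proportion to an estimated haze weight, gated wherever the two bands are inconsistent, so clear regions keep their original appearance. The haze cue and consistency measure here are assumptions, not the paper's method.

    ```python
    # A toy sketch of haze-weighted, consistency-gated visible/NIR fusion.
    # img_bgr: color image; nir: aligned single-channel near-infrared image.
    import numpy as np
    import cv2

    def nir_fuse_dehaze(img_bgr, nir, k=15):
        I = img_bgr.astype(np.float32) / 255.0
        N = nir.astype(np.float32) / 255.0
        ycc = cv2.cvtColor(I, cv2.COLOR_BGR2YCrCb)
        Y = ycc[..., 0]
        # Crude haze weight: haze lifts the local minimum over RGB channels.
        w = cv2.erode(I.min(axis=2), np.ones((k, k), np.uint8))
        # Gate the fusion where NIR and visible detail disagree, to avoid
        # halo or aliasing artifacts in the fused result.
        gate = np.exp(-8.0 * np.abs(cv2.Laplacian(Y, cv2.CV_32F) -
                                    cv2.Laplacian(N, cv2.CV_32F)))
        a = np.clip(w * gate, 0.0, 1.0)
        ycc[..., 0] = (1.0 - a) * Y + a * N     # fuse only hazy, consistent areas
        return cv2.cvtColor(np.clip(ycc, 0.0, 1.0), cv2.COLOR_YCrCb2BGR)
    ```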

    Nonlinear kernel based feature maps for blur-sensitive unsharp masking of JPEG images

    In this paper, a method for estimating the blurred regions of an image is first proposed, resorting to a mixture of linear and nonlinear convolutional kernels. The resulting blur map is then used to enhance images such that the enhancement strength is an inverse function of the amount of measured blur. The blur map can also be used for tasks such as attention-based object classification, low-light image enhancement, and more. A CNN architecture with nonlinear upsampling layers is trained on a standard blur detection benchmark dataset with the help of blur target maps. Further, the same architecture is used to build maps of areas affected by the typical JPEG artifacts, ringing and blockiness. Together, the blur map and the artifact map make it possible to build an activation map for the enhancement of a (possibly JPEG-compressed) image. Extensive experiments on standard test images verify the quality of the maps obtained by the algorithm and their effectiveness in locally controlling the enhancement for superior perceptual quality. Last but not least, the computation time for generating these maps is much lower than that of comparable algorithms.
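    As a sketch of how such a blur map can drive enhancement (the CNN that predicts the map is not reproduced here), the unsharp-masking step below scales its gain by one minus the measured blur, so heavily blurred or artifact-prone regions are sharpened least. The gain and Gaussian scale are assumed values.

    ```python
    # A minimal sketch of blur-sensitive unsharp masking for a single-channel
    # image, assuming blur_map in [0, 1] (1 = fully blurred), e.g. as
    # predicted by a blur-detection CNN.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_sensitive_usm(img, blur_map, gain=1.5, sigma=2.0):
        img = img.astype(np.float32)
        detail = img - gaussian_filter(img, sigma)      # high-pass residual
        local_gain = gain * (1.0 - blur_map)            # inverse function of blur
        return np.clip(img + local_gain * detail, 0.0, 255.0)
    ```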

    DEEP LEARNING FOR IMAGE RESTORATION AND ROBOTIC VISION

    Traditional model-based approaches require the formulation of a mathematical model, and such models often have limited performance. The quality of an image may degrade for a variety of reasons: the scene may be affected by weather conditions such as haze, rain, and snow, or noise may be introduced during image processing or transmission (e.g., artifacts generated during compression). The goal of image restoration is to restore the image to a desirable quality, both subjectively and objectively. Agricultural robotics is gaining interest these days since most agricultural work is lengthy and repetitive. Computer vision is crucial to robots, especially autonomous ones. However, it is challenging to devise a precise mathematical model for the aforementioned problems. Compared with the traditional approach, the learning-based approach has an edge since it does not require a model to describe the problem. Moreover, learning-based approaches now have best-in-class performance on most vision problems, such as image dehazing, super-resolution, and image recognition. In this dissertation, we address image restoration and robotic vision with deep learning. These two problems are highly related from a network architecture perspective: it is essential to select an appropriate network when dealing with different problems. Specifically, we solve the problems of single image dehazing, High Efficiency Video Coding (HEVC) loop filtering and super-resolution, and computer vision for an autonomous robot. Our technical contributions are threefold. First, we propose to reformulate haze as a signal-dependent noise, which allows us to uncover it by learning a structural residual. Based on this novel reformulation, we solve dehazing with a recursive deep residual network and a generative adversarial network, which emphasize objective and perceptual quality, respectively. Second, we replace traditional filters in HEVC with a Convolutional Neural Network (CNN) filter. We show that our CNN filter achieves a 7% BD-rate saving compared with traditional filters such as the bilateral and deblocking filters. We also propose to incorporate a multi-scale CNN super-resolution module into HEVC; such a post-processing module improves visual quality under extremely low bandwidth. Third, a transfer learning technique is implemented to support vision and autonomous decision making for a precision pollination robot. Good experimental results are reported on real-world data.
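    The first contribution, learning a structural residual for dehazing, can be sketched as follows; the layer count and width here are illustrative, not the dissertation's architecture.

    ```python
    # A minimal PyTorch sketch of the residual formulation: treat haze as a
    # signal-dependent degradation and train a CNN to predict the residual,
    # so the restored image is the hazy input minus the learned residual.
    import torch.nn as nn

    class ResidualDehazer(nn.Module):
        def __init__(self, width=64, depth=5):
            super().__init__()
            layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(width, 3, 3, padding=1))
            self.residual = nn.Sequential(*layers)

        def forward(self, hazy):
            return hazy - self.residual(hazy)   # uncover the haze residual
    ```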

    Entropy-Adaptive Filtering

    This publication describes an entropy-adaptive filtering technique that reduces compression artifacts for videos of any given complexity and at any given video-encoding bit rate. Unlike other video filtering designs, entropy-adaptive filtering minimizes the likelihood of compression artifacts by reducing the entropy level of the input video.
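    A minimal sketch of the idea, under assumed design choices (window size, 32-bin histogram, Gaussian smoothing, linear blending): measure local entropy and blend each pixel toward a smoothed version in proportion to that entropy, so the encoder receives a lower-entropy signal in the regions that would otherwise produce artifacts.

    ```python
    # A minimal sketch of entropy-adaptive pre-filtering for an 8-bit frame.
    import numpy as np
    from scipy.ndimage import gaussian_filter, generic_filter

    def _window_entropy(patch):
        hist, _ = np.histogram(patch, bins=32, range=(0, 256))
        p = hist[hist > 0] / patch.size
        return -(p * np.log2(p)).sum()

    def entropy_adaptive_filter(frame, max_sigma=2.0, win=9):
        f = frame.astype(np.float32)
        ent = generic_filter(f, _window_entropy, size=win)  # local entropy map
        w = ent / np.log2(32)                               # normalize to [0, 1]
        smooth = gaussian_filter(f, max_sigma)
        return (1.0 - w) * f + w * smooth   # smooth more where entropy is high
    ```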