Chebyshev and Conjugate Gradient Filters for Graph Image Denoising
In 3D image/video acquisition, different views are often captured with
varying noise levels across the views. In this paper, we propose a graph-based
image enhancement technique that uses a higher quality view to enhance a
degraded view. A depth map is utilized as auxiliary information to match the
perspectives of the two views. Our method performs graph-based filtering of the
noisy image by directly computing a projection of the image to be filtered onto
a lower dimensional Krylov subspace of the graph Laplacian. We discuss two
graph spectral denoising methods: first using Chebyshev polynomials, and second
using iterations of the conjugate gradient algorithm. Our framework generalizes
previously known polynomial graph filters, and we demonstrate through numerical
simulations that our proposed technique produces subjectively cleaner images
with about 1-3 dB improvement in PSNR over existing polynomial graph filters.
Comment: 6 pages, 6 figures, accepted to 2014 IEEE International Conference on
Multimedia and Expo Workshops (ICMEW).
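The first approach the abstract mentions, Chebyshev polynomial filtering, applies a spectral filter h(L) to a graph signal using only matrix-vector products with the Laplacian, avoiding eigendecomposition. A minimal NumPy sketch, not the paper's implementation; the filter h, degree K, and coefficient quadrature are illustrative choices:

```python
import numpy as np

def chebyshev_graph_filter(L, x, h, K=10, lmax=None):
    """Approximate h(L) x with a degree-K Chebyshev polynomial.

    L: (n, n) symmetric graph Laplacian; x: (n,) graph signal;
    h: spectral response defined on [0, lmax]. Sketch only.
    """
    n = L.shape[0]
    if lmax is None:
        lmax = np.linalg.eigvalsh(L)[-1]
    # Chebyshev coefficients of h on [0, lmax] via cosine quadrature.
    m = K + 1
    theta = (np.arange(m) + 0.5) * np.pi / m
    pts = lmax / 2 * (np.cos(theta) + 1)   # map [-1, 1] -> [0, lmax]
    c = 2.0 / m * np.array(
        [np.sum(h(pts) * np.cos(k * theta)) for k in range(m)])
    # Three-term recurrence T_k(Ls) x on the shifted Laplacian.
    Ls = (2.0 / lmax) * L - np.eye(n)
    t_prev, t_cur = x, Ls @ x
    y = 0.5 * c[0] * t_prev + c[1] * t_cur
    for k in range(2, m):
        t_prev, t_cur = t_cur, 2 * Ls @ t_cur - t_prev
        y += c[k] * t_cur
    return y
```

For smooth responses such as a low-pass exponential, a modest degree already matches the exact spectral filter closely.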
Graph Spectral Image Processing
Recent advent of graph signal processing (GSP) has spurred intensive studies
of signals that live naturally on irregular data kernels described by graphs
(e.g., social networks, wireless sensor networks). Though a digital image
contains pixels that reside on a regularly sampled 2D grid, if one can design
an appropriate underlying graph connecting pixels with weights that reflect the
image structure, then one can interpret the image (or image patch) as a signal
on a graph, and apply GSP tools for processing and analysis of the signal in
graph spectral domain. In this article, we overview recent graph spectral
techniques in GSP specifically for image/video processing. The topics covered
include image compression, image restoration, image filtering, and image
segmentation.
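The core step the article describes, interpreting an image patch as a signal on a graph, can be sketched by building a 4-connected pixel graph whose edge weights reflect intensity similarity and projecting the patch onto the Laplacian's eigenvectors (the graph Fourier transform). A toy NumPy sketch; the Gaussian weight and sigma are illustrative choices, not prescribed by the article:

```python
import numpy as np

def patch_graph_laplacian(patch, sigma=0.1):
    """4-connected pixel graph with intensity-similarity weights
    w_ij = exp(-(I_i - I_j)^2 / (2 sigma^2)). Dense toy version."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
                r2, c2 = r + dr, c + dc
                if r2 < h and c2 < w:
                    wt = np.exp(-(patch[r, c] - patch[r2, c2]) ** 2
                                / (2 * sigma ** 2))
                    W[idx(r, c), idx(r2, c2)] = wt
                    W[idx(r2, c2), idx(r, c)] = wt
    return np.diag(W.sum(axis=1)) - W            # combinatorial Laplacian

def graph_fourier(L, signal):
    """Graph Fourier transform: expand the signal in Laplacian eigenvectors."""
    w, U = np.linalg.eigh(L)
    return w, U, U.T @ signal
```

A constant patch, for instance, has all of its energy in the zero-frequency (constant) eigenvector, mirroring the DC component of the classical Fourier transform.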
Signal reconstruction via operator guiding
Signal reconstruction from a sample using an orthogonal projector onto a
guiding subspace is theoretically well justified, but may be difficult to
practically implement. We propose more general guiding operators, which
increase signal components in the guiding subspace relative to those in a
complementary subspace, e.g., iterative low-pass edge-preserving filters for
super-resolution of images. Two examples of super-resolution illustrate our
technology: a no-flash RGB photo guided using a high resolution flash RGB
photo, and a depth image guided using a high resolution RGB photo.
Comment: 5 pages, 8 figures. To appear in Proceedings of SampTA 2017: Sampling
Theory and Applications, 12th International Conference, July 3-7, 2017,
Tallinn, Estonia.
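A guiding operator of the kind described, an iterative low-pass filter whose smoothing weights come from a separate high-resolution guide, can be illustrated in 1D. This is a hypothetical stand-in rather than the paper's operator; the affinity weights, sigma, and iteration count are all assumptions:

```python
import numpy as np

def guided_smooth(signal, guide, iters=10, sigma=0.1):
    """Iterative edge-preserving low-pass filter: neighbor weights are
    derived from the guide, so edges present in the guide survive in the
    filtered signal while noise within flat regions is averaged out."""
    x = signal.astype(float).copy()
    # Affinity between adjacent samples, taken from the guide.
    w = np.exp(-np.diff(guide) ** 2 / (2 * sigma ** 2))
    left = np.concatenate(([0.0], w))    # weight to left neighbor
    right = np.concatenate((w, [0.0]))   # weight to right neighbor
    for _ in range(iters):
        # Wrap-around from np.roll is harmless: boundary weights are zero.
        num = x + left * np.roll(x, 1) + right * np.roll(x, -1)
        den = 1.0 + left + right
        x = num / den
    return x
```

Applied to a noisy step signal with a clean step guide, the noise within each flat segment is suppressed while the step itself stays sharp, which is the qualitative behavior a guiding operator needs for super-resolution.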
Retinex-based Image Denoising / Contrast Enhancement using Gradient Graph Laplacian Regularizer
Images captured in poorly lit conditions are often corrupted by acquisition
noise. Leveraging recent advances in graph-based regularization, we propose a
fast Retinex-based restoration scheme that denoises and contrast-enhances an
image. Specifically, following Retinex theory, we first model each image pixel
as the product of its reflectance and illumination components. We next assume
that the reflectance and illumination components are piecewise constant (PWC)
and continuous piecewise planar (PWP) signals, which can be recovered via graph
Laplacian regularizer (GLR) and gradient graph Laplacian regularizer (GGLR)
respectively. We formulate quadratic objectives regularized by GLR and GGLR,
which are minimized alternately until convergence by efficiently solving the
resulting linear systems via conjugate gradient (CG), with condition numbers
improved by proposed preconditioners. Experimental results show that our
algorithm achieves competitive visual image quality while noticeably reducing
computational complexity.
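The computational core here is minimizing a quadratic objective of the form ||y - x||^2 + mu x^T L x, whose optimality condition is the SPD linear system (I + mu L) x = y, solved with conjugate gradient. A minimal sketch of one such GLR subproblem with unpreconditioned CG; the paper's alternating GGLR step and its preconditioners are omitted:

```python
import numpy as np

def cg_solve(A, b, tol=1e-10, maxiter=200):
    """Plain conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def glr_denoise(y, L, mu=1.0):
    """One GLR subproblem: argmin_x ||y - x||^2 + mu * x^T L x,
    i.e. solve (I + mu L) x = y. Sketch; mu is a placeholder weight."""
    n = y.shape[0]
    return cg_solve(np.eye(n) + mu * L, y)
```

Because I + mu L is well conditioned for moderate mu, CG converges quickly, and the preconditioners the abstract mentions further reduce the iteration count on larger, worse-conditioned systems.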
Deep Bilateral Learning for Real-Time Image Enhancement
Performance is a critical challenge in mobile image processing. Given a
reference imaging pipeline, or even human-adjusted pairs of images, we seek to
reproduce the enhancements and enable real-time evaluation. For this, we
introduce a new neural network architecture inspired by bilateral grid
processing and local affine color transforms. Using pairs of input/output
images, we train a convolutional neural network to predict the coefficients of
a locally-affine model in bilateral space. Our architecture learns to make
local, global, and content-dependent decisions to approximate the desired image
transformation. At runtime, the neural network consumes a low-resolution
version of the input image, produces a set of affine transformations in
bilateral space, upsamples those transformations in an edge-preserving fashion
using a new slicing node, and then applies those upsampled transformations to
the full-resolution image. Our algorithm processes high-resolution images on a
smartphone in milliseconds, provides a real-time viewfinder at 1080p
resolution, and matches the quality of state-of-the-art approximation
techniques on a large class of image operators. Unlike previous work, our model
is trained off-line from data and therefore does not require access to the
original operator at runtime. This allows our model to learn complex,
scene-dependent transformations for which no reference implementation is
available, such as the photographic edits of a human retoucher.
Comment: 12 pages, 14 figures, SIGGRAPH 2017.
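The slicing step described above, looking up per-cell affine coefficients in a bilateral grid by trilinear interpolation over space and intensity, can be sketched for a grayscale image as follows. A toy stand-in: here the grid is given directly rather than predicted by the network, and the (a, b) layout is an assumption for illustration:

```python
import numpy as np

def slice_bilateral_grid(grid, image):
    """For each full-resolution pixel, trilinearly interpolate affine
    coefficients (a, b) from the grid at (y, x, intensity) and apply
    out = a * I + b. Grayscale sketch with intensities in [0, 1]."""
    gh, gw, gd, _ = grid.shape           # each grid cell stores (a, b)
    H, W = image.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            gy = i / (H - 1) * (gh - 1)                      # spatial coords
            gx = j / (W - 1) * (gw - 1)
            gz = np.clip(image[i, j], 0.0, 1.0) * (gd - 1)   # range coord
            y0, x0, z0 = int(gy), int(gx), int(gz)
            y1 = min(y0 + 1, gh - 1)
            x1 = min(x0 + 1, gw - 1)
            z1 = min(z0 + 1, gd - 1)
            fy, fx, fz = gy - y0, gx - x0, gz - z0
            c = np.zeros(2)
            for yy, wy in ((y0, 1 - fy), (y1, fy)):
                for xx, wx in ((x0, 1 - fx), (x1, fx)):
                    for zz, wz in ((z0, 1 - fz), (z1, fz)):
                        c += wy * wx * wz * grid[yy, xx, zz]
            a, b = c
            out[i, j] = a * image[i, j] + b
    return out
```

Because the interpolation is guided by pixel intensity as well as position, nearby pixels on opposite sides of an edge pull coefficients from different grid cells, which is what makes the upsampling edge-preserving.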