A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging
Recently, impressive denoising results have been achieved by Bayesian
approaches which assume Gaussian models for the image patches. This improvement
in performance can be attributed to the use of per-patch models. Unfortunately,
such an approach is particularly unstable for most inverse problems beyond
denoising. In this work, we propose the use of a hyperprior to model image
patches, in order to stabilize the estimation procedure. The proposed
restoration scheme has two main advantages: first, it is adapted to diagonal
degradation matrices, and in particular to missing-data problems (e.g.,
inpainting of missing pixels or zooming); second, it can deal with
signal-dependent noise models, which are particularly well suited to digital
cameras. As such, the
scheme is especially adapted to computational photography. In order to
illustrate this point, we provide an application to high dynamic range imaging
from a single image taken with a modified sensor, which shows the effectiveness
of the proposed scheme.
Comment: Some figures are reduced to comply with arXiv's size constraints.
Full-size images are available as HAL technical report hal-01107519v5. IEEE
Transactions on Computational Imaging, 201
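
To make the per-patch Gaussian step concrete: with a diagonal degradation matrix (here a binary mask selecting observed pixels) and Gaussian noise, the restored patch is a conditional mean available in closed form. The sketch below illustrates only that step, with names and parameter values of our own choosing; it omits the paper's hyperprior and the associated model estimation.

```python
import numpy as np

def restore_patch(y, mask, mu, Sigma, noise_var):
    """Posterior mean of x under x ~ N(mu, Sigma) and y = D x + n, with D a
    diagonal (masking) degradation and n Gaussian with possibly pixel-dependent
    variance noise_var. A stand-in for the per-patch Gaussian step only."""
    obs = mask.astype(bool)
    nv = np.broadcast_to(np.atleast_1d(noise_var), y.shape)[obs]
    S_oo = Sigma[np.ix_(obs, obs)] + np.diag(nv)  # covariance of observed pixels + noise
    S_ao = Sigma[:, obs]                          # cross-covariance (all pixels, observed)
    return mu + S_ao @ np.linalg.solve(S_oo, y[obs] - mu[obs])

# Tiny demo: restore an 8-pixel patch with 3 missing pixels.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
Sigma = A @ A.T + np.eye(8)                 # an arbitrary SPD patch covariance
mu = np.zeros(8)
x = rng.multivariate_normal(mu, Sigma)      # ground-truth patch
mask = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # 0 = missing pixel
y = np.where(mask, x + 0.1 * rng.standard_normal(8), 0.0)
print(restore_patch(y, mask, mu, Sigma, noise_var=0.01))
```

Signal-dependent noise fits in naturally: `noise_var` can be a per-pixel vector rather than a scalar.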
Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation
In this paper, we aim at recovering an unknown signal x0 from noisy
measurements y = Phi*x0 + w, where Phi is an ill-conditioned or singular linear
operator and w accounts for some noise. To regularize such an ill-posed inverse
problem, we impose an analysis sparsity prior. More precisely, the recovery is
cast as a convex optimization program where the objective is the sum of a
quadratic data fidelity term and a regularization term formed of the L1-norm of
the correlations between the sought-after signal and atoms in a given
(generally overcomplete) dictionary. The L1-sparsity analysis prior is weighted
by a regularization parameter lambda > 0. In this paper, we prove that any
minimizer of this problem is a piecewise-affine function of the observations y
and the regularization parameter lambda. As a byproduct, we exploit these
properties to get an objectively guided choice of lambda. In particular, we
develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE)
and show that it is an unbiased and reliable estimator of an appropriately
defined risk. The latter encompasses special cases such as the prediction risk,
the projection risk and the estimation risk. We apply these risk estimators to
the special case of L1-sparsity analysis regularization. We also discuss
implementation issues and propose fast algorithms to solve the L1 analysis
minimization problem and to compute the associated GSURE. We finally illustrate
the applicability of our framework to parameter selection on several imaging
problems.
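
Concretely, the recovery program here is min_x 0.5*||Phi x - y||^2 + lam*||A x||_1, where the rows of A are the dictionary atoms. The sketch below solves it with the Chambolle-Pock primal-dual scheme, a standard solver for analysis priors; it is not the paper's own fast algorithm, and all names are ours.

```python
import numpy as np

def l1_analysis(y, Phi, A, lam, n_iter=500):
    """Chambolle-Pock primal-dual iterations for
        min_x 0.5*||Phi x - y||^2 + lam*||A x||_1,
    where the rows of A are the dictionary atoms (the analysis operator)."""
    n = Phi.shape[1]
    Lnorm = np.linalg.norm(A, 2)         # operator norm of the analysis operator
    tau = sigma = 0.9 / Lnorm            # ensures tau * sigma * ||A||^2 < 1
    M = np.eye(n) + tau * Phi.T @ Phi    # data-term prox solves M x = v + tau*Phi^T y
    x = x_bar = np.zeros(n)
    z = np.zeros(A.shape[0])             # dual variable
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip to [-lam, lam]
        z = np.clip(z + sigma * A @ x_bar, -lam, lam)
        # primal step: prox of the quadratic data-fidelity term
        x_new = np.linalg.solve(M, x - tau * A.T @ z + tau * Phi.T @ y)
        x_bar = 2 * x_new - x            # over-relaxation
        x = x_new
    return x
```

A GSURE-guided choice of the regularization parameter would then amount to evaluating the risk estimate over a grid of lam values and keeping the minimizer.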
Graph Spectral Image Processing
The recent advent of graph signal processing (GSP) has spurred intensive studies
of signals that live naturally on irregular data kernels described by graphs
(e.g., social networks, wireless sensor networks). Though a digital image
contains pixels that reside on a regularly sampled 2D grid, if one can design
an appropriate underlying graph connecting pixels with weights that reflect the
image structure, then one can interpret the image (or image patch) as a signal
on a graph, and apply GSP tools for processing and analysis of the signal in
the graph spectral domain. In this article, we overview recent graph spectral
techniques in GSP specifically for image/video processing. The topics covered
include image compression, image restoration, image filtering, and image
segmentation.
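
A minimal sketch of this pipeline, under choices of our own (4-connected pixel graph, Gaussian intensity weights, an ideal low-pass filter): build a graph on the patch, diagonalize its Laplacian to obtain the graph Fourier basis, and filter in that basis.

```python
import numpy as np

def graph_lowpass(patch, sigma=0.1, cutoff=0.5):
    """Smooth an image patch as a graph signal: 4-connected pixel graph with
    edge weights reflecting intensity similarity, then a hard low-pass in the
    graph spectral (Laplacian eigenvector) basis."""
    h, w = patch.shape
    n = h * w
    f = patch.ravel().astype(float)
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):        # right and down neighbours
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    a, b = i * w + j, ii * w + jj
                    wgt = np.exp(-(f[a] - f[b]) ** 2 / (2 * sigma ** 2))
                    W[a, b] = W[b, a] = wgt
    L = np.diag(W.sum(1)) - W                      # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)                     # graph frequencies / Fourier basis
    gft = U.T @ f                                  # graph Fourier transform of the patch
    gft[lam > cutoff * lam.max()] = 0.0            # drop high graph frequencies
    return (U @ gft).reshape(h, w)

patch = np.tile(np.linspace(0, 1, 8), (8, 1))     # smooth ramp test patch
out = graph_lowpass(patch)
```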
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of L2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
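
As one concrete instance, forward-backward splitting applied to the L1 (sparsity) prior alternates a gradient step on the data fidelity with soft-thresholding, the proximal map of the L1 norm. The sketch below is a minimal illustration under our own naming, not code from the chapter; swapping the prox for that of another low-complexity regularizer (group sparsity, nuclear norm, ...) changes a single line.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(y, Phi, lam, n_iter=300):
    """Forward-backward splitting for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)             # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

# Demo: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100))
x0 = np.zeros(100)
x0[[5, 37, 62]] = [2.0, -1.5, 1.0]               # sparse ground truth
y = Phi @ x0 + 0.01 * rng.standard_normal(30)
x_hat = forward_backward(y, Phi, lam=0.1)
```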
Piecewise smooth reconstruction of normal vector field on digital data
We propose a novel method to regularize a normal vector field defined on a digital surface (the boundary of a set of voxels). When the digital surface is a digitization of a piecewise smooth manifold, our method localizes sharp features (edges) while regularizing the input normal vector field at the same time. It relies on the optimisation of a variant of the Ambrosio-Tortorelli functional, originally defined for denoising and contour extraction in image processing [AT90]. We reformulate this functional for digital surface processing thanks to discrete calculus operators. Experiments show that the output normal field is very robust to digitization artifacts and noise, and also fairly independent of the sampling resolution. The method allows the user to choose independently the amount of smoothing and the length of the set of discontinuities. Sharp and vanishing features are correctly delineated even on extremely damaged data. Finally, our method can be used to considerably enhance the output of state-of-the-art normal field estimators such as the Voronoi Covariance Measure [MOG11] or the Randomized Hough Transform [BM12].
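
To give a flavour of the Ambrosio-Tortorelli machinery in the simplest possible setting, the sketch below alternates the two quadratic subproblems of a 1-D discretization of the functional: u is the regularized signal and v drops towards 0 at discontinuities. This is only a toy analogue of the paper's digital-surface formulation (which uses discrete calculus operators on voxel boundaries), with illustrative parameter values of our own.

```python
import numpy as np

def at_regularize(g, alpha=1.0, lam=0.05, eps=0.05, n_alt=20):
    """Alternating minimization of a 1-D Ambrosio-Tortorelli functional
        E(u, v) = alpha|u-g|^2 + v^2|u'|^2 + lam*(eps|v'|^2 + (1-v)^2/(4 eps)),
    where u is the regularized signal and v ~ 0 marks discontinuities."""
    n = len(g)
    D = np.diff(np.eye(n), axis=0)        # forward differences, (n-1) x n
    D2 = np.diff(np.eye(n - 1), axis=0)   # differences between adjacent edges
    v = np.ones(n - 1)                    # v lives on edges
    for _ in range(n_alt):
        # u-step: quadratic in u for fixed v
        A = alpha * np.eye(n) + D.T @ np.diag(v ** 2) @ D
        u = np.linalg.solve(A, alpha * g)
        # v-step: quadratic in v for fixed u
        du2 = (D @ u) ** 2
        B = np.diag(du2) + lam * eps * D2.T @ D2 + lam / (4 * eps) * np.eye(n - 1)
        v = np.linalg.solve(B, lam / (4 * eps) * np.ones(n - 1))
    return u, v

# Demo: a noisy step; v should dip towards 0 at the jump.
g = np.concatenate([np.zeros(50), np.ones(50)])
g += 0.05 * np.random.default_rng(1).standard_normal(100)
u, v = at_regularize(g)
print(v.min(), v.argmin())                # minimum of v sits near the jump (index ~49)
```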