Isotropic inverse-problem approach for two-dimensional phase unwrapping
In this paper, we propose a new technique for two-dimensional phase
unwrapping. The unwrapped phase is found as the solution of an inverse problem
that consists in the minimization of an energy functional. The latter includes
a weighted data-fidelity term that favors sparsity in the error between the
true and wrapped phase differences, as well as a regularizer based on
higher-order total-variation. One desirable feature of our method is its
rotation invariance, which allows it to unwrap a much larger class of images
compared to the state of the art. We demonstrate the effectiveness of our
method through several experiments on simulated and real data obtained with a
tomographic phase microscope. The proposed method can enhance the
applicability and outreach of techniques that rely on quantitative phase
evaluation.
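To make the formulation concrete, here is a rough sketch (not the authors' implementation) that minimizes a smoothed surrogate of an energy of the form ||∇u − W(∇ψ)||_1 + λ·HOTV(u), where ψ is the wrapped phase and W wraps values into (−π, π]; second-order differences stand in for the paper's higher-order total-variation term, and the step size, smoothing constant, and iteration count are illustrative assumptions:

```python
import numpy as np

def wrap(x):
    # Wrap values into (-pi, pi].
    return np.angle(np.exp(1j * x))

def unwrap2d(psi, lam=0.1, eps=1e-3, step=0.05, iters=2000):
    """Gradient descent on a smoothed (Charbonnier) surrogate of
    ||grad(u) - wrap(grad(psi))||_1 + lam * ||second differences of u||_1."""
    gx = wrap(np.diff(psi, axis=1))  # wrapped horizontal phase differences
    gy = wrap(np.diff(psi, axis=0))  # wrapped vertical phase differences
    u = psi.copy()
    for _ in range(iters):
        # Data term: sparse error between estimated and wrapped differences.
        rx = np.diff(u, axis=1) - gx
        ry = np.diff(u, axis=0) - gy
        dx = rx / np.sqrt(rx**2 + eps)
        dy = ry / np.sqrt(ry**2 + eps)
        g = np.zeros_like(u)
        g[:, 1:] += dx; g[:, :-1] -= dx  # adjoint of horizontal difference
        g[1:, :] += dy; g[:-1, :] -= dy  # adjoint of vertical difference
        # Regularizer: smoothed L1 of second-order (higher-order TV-like) terms.
        sxx = np.diff(u, n=2, axis=1)
        syy = np.diff(u, n=2, axis=0)
        hx = sxx / np.sqrt(sxx**2 + eps)
        hy = syy / np.sqrt(syy**2 + eps)
        h = np.zeros_like(u)
        h[:, :-2] += hx; h[:, 1:-1] -= 2 * hx; h[:, 2:] += hx
        h[:-2, :] += hy; h[1:-1, :] -= 2 * hy; h[2:, :] += hy
        u -= step * (g + lam * h)
    return u
```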
Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization
Multiplicative noise (also known as speckle noise) models are central to the
study of coherent imaging systems, such as synthetic aperture radar and sonar,
and ultrasound and laser imaging. These models introduce two additional layers
of difficulties with respect to the standard Gaussian additive noise scenario:
(1) the noise is multiplied by (rather than added to) the original image; (2)
the noise is not Gaussian, with Rayleigh and Gamma being commonly used
densities. These two features of multiplicative noise models preclude the
direct application of most state-of-the-art algorithms, which are designed for
solving unconstrained optimization problems where the objective has two terms:
a quadratic data term (log-likelihood), reflecting the additive and Gaussian
nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a
total variation or wavelet-based regularizer/prior). In this paper, we address
these difficulties by: (1) converting the multiplicative model into an additive
one by taking logarithms, as proposed by some other authors; (2) using variable
splitting to obtain an equivalent constrained problem; and (3) dealing with
this optimization problem using the augmented Lagrangian framework. A set of
experiments shows that the proposed method, which we name MIDAL (multiplicative
image denoising by augmented Lagrangian), yields state-of-the-art results both
in terms of speed and denoising performance.
Comment: 11 pages, 7 figures, 2 tables. To appear in the IEEE Transactions on Image Processing.
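A minimal sketch of this pipeline, assuming Gamma speckle with L looks and using scikit-image's Chambolle TV denoiser as a stand-in for the TV proximal step (the parameters below are illustrative, not MIDAL's):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def midal_like(y, L=4, lam=0.05, mu=1.0, iters=50):
    """ADMM on the log-domain model  min_u  L*sum(u + y*exp(-u)) + lam*TV(u),
    where y is the speckled image and exp(u) estimates the clean image."""
    u = np.log(np.maximum(y, 1e-6))  # (1) multiplicative -> additive via log
    v = u.copy()                     # (2) splitting variable with v = u
    d = np.zeros_like(u)             # scaled dual variable
    for _ in range(iters):
        # (3) u-step of the augmented Lagrangian: pixelwise Newton iterations on
        #     min_u L*(u + y*exp(-u)) + (mu/2)*(u - v + d)^2  (strictly convex)
        for _ in range(5):
            grad = L * (1.0 - y * np.exp(-u)) + mu * (u - v + d)
            hess = L * y * np.exp(-u) + mu
            u -= grad / hess
        # v-step: TV proximal operator, approximated by Chambolle's denoiser
        v = denoise_tv_chambolle(u + d, weight=lam / mu)
        d += u - v                   # dual update
    return np.exp(u)                 # back to the intensity domain
```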
Fast Image Recovery Using Variable Splitting and Constrained Optimization
We propose a new fast algorithm for solving one of the standard formulations
of image restoration and reconstruction which consists of an unconstrained
optimization problem where the objective includes an ℓ2 data-fidelity
term and a non-smooth regularizer. This formulation allows both wavelet-based
(with orthogonal or frame-based representations) and total-variation
regularization. Our approach is based on a variable splitting
to obtain an equivalent constrained optimization formulation, which is then
addressed with an augmented Lagrangian method. The proposed algorithm is an
instance of the so-called "alternating direction method of multipliers", for
which convergence has been proved. Experiments on a set of image restoration
and reconstruction benchmark problems show that the proposed algorithm is
faster than the current state-of-the-art methods.
Comment: Submitted; 11 pages, 7 figures, 6 tables.
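To make the splitting concrete, here is a toy instance of the approach (not the paper's code) for deblurring with a periodic convolution, using a plain ℓ1 penalty on the pixels as a stand-in for the wavelet/TV regularizers; the parameters are assumptions:

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding: proximal operator of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_deblur(y, h, lam=0.01, mu=0.1, iters=100):
    """ADMM for min_x 0.5*||h*x - y||^2 + lam*||x||_1 with periodic blur h.
    The splitting v = x makes the x-step a linear solve that is diagonal in
    the Fourier domain."""
    H = np.fft.fft2(h, s=y.shape)        # transfer function of the blur
    Hty = np.conj(H) * np.fft.fft2(y)    # A^T y in the Fourier domain
    denom = np.abs(H) ** 2 + mu          # diagonalized (A^T A + mu*I)
    x, v, d = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        # x-step: (A^T A + mu*I) x = A^T y + mu*(v - d), solved by FFT
        x = np.real(np.fft.ifft2((Hty + mu * np.fft.fft2(v - d)) / denom))
        # v-step: proximal operator of (lam/mu)*||.||_1
        v = soft(x + d, lam / mu)
        d += x - v                       # dual (Lagrange multiplier) update
    return x
```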
Learning Deep CNN Denoiser Prior for Image Restoration
Model-based optimization methods and discriminative learning methods have
been the two dominant strategies for solving various inverse problems in
low-level vision. Typically, these two kinds of methods have their respective
merits and drawbacks: model-based optimization methods are flexible for
handling different inverse problems but are usually time-consuming when
sophisticated priors are used for good performance; meanwhile, discriminative
learning methods have fast testing speed, but their application range is
greatly restricted by the specialized task. Recent works have revealed
that, with the aid of variable splitting techniques, a denoiser prior can be
plugged in as a modular part of model-based optimization methods to solve other
inverse problems (e.g., deblurring). Such an integration offers a considerable
advantage when the denoiser is obtained via discriminative learning. However,
the study of integrating a fast discriminative denoiser prior is still lacking.
To this end, this paper aims to train a set of fast and effective CNN
(convolutional neural network) denoisers and integrate them into a model-based
optimization method to solve other inverse problems. Experimental results
demonstrate that the learned set of denoisers not only achieves promising
Gaussian denoising results but also can serve as a prior to deliver good
performance for various low-level vision applications.
Comment: Accepted to CVPR 2017. Code: https://github.com/cszn/ircn
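The integration itself reduces to replacing the proximal step of an iterative solver with a denoiser call. A minimal half-quadratic-splitting sketch, where `denoiser` is a placeholder for the trained CNN denoisers and `A`, `At`, `mu`, and `step` are illustrative assumptions:

```python
import numpy as np

def pnp_restore(y, A, At, denoiser, mu=0.5, sigma=0.05, iters=30, step=0.5):
    """Half-quadratic splitting for  min_x 0.5*||A(x) - y||^2 + prior(x),
    with the prior's proximal step replaced by a learned denoiser.
    A, At: forward operator and its adjoint, given as callables on arrays."""
    x = At(y)                    # crude initialization
    z = x.copy()
    for _ in range(iters):
        # x-step: a few gradient steps on 0.5*||A(x)-y||^2 + (mu/2)*||x-z||^2
        for _ in range(5):
            x = x - step * (At(A(x) - y) + mu * (x - z))
        # z-step: denoiser call in place of the prior's proximal operator
        z = denoiser(x, sigma)
    return x

# Toy usage with an identity degradation and a Gaussian-blur stand-in denoiser.
if __name__ == "__main__":
    from scipy.ndimage import gaussian_filter
    y = np.random.rand(64, 64)
    x = pnp_restore(y, A=lambda v: v, At=lambda v: v,
                    denoiser=lambda v, s: gaussian_filter(v, sigma=1.0))
```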
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in
contemporary signal processing, where the goal is to reconstruct an unknown
signal from partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
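As a concrete instance of the forward-backward scheme in (iii), the sketch below runs proximal gradient (ISTA) iterations on ℓ1-regularized least squares, sparsity being the simplest of the low-complexity priors surveyed; the problem sizes and parameters are illustrative:

```python
import numpy as np

def forward_backward(A, y, lam=0.1, iters=500):
    """Forward-backward splitting for  min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))  # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x

# Toy usage: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = 1.0
x_hat = forward_backward(A, A @ x_true, lam=0.02)
```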
Depth Superresolution using Motion Adaptive Regularization
The spatial resolution of depth sensors is often significantly lower than that
of conventional optical cameras. Recent work has explored the idea of improving
the resolution of depth using a higher-resolution intensity image as side
information. In this paper, we demonstrate that further incorporating temporal
information in videos can significantly improve the results. In particular, we
propose a novel approach that improves depth resolution, exploiting the
space-time redundancy in the depth and intensity using motion-adaptive low-rank
regularization. Experiments confirm that the proposed approach substantially
improves the quality of the estimated high-resolution depth. Our approach can
serve as a first component in vision-based systems that rely on high-resolution
depth information.
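Low-rank regularization of this kind is typically enforced by singular-value thresholding of matrices whose columns are motion-aligned patches gathered across frames. A minimal sketch of that building block, with the grouping and motion compensation omitted and tau chosen arbitrarily:

```python
import numpy as np

def svt(P, tau):
    """Singular-value thresholding: the proximal operator of tau*||.||_*,
    applied to a matrix P whose columns are vectorized, motion-aligned
    depth patches from consecutive frames."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold singular values

# Toy usage: a patch group that is nearly rank-1 across 8 frames plus noise.
rng = np.random.default_rng(0)
group = np.outer(rng.standard_normal(64), np.ones(8))
group += 0.1 * rng.standard_normal((64, 8))
clean = svt(group, tau=0.5)
```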
Four-dimensional tomographic reconstruction by time domain decomposition
Since the beginnings of tomography, it has been required that the sample not
change during the acquisition of one tomographic rotation. We
derived and successfully implemented a tomographic reconstruction method which
relaxes this decades-old requirement of static samples. In the presented
method, dynamic tomographic data sets are decomposed in the temporal domain
using basis functions, and an L1 regularization technique is applied in which
the penalty acts on the spatial and temporal derivatives. We implemented the
iterative algorithm for solving the regularization problem on modern GPU
systems to demonstrate its practical use.
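A schematic of the decomposition (not the authors' GPU implementation): expand the time-varying object over temporal basis functions and fit the coefficients against the data, with a smoothed L1 penalty shown here only for the temporal derivatives; the generic matrix A stands in for the tomographic projector, and all parameters are assumptions:

```python
import numpy as np

def fit_dynamic(y, A, B, lam=0.1, eps=1e-3, step=1e-3, iters=500):
    """y: (n_meas, n_times) measurements; A: (n_meas, n_pix) projector stand-in;
    B: (n_times, n_basis) temporal basis functions. Finds coefficients C of
    shape (n_pix, n_basis) so that the object at time t is X[:, t] = C @ B[t]."""
    C = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(iters):
        X = C @ B.T                        # object over time, (n_pix, n_times)
        G = A.T @ (A @ X - y) @ B          # gradient of 0.5*||A X - y||^2 in C
        # Smoothed L1 penalty on the temporal derivatives of the object.
        Dt = np.diff(X, axis=1)
        Pt = Dt / np.sqrt(Dt**2 + eps)
        Gt = np.zeros_like(X)
        Gt[:, 1:] += Pt; Gt[:, :-1] -= Pt  # adjoint of the temporal difference
        C -= step * (G + lam * (Gt @ B))   # chain rule maps Gt back to C-space
    return C
```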