518 research outputs found
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. The problem of optical blurring is a common
disadvantage to many imaging applications that suffer from optical
imperfections. Despite numerous deconvolution methods that blindly estimate
blurring in either inclusive or exclusive forms, they are practically
challenging due to high computational cost and low image reconstruction
quality. Both conditions of high accuracy and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for two
Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
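The core construction can be sketched in one dimension. A Gaussian blur of width sigma attenuates frequency w by exp(-sigma^2 w^2 / 2); truncating the series for its inverse, exp(sigma^2 w^2 / 2), and replacing each power of w^2 by an FIR second-difference stencil yields a single compact kernel that is a linear combination of even-derivative filters. The sketch below only illustrates this general idea and is not the authors' exact filter design; the truncation order is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, convolve1d

def deblur_kernel_1d(sigma, order=4):
    """Truncated even-derivative series approximating the inverse of a
    Gaussian blur: sum_k (-sigma^2/2)^k / k! * d^{2k}/dx^{2k}.
    (Illustrative sketch; 'order' is an assumed truncation parameter.)"""
    term = np.array([1.0])      # delta: the zeroth term of the series
    kern = np.array([1.0])
    coef = 1.0
    for k in range(1, order + 1):
        # FIR stencil for the 2k-th derivative: repeated second differences
        term = np.convolve(term, [1.0, -2.0, 1.0])
        coef *= -(sigma ** 2 / 2.0) / k
        kern = np.pad(kern, 1) + coef * term  # align centers, accumulate
    return kern

# One-shot restoration: convolve the blurred signal directly with the kernel.
sigma = 2.0
x = np.arange(201)
clean = np.exp(-0.5 * ((x - 100) / 10.0) ** 2)  # smooth test signal
blurred = gaussian_filter1d(clean, sigma)
restored = convolve1d(blurred, deblur_kernel_1d(sigma), mode='nearest')
```

Because the second-difference stencil sums to zero, the kernel sums to one and flat regions pass through unchanged, while its negative side lobes boost the attenuated high frequencies; in two dimensions the same series would be built from Laplacian stencils.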
Acceleration of the PDHGM on strongly convex subspaces
We propose several variants of the primal-dual method due to Chambolle and
Pock. Without requiring full strong convexity of the objective functions, our
methods are accelerated on subspaces with strong convexity. This yields mixed
convergence rates with respect to the initialisation, with respect to the dual
sequence, and with respect to the residual part of the primal sequence.
the efficacy of the proposed methods on image processing problems lacking
strong convexity, such as total generalised variation denoising and total
variation deblurring.
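The setting can be made concrete with one-dimensional total variation denoising, min_u (1/2)||u - f||^2 + w||Du||_1: the quadratic data term is strongly convex, so the classical accelerated Chambolle-Pock iteration rescales its step lengths every iteration. The sketch below shows that standard accelerated primal-dual iteration, not the subspace variant proposed in the paper; the weight w and the iteration count are assumptions.

```python
import numpy as np

def tv_denoise_1d(f, w=0.5, iters=300):
    """min_u 0.5*||u - f||^2 + w*||D u||_1 via accelerated PDHG.
    The data term is 1-strongly convex, hence gamma = 1 below."""
    n = len(f)
    D  = lambda u: np.append(np.diff(u), 0.0)  # forward differences
    Dt = lambda p: np.concatenate(([-p[0]], p[:-2] - p[1:-1], [p[-2]]))  # D^T
    u = f.copy(); u_bar = u.copy(); p = np.zeros(n)
    tau, sig, gamma = 0.25, 1.0, 1.0           # tau*sig*||D||^2 <= 1
    for _ in range(iters):
        p = np.clip(p + sig * D(u_bar), -w, w)          # dual prox: projection
        u_old = u
        u = (u - tau * Dt(p) + tau * f) / (1.0 + tau)   # primal prox (quadratic)
        theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)  # acceleration update
        tau, sig = theta * tau, sig / theta
        u_bar = u + theta * (u - u_old)
    return u

# usage: denoise a noisy piecewise-constant signal
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.3 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```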
A Total Fractional-Order Variation Model for Image Restoration with Non-homogeneous Boundary Conditions and its Numerical Solution
To overcome the weakness of a total variation based model for image
restoration, various high order (typically second order) regularization models
have been proposed and studied recently. In this paper we analyze and test a
total fractional-order variation model based on fractional-order derivatives, which
can outperform the currently popular high order regularization models. There
exist several previous works using total fractional-order variations for image
restoration; however, no analysis has yet been carried out, and all tested
formulations, differing from each other, utilize the zero Dirichlet boundary
conditions which are not realistic (while non-zero boundary conditions violate
definitions of fractional-order derivatives). This paper first reviews some
results of fractional-order derivatives and then analyzes the theoretical
properties of the proposed total fractional-order variational model rigorously.
It then develops four algorithms for solving the variational problem, one based
on the variational Split-Bregman idea and three based on direct solution of the
discretised optimization problem. Numerical experiments show that, in terms of
restoration quality and solution efficiency, the proposed model can produce
highly competitive results, for smooth images, to two established high order
models: the mean curvature and the total generalized variation. Comment: 26 pages.
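For intuition, the fractional-order derivative in such models is commonly discretised with Grünwald-Letnikov weights, which obey a one-term recursion and interpolate between the ordinary difference stencils: for alpha = 1 they reduce to the backward difference, for alpha = 2 to the second difference. A minimal sketch of that discretisation follows; the paper's own discretisation and boundary treatment may differ.

```python
import numpy as np

def gl_weights(alpha, K):
    """Grünwald-Letnikov weights w_k = (-1)^k * binomial(alpha, k),
    via the recursion w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def frac_diff(u, alpha):
    """Fractional-order backward difference of order alpha (unit spacing):
    (D^alpha u)_i = sum_{k=0}^{i} w_k * u_{i-k}."""
    n = len(u)
    w = gl_weights(alpha, n - 1)
    return np.array([np.dot(w[:i + 1], u[i::-1]) for i in range(n)])
```

For non-integer alpha the weights never vanish, so the stencil is non-local; this is why the boundary conditions discussed above matter so much for fractional-order models.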
Inexact Bregman iteration with an application to Poisson data reconstruction
This work deals with the solution of image restoration problems by an
iterative regularization method based on the Bregman iteration. Each iteration of this
scheme requires the exact computation of the minimizer of a function. However, in some
image reconstruction applications, it is either impossible or extremely expensive to
obtain exact solutions of these subproblems. In this paper, we propose an inexact
version of the iterative procedure, where the inexactness in the inner subproblem
solution is controlled by a criterion that preserves the convergence of the Bregman
iteration and its features in image restoration problems. In particular, the method
makes it possible to obtain accurate reconstructions even when only an overestimate
of the regularization parameter is known. The introduction of inexactness into the
iterative scheme makes it possible to address image reconstruction problems with data corrupted by
Poisson noise, exploiting recent advances in specialized algorithms for the
numerical minimization of the generalized Kullback–Leibler divergence combined with
a regularization term. The results of several numerical experiments allow the
proposed approach to be evaluated.
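For reference, the data-fidelity term behind the Poisson model is the generalized Kullback-Leibler divergence between the observed counts g and the forward-model prediction y = Au + b; each inner Bregman subproblem minimizes this divergence plus a regularization term, and the inexact variant only requires solving it to a controlled tolerance. A direct implementation of the divergence itself (with the usual 0*log 0 = 0 convention; the variable names are assumptions):

```python
import numpy as np

def generalized_kl(g, y):
    """Generalized Kullback-Leibler divergence
       KL(g, y) = sum_i [ g_i * log(g_i / y_i) + y_i - g_i ],
    with g_i * log(g_i / y_i) := 0 where g_i = 0 (y > 0 assumed)."""
    g = np.asarray(g, dtype=float)
    y = np.asarray(y, dtype=float)
    pos = g > 0  # skip zero counts so the log term is well defined
    return float(np.sum(g[pos] * np.log(g[pos] / y[pos])) + np.sum(y - g))
```

The divergence is zero exactly when y = g and positive otherwise, which makes it a natural fidelity term for count data in place of the least-squares term used for Gaussian noise.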
First-order Convex Optimization Methods for Signal and Image Processing
In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques which can be used with first-order methods, such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations, with an emphasis on inverse problems and sparse signal processing. We also describe the multiple-description problem. We finally present the contributions of the thesis. The remaining parts of the thesis consist of five research papers. The first paper addresses non-smooth first-order convex optimization and the trade-off between accuracy and smoothness of the approximating smooth function. The second and third papers concern discrete linear inverse problems and reliable numerical reconstruction software. The last two papers present a convex optimization formulation of the multiple-description problem and a method to solve it in the case of large-scale instances.
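The proximal gradient methods mentioned above can be made concrete on the classic sparse recovery (lasso) problem min_x (1/2)||Ax - b||^2 + lam*||x||_1, whose proximal step is entrywise soft-thresholding. A minimal ISTA sketch follows; the problem sizes and the regularization weight lam are assumptions, not values from the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient A^T(Ax - b)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# usage: recover a sparse vector from noiseless random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

Each iteration is a gradient step on the smooth term followed by the exact prox of the non-smooth term; this is the simplest member of the family of first-order methods the thesis studies.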
Block-proximal methods with spatially adapted acceleration
We study and develop (stochastic) primal--dual block-coordinate descent
methods for convex problems based on the method due to Chambolle and Pock. Our
methods have known convergence rates for the iterates and the ergodic gap: an
accelerated rate if each block is strongly convex, a standard rate if no strong
convexity is present, and more generally a mixed rate when only some blocks are
strongly convex.
methods include blockwise-adapted step lengths and acceleration, as well as the
ability to update both the primal and dual variables randomly in blocks under a
very light compatibility condition. In other words, these variants of our
methods are doubly-stochastic. We test the proposed methods on various image
processing problems, where we employ pixelwise-adapted acceleration.