Image Restoration Using One-Dimensional Sobolev Norm Profiles of Noise and Texture
This work is devoted to image restoration (denoising and deblurring) by variational models. As in our prior work [Inverse Probl. Imaging, 3 (2009), pp. 43-68], the image f̃ to be restored is assumed to be the sum of a cartoon component u (a function of bounded variation) and a texture component v (an oscillatory function in a Sobolev space of negative degree of differentiability). In order to separate noise from texture in a blurred, noisy, textured image, we need to collect information that helps distinguish noise, especially Gaussian noise, from texture. Homogeneous Sobolev spaces of negative differentiability capture oscillations in images very well; however, these spaces alone do not clearly distinguish texture from noise, which is also highly oscillatory, especially when the blurring effect is noticeable. Here, we propose a new method for distinguishing noise from texture by considering a family of Sobolev norms corresponding to noise and texture. It turns out that the two Sobolev norm profiles are different, and this enables us to better separate noise from texture during the deblurring process.
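The norm-profile idea can be illustrated numerically. The sketch below is an illustration of the general principle only, not the authors' construction: the test signals, the chosen orders s, and the FFT-based definition of the homogeneous H^{-s} norm are my own choices. It computes the negative-order Sobolev norms of a Gaussian noise sample and of an oscillatory "texture" over a family of orders, producing two distinct profiles.

```python
import numpy as np

def sobolev_norm(v, s):
    """Homogeneous Sobolev H^{-s} norm of a 1-D signal via the FFT.
    The zero frequency is excluded so the weight |xi|^{-s} stays finite."""
    n = v.size
    xi = np.fft.fftfreq(n, d=1.0 / n)        # integer frequencies
    vhat = np.fft.fft(v) / n                 # normalized Fourier coefficients
    mask = xi != 0
    weights = np.abs(xi[mask]) ** (-s)
    return np.sqrt(np.sum((weights * np.abs(vhat[mask])) ** 2))

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
noise = rng.standard_normal(n)               # Gaussian noise sample
texture = np.sin(2 * np.pi * 40 * t / n)     # oscillatory "texture" stand-in

# norm profiles over a family of negative-differentiability orders s
orders = (0.2, 0.5, 1.0)
profile_noise = [sobolev_norm(noise, s) for s in orders]
profile_texture = [sobolev_norm(texture, s) for s in orders]
```

For the pure oscillation, the profile decays like 40^{-s} as s grows, while the noise, whose energy is spread over all frequencies including low ones, decays at a different rate; it is this difference in profile shape that the separation idea exploits.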
A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection
We propose a new space-variant anisotropic regularisation term for
variational image restoration, based on the statistical assumption that the
gradients of the target image distribute locally according to a bivariate
generalised Gaussian distribution. The highly flexible variational structure of
the corresponding regulariser encodes several free parameters which hold the
potential for faithfully modelling the local geometry in the image and
describing local orientation preferences. For an automatic estimation of such
parameters, we design a robust maximum likelihood approach and report results
on its reliability on synthetic data and natural images. For the numerical
solution of the corresponding image restoration model, we use an iterative
algorithm based on the Alternating Direction Method of Multipliers (ADMM). A
suitable preliminary variable splitting together with a novel result in
multivariate non-convex proximal calculus yield a very efficient minimisation
algorithm. Several numerical results are reported, showing significant quality
improvement of the proposed model over related state-of-the-art competitors,
in particular in terms of texture and detail preservation.
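The variable-splitting structure behind such ADMM schemes can be sketched on a toy problem. The code below is a minimal stand-in, not the paper's space-variant anisotropic model: it uses a plain scalar TV penalty on a 1-D signal, and the function name and parameter values are illustrative.

```python
import numpy as np

def admm_tv_denoise(f, lam=1.0, rho=1.0, iters=200):
    """Minimal ADMM sketch for 1-D TV denoising:
        min_u 0.5*||u - f||^2 + lam*||D u||_1,  with the splitting v = D u.
    Illustrates the splitting/ADMM structure only; the paper's regulariser
    is anisotropic and space-variant, which this toy version is not."""
    n = f.size
    u = f.copy()
    v = np.zeros(n - 1)
    y = np.zeros(n - 1)                       # scaled dual variable
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    # the u-update solves (I + rho*D^T D) u = f + rho*D^T (v - y)
    M = np.eye(n) + rho * D.T @ D
    for _ in range(iters):
        u = np.linalg.solve(M, f + rho * D.T @ (v - y))
        Du = D @ u
        # v-update: soft-thresholding (prox of the l1 norm)
        v = np.sign(Du + y) * np.maximum(np.abs(Du + y) - lam / rho, 0.0)
        y = y + Du - v                        # dual ascent step
    return u
```

In the full space-variant anisotropic model the scalar threshold lam/rho would be replaced by locally varying, direction-dependent parameters, which is where the novel proximal calculus result comes in.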
Sparsity Promoting Regularization for Effective Noise Suppression in SPECT Image Reconstruction
The purpose of this research is to develop an advanced reconstruction method for low-count, hence high-noise, Single-Photon Emission Computed Tomography (SPECT) image reconstruction. It consists of a novel reconstruction model to suppress noise while conducting reconstruction and an efficient algorithm to solve the model. A novel regularizer is introduced as the nonconvex denoising term, based on the approximate sparsity of the image in a geometric tight frame transform domain. The deblurring term is based on the negative log-likelihood of the SPECT data model. To solve the resulting nonconvex optimization problem, a Preconditioned Fixed-point Proximity Algorithm (PFPA) is introduced. We prove that, under appropriate assumptions, PFPA converges to a local solution of the optimization problem at a global O(1/k) convergence rate. Substantial numerical results for simulation data are presented to demonstrate the superiority of the proposed method in denoising, artifact suppression, and reconstruction accuracy. We simulate noisy 2D SPECT data from two phantoms: hot Gaussian spheres on a random lumpy warm background, and the anthropomorphic brain phantom, at high- and low-noise levels (64k and 90k counts, respectively), and reconstruct them using PFPA. We also perform limited comparative studies with selected competing state-of-the-art total variation (TV) and higher-order TV (HOTV) transform-based methods, as well as the widely used post-filtered maximum-likelihood expectation-maximization. We investigate the imaging performance of these methods using Contrast-to-Noise Ratio (CNR), Ensemble Variance Images (EVI), Background Ensemble Noise (BEN), Normalized Mean-Square Error (NMSE), and Channelized Hotelling Observer (CHO) detectability. Each of the competing methods is independently optimized for each metric. We establish that the proposed method outperforms the other approaches in all image quality metrics except NMSE, where it is matched by HOTV.
The superiority of the proposed method is especially evident in the CHO detectability test results. We also perform a qualitative image evaluation for the presence and severity of image artifacts, finding that the proposed method better suppresses staircase artifacts than the TV methods, although edge artifacts in high-contrast regions persist. We conclude that the proposed method may offer a powerful tool for detection tasks in high-noise SPECT imaging.
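The overall structure of such penalised reconstructions, a Poisson data-fidelity gradient step followed by a proximal step on the sparsity term, can be sketched as follows. This is not PFPA itself: the nonconvex geometric-tight-frame penalty is replaced by a plain l1 prox (soft-thresholding), there is no preconditioning, and the function name and parameter values are my own illustrative choices.

```python
import numpy as np

def prox_grad_poisson(A, y, lam=0.1, step=0.01, iters=300):
    """Proximal-gradient sketch for penalised Poisson reconstruction:
        min_x  sum_i [ (A x)_i - y_i * log (A x)_i ]  +  lam*||x||_1,  x >= 0.
    A simplified stand-in for the paper's PFPA: the nonconvex frame-based
    penalty is replaced by an l1 prox, and no preconditioning is applied."""
    n = A.shape[1]
    x = np.ones(n)                           # positive starting point
    eps = 1e-8                               # guards the log/division
    for _ in range(iters):
        Ax = A @ x + eps
        grad = A.T @ (1.0 - y / Ax)          # gradient of the Poisson NLL
        x = x - step * grad                  # fidelity (gradient) step
        # proximity step: soft-thresholding = prox of lam*||.||_1
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
        x = np.maximum(x, 0.0)               # nonnegativity of activity
    return x
```

The fixed-point viewpoint of the paper rewrites exactly this kind of gradient-plus-prox update as a fixed-point equation, whose iteration is then accelerated by a preconditioner.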
Space adaptive and hierarchical Bayesian variational models for image restoration
The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we will focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter will be automatically estimated by setting up a hierarchical modeling framework, i.e. introducing informative or non-informative hyperpriors depending on the information at hand on the parameters.
More specifically, in the first part of the thesis we will focus on the restoration of natural images, by introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality and sparsity in the data. The estimation of the unknown parameters will be addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical Maximum Likelihood approach.
In the second part of the thesis, we will address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, will follow directly from the assumption of the independence of the components in the signal. The study of the properties of the resulting energy functionals will thus lead to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characterizing non-convex penalty terms with the desirable guarantees of convex optimization.
Nonlinear Data: Theory and Algorithms
Techniques and concepts from differential geometry are used in many parts of applied mathematics today. However, there is no joint community for users of such techniques. The workshop on Nonlinear Data assembled researchers from fields like numerical linear algebra, partial differential equations, and data analysis to explore differential geometry techniques, share knowledge, and learn about new ideas and applications.
Mumford-Shah and Potts Regularization for Manifold-Valued Data with Applications to DTI and Q-Ball Imaging
Mumford-Shah and Potts functionals are powerful variational models for
regularization which are widely used in signal and image processing; typical
applications are edge-preserving denoising and segmentation. Being both
non-smooth and non-convex, they are computationally challenging even for scalar
data. For manifold-valued data, the problem becomes even more involved since
typical features of vector spaces are not available. In this paper, we propose
algorithms for Mumford-Shah and for Potts regularization of manifold-valued
signals and images. For the univariate problems, we derive solvers based on
dynamic programming combined with (convex) optimization techniques for
manifold-valued data. For the class of Cartan-Hadamard manifolds (which
includes the data space in diffusion tensor imaging), we show that our
algorithms compute global minimizers for any starting point. For the
multivariate Mumford-Shah and Potts problems (for image regularization) we
propose a splitting into suitable subproblems which we can solve exactly using
the techniques developed for the corresponding univariate problems. Our method
does not require any a priori restrictions on the edge set and we do not have
to discretize the data space. We apply our method to diffusion tensor imaging
(DTI) as well as Q-ball imaging. Using the DTI model, we obtain a segmentation
of the corpus callosum.
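For scalar data, the univariate Potts problem mentioned above admits an exact dynamic-programming solver. Below is a minimal Euclidean version; for manifold-valued data the interval means would become Riemannian (Karcher) means, which this sketch does not implement.

```python
import numpy as np

def potts_1d(f, gamma):
    """Dynamic-programming solver for the univariate scalar Potts problem
        min_u  gamma * (#jumps of u) + sum_i (u_i - f_i)^2.
    Euclidean stand-in for the manifold-valued case: there, the interval
    means below are replaced by Riemannian (Karcher) means."""
    n = len(f)
    # approximation error of the best constant on f[l..r], via cumulative sums:
    #   err = sum f^2 - (sum f)^2 / length
    cs = np.concatenate(([0.0], np.cumsum(f)))
    cs2 = np.concatenate(([0.0], np.cumsum(np.square(f))))

    def err(l, r):                            # inclusive 0-based indices
        m = r - l + 1
        s = cs[r + 1] - cs[l]
        return (cs2[r + 1] - cs2[l]) - s * s / m

    B = np.full(n + 1, np.inf)                # B[r] = optimal value on f[0..r-1]
    B[0] = -gamma                             # so the first segment adds no jump
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):             # last segment is f[l-1..r-1]
            val = B[l - 1] + gamma + err(l - 1, r - 1)
            if val < B[r]:
                B[r] = val
                jump[r] = l - 1               # start index of the last segment
    # backtrack the optimal partition, filling each segment with its mean
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = np.mean(f[l:r])
        r = l
    return u
```

For a small jump penalty the minimizer reproduces a clean piecewise-constant signal exactly, while a very large penalty forces a single constant segment, mirroring the edge-preserving behaviour described above.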
Compressive Wave Computation
This paper considers large-scale simulations of wave propagation phenomena.
We argue that it is possible to accurately compute a wavefield by decomposing
it onto a largely incomplete set of eigenfunctions of the Helmholtz operator,
chosen at random, and that this provides a natural way of parallelizing wave
simulations for memory-intensive applications.
This paper shows that L1-Helmholtz recovery makes sense for wave computation,
and identifies a regime in which it is provably effective: the one-dimensional
wave equation with coefficients of small bounded variation. Under suitable
assumptions we show that the number of eigenfunctions needed to evolve a sparse
wavefield defined on N points, accurately with very high probability, is
bounded by C log(N) log(log(N)), where C is related to the desired accuracy and
can be made to grow at a much slower rate than N when the solution is sparse.
The PDE estimates that underlie this result are new to the authors' knowledge
and may be of independent mathematical interest; they include an L1 estimate
for the wave equation, an estimate of extension of eigenfunctions, and a bound
for eigenvalue gaps in Sturm-Liouville problems.
Numerical examples are presented in one spatial dimension and show that as
few as 10 percent of all eigenfunctions can suffice for accurate results.
Finally, we argue that the compressive viewpoint suggests a competitive
parallel algorithm for an adjoint-state inversion method in reflection
seismology.
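The compressive recovery idea can be illustrated in the simplest setting. The sketch below is my own toy example, not the paper's method: rows of the DFT stand in for randomly chosen Helmholtz eigenfunctions (the two coincide on a uniform 1-D medium), ISTA stands in for the L1 solver, and all sizes and parameters are illustrative. A sparse signal is recovered from a small random subset of eigenfunction coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 40, 3                     # grid size, eigenfunctions kept, sparsity

# sparse "wavefield": k spikes with well-separated magnitudes
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = np.array([3.0, -2.0, 1.5])

# random subset of DFT rows plays the role of randomly chosen eigenfunctions;
# distinct frequencies make the rows exactly orthonormal
rows = rng.choice(n, size=m, replace=False)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)
y = F @ x_true                           # the retained "eigen-coefficients"

# ISTA for the lasso  min_x 0.5*||F x - y||^2 + lam*||x||_1  (x real)
lam, step = 0.01, 0.5
x = np.zeros(n)
for _ in range(2000):
    x = x - step * np.real(F.conj().T @ (F @ x - y))      # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-threshold
```

With m = 40 of n = 128 coefficients, roughly the "10 percent" regime reported above up to a small constant, the three spikes are recovered on the correct support, which is the essence of evolving a sparse wavefield on an incomplete eigenbasis.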