Image Restoration by Variable Splitting based on Total Variation Regularizer
The aim of image restoration is to obtain a higher-quality desired image from a degraded image. In this strategy, an image inpainting method fills the degraded or lost area of the image with appropriate information, so that the obtained image is indistinguishable to a casual viewer who is unfamiliar with the original image. In this paper, different images are degraded by two procedures: one blurs the original image and adds noise, and the other removes a percentage of the pixels belonging to the original image. The degraded image is then restored by the proposed method and by two state-of-the-art methods. Image restoration requires optimization methods; in this paper, we use a linear restoration method based on the total variation regularizer. The variable of the optimization problem is split, and the new optimization problem is solved using the augmented Lagrangian method. The experimental results show that the proposed method is faster and that the restored images have higher quality than those of the other methods.
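The variable-splitting scheme can be sketched on a 1-D total-variation denoising instance (a hedged illustration, not the paper's exact algorithm: the blur operator is dropped and a generic ADMM loop stands in for the authors' augmented Lagrangian solver):

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_admm(b, lam, rho=1.0, iters=200):
    """1-D total-variation denoising via variable splitting (ADMM).

    Splits z = Dx and alternately minimizes the augmented Lagrangian
    0.5||x - b||^2 + lam*||z||_1 + (rho/2)||Dx - z + u||^2.
    """
    n = len(b)
    # Forward-difference matrix D (dense for clarity; sparse in practice)
    D = (np.eye(n, k=1) - np.eye(n))[:-1]
    x = b.copy()
    z = D @ x
    u = np.zeros_like(z)
    # The x-update is a linear solve with a fixed matrix; prefactor it
    M = np.eye(n) + rho * D.T @ D
    for _ in range(iters):
        x = np.linalg.solve(M, b + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)   # z-update: elementwise shrinkage
        u = u + D @ x - z                # dual ascent on the multiplier
    return x

# Noisy piecewise-constant signal: TV denoising should flatten the noise
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
restored = tv_admm(noisy, lam=0.1)
```

The split decouples the quadratic data term (a linear solve) from the non-smooth TV term (a cheap shrinkage), which is exactly why splitting methods are attractive here.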
First-order Convex Optimization Methods for Signal and Image Processing
In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we give a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques which can be used with first-order methods, such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations, with an emphasis on inverse problems and sparse signal processing. We also describe the multiple-description problem. We finally present the contributions of the thesis. The remaining parts of the thesis consist of five research papers. The first paper addresses non-smooth first-order convex optimization and the trade-off between accuracy and smoothness of the approximating smooth function. The second and third papers concern discrete linear inverse problems and reliable numerical reconstruction software. The last two papers present a convex optimization formulation of the multiple-description problem and a method to solve it in the case of large-scale instances.
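The accuracy/smoothness trade-off studied in the first paper can be made concrete with the classic textbook example (not the thesis's specific construction) of smoothing the absolute value by its Moreau envelope, the Huber function: with parameter mu, the approximation error is at most mu/2 while the gradient is 1/mu-Lipschitz, so accuracy and smoothness pull in opposite directions.

```python
import numpy as np

def huber(x, mu):
    """Huber smoothing of |x|: the Moreau envelope of the absolute value.

    Differentiable everywhere with a (1/mu)-Lipschitz gradient, and
    |x| - mu/2 <= huber(x, mu) <= |x|: smaller mu is more accurate
    but less smooth.
    """
    ax = np.abs(x)
    return np.where(ax <= mu, 0.5 * x ** 2 / mu, ax - 0.5 * mu)

def huber_grad(x, mu):
    # Gradient: x/mu in the quadratic region, sign(x) outside it
    return np.clip(x / mu, -1.0, 1.0)
```

Running a gradient method on the smoothed function then costs O(1/mu) more iterations per unit of accuracy gained, which is the trade-off a smoothing analysis quantifies.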
What's in a Prior? Learned Proximal Networks for Inverse Problems
Proximal operators are ubiquitous in inverse problems, commonly appearing as
part of algorithmic strategies to regularize problems that are otherwise
ill-posed. Modern deep learning models have been brought to bear for these
tasks too, as in the framework of plug-and-play or deep unrolling, where they
loosely resemble proximal operators. Yet, something essential is lost in
employing these purely data-driven approaches: there is no guarantee that a
general deep network represents the proximal operator of any function, nor is
there any characterization of the function for which the network might provide
some approximate proximal. This not only makes guaranteeing convergence of
iterative schemes challenging but, more fundamentally, complicates the analysis
of what has been learned by these networks about their training data. Herein we
provide a framework to develop learned proximal networks (LPN), prove that they
provide exact proximal operators for a data-driven nonconvex regularizer, and
show how a new training strategy, dubbed proximal matching, provably promotes
the recovery of the log-prior of the true data distribution. Such LPNs provide
general, unsupervised, expressive proximal operators that can be used for
general inverse problems with convergence guarantees. We illustrate our results
in a series of cases of increasing complexity, demonstrating that these models
not only result in state-of-the-art performance, but provide a window into the
resulting priors learned from data.
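To ground the notion of a proximal operator that LPN is built around, the canonical closed-form example is soft-thresholding, the proximal operator of the scaled l1 norm. The sketch below (illustrative only, unrelated to the LPN architecture itself) checks the closed form against a brute-force minimization of the prox objective:

```python
import numpy as np

def prox_l1(v, lam):
    # Closed form: prox of lam*||.||_1 at v is soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_numeric(v, lam, grid):
    # Brute-force prox: argmin_x 0.5*(x - v)^2 + lam*|x| over a fine grid
    obj = 0.5 * (grid - v) ** 2 + lam * np.abs(grid)
    return grid[np.argmin(obj)]

grid = np.linspace(-3.0, 3.0, 60001)   # grid spacing 1e-4
p = prox_numeric(0.7, 0.5, grid)        # should match prox_l1(0.7, 0.5) = 0.2
```

A learned proximal network replaces the explicit regularizer with a network, and the LPN construction guarantees the network is still the exact prox of *some* (nonconvex) function, which this toy closed-form case makes easy to picture.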
Sparse and Redundant Representations for Inverse Problems and Recognition
Sparse and redundant representation of data enables the
description of signals as linear combinations of a few atoms from
a dictionary. In this dissertation, we study applications of
sparse and redundant representations in inverse problems and
object recognition. Furthermore, we propose two novel imaging
modalities based on the recently introduced theory of Compressed
Sensing (CS).
This dissertation consists of four major parts. In the first part
of the dissertation, we study a new type of deconvolution
algorithm that is based on estimating the image from a shearlet
decomposition. Shearlets provide a multi-directional and
multi-scale decomposition that has been mathematically shown to
represent distributed discontinuities such as edges better than
traditional wavelets. We develop a deconvolution algorithm that
allows the approximate inversion operator to be controlled
on a multi-scale and multi-directional basis. Furthermore, we
develop a method for the automatic determination of the threshold
values for the noise shrinkage for each scale and direction
without explicit knowledge of the noise variance using a
generalized cross validation method.
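The automatic threshold selection can be illustrated with a generic GCV rule for soft-thresholding (a simplified stand-in: a single coefficient band and the standard GCV score for thresholding, rather than the dissertation's per-scale, per-direction estimator):

```python
import numpy as np

def gcv_threshold(y, candidates):
    """Pick a soft-threshold by generalized cross validation (GCV).

    GCV(t) = ((1/n)*||y - soft(y, t)||^2) / (n0/n)^2, where n0 is the
    number of coefficients zeroed -- no noise-variance estimate needed.
    """
    n = len(y)
    best_t, best_score = candidates[0], np.inf
    for t in candidates:
        yt = np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
        n0 = np.count_nonzero(yt == 0.0)
        if n0 == 0:
            continue                     # GCV undefined if nothing is zeroed
        score = (np.sum((y - yt) ** 2) / n) / (n0 / n) ** 2
        if score < best_score:
            best_t, best_score = t, score
    return best_t

rng = np.random.default_rng(1)
coeffs = np.zeros(1000)
coeffs[:20] = 5.0                        # a few large "edge" coefficients
noisy = coeffs + rng.standard_normal(1000)
t = gcv_threshold(noisy, np.linspace(0.1, 4.0, 40))
denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - t, 0.0)
```

In the shearlet setting this selection would be run once per scale and direction, giving the adaptive, variance-free thresholds the text describes.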
In the second part of the dissertation, we study a reconstruction
method that recovers highly undersampled images assumed to have a
sparse representation in a gradient domain by using partial
measurement samples that are collected in the Fourier domain. Our
method makes use of a robust generalized Poisson solver that
greatly aids in achieving a significantly improved performance
over similar proposed methods. Experiments demonstrate that this
new technique handles both random and restricted sampling
scenarios more flexibly than its competitors.
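The Poisson-solver ingredient can be shown in isolation: given an image's (periodic) forward differences, an FFT-based solve of the discrete Poisson equation recovers the image up to an additive constant. This is a minimal sketch of the gradient-domain integration step only; the dissertation's robust generalized solver and the Fourier-domain sampling are omitted.

```python
import numpy as np

def poisson_reconstruct(gx, gy):
    """Recover an image (up to a constant) from its forward differences
    by solving the discrete Poisson equation with FFTs (periodic BCs)."""
    h, w = gx.shape
    # Divergence of the gradient field (backward differences)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # Eigenvalues of the periodic discrete Laplacian
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0                     # DC term is unconstrained
    u_hat = np.fft.fft2(div) / denom
    u_hat[0, 0] = 0.0                     # fix the free constant to zero
    return np.real(np.fft.ifft2(u_hat))

# Round-trip check: differentiate an image, then reintegrate it
rng = np.random.default_rng(2)
img = rng.standard_normal((32, 32))
gx = np.roll(img, -1, axis=1) - img       # periodic forward differences
gy = np.roll(img, -1, axis=0) - img
rec = poisson_reconstruct(gx, gy)         # equals img minus its mean
```

In the full reconstruction method this solve would sit inside an iterative loop that enforces consistency with the partial Fourier measurements.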
In the third part of the dissertation, we introduce a novel
Synthetic Aperture Radar (SAR) imaging modality which can provide
a high resolution map of the spatial distribution of targets and
terrain using a significantly reduced number of transmitted
and/or received electromagnetic waveforms. We demonstrate that
this new imaging scheme requires no new hardware components and
allows the aperture to be compressed. Also, it
presents many new applications and advantages which include strong
resistance to countermeasures and interception, imaging much
wider swaths and reduced on-board storage requirements.
The last part of the dissertation deals with object recognition
based on learning dictionaries for simultaneous sparse signal
approximations and feature extraction. A dictionary is learned
for each object class based on given training examples which
minimize the representation error with a sparseness constraint. A
novel test image is then projected onto the span of the atoms in
each learned dictionary. The residual vectors along with the
coefficients are then used for recognition. Applications to
illumination robust face recognition and automatic target
recognition are presented.
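The residual-based decision rule of this last part can be sketched as follows (hedged: the class dictionaries here are plain random matrices and the fit is unconstrained least squares, whereas the dissertation learns atoms with a sparseness constraint):

```python
import numpy as np

def classify_by_residual(x, dictionaries):
    """Assign x to the class whose dictionary approximates it best.

    Each entry of `dictionaries` is a d-by-k matrix of atoms; the class
    score is the residual ||x - D c|| of the best-fit coefficients c.
    """
    residuals = []
    for D in dictionaries:
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coeffs))
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(3)
D0 = rng.standard_normal((20, 3))          # atoms for class 0
D1 = rng.standard_normal((20, 3))          # atoms for class 1
x = D1 @ np.array([1.0, -2.0, 0.5])        # sample lying in class 1's span
label, residuals = classify_by_residual(x, [D0, D1])
```

The same rule, with sparsity-constrained coefficients and learned atoms, underlies the face-recognition and target-recognition applications mentioned above.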
Multilevel optimisation for computer vision
The recent surge in machine learning and computer vision methods requiring increasingly large datasets has motivated the introduction of optimisation algorithms specifically tailored to solving very large problems within practical time constraints. This demand challenges the practicability of state-of-the-art methods, requiring new approaches that can take advantage not only of a problem's mathematical structure, but also of its data structure. Fortunately, such structure is present in many computer vision applications, where the problems can be modelled with varying degrees of fidelity. This structure suggests using multiscale models and thus multilevel algorithms.
The objective of this thesis is to develop, implement and test provably convergent multilevel optimisation algorithms for convex composite optimisation problems in general, and their applications in computer vision in particular. Our first multilevel algorithm solves convex composite optimisation problems and is particularly efficient for the robust facial recognition task. The method uses concepts from proximal gradient, mirror descent and multilevel optimisation algorithms, so we call it the multilevel accelerated gradient mirror descent algorithm (MAGMA). We first show that MAGMA has the same theoretical convergence rate as state-of-the-art first-order methods and much lower per-iteration complexity, and then demonstrate its practical advantage on many facial recognition problems. The second part of the thesis introduces a new multilevel procedure most appropriate for robust PCA problems requiring iterative SVD computations. We propose to exploit the multiscale structure present in these problems by constructing lower-dimensional matrices and using their singular values at each iteration of the optimisation procedure. We implement this approach in three different optimisation algorithms: inexact ALM, Frank-Wolfe thresholding and non-convex alternating projections. In this case too, we show that the multilevel algorithms converge (to an exact or approximate solution) with the same convergence rate as their standard counterparts, and we test all three methods on numerous synthetic and real-life problems, demonstrating that the multilevel algorithms are not only much faster, but also solve problems that often cannot be solved by their standard counterparts.
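The idea of taking singular values from a lower-dimensional surrogate matrix can be sketched with a randomized range sketch (an assumption-laden stand-in: the thesis constructs its coarse matrices via multiscale restriction, not random projection). For an exactly low-rank matrix the surrogate's singular values match the full ones:

```python
import numpy as np

def sketched_singular_values(A, k, rng):
    """Singular values computed from a small surrogate of A.

    Project A onto the range of A @ G for a random G, then take the SVD
    of the small k-by-n matrix Q.T @ A. The point mirrored here is that
    a much smaller matrix can supply the singular values needed at each
    iteration of a robust-PCA-style solver.
    """
    G = rng.standard_normal((A.shape[1], k))
    Q, _ = np.linalg.qr(A @ G)            # orthonormal basis for the sketch
    return np.linalg.svd(Q.T @ A, compute_uv=False)

rng = np.random.default_rng(4)
# Exactly rank-5 matrix: the surrogate recovers its singular values
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
s_full = np.linalg.svd(A, compute_uv=False)[:5]
s_sketch = sketched_singular_values(A, 5, rng)
```

The SVD of the 5-by-100 surrogate costs far less than the SVD of the 200-by-100 original, which is the source of the speed-ups reported for the multilevel variants.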
Acceleration Methods for MRI
Acceleration methods are a critical area of research for MRI. Two of the most important acceleration techniques involve parallel imaging and compressed sensing. These advanced signal processing techniques have the potential to drastically reduce scan times and provide radiologists with new information for diagnosing disease. However, many of these new techniques require solving difficult optimization problems, which motivates the development of more advanced algorithms to solve them. In addition, acceleration methods have not reached maturity in some applications, which motivates the development of new models tailored to these applications. This dissertation makes advances in three different areas of acceleration. The first is the development of a new algorithm (called the B1-Based, Adaptive Restart, Iterative Soft Thresholding Algorithm, or BARISTA) that solves a parallel MRI optimization problem with compressed sensing assumptions. BARISTA is shown to be 2-3 times faster and more robust to parameter selection than current state-of-the-art variable splitting methods. The second contribution is the extension of BARISTA ideas to non-Cartesian trajectories, which also yields a 2-3 times acceleration over previous methods. The third contribution is the development of a new model for functional MRI that enables a 3-4-fold acceleration of effective temporal resolution in functional MRI scans. Several variations of the new model are proposed, with an ROC curve analysis showing that a combined low-rank/sparsity model gives the best performance in identifying the resting-state motor network. PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120841/1/mmuckley_1.pd
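The adaptive-restart idea behind BARISTA can be sketched on a generic sparse least-squares problem: FISTA's momentum is reset whenever the objective increases (hedged: BARISTA's B1-map-based preconditioning and the parallel-MRI forward model are omitted; this is plain function-value restart on a LASSO toy problem):

```python
import numpy as np

def fista_restart(A, b, lam, iters=500):
    """Iterative soft thresholding with Nesterov momentum and adaptive
    restart for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Momentum is dropped whenever the objective goes up, the restart
    heuristic that BARISTA-style methods build on.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    f_prev = np.inf
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        step = y - grad / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)
        f = 0.5 * np.sum((A @ x_new - b) ** 2) + lam * np.sum(np.abs(x_new))
        if f > f_prev:                     # restart: discard the momentum
            y, t = x, 1.0
            continue
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t, f_prev = x_new, t_new, f
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0                           # 5-sparse ground truth
b = A @ x_true                             # noiseless measurements
x_hat = fista_restart(A, b, lam=0.05)
```

Restart keeps the momentum from overshooting once the iterates near the solution, which is where the reported robustness to parameter selection comes from in methods of this family.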
Modeling and Development of Iterative Reconstruction Algorithms in Emerging X-ray Imaging Technologies
Many promising new X-ray-based biomedical imaging technologies have emerged over the last two decades. Five novel X-ray-based imaging technologies are discussed in this dissertation: differential phase-contrast tomography (DPCT), grating-based phase-contrast tomography (GB-PCT), spectral CT (K-edge imaging), cone-beam computed tomography (CBCT), and in-line X-ray phase-contrast (XPC) tomosynthesis. For each imaging modality, one or more specific problems that prevent it from being effectively or efficiently employed in clinical applications are discussed. Firstly, to mitigate the long data-acquisition times and large radiation doses associated with the use of analytic reconstruction methods in DPCT, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction. Secondly, to improve image quality in grating-based phase-contrast tomography, we incorporate 2nd-order statistical properties of the object property sinograms, including correlations between them, into the formulation of an advanced multi-channel (MC) image reconstruction algorithm, which reconstructs three object properties simultaneously. We developed an advanced algorithm based on the proximal point algorithm and the augmented Lagrangian method to rapidly solve the MC reconstruction problem. Thirdly, to mitigate image artifacts that arise from reduced-view and/or noisy decomposed sinogram data in K-edge imaging, we exploited the inherent sparseness of typical K-edge objects and incorporated the statistical properties of the decomposed sinograms to formulate two penalized weighted least-squares (PWLS) problems: one with a total variation (TV) penalty, and one with a weighted sum of a TV penalty and an l1-norm penalty with a wavelet sparsifying transform. We employed a fast iterative shrinkage/thresholding algorithm (FISTA) and a splitting-based FISTA to solve these two PWLS problems.
Fourthly, to enable advanced iterative algorithms to obtain better diagnostic images and accurate patient-positioning information for CBCT in image-guided radiation therapy within a few minutes, two accelerated variants of FISTA for PLS-based image reconstruction are proposed. The algorithm acceleration is obtained by replacing the original gradient-descent step with a sub-problem that is solved using the ordered-subsets concept (OS-SART). In addition, we present efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units (GPUs). Finally, we employed our accelerated version of FISTA to deal with the incomplete (and often noisy) data inherent to in-line XPC tomosynthesis, which combines the concepts of tomosynthesis and in-line XPC imaging to exploit the advantages of both for biological imaging applications. We also investigate the depth-resolution properties of XPC tomosynthesis and demonstrate that its z-resolution properties are superior to those of conventional absorption-based tomosynthesis. To investigate all these proposed novel strategies and new algorithms across these different imaging modalities, we conducted computer-simulation studies and studies with real experimental data. The proposed reconstruction methods will facilitate the clinical or preclinical translation of these emerging imaging methods.