Undersampled Phase Retrieval with Outliers
We propose a general framework for reconstructing transform-sparse images
from undersampled (squared)-magnitude data corrupted with outliers. This
framework is implemented using a multi-layered approach, combining multiple
initializations (to address the nonconvexity of the phase retrieval problem),
repeated minimization of a convex majorizer (surrogate for a nonconvex
objective function), and iterative optimization using the alternating
directions method of multipliers. Exploiting the generality of this framework,
we investigate using a Laplace measurement noise model better adapted to
outliers present in the data than the conventional Gaussian noise model. Using
simulations, we explore the sensitivity of the method to both the
regularization and penalty parameters. We include 1D Monte Carlo and 2D image
reconstruction comparisons with alternative phase retrieval algorithms. The
results suggest the proposed method, with the Laplace noise model, both
increases the likelihood of correct support recovery and reduces the mean
squared error from measurements containing outliers. We also describe exciting
extensions made possible by the generality of the proposed framework, including
regularization using analysis-form sparsity priors that are incompatible with
many existing approaches.
Comment: 11 pages, 9 figures
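The robustness gain from swapping the Gaussian noise model for a Laplace one can be illustrated with a toy scalar example (a minimal sketch, not the authors' ADMM framework): the L2 data fit implied by Gaussian noise yields the mean, which a single outlier drags far off, while the L1 fit implied by Laplace noise yields the median, which barely moves.

```python
import numpy as np

def gaussian_fit(y):
    # L2 data fit (Gaussian noise model): the minimizer is the mean
    return float(np.mean(y))

def laplace_fit(y):
    # L1 data fit (Laplace noise model): the minimizer is the median
    return float(np.median(y))

# Repeated measurements of a true value 4.0, with one gross outlier
true_val = 4.0
y = np.array([3.9, 4.1, 4.0, 3.95, 4.05, 50.0])

print(abs(gaussian_fit(y) - true_val))  # pulled far off by the outlier
print(abs(laplace_fit(y) - true_val))   # barely affected
```

The same mean-versus-median behavior carries over to the full reconstruction problem: the L1 residual penalty down-weights measurements that the model cannot explain.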
Four-dimensional tomographic reconstruction by time domain decomposition
Since the beginnings of tomography, reconstruction methods have required that the sample remain static during the acquisition of a single tomographic rotation. We derived and successfully implemented a tomographic reconstruction method that relaxes this decades-old requirement. In the presented method, dynamic tomographic data sets are decomposed in the temporal domain using basis functions, with an L1 regularization penalty applied to the spatial and temporal derivatives. We implemented the iterative algorithm for solving the regularization problem on modern GPU systems to demonstrate its practical use.
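The L1-regularized decomposition step can be sketched in miniature (illustrative only; the paper's GPU solver and derivative penalties are not reproduced here) with the standard iterative shrinkage-thresholding algorithm (ISTA) for an l1-penalized least-squares fit of basis-function coefficients:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy problem: recover sparse temporal-basis coefficients
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))      # basis-function system matrix
x_true = np.zeros(20)
x_true[3], x_true[11] = 2.0, -1.5      # two active basis functions
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # active coefficients recovered
```

The shrinkage step is what produces sparsity in the temporal decomposition; in the paper's setting the penalty acts on spatial and temporal derivatives rather than directly on the coefficients.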
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite considerable progress, the success of image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel spatially variant and hard to estimate. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
Comment: 53 pages, 17 figures
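As a minimal illustration of the variational category above (not drawn from any single reviewed method), non-blind deblurring with a known, spatially invariant kernel can be done by a Tikhonov-regularized inverse filter in the Fourier domain; the weight lam tames the ill-posedness that an unregularized inverse filter exposes by amplifying noise at frequencies the kernel suppresses:

```python
import numpy as np

def blur(img, kernel):
    # Circular (periodic) convolution via the FFT
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def deblur_tikhonov(blurry, kernel, lam):
    """Minimize ||k * x - y||^2 + lam*||x||^2 in the Fourier domain."""
    K = np.fft.fft2(kernel)
    X = np.conj(K) * np.fft.fft2(blurry) / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                 # toy sharp image
kernel = np.zeros((32, 32))
for dx in (-1, 0, 1):                   # centered 3x3 box blur (wraps around)
    for dy in (-1, 0, 1):
        kernel[dx, dy] = 1.0 / 9.0
y = blur(img, kernel) + 0.002 * rng.standard_normal((32, 32))

x_naive = deblur_tikhonov(y, kernel, lam=0.0)   # unregularized inverse filter
x_reg = deblur_tikhonov(y, kernel, lam=1e-3)    # Tikhonov-regularized
print(np.abs(x_naive - img).mean(), np.abs(x_reg - img).mean())
```

Even with this tiny noise level, the unregularized inverse filter produces a much larger reconstruction error than the regularized one, which is exactly the ill-posedness the review discusses.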
Recent Techniques for Regularization in Partial Differential Equations and Imaging
Inverse problems model real-world phenomena from data, where the data are often noisy and the models contain errors. This leads to instabilities, multiple solution vectors, and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and to allow for the incorporation of a priori information about the desired solution. In this thesis, high-order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain.
This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one- and two-dimensional functions from multiple measurement vectors using variance-based joint sparsity when a subset of the measurements contains false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high-order regularization, the defining characteristics of each of these problems create unique challenges.
Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine-tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
Dissertation/Thesis: Doctoral Dissertation, Mathematics, 201
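The edge-domain sparsity that the Polynomial Annihilation operator exploits can be illustrated with its simplest (second-order) instance, which vanishes on locally linear data and spikes at jump discontinuities (a toy sketch, not the thesis's high-order implementation):

```python
import numpy as np

def edge_indicator(f, order=2):
    # Order-2 polynomial annihilation: an order-n finite-difference stencil
    # annihilates polynomials of degree < n, so the output is ~0 on smooth
    # (here locally linear) data and spikes at jump discontinuities.
    return np.diff(f, n=order)

x = np.linspace(0.0, 1.0, 101)
f = np.where(x < 0.5, x, x + 1.0)   # two linear pieces with a jump at x = 0.5
e = edge_indicator(f)
print(int(np.argmax(np.abs(e))))    # index adjacent to the jump
```

A piecewise-smooth function is therefore sparse under this operator, which is what makes l1 penalties on the transformed function effective for edge-preserving reconstruction.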
Sparse Modeling of Grouped Line Spectra
This licentiate thesis focuses on clustered parametric models for the estimation of line spectra, when the spectral content of a signal source is assumed to exhibit some form of grouping. Unlike previous parametric approaches, which generally require explicit knowledge of the model orders, this thesis exploits sparse modeling, where the orders are implicitly chosen. For line spectra, the non-linear parametric model is approximated by a linear system containing an overcomplete basis of candidate frequencies, called a dictionary, and a large set of linear response variables that select and weight the components in the dictionary. Frequency estimates are obtained by solving a convex optimization program in which the sum of squared residuals is minimized. To discourage overfitting and to infer certain structure in the solution, different convex penalty functions are introduced into the optimization. The cost trade-off between fit and penalty is set by user parameters, so as to approximate the true number of spectral lines in the signal, which implies that the response variable will be sparse, i.e., have few non-zero elements. Thus, instead of explicit model orders, the orders are implicitly set by this trade-off. For grouped variables, the dictionary is customized, and appropriate convex penalties are selected, so that the solution becomes group sparse, i.e., has few groups with non-zero variables. In an array of sensors, the specific time delays and attenuations will depend on the source and sensor positions. By modeling this, one may estimate the location of a source. In this thesis, a novel joint location and grouped frequency estimator is proposed, which exploits sparse modeling for both spectral and spatial estimates, showing robustness against sources with overlapping frequency content. For audio signals, this thesis uses two different features for clustering.
Pitch is a perceptual property of sound that may be described by the harmonic model, i.e., by a group of spectral lines at integer multiples of a fundamental frequency, which we estimate by exploiting a novel adaptive total variation penalty. The other feature, chroma, is a concept from music theory that groups together pitches whose frequencies are related by powers of 2 (octaves). Using a chroma dictionary, together with appropriate group sparse penalties, we propose an automatic transcription of the chroma content of a signal.
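The dictionary-based line-spectra estimation described above can be sketched as follows (a simplified on-grid example; the thesis uses overcomplete dictionaries and richer group penalties). Candidate sinusoids form the dictionary columns, and an l1-penalized least-squares fit selects the few that are present:

```python
import numpy as np

def sine_dictionary(n, freqs):
    # Dictionary of candidate complex sinusoids, one column per frequency
    t = np.arange(n)
    return np.exp(2j * np.pi * np.outer(t, freqs)) / np.sqrt(n)

def ista_complex(A, b, lam, n_iter=200):
    """Proximal gradient for the complex LASSO:
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        v = x - A.conj().T @ (A @ x - b) / L
        mag = np.maximum(np.abs(v), 1e-12)
        x = v * np.maximum(1.0 - (lam / L) / mag, 0.0)  # complex soft threshold
    return x

n = 64
freqs = np.arange(33) / 64          # candidate frequency grid on [0, 0.5]
A = sine_dictionary(n, freqs)
b = 3.0 * A[:, 5] + 2.0 * A[:, 20]  # two spectral lines
x_hat = ista_complex(A, b, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 1.0))  # detected lines at indices 5 and 20
```

The penalty weight lam plays the role of the user parameter discussed above: it implicitly sets the model order by deciding how many dictionary atoms survive the thresholding.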
Hyperspectral unmixing: a theoretical aspect and applications to CRISM data processing
Hyperspectral imaging has been deployed in Earth and planetary remote sensing, and has contributed to the development of new methods for monitoring the Earth's environment and to new discoveries in planetary science. It has given scientists and engineers a new way to observe the surfaces of the Earth and other planetary bodies by measuring a spectroscopic spectrum at the pixel scale.
Hyperspectral images require complex processing before practical use. One of the important goals of hyperspectral imaging is to obtain images of the reflectance spectrum. A raw image obtained by hyperspectral remote sensing usually undergoes conversion to a physical quantity representing the intensity of light energy, called radiance. In order to obtain the reflectance spectrum of the surface, the contribution of the atmosphere needs to be removed, and the result divided by the spectrum of a ``white reference.'' Furthermore, the obtained reflectance spectra of image pixels are likely to be mixtures of multiple species due to the limited spatial resolution achievable from orbit around planets.
Hyperspectral unmixing is an attempt to unmix those pixels - to identify their constituent components and estimate their fractional abundances. Hyperspectral unmixing has been widely explored in the literature, but there are still many aspects yet to be studied. The majority of research focuses on the development of methods to retrieve the correct constituent components and accurate fractional abundances; their theoretical aspects are rarely investigated. Chapter 2 pursues a theoretical aspect of sparse unmixing, one of the hyperspectral unmixing problems, and derives theoretical conditions that guarantee the correct identification of the constituent components.
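The linear mixing model underlying unmixing, y ≈ E a with nonnegative abundances a, can be illustrated with a toy projected-gradient solver (made-up endmember spectra; real sparse-unmixing solvers add sparsity penalties and sum-to-one constraints):

```python
import numpy as np

def unmix(E, y, n_iter=2000):
    """Estimate nonnegative fractional abundances a in y ~ E @ a by
    projected gradient descent (a simple stand-in for full unmixing
    solvers)."""
    L = np.linalg.norm(E, 2) ** 2          # gradient Lipschitz constant
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        a = np.maximum(a - E.T @ (E @ a - y) / L, 0.0)  # project onto a >= 0
    return a

# Toy endmember library: 3 "spectra" over 10 bands (made-up numbers)
rng = np.random.default_rng(2)
E = np.abs(rng.standard_normal((10, 3)))
a_true = np.array([0.6, 0.0, 0.4])         # pixel mixes endmembers 0 and 2
y = E @ a_true
a_hat = unmix(E, y)
print(np.round(a_hat, 2))                  # close to [0.6, 0. , 0.4]
```

The sparse-unmixing theory studied in Chapter 2 asks when such a solver, applied with a large spectral library, is guaranteed to activate only the endmembers actually present in the pixel.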
Hyperspectral unmixing can also be used in other stages of hyperspectral data processing. Chapter 3 explores the application of hyperspectral unmixing to the processing of hyperspectral images acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) onboard the Mars Reconnaissance Orbiter (MRO). In particular, new atmospheric correction and de-noising methods for the CRISM data that use hyperspectral unmixing to model surface spectra are introduced. The new methods remove most of the problematic systematic artifacts present in CRISM images and significantly improve signal quality.
Chapter 4 investigates how hyperspectral images acquired from orbit can be combined with ground exploration. With many Mars rover missions launched in recent years, it is important to effectively integrate knowledge obtained by orbital hyperspectral remote sensing into ground exploration. Specifically, this dissertation solves the problem of matching hyperspectral image pixels obtained by CRISM with ground mega-pixel images acquired by the Mast Camera (Mastcam) installed on the Curiosity rover on Mars. A new systematic methodology to map the CRISM and Mastcam images onto high-resolution surface topography is developed.