A Theoretically Guaranteed Quaternion Weighted Schatten p-norm Minimization Method for Color Image Restoration
Inspired by the fact that the matrix formed by nonlocal similar patches
in a natural image is of low rank, the rank approximation problem has been
extensively investigated over the past decades; among the resulting methods,
weighted nuclear norm minimization (WNNM) and weighted Schatten p-norm
minimization (WSNM) are two prevailing approaches that have shown great
superiority in various image restoration (IR) problems. Due to the physical
characteristics of color images,
color image restoration (CIR) is often a much more difficult task than its
grayscale image counterpart. However, when applied to CIR, the traditional
WNNM/WSNM method only processes three color channels individually and fails to
consider their cross-channel correlations. Very recently, a quaternion-based
WNNM approach (QWNNM) has been developed to mitigate this issue, which is
capable of representing the color image as a whole in the quaternion domain and
preserving the inherent correlation among the three color channels. Despite its
empirical success, however, the convergence behavior of QWNNM has not yet been
rigorously studied. In this paper, on the one hand, we extend WSNM to the
quaternion domain and correspondingly propose a novel quaternion-based
WSNM model (QWSNM) for tackling the CIR problems. Extensive experiments on two
representative CIR tasks, including color image denoising and deblurring,
demonstrate that the proposed QWSNM method performs favorably against many
state-of-the-art alternatives, in both quantitative and qualitative
evaluations. On the other hand, and more importantly, we provide a preliminary
theoretical convergence analysis: by modifying the quaternion alternating
direction method of multipliers (QADMM) through a simple continuation strategy,
we prove that the solution sequences generated by both QWNNM and QWSNM enjoy
fixed-point convergence guarantees.

Comment: 46 pages, 10 figures; references added
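The weighted Schatten p-norm machinery behind WSNM and QWSNM reduces, for p = 1, to a weighted singular value thresholding step. The following is a minimal real-valued sketch of that step (quaternion arithmetic is omitted, and the uniform weights and synthetic patch matrix are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Prox of the weighted nuclear norm sum_i w_i * sigma_i(X), i.e. the
    p = 1 case of the weighted Schatten p-norm: soft-threshold each singular
    value of Y by its own weight. For nondecreasing weights this is the exact
    minimizer of 0.5*||X - Y||_F^2 + sum_i w_i * sigma_i(X)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - weights, 0.0)   # per-singular-value shrinkage
    return U @ np.diag(s_thr) @ Vt

rng = np.random.default_rng(0)
# Low-rank matrix plus noise, standing in for a stack of similar patches.
L_true = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))
Y = L_true + 0.1 * rng.standard_normal((20, 20))
w = np.full(20, 0.5)                       # uniform weights for the demo
X = weighted_svt(Y, w)                     # shrinks noise-dominated modes
```

In WNNM/WSNM-style restoration, larger singular values (which carry more structure) are assigned smaller weights so they are shrunk less.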
Applied Harmonic Analysis and Sparse Approximation
Efficiently analyzing functions, in particular multivariate functions, is a key problem in applied mathematics. The area of applied harmonic analysis has a significant impact on this problem by providing methodologies both for theoretical questions and for a wide range of applications in technology and science, such as image processing. Approximation theory, in particular the theory of sparse approximations, is closely intertwined with this area, and there have been many exciting recent developments at the intersection of the two. Research topics typically also involve related areas such as convex optimization, probability theory, and Banach space geometry. The workshop was the continuation of a first event in 2012 and was intended to bring together world-leading experts in these areas, to report on recent developments, and to foster new developments and collaborations.
Data-driven reduction strategies for Bayesian inverse problems
A persistent central challenge in computational science and engineering (CSE), with both national and global security implications, is the efficient solution of large-scale Bayesian inverse problems. These problems range from estimating material parameters in subsurface simulations to estimating phenomenological parameters in climate models. Despite recent progress, our ability to quantify uncertainties and solve large-scale inverse problems lags well behind our ability to develop the governing forward simulations.
Inverse problems present unique computational challenges that are only magnified as we include larger observational data sets and demand higher-resolution parameter estimates. Even with the current state-of-the-art, solving deterministic large-scale inverse problems is prohibitively expensive. Large-scale uncertainty quantification (UQ), cast in the Bayesian inversion framework, is thus rendered intractable. To conquer these challenges, new methods that target the root causes of computational complexity are needed.
In this dissertation, we propose data-driven strategies for overcoming this "curse of dimensionality." First, we address the computational complexity induced in large-scale inverse problems by high-dimensional observational data. We propose a randomized misfit approach (RMA), which uses random projections (quasi-orthogonal, information-preserving transformations) to map the high-dimensional data-misfit vector to a low-dimensional space. We provide the first theoretical explanation for why randomized misfit methods are successful in practice with a small reduced data-misfit dimension (n = O(1)).
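The dimension-reduction idea behind the RMA can be sketched with a Gaussian random projection, which approximately preserves the squared misfit norm in the Johnson-Lindenstrauss sense. The Gaussian ensemble, reduced dimension, and test vector below are illustrative assumptions, not the dissertation's exact construction:

```python
import numpy as np

def randomized_misfit(residual, n_reduced, rng):
    """Project a high-dimensional data-misfit vector onto a low-dimensional
    space with a scaled Gaussian random matrix; the scaling 1/sqrt(n_reduced)
    makes the squared norm of the projection an unbiased estimate of the
    full squared misfit."""
    d = residual.size
    S = rng.standard_normal((n_reduced, d)) / np.sqrt(n_reduced)
    return S @ residual

rng = np.random.default_rng(1)
r = rng.standard_normal(100_000)    # stand-in for a high-dimensional misfit
full = r @ r                        # true squared misfit
red = randomized_misfit(r, 500, rng)
approx = red @ red                  # low-dimensional estimate of the misfit
```

The point is that downstream computations only ever touch the 500-dimensional projected residual, while the squared misfit (the quantity the inverse problem actually minimizes) is preserved up to small relative error.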
Next, we develop the randomized geostatistical approach (RGA) for Bayesian subsurface inverse problems with high-dimensional data. We show that the RGA is able to resolve transient groundwater inverse problems with noisy observed-data dimensions up to 10^7, whereas a comparison method fails due to out-of-memory errors.
Finally, we address the solution of Bayesian inverse problems with spatially localized data. The motivation is CSE applications that would gain from high-fidelity estimation over a smaller data-local domain, versus expensive and uncertain estimation over the full simulation domain. We propose several truncated domain inversion methods using domain decomposition theory to build model-informed artificial boundary conditions. Numerical investigations of MAP estimation and sampling demonstrate improved fidelity and fewer partial differential equation (PDE) solves with our truncated methods.

Computational Science, Engineering, and Mathematics
Robust Algorithms for Low-Rank and Sparse Matrix Models
Data in statistical signal processing problems is often inherently matrix-valued, and a natural first step in working with such data is to impose a model with structure that captures the distinctive features of the underlying data. Under the right model, one can design algorithms that can reliably tease weak signals out of highly corrupted data. In this thesis, we study two important classes of matrix structure: low-rankness and sparsity. In particular, we focus on robust principal component analysis (PCA) models that decompose data into the sum of low-rank and sparse (in an appropriate sense) components. Robust PCA models are popular because they are useful models for data in practice and because efficient algorithms exist for solving them.
This thesis focuses on developing new robust PCA algorithms that advance the state-of-the-art in several key respects. First, we develop a theoretical understanding of the effect of outliers on PCA and the extent to which one can reliably reject outliers from corrupted data using thresholding schemes. We apply these insights and other recent results from low-rank matrix estimation to design robust PCA algorithms with improved low-rank models that are well-suited for processing highly corrupted data. On the sparse modeling front, we use sparse signal models like spatial continuity and dictionary learning to develop new methods with important adaptive representational capabilities. We also propose efficient algorithms for implementing our methods, including an extension of our dictionary learning algorithms to the online or sequential data setting. The underlying theme of our work is to combine ideas from low-rank and sparse modeling in novel ways to design robust algorithms that produce accurate reconstructions from highly undersampled or corrupted data. We consider a variety of application domains for our methods, including foreground-background separation, photometric stereo, and inverse problems such as video inpainting and dynamic magnetic resonance imaging.

PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/143925/1/brimoor_1.pd
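The low-rank plus sparse decomposition at the heart of robust PCA can be sketched with a simple alternating proximal scheme: singular value thresholding for the low-rank part and entrywise soft-thresholding for the sparse part. This is a textbook-style baseline, not the thesis's algorithms, and the thresholds and synthetic data are illustrative:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_alternating(Y, tau_l, tau_s, n_iters=50):
    """Decompose Y ~ L + S by alternating the two proximal updates."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iters):
        L = svt(Y - S, tau_l)    # low-rank fit to the non-sparse part
        S = soft(Y - L, tau_s)   # sparse fit to the residual
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
sparse = np.zeros((30, 30))
idx = rng.choice(900, size=40, replace=False)
sparse.flat[idx] = 5.0 * rng.choice([-1.0, 1.0], size=40)
Y = low_rank + sparse            # observed data: low-rank plus outliers
L, S = rpca_alternating(Y, tau_l=1.0, tau_s=1.0)
```

By construction of the final soft-thresholding step, the unexplained residual Y - L - S is bounded entrywise by tau_s, so the decomposition absorbs all large-magnitude structure into L and S.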
Learning-based Optimization for Signal and Image Processing
Incorporating machine learning techniques into optimization problems and solvers has attracted increasing attention. Given a particular type of optimization problem that needs to be solved repeatedly, machine learning techniques can exploit features shared across that problem class to develop algorithms with excellent performance. This thesis deals with algorithms and convergence analysis in learning-based optimization in three aspects: learning dictionaries, learning optimization solvers, and learning regularizers.

Learning dictionaries for sparse coding is significant for signal processing. Convolutional sparse coding is a form of sparse coding with a structured, translation-invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the training data size that can be used. I proposed two online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost than batch methods, and I provided a rigorous theoretical analysis of both.

Learning fast solvers for optimization is a rising research topic. In recent years, unfolding iterative algorithms as neural networks has seen empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. I studied unfolded ISTA (Iterative Shrinkage-Thresholding Algorithm) for sparse signal recovery and established its convergence. Based on the properties of the parameters required for convergence, the model can be significantly simplified and, consequently, has a much lower training cost and better recovery performance.

Learning regularizers or priors improves the performance of optimization solvers, especially for signal and image processing tasks.
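The unfolded-ISTA work above starts from the classical, non-learned iteration, which a short sketch makes concrete; LISTA-style unfolding replaces the fixed matrices and threshold below with learned, layer-wise parameters. The problem sizes and regularization weight are illustrative:

```python
import numpy as np

def ista(A, y, lam, n_iters=1000):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step
    on the quadratic term followed by soft-thresholding. Unfolding turns
    each iteration into a network layer with trainable weights."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 100)) / np.sqrt(50)   # underdetermined system
x0 = np.zeros(100)
x0[[3, 40, 77]] = [1.5, -2.0, 1.0]                 # 3-sparse ground truth
y = A @ x0                                         # noiseless measurements
x_hat = ista(A, y, lam=0.01)                       # sparse recovery
```

With a sufficiently sparse signal and small regularization weight, the iterates recover the support and approximate values of x0; a trained unfolded network reaches comparable accuracy in far fewer layers.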
Plug-and-play (PnP) is a non-convex framework that integrates modern priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this thesis, the theoretical convergence of PnP-FBS and PnP-ADMM was established, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. Furthermore, real spectral normalization was proposed for training deep learning-based denoisers to satisfy the proposed Lipschitz condition.
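The PnP-ADMM iteration can be sketched as follows, with a toy binomial smoothing filter standing in for BM3D or a learned denoiser and a simple quadratic data term; all parameters and the 1-D signal are illustrative assumptions:

```python
import numpy as np

def pnp_admm(y, denoiser, rho=1.0, n_iters=30):
    """PnP-ADMM for the data term 0.5*||x - y||^2, with the regularizer's
    proximal map replaced by a plug-in denoiser. Each iteration does a
    data-term prox step, a denoising step, and a dual update."""
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iters):
        x = (y + rho * (z - u)) / (1.0 + rho)   # prox of the data term
        z = denoiser(x + u)                      # plug-in denoising step
        u = u + x - z                            # dual (running residual) update
    return x

def binomial_denoiser(v):
    """Toy linear denoiser: smoothing with a binomial kernel."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    return np.convolve(v, kernel, mode="same")

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.3 * rng.standard_normal(200)
restored = pnp_admm(noisy, binomial_denoiser)
```

The convergence theory referenced in the abstract concerns exactly this kind of scheme: it asks when the iteration above still converges once the denoiser is no longer the proximal map of any explicit regularizer.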
Low-rank and sparse reconstruction in dynamic magnetic resonance imaging via proximal splitting methods
Dynamic magnetic resonance imaging (MRI) consists of collecting multiple MR images over time, resulting in a spatio-temporal signal. However, MRI intrinsically suffers from long acquisition times due to various constraints, which limits the full potential of dynamic MR imaging, such as obtaining the high spatial and temporal resolutions that are crucial for observing dynamic phenomena. This dissertation addresses the problem of reconstructing dynamic MR images from a limited number of samples arising from a nuclear magnetic resonance experiment. The term "limited" refers to the approach taken in this thesis to speed up scan time, which is based on violating the Nyquist criterion by skipping measurements that would normally be acquired in a standard MRI procedure. The resulting problem can be classified in the general framework of linear ill-posed inverse problems. This thesis shows how low-dimensional signal models, specifically low-rank and sparsity, can help in the reconstruction of dynamic images from partial measurements. The use of these models is justified by significant developments in signal recovery techniques from partial data that have emerged in recent years in signal processing. The major contributions of this thesis are the development and characterisation of fast and efficient computational tools using convex low-rank and sparse constraints via proximal gradient methods; the development and characterisation of a novel joint reconstruction–separation method via the low-rank plus sparse matrix decomposition technique; and the development and characterisation of low-rank based recovery methods in the context of dynamic parallel MRI. Finally, an additional contribution of this thesis is to formulate the various MR image reconstruction problems in the context of convex optimisation in order to develop algorithms based on proximal splitting methods.
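The low-rank plus sparse (L + S) reconstruction idea can be sketched with a proximal-gradient scheme on a masked-sampling toy problem. The binary mask below stands in for Fourier undersampling, and the weights, step size, and synthetic space-time data are illustrative assumptions, not the dissertation's exact formulation:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lps_recover(d, mask, lam_l=0.02, lam_s=0.02, step=0.5, n_iters=200):
    """Proximal-gradient sketch of min 0.5*||mask*(L+S) - d||^2
    + lam_l*||L||_* + lam_s*||S||_1: a gradient step on the data term,
    then SVT on the low-rank part and soft-thresholding on the sparse part."""
    L = np.zeros_like(d)
    S = np.zeros_like(d)
    for _ in range(n_iters):
        r = mask * (L + S) - d          # data-consistency residual
        L = svt(L - step * r, step * lam_l)
        S = soft(S - step * r, step * lam_s)
    return L, S

rng = np.random.default_rng(4)
# Space-time matrix: static rank-1 background plus a sparse dynamic part.
background = np.outer(rng.standard_normal(32), np.ones(24))
dynamic = np.zeros((32, 24))
dynamic[10:14, ::4] = 2.0
mask = (rng.random((32, 24)) < 0.6).astype(float)  # 60% sampling
d = mask * (background + dynamic)                   # undersampled data
L, S = lps_recover(d, mask)
```

The joint reconstruction-separation character of the method shows up directly: L captures the slowly varying background while S captures the localized dynamics, and both are estimated from the same undersampled measurements.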