Constrained low-tubal-rank tensor recovery for hyperspectral images mixed noise removal by bilateral random projections
In this paper, we propose a novel low-tubal-rank tensor recovery model that directly constrains the tubal-rank prior to effectively remove mixed Gaussian and sparse noise in hyperspectral images. The tubal-rank and sparsity constraints govern the solution of the denoised tensor throughout the recovery procedure. To solve the constrained low-tubal-rank model, we develop an iterative algorithm based on bilateral random projections. The advantage of random projections is that an accurate approximation of the low-tubal-rank tensor can be obtained inexpensively. Experimental examples of hyperspectral image denoising demonstrate the effectiveness and efficiency of the proposed method.
Comment: Accepted by IGARSS 201
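The bilateral random projection (BRP) step can be illustrated in the matrix case; the paper itself works with the tensor tubal-rank analogue, so the following is a minimal sketch of the idea rather than the authors' algorithm. It builds a rank-r approximation from two sketched products, with no full SVD:

```python
import numpy as np

def brp_lowrank(X, r, rng):
    """Bilateral random projection: rank-r approximation of X
    from two sketches, L = Y1 (A2^T Y1)^+ Y2^T, no full SVD needed."""
    m, n = X.shape
    A1 = rng.standard_normal((n, r))
    Y1 = X @ A1          # right sketch, m x r
    A2 = Y1              # reuse the sketch as the left projector
    Y2 = X.T @ A2        # left sketch, n x r
    # pseudo-inverse for numerical safety on the small r x r core
    return Y1 @ np.linalg.pinv(A2.T @ Y1) @ Y2.T

rng = np.random.default_rng(0)
# For an exactly rank-5 matrix, BRP recovers it up to round-off.
U = rng.standard_normal((100, 5))
V = rng.standard_normal((80, 5))
X = U @ V.T
L = brp_lowrank(X, 5, rng)
rel_err = np.linalg.norm(L - X) / np.linalg.norm(X)
```

When the target rank matches the true rank, the approximation is exact in exact arithmetic, which is what makes the projection step both accurate and cheap.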
Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types
of noise during the acquisition process, e.g., Gaussian noise, impulse noise,
dead lines, stripes, and many others. Such complex noise could degrade the
quality of the acquired HSIs, limiting the precision of the subsequent
processing. In this paper, we present a novel tensor-based HSI restoration
approach by fully identifying the intrinsic structures of the clean HSI part
and the mixed noise part respectively. Specifically, for the clean HSI part, we
use tensor Tucker decomposition to describe the global correlation among all
bands, and an anisotropic spatial-spectral total variation (SSTV)
regularization to characterize the piecewise smooth structure in both spatial
and spectral domains. For the mixed noise part, we adopt ℓ1-norm regularization to detect the sparse noise, including stripes, impulse noise, and dead pixels. Although TV regularization is able to remove Gaussian noise, a Frobenius-norm term is further used to model the heavy Gaussian noise that arises in some real-world scenarios. We then develop an efficient algorithm for solving the resulting optimization problem using the augmented Lagrange multiplier (ALM) method. Finally, extensive experiments on simulated and real-world noisy HSIs demonstrate the superiority of the proposed method over existing state-of-the-art ones.
Comment: 15 pages, 20 figures
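In ALM-type schemes of this kind, the sparse-noise subproblem is typically solved by entrywise soft-thresholding, the proximal operator of the ℓ1 norm. A minimal sketch of that standard building block (not the paper's implementation):

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1.
    Zeroes small entries and shrinks large ones toward zero,
    which is how the sparse-noise component is updated."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Entries below tau in magnitude vanish; the rest shrink by tau.
S = soft_threshold(np.array([3.0, -0.5, 1.0, -2.0]), 1.0)
```

Because the operator is separable over entries, this update costs only one pass over the tensor per ALM iteration.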
Recovering Structured Low-rank Operators Using Nuclear Norms
This work considers the problem of recovering matrices and operators from limited and/or noisy observations. Whereas matrices result from summing tensor products of vectors, operators result from summing tensor products of matrices. These constructions lead to viewing both matrices and operators as the sum of "simple" rank-1 factors.
A popular line of work in this direction is low-rank matrix recovery, i.e., using linear measurements of a matrix to reconstruct it as the sum of few rank-1 factors. Rank minimization problems are hard in general, and a popular approach to avoid them is convex relaxation. Using the trace norm as a surrogate for rank, the low-rank matrix recovery problem becomes convex.
While the trace norm has received much attention in the literature, other convexifications are possible. This thesis focuses on the class of nuclear norms—a class that includes the trace norm itself. Much as the trace norm is a convex surrogate for the matrix rank, other nuclear norms provide convex complexity measures for additional matrix structure. Namely, nuclear norms measure the structure of the factors used to construct the matrix.
Transitioning to the operator framework allows for novel uses of nuclear norms in recovering these structured matrices. In particular, this thesis shows how to lift structured matrix factorization problems to rank-1 operator recovery problems. This new viewpoint allows nuclear norms to measure richer types of structures present in matrix factorizations.
This work also includes a Python software package to model and solve structured operator recovery problems. Systematic numerical experiments in operator denoising demonstrate the effectiveness of nuclear norms in recovering structured operators. In particular, choosing a specific nuclear norm that corresponds to the underlying factor structure of the operator improves the performance of the recovery procedures when compared, for instance, to the trace norm.
Applications in hyperspectral imaging and self-calibration demonstrate the additional flexibility gained by utilizing operator (as opposed to matrix) factorization models.
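The basic computational step behind trace-norm denoising, used here as the baseline against richer nuclear norms, is singular value thresholding: the proximal operator of the trace norm. A sketch of that standard operation (not the thesis's software package):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: prox of tau * ||.||_*.
    Soft-thresholds the singular values, leaving the singular
    vectors unchanged, which yields a lower-rank denoised matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 15))   # full-rank noisy observation
D = svt(Y, 2.0)                     # small singular values are zeroed
```

Nuclear norms tailored to the factor structure replace this uniform spectral shrinkage with one that reflects the structure of the rank-1 factors.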
Tensor Robust PCA with Nonconvex and Nonlocal Regularization
Tensor robust principal component analysis (TRPCA) is a promising approach to
low-rank tensor recovery, which minimizes a convex surrogate of the tensor rank
by shrinking all tensor singular values equally. However, for real-world
visual data, large singular values carry more significant information than
small singular values. In this paper, we propose a nonconvex TRPCA (N-TRPCA)
model based on the tensor adjustable logarithmic norm. Unlike TRPCA, our
N-TRPCA can adaptively shrink small singular values more and shrink large
singular values less. In addition, TRPCA assumes that the whole data tensor is
of low rank. This assumption is hardly satisfied in practice for natural visual
data, restricting the capability of TRPCA to recover the edges and texture
details from noisy images and videos. To this end, we integrate nonlocal
self-similarity into N-TRPCA, and further develop a nonconvex and nonlocal
TRPCA (NN-TRPCA) model. Specifically, similar nonlocal patches are grouped into a tensor, and each group tensor is then recovered by our N-TRPCA. Since the patches in one group are highly correlated, every group tensor exhibits a strong low-rank property, leading to improved recovery performance.
Experimental results demonstrate that the proposed NN-TRPCA outperforms some
existing TRPCA methods in visual data recovery. The demo code is available at
https://github.com/qguo2010/NN-TRPCA.
Comment: 19 pages, 7 figures
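One common way to realize "shrink small singular values more, large ones less" is reweighted shrinkage with a value-dependent threshold. The following sketch illustrates that general idea only; it is not necessarily the paper's exact adjustable logarithmic norm:

```python
import numpy as np

def adaptive_shrink(s, tau, eps=1e-3):
    """Reweighted singular value shrinkage. The effective threshold
    tau / (s + eps) grows as s shrinks, so small singular values are
    penalized more and large ones are nearly preserved."""
    return np.maximum(s - tau / (s + eps), 0.0)

s = np.array([10.0, 1.0, 0.1])
out = adaptive_shrink(s, 0.5)
# the largest value loses ~0.05, the smallest is driven to zero
```

Contrast this with uniform shrinkage (s - tau), which removes the same amount from every singular value regardless of its significance.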
A Constrained Convex Optimization Approach to Hyperspectral Image Restoration with Hybrid Spatio-Spectral Regularization
We propose a new constrained optimization approach to hyperspectral (HS)
image restoration. Most existing methods restore a desirable HS image by solving an optimization problem consisting of regularization and data-fidelity terms. Because both kinds of terms must be balanced within a single objective function, the hyperparameters that weight them have to be controlled carefully. Setting such hyperparameters is often troublesome, however, because their suitable values depend strongly on the regularization terms adopted and on the noise intensity of a given observation. Our proposed method
is formulated as a convex optimization problem, where we utilize a novel hybrid
regularization technique named Hybrid Spatio-Spectral Total Variation (HSSTV)
and incorporate data fidelity as hard constraints. HSSTV effectively removes noise and artifacts while avoiding oversmoothing and spectral distortion, without combining other regularizations such as low-rank modeling-based ones. In addition, the constraint-type data fidelity lets us replace the hyperparameters that balance regularization against data fidelity with upper bounds on the degree of data fidelity, which can be set much more easily. We also develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem. Through comprehensive experiments, we illustrate the advantages of the proposed method over various HS image restoration methods, including state-of-the-art ones.
Comment: 20 pages, 4 tables, 10 figures, submitted to MDPI Remote Sensing
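For Gaussian noise, a constraint-type data fidelity of the kind described above typically takes the form of an ℓ2-ball constraint around the observation, which ADMM handles via a projection step. A minimal sketch under that assumption (the actual constraints in the paper may differ):

```python
import numpy as np

def project_l2_ball(x, center, eps):
    """Project x onto the ball {z : ||z - center||_2 <= eps}.
    Replaces a fidelity-weight hyperparameter with a noise-level
    bound eps, which is easier to set from the observation."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= eps else center + (eps / n) * d

y = np.zeros(4)                          # observation
p_out = project_l2_ball(np.array([3.0, 0.0, 0.0, 0.0]), y, 1.0)
p_in = project_l2_ball(np.array([0.5, 0.0, 0.0, 0.0]), y, 1.0)
```

Points already inside the ball are untouched; points outside are pulled radially onto its boundary, so the iterate never violates the data-fidelity bound.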