Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types
of noise during the acquisition process, e.g., Gaussian noise, impulse noise,
dead lines, stripes, and many others. Such complex noise could degrade the
quality of the acquired HSIs, limiting the precision of the subsequent
processing. In this paper, we present a novel tensor-based HSI restoration
approach by fully identifying the intrinsic structures of the clean HSI part
and the mixed noise part respectively. Specifically, for the clean HSI part, we
use tensor Tucker decomposition to describe the global correlation among all
bands, and an anisotropic spatial-spectral total variation (SSTV)
regularization to characterize the piecewise smooth structure in both spatial
and spectral domains. For the mixed noise part, we adopt the ℓ1-norm
regularization to detect the sparse noise, including stripes, impulse noise,
and dead pixels. Although TV regularization can remove Gaussian noise, a
Frobenius-norm term is further used to model heavy Gaussian noise in some
real-world scenarios. Then, we develop an efficient algorithm
for solving the resulting optimization problem by using the augmented Lagrange
multiplier (ALM) method. Finally, extensive experiments on simulated and
real-world noise HSIs are carried out to demonstrate the superiority of the
proposed method over the existing state-of-the-art ones.
Comment: 15 pages, 20 figures
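The anisotropic SSTV regularizer described above sums absolute finite differences along both spatial axes and the spectral axis of the HSI cube. A minimal numpy sketch of such a regularizer (the weights `w_spatial` and `w_spectral` are hypothetical trade-off parameters for illustration, not values from the paper):

```python
import numpy as np

def sstv(hsi, w_spatial=1.0, w_spectral=0.5):
    """Anisotropic spatial-spectral TV of an HSI cube (height, width, bands).

    w_spatial and w_spectral are hypothetical trade-off weights,
    not values taken from the paper.
    """
    dh = np.abs(np.diff(hsi, axis=0)).sum()  # vertical spatial differences
    dw = np.abs(np.diff(hsi, axis=1)).sum()  # horizontal spatial differences
    ds = np.abs(np.diff(hsi, axis=2)).sum()  # spectral differences
    return w_spatial * (dh + dw) + w_spectral * ds

# A piecewise-constant cube has zero SSTV.
assert sstv(np.ones((8, 8, 4))) == 0.0
```

In the full model this term promotes piecewise smoothness of the clean HSI part, while sparse noise such as stripes is handled by the separate ℓ1 term rather than by TV.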
Robust regularized singular value decomposition with application to mortality data
We develop a robust regularized singular value decomposition (RobRSVD) method
for analyzing two-way functional data. The research is motivated by the
application of modeling human mortality as a smooth two-way function of age
group and year. The RobRSVD is formulated as a penalized loss minimization
problem where a robust loss function is used to measure the reconstruction
error of a low-rank matrix approximation of the data, and an appropriately
defined two-way roughness penalty function is used to ensure smoothness along
each of the two functional domains. By viewing the minimization problem as two
conditional regularized robust regressions, we develop a fast iterative
reweighted least squares algorithm to implement the method. Our implementation
naturally incorporates missing values. Furthermore, our formulation allows
rigorous derivation of leave-one-row/column-out cross-validation and
generalized cross-validation criteria, which enable computationally efficient
data-driven penalty parameter selection. The advantages of the new robust
method over nonrobust ones are shown via extensive simulation studies and the
mortality rate application.
Comment: Published in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics
(http://www.imstat.org) at http://dx.doi.org/10.1214/13-AOAS649
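The "two conditional regularized robust regressions" view of the abstract can be illustrated with a stripped-down IRLS sketch for a robust rank-1 fit under the Huber loss. This is a simplification for illustration only: it omits the roughness penalties, missing-data handling, and cross-validation of the actual RobRSVD method, and `delta` is a hypothetical robustness threshold:

```python
import numpy as np

def robust_rank1(X, delta=1.0, n_iter=50):
    """Rank-1 fit X ~ u v' under the Huber loss via IRLS.

    Simplified sketch of the conditional-regression view: no roughness
    penalties, no missing data; delta is a hypothetical robustness knob.
    """
    # Initialize from the ordinary (nonrobust) SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0] * s[0], Vt[0].copy()
    for _ in range(n_iter):
        r = np.abs(X - np.outer(u, v))
        # Huber IRLS weights: 1 for small residuals, delta/|r| for large ones.
        W = np.where(r > delta, delta / np.maximum(r, 1e-12), 1.0)
        # Conditional weighted least-squares regression for u, then for v.
        u = (W * X) @ v / np.maximum((W * v**2).sum(axis=1), 1e-12)
        v = (W * X).T @ u / np.maximum((W.T * u**2).sum(axis=1), 1e-12)
    return u, v
```

On clean rank-1 data the iteration is a fixed point of the SVD solution; with a gross outlier the downweighting keeps the fit close to the uncorrupted low-rank structure, which is the advantage over a plain SVD.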
OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage
The truncated singular value decomposition (SVD) of the measurement matrix is
the optimal solution to the _representation_ problem of how to best approximate
a noisy measurement matrix using a low-rank matrix. Here, we consider the
(unobservable) _denoising_ problem of how to best approximate a low-rank signal
matrix buried in noise by optimal (re)weighting of the singular vectors of the
measurement matrix. We exploit recent results from random matrix theory to
exactly characterize the large matrix limit of the optimal weighting
coefficients and show that they can be computed directly from data for a large
class of noise models that includes the i.i.d. Gaussian noise case.
Our analysis brings into sharp focus the shrinkage-and-thresholding form of
the optimal weights, the non-convex nature of the associated shrinkage function
(on the singular values) and explains why matrix regularization via singular
value thresholding with convex penalty functions (such as the nuclear norm)
will always be suboptimal. We validate our theoretical predictions with
numerical simulations, develop an implementable algorithm (OptShrink) that
realizes the predicted performance gains and show how our methods can be used
to improve estimation in the setting where the measured matrix has missing
entries.
Comment: Published version. The algorithm can be downloaded from
http://www.eecs.umich.edu/~rajnrao/optshrin
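The representation-vs-denoising distinction can be made concrete with a small numpy sketch. For fixed singular vectors of the noisy matrix, the Frobenius-optimal diagonal weights are w_i = u_iᵀ S v_i, which requires the unobservable signal S; OptShrink's contribution is estimating these weights from the data alone via random matrix theory, which this oracle sketch deliberately does not attempt (matrix sizes and noise level are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 2  # hypothetical sizes and rank

# Low-rank signal S buried in i.i.d. Gaussian noise.
S = 2.0 * rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X = S + 0.5 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_r, Vt_r = U[:, :r], Vt[:r]

# Representation: truncated SVD keeps the noisy singular values as-is.
tsvd = (U_r * s[:r]) @ Vt_r
# Denoising oracle: best diagonal weights on the same singular vectors,
# w_i = u_i' S v_i.  This needs S; OptShrink instead estimates the
# weights from X alone.
w = np.array([U_r[:, i] @ S @ Vt_r[i] for i in range(r)])
oracle = (U_r * w) @ Vt_r

err_tsvd = np.linalg.norm(tsvd - S)
err_oracle = np.linalg.norm(oracle - S)
# Optimal reweighting of the same singular vectors can only reduce
# the Frobenius error relative to truncation.
assert err_oracle <= err_tsvd + 1e-9
```

Because the rank-1 terms u_i v_iᵀ are orthonormal in the Frobenius inner product, the oracle weights minimize the error over all diagonal reweightings, so the truncated SVD (which uses the noisy s_i as weights) can never do better.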
Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm
We show that matrix completion with trace-norm regularization can be
significantly hurt when entries of the matrix are sampled non-uniformly. We
introduce a weighted version of the trace-norm regularizer that works well also
with non-uniform sampling. Our experimental results demonstrate that the
weighted trace-norm regularization indeed yields significant gains on the
(highly non-uniformly sampled) Netflix dataset.
Comment: 9 pages
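The weighted trace norm scales the matrix by the square roots of the row and column sampling probabilities before taking the nuclear norm, so that frequently sampled rows and columns are penalized more. A minimal numpy sketch (in practice the probability vectors would be the empirical marginals of the observed entries):

```python
import numpy as np

def weighted_trace_norm(X, row_p, col_p):
    """Weighted trace norm ||diag(row_p)^{1/2} X diag(col_p)^{1/2}||_*,
    where row_p and col_p are row/column sampling probabilities."""
    W = np.sqrt(row_p)[:, None] * X * np.sqrt(col_p)[None, :]
    return np.linalg.svd(W, compute_uv=False).sum()
```

Under uniform sampling (row_p = 1/m, col_p = 1/n) the scaling is a scalar, so the weighted norm reduces to the ordinary trace norm up to the factor 1/sqrt(mn); the gains reported in the abstract come from the non-uniform case.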