Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types
of noise during the acquisition process, e.g., Gaussian noise, impulse noise,
dead lines, stripes, and many others. Such complex noise could degrade the
quality of the acquired HSIs, limiting the precision of the subsequent
processing. In this paper, we present a novel tensor-based HSI restoration
approach by fully identifying the intrinsic structures of the clean HSI part
and the mixed noise part respectively. Specifically, for the clean HSI part, we
use tensor Tucker decomposition to describe the global correlation among all
bands, and an anisotropic spatial-spectral total variation (SSTV)
regularization to characterize the piecewise smooth structure in both spatial
and spectral domains. For the mixed noise part, we adopt the ℓ1-norm
regularization to detect sparse noise, including stripes, impulse noise,
and dead pixels. Although TV regularization can remove Gaussian noise,
a Frobenius norm term is further used to model heavy Gaussian
noise for some real-world scenarios. Then, we develop an efficient algorithm
for solving the resulting optimization problem by using the augmented Lagrange
multiplier (ALM) method. Finally, extensive experiments on simulated and
real-world noisy HSIs are carried out to demonstrate the superiority of the
proposed method over existing state-of-the-art ones.
Comment: 15 pages, 20 figures
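The anisotropic spatial-spectral total variation (SSTV) regularizer described above penalizes first-order differences along both spatial axes and the spectral axis of the HSI cube. A minimal NumPy sketch of such a penalty is shown below; the weights and exact weighting scheme are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np

def sstv(hsi, w_spatial=1.0, w_spectral=0.5):
    """Anisotropic spatial-spectral total variation of an HSI cube.

    hsi: array of shape (height, width, bands).
    Sums absolute first-order differences along the two spatial axes
    and the spectral axis; the weights here are illustrative.
    """
    dx = np.abs(np.diff(hsi, axis=0)).sum()  # vertical spatial differences
    dy = np.abs(np.diff(hsi, axis=1)).sum()  # horizontal spatial differences
    dz = np.abs(np.diff(hsi, axis=2)).sum()  # spectral differences
    return w_spatial * (dx + dy) + w_spectral * dz
```

A piecewise-constant cube scores low under this penalty, which is why such a term promotes piecewise smooth restorations.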
From Rank Estimation to Rank Approximation: Rank Residual Constraint for Image Restoration
In this paper, we propose a novel approach to the rank minimization problem,
termed rank residual constraint (RRC) model. Different from existing low-rank
based approaches, such as the well-known nuclear norm minimization (NNM) and
the weighted nuclear norm minimization (WNNM), which estimate the underlying
low-rank matrix directly from the corrupted observations, we progressively
approximate the underlying low-rank matrix via minimizing the rank residual.
Through integrating the image nonlocal self-similarity (NSS) prior with the
proposed RRC model, we apply it to image restoration tasks, including image
denoising and image compression artifacts reduction. Towards this end, we first
obtain a good reference of the original image groups by using the image NSS
prior, and then the rank residual of the image groups between this reference
and the degraded image is minimized to achieve a better estimate to the desired
image. In this manner, both the reference and the estimated image are updated
gradually and jointly in each iteration. Based on the group-based sparse
representation model, we further provide a theoretical analysis on the
feasibility of the proposed RRC model. Experimental results demonstrate that
the proposed RRC model outperforms many state-of-the-art schemes in both
objective and perceptual quality.
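The nuclear norm minimization (NNM) baseline that this abstract contrasts against is usually solved with singular value thresholding, the proximal operator of the nuclear norm. The sketch below shows that basic building block; it is standard NNM machinery, not the paper's RRC update itself.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm. Soft-thresholds the singular values of Y by tau,
    yielding the closest matrix under a nuclear-norm penalty."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)  # shrink singular values toward zero
    return (U * s_thr) @ Vt
```

Approaches like WNNM replace the uniform threshold `tau` with per-singular-value weights; the RRC model instead shrinks toward a reference estimate rather than toward zero.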
Kernel Belief Propagation
We propose a nonparametric generalization of belief propagation, Kernel
Belief Propagation (KBP), for pairwise Markov random fields. Messages are
represented as functions in a reproducing kernel Hilbert space (RKHS), and
message updates are simple linear operations in the RKHS. KBP makes none of the
assumptions commonly required in classical BP algorithms: the variables need
not arise from a finite domain or a Gaussian distribution, nor must their
relations take any particular parametric form. Rather, the relations between
variables are represented implicitly, and are learned nonparametrically from
training data. KBP has the advantage that it may be used on any domain where
kernels are defined (R^d, strings, groups), even where explicit parametric
models are not known, or closed form expressions for the BP updates do not
exist. The computational cost of message updates in KBP is polynomial in the
training data size. We also propose a constant time approximate message update
procedure by representing messages using a small number of basis functions. In
experiments, we apply KBP to image denoising, depth prediction from still
images, and protein configuration prediction: KBP is faster than competing
classical and nonparametric approaches (by orders of magnitude, in some cases),
while providing significantly more accurate results.
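The "messages as linear operations in an RKHS" idea can be caricatured in a few lines: represent a message as a weight vector over the n training samples, and compute the outgoing message by multiplying the kernel Gram matrix into the elementwise product of incoming messages. This toy sketch illustrates the flavor of the update only; the paper's actual estimator involves conditional embedding operators, not this exact formula.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of a Gaussian RBF kernel over 1-D samples X."""
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-gamma * d2)

def kbp_message(K, incoming):
    """Toy RKHS-style message update (illustrative, not the paper's
    estimator): a message is a weight vector over the n training
    samples; the new message is a Gram-matrix multiply applied to
    the elementwise product of incoming messages, then normalized."""
    prod = np.prod(np.vstack(incoming), axis=0)  # combine incoming messages
    m = K @ prod                                 # linear operation in the RKHS
    return m / np.linalg.norm(m)
```

Because each update is a matrix-vector product over the training set, its cost is polynomial in the training data size, matching the complexity claim above; the constant-time variant would restrict messages to a small basis.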