Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types
of noise during the acquisition process, e.g., Gaussian noise, impulse noise,
dead lines, stripes, and many others. Such complex noise could degrade the
quality of the acquired HSIs, limiting the precision of the subsequent
processing. In this paper, we present a novel tensor-based HSI restoration
approach by fully identifying the intrinsic structures of the clean HSI part
and the mixed noise part respectively. Specifically, for the clean HSI part, we
use tensor Tucker decomposition to describe the global correlation among all
bands, and an anisotropic spatial-spectral total variation (SSTV)
regularization to characterize the piecewise smooth structure in both spatial
and spectral domains. For the mixed noise part, we adopt ℓ1-norm
regularization to detect the sparse noise, including stripes, impulse noise,
and dead pixels. Although TV regularization is able to remove Gaussian noise,
a Frobenius-norm term is further used to model heavy Gaussian
noise for some real-world scenarios. Then, we develop an efficient algorithm
for solving the resulting optimization problem by using the augmented Lagrange
multiplier (ALM) method. Finally, extensive experiments on simulated and
real-world noise HSIs are carried out to demonstrate the superiority of the
proposed method over the existing state-of-the-art ones.
Comment: 15 pages, 20 figures
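The Tucker step that models the global correlation among bands can be illustrated with a truncated higher-order SVD (HOSVD). This is a minimal numpy sketch under that assumption, not the authors' full SSTV-regularized ALM solver; the function names are hypothetical:

```python
import numpy as np

def truncated_hosvd(tensor, ranks):
    """Truncated HOSVD: a simple Tucker approximation.

    Unfold the tensor along each mode, keep the leading left singular
    vectors as that mode's factor matrix, then project to get the core.
    """
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        # Contract mode `mode` of the core with the transposed factor.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core back by each factor matrix along its mode."""
    out = core
    for mode, u in enumerate(factors):
        out = np.moveaxis(np.tensordot(u, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out
```

With full ranks the reconstruction is exact; truncating the spectral-mode rank is what enforces the low-rank band correlation.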
Wavelet Integrated CNNs for Noise-Robust Image Classification
Convolutional Neural Networks (CNNs) are generally prone to noise
interruptions, i.e., small image noise can cause drastic changes in the output.
To suppress the noise effect on the final prediction, we enhance CNNs by
replacing max-pooling, strided-convolution, and average-pooling with Discrete
Wavelet Transform (DWT). We present general DWT and Inverse DWT (IDWT) layers
applicable to various wavelets like Haar, Daubechies, and Cohen, etc., and
design wavelet integrated CNNs (WaveCNets) using these layers for image
classification. In WaveCNets, feature maps are decomposed into the
low-frequency and high-frequency components during the down-sampling. The
low-frequency component stores main information including the basic object
structures, which is transmitted into the subsequent layers to extract robust
high-level features. The high-frequency components, containing most of the data
noise, are dropped during inference to improve the noise-robustness of the
WaveCNets. Our experimental results on ImageNet and ImageNet-C (the noisy
version of ImageNet) show that WaveCNets, the wavelet integrated versions of
VGG, ResNets, and DenseNet, achieve higher accuracy and better noise-robustness
than their vanilla versions.
Comment: CVPR accepted paper
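The DWT down-sampling idea, keep the low-frequency subband and drop the high-frequency ones, can be illustrated with a single-level 2-D Haar transform. This is a hedged numpy sketch of the mechanism only; the paper's layers also support Daubechies and Cohen wavelets:

```python
import numpy as np

def haar_dwt_downsample(feature_map):
    """Single-level orthonormal 2-D Haar DWT of a (H, W) map, H and W even.

    Returns the low-frequency approximation (kept, as in WaveCNets)
    and the three high-frequency detail subbands (dropped at inference).
    """
    a = feature_map[0::2, 0::2]
    b = feature_map[0::2, 1::2]
    c = feature_map[1::2, 0::2]
    d = feature_map[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # approximation (low-low)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, (lh, hl, hh)
```

Because the transform is orthonormal, the four subbands preserve the input energy, and for smooth regions the detail subbands are near zero, which is why discarding them removes mostly noise.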
Convolutional Sparse Representations with Gradient Penalties
While convolutional sparse representations enjoy a number of useful
properties, they have received limited attention for image reconstruction
problems. The present paper compares the performance of block-based and
convolutional sparse representations in the removal of Gaussian white noise.
While the usual formulation of the convolutional sparse coding problem is
slightly inferior to the block-based representations in this problem, the
performance of the convolutional form can be boosted beyond that of the
block-based form by the inclusion of suitable penalties on the gradients of the
coefficient maps.
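A minimal 1-D sketch of the idea, assuming an ℓ2 penalty on the gradients of the coefficient maps and a plain ISTA solver (the paper's exact formulation and optimizer may differ):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def csc_grad_penalty_ista(s, filters, lam=0.1, mu=0.05, step=0.05, iters=200):
    """ISTA for 1-D convolutional sparse coding with a gradient penalty:

        min_x 0.5*||sum_m d_m * x_m - s||^2 + lam*sum_m ||x_m||_1
              + 0.5*mu*sum_m ||grad x_m||^2

    Odd-length filters assumed so 'same'-mode convolution stays centered.
    """
    n = len(s)
    maps = [np.zeros(n) for _ in filters]
    for _ in range(iters):
        recon = sum(np.convolve(x, d, mode="same") for x, d in zip(maps, filters))
        resid = recon - s
        for i, d in enumerate(filters):
            # Adjoint of convolution = correlation (flip the filter).
            g = np.convolve(resid, d[::-1], mode="same")
            # Discrete Laplacian: gradient of 0.5*||grad x||^2.
            lap = np.convolve(maps[i], np.array([-1.0, 2.0, -1.0]), mode="same")
            maps[i] = soft_threshold(maps[i] - step * (g + mu * lap), step * lam)
    return maps
```

The `mu` term is the gradient penalty in question; setting it to zero recovers plain convolutional BPDN.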
Image Restoration Using Joint Statistical Modeling in Space-Transform Domain
This paper presents a novel strategy for high-fidelity image restoration by
characterizing both local smoothness and nonlocal self-similarity of natural
images in a unified statistical manner. The main contributions are threefold.
First, from the perspective of image statistics, a joint statistical modeling
(JSM) in an adaptive hybrid space-transform domain is established, which offers
a powerful mechanism of combining local smoothness and nonlocal self-similarity
simultaneously to ensure a more reliable and robust estimation. Second, a new
minimization functional for solving the image inverse problem is formulated
using JSM under a regularization-based framework. Finally, in order to make JSM
tractable and robust, a new Split-Bregman based algorithm is developed to
efficiently solve the above severely underdetermined inverse problem, together
with a theoretical proof of convergence. Extensive experiments on image
inpainting, image deblurring and mixed Gaussian plus salt-and-pepper noise
removal applications verify the effectiveness of the proposed algorithm.
Comment: 14 pages, 18 figures, 7 tables, to be published in IEEE Transactions
on Circuits System and Video Technology (TCSVT). High resolution pdf version
and Code can be found at: http://idm.pku.edu.cn/staff/zhangjian/IRJSM
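The split-Bregman machinery the algorithm builds on can be illustrated on a much simpler problem, 1-D total-variation denoising. This is an illustrative sketch of the splitting technique only, not the JSM functional itself:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, lam=1.0, mu=5.0, iters=100):
    """Solve min_u 0.5*||u - f||^2 + lam*||Du||_1 by split Bregman.

    The splitting d = Du turns the non-smooth term into a shrinkage
    step; b is the Bregman (running-residual) variable.
    """
    n = len(f)
    D = np.diff(np.eye(n), axis=0)       # (n-1, n) forward differences
    A = np.eye(n) + mu * D.T @ D         # normal equations for the u-update
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic subproblem
        Du = D @ u
        d = soft(Du + b, lam / mu)                      # shrinkage subproblem
        b = b + Du - d                                  # Bregman update
    return u
```

Each iteration alternates a linear solve with a cheap shrinkage, which is what makes the split-Bregman approach efficient for severely underdetermined problems.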
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes
We consider continuous-time sparse stochastic processes from which we have
only a finite number of noisy/noiseless samples. Our goal is to estimate the
noiseless samples (denoising) and the signal in-between (interpolation
problem).
By relying on tools from the theory of splines, we derive the joint a priori
distribution of the samples and show how this probability density function can
be factorized. The factorization enables us to tractably implement the maximum
a posteriori and minimum mean-square error (MMSE) criteria as two statistical
approaches for estimating the unknowns. We compare the derived statistical
methods with well-known techniques for the recovery of sparse signals, such as
the ℓ1-norm and Log (ℓ1-ℓ0 relaxation) regularization
methods. The simulation results show that, under certain conditions, the
performance of the regularization techniques can be very close to that of the
MMSE estimator.
Comment: To appear in IEEE TSP
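The MAP-vs-MMSE comparison can be illustrated in the simplest scalar case, a Laplace prior with additive Gaussian noise, where MAP reduces to soft-thresholding and the MMSE posterior mean can be evaluated numerically. This is an illustrative sketch only, not the paper's spline-based factorization:

```python
import numpy as np

def map_laplace(y, sigma=1.0, b=1.0):
    """MAP estimate for x ~ Laplace(0, b), y = x + N(0, sigma^2).

    Maximizing the posterior gives soft-thresholding at sigma^2 / b.
    """
    t = sigma**2 / b
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def mmse_laplace(y, sigma=1.0, b=1.0):
    """MMSE estimate (posterior mean) via a Riemann sum on a fine grid."""
    grid = np.linspace(-30.0, 30.0, 4001)
    prior = np.exp(-np.abs(grid) / b)
    likelihood = np.exp(-0.5 * ((y - grid) / sigma) ** 2)
    posterior = prior * likelihood
    # Uniform grid spacing cancels in the ratio.
    return float((grid * posterior).sum() / posterior.sum())
```

For small observations MAP clips exactly to zero while MMSE shrinks smoothly; far from the origin the two estimators nearly coincide, matching the abstract's observation that regularization can approach MMSE performance under certain conditions.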