
    Weighted Low-rank Tensor Recovery for Hyperspectral Image Restoration

    Full text link
    Hyperspectral imaging, providing abundant spatial and spectral information simultaneously, has attracted a lot of interest in recent years. Unfortunately, due to hardware limitations, the hyperspectral image (HSI) is vulnerable to various degradations, such as noise (random noise; HSI denoising), blur (Gaussian and uniform blur; HSI deblurring), and down-sampling (both spectral and spatial; HSI super-resolution). Previous HSI restoration methods are designed for one specific task only. Besides, most of them start from 1-D vector or 2-D matrix models and cannot fully exploit the structured spectral-spatial correlation in the 3-D HSI. To overcome these limitations, in this work we propose a unified low-rank tensor recovery model for comprehensive HSI restoration tasks, in which non-local similarity between spectral-spatial cubes and spectral correlation are simultaneously captured by third-order tensors. Further, to improve its capability and flexibility, we formulate it as a weighted low-rank tensor recovery (WLRTR) model by treating the singular values differently, and study its analytical solution. We also handle the stripe noise peculiar to HSI as a gross error by extending WLRTR to robust principal component analysis (WLRTR-RPCA). Extensive experiments demonstrate that the proposed WLRTR models consistently outperform state-of-the-art methods in typical low-level vision HSI tasks, including denoising, destriping, deblurring and super-resolution. Comment: 22 pages, 22 figures
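
    The core computational step behind a weighted low-rank recovery model of this kind is a weighted singular-value shrinkage applied to matricized patch groups. Below is a minimal NumPy sketch of that step; the reweighting rule (shrinking larger singular values less) and the function name `weighted_svt` are illustrative assumptions, not the exact WLRTR scheme or its analytical solution.

```python
import numpy as np

def weighted_svt(M, tau, eps=1e-8):
    """Weighted singular-value thresholding on a matricized patch group.

    Larger singular values (more informative components) receive smaller
    weights and are therefore shrunk less; this reweighting rule is a common
    heuristic, not necessarily the exact one used in WLRTR.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = 1.0 / (s + eps)                      # weight inversely proportional to magnitude
    s_shrunk = np.maximum(s - tau * w, 0.0)  # weighted soft-thresholding
    return (U * s_shrunk) @ Vt

# toy usage: denoise a noisy low-rank matrix (e.g. an unfolded patch group)
rng = np.random.default_rng(0)
L = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))   # rank-5 signal
X = L + 0.1 * rng.standard_normal(L.shape)                        # additive noise
X_hat = weighted_svt(X, tau=2.0)
print(np.linalg.norm(X_hat - L) / np.linalg.norm(L))              # below the noise level
```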

    A General Model for Robust Tensor Factorization with Unknown Noise

    Full text link
    Because of the limitations of matrix factorization, such as the loss of spatial structure information, low-rank tensor factorization (LRTF) has been applied to recover a low-dimensional subspace from high-dimensional visual data. Low-rank tensor recovery is generally achieved by minimizing a loss function between the observed data and the factorization representation. The loss function is designed in various forms under different noise distribution assumptions, such as the L_1 norm for a Laplacian distribution and the L_2 norm for a Gaussian distribution. However, these often fail on real data corrupted by noise with an unknown distribution. In this paper, we propose a generalized weighted low-rank tensor factorization method (GWLRTF) integrated with the idea of noise modelling. The procedure treats the target data directly as a high-order tensor and models the noise by a Mixture of Gaussians, and is therefore called MoG GWLRTF. The parameters of the model are estimated under the EM framework through a newly developed weighted low-rank tensor factorization algorithm. We provide two versions of the algorithm with different tensor factorization operations, i.e., CP factorization and Tucker factorization. Extensive experiments indicate the respective advantages of these two versions in different applications and also demonstrate the effectiveness of MoG GWLRTF compared with other competing methods. Comment: 13 pages, 8 figures
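
    A rough sketch of the noise-modelling half of such a procedure is shown below: a zero-mean Mixture of Gaussians is fitted to the current residual with EM, and the fitted responsibilities yield per-entry weights for the next weighted factorization step. The function `mog_em_weights` and the two-component default are assumptions for illustration; the actual MoG GWLRTF alternates these updates with a weighted CP or Tucker factorization of the full tensor.

```python
import numpy as np

def mog_em_weights(residual, K=2, n_iter=20, eps=1e-8):
    """Fit a zero-mean Mixture of Gaussians to the residual entries by EM and
    return per-entry weights (expected noise precisions) for the next
    weighted low-rank factorization step."""
    r = residual.ravel()
    pi = np.full(K, 1.0 / K)                          # mixing proportions
    sigma2 = np.var(r) * np.arange(1, K + 1)          # spread-out initial variances
    for _ in range(n_iter):
        # E-step: responsibility of each Gaussian component for each entry
        log_p = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2)
                 - 0.5 * r[:, None] ** 2 / sigma2)
        log_p -= log_p.max(axis=1, keepdims=True)
        gamma = np.exp(log_p)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions and component variances
        Nk = gamma.sum(axis=0) + eps
        pi = Nk / r.size
        sigma2 = (gamma * r[:, None] ** 2).sum(axis=0) / Nk + eps
    w = (gamma / sigma2).sum(axis=1)                  # expected inverse variance
    return w.reshape(residual.shape), pi, sigma2

# toy usage: small Gaussian noise plus a sparse set of large outliers
rng = np.random.default_rng(1)
res = 0.05 * rng.standard_normal((50, 50))
mask = rng.random(res.shape) < 0.1
res[mask] += 2.0 * rng.standard_normal(mask.sum())
W, pi, sigma2 = mog_em_weights(res)
print(pi.round(3), sigma2.round(4))   # outlier entries receive small weights in W
```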

    Color Image and Multispectral Image Denoising Using Block Diagonal Representation

    Full text link
    Filtering images of more than one channel is challenging in terms of both efficiency and effectiveness. By grouping similar patches to exploit the self-similarity and sparse linear approximation of natural images, recent nonlocal and transform-domain methods have been widely used in color and multispectral image (MSI) denoising. Many related methods focus on modeling group-level correlation to enhance sparsity, which often resorts to a recursive strategy with a large number of similar patches; the importance of the patch-level representation is understated. In this paper, we mainly investigate the influence and potential of the patch-level representation by considering a general formulation with a block diagonal matrix. We further show that by training a proper global patch basis, along with a local principal component analysis transform in the grouping dimension, a simple transform-threshold-inverse method can produce very competitive results. A fast implementation is also developed to reduce computational complexity. Extensive experiments on both simulated and real datasets demonstrate its robustness, effectiveness and efficiency.
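
    The transform-threshold-inverse pipeline described above can be sketched in a few lines. In the sketch below a fixed 2-D DCT stands in for the trained global patch basis, and an SVD supplies the local PCA transform in the grouping dimension; `denoise_group`, the threshold value and the toy group are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_group(group, thr):
    """Transform-threshold-inverse denoising of one group of similar patches.

    `group` has shape (p, p, n): n similar p x p patches. A fixed 2-D DCT
    stands in for a trained global patch basis; the SVD of the group supplies
    a local PCA transform over the grouping (similar-patch) dimension.
    """
    p, _, n = group.shape
    G = group.reshape(p * p, n)                        # patches as columns
    _, _, Vt = np.linalg.svd(G, full_matrices=False)   # local PCA basis
    coeff = dctn(group, axes=(0, 1), norm='ortho').reshape(p * p, n) @ Vt.T
    coeff[np.abs(coeff) < thr] = 0.0                   # hard thresholding
    return idctn((coeff @ Vt).reshape(p, p, n), axes=(0, 1), norm='ortho')

# toy usage: a group of nearly identical noisy patches
rng = np.random.default_rng(2)
clean = rng.standard_normal((8, 8, 1)).repeat(16, axis=2)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
print(np.abs(denoise_group(noisy, thr=0.8) - clean).mean())
```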

    Denoising and Completion of 3D Data via Multidimensional Dictionary Learning

    Full text link
    In this paper a new dictionary learning algorithm for multidimensional data is proposed. Unlike most conventional dictionary learning methods, which are derived for vectors or matrices, our algorithm, named K-TSVD, learns a multidimensional dictionary directly via a novel algebraic approach to tensor factorization proposed in [3, 12, 13]. Using this approach one can define a tensor-SVD, and we propose to extend the K-SVD algorithm used for 1-D data to a K-TSVD algorithm for handling 2-D and 3-D data. Our algorithm, based on the idea of sparse coding (using group sparsity over multidimensional coefficient vectors), alternates between estimating a compact representation and updating the dictionary. We analyze our K-TSVD algorithm and demonstrate its results on video completion and multispectral image denoising. Comment: 9 pages, submitted to Conference on Computer Vision and Pattern Recognition (CVPR) 201
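
    The algebraic ingredient that the tensor-SVD and K-TSVD construction rest on is the t-product, which becomes slice-wise matrix multiplication after a Fourier transform along the third mode. Below is a minimal NumPy sketch of that operation together with a toy "dictionary times codes" synthesis; the shapes and names are assumptions, not the authors' code.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Circular convolution along the third mode becomes ordinary matrix
    multiplication of the frontal slices in the Fourier domain; this is the
    operation the tensor-SVD is defined through.
    """
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)     # slice-wise products
    return np.real(np.fft.ifft(Cf, axis=2))

# toy usage: a data tensor expressed as a tensor-linear combination D * C
rng = np.random.default_rng(3)
D = rng.standard_normal((64, 10, 5))      # 10 multidimensional atoms
C = rng.standard_normal((10, 20, 5))      # coefficient tubes for 20 signals
X = t_product(D, C)                       # (64, 20, 5) synthesized data
print(X.shape)
```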

    Efficient Two-Dimensional Sparse Coding Using Tensor-Linear Combination

    Full text link
    Sparse coding (SC) is an automatic feature extraction and selection technique widely used in unsupervised learning. However, conventional SC vectorizes the input images, which breaks the local proximity of pixels and destroys the elementary object structures of images. In this paper, we propose a novel two-dimensional sparse coding (2DSC) scheme that represents the input images as tensor-linear combinations under a novel algebraic framework. 2DSC learns much more concise dictionaries because it uses the circular convolution operator, under which shifted versions of an atom learned by conventional SC are treated as the same atom. We apply 2DSC to natural images and demonstrate that 2DSC returns meaningful dictionaries for large patches. Moreover, for multi-spectral image denoising, the proposed 2DSC reduces computational costs with competitive performance in comparison with state-of-the-art algorithms.
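
    The shift-invariance the abstract appeals to follows directly from the circular convolution in the tensor-linear combination: circularly shifting a coefficient tube shifts the atom's contribution, so shifted copies of an atom never need to be stored separately. The short check below illustrates this; the atom size and delta-shaped code are arbitrary toy choices.

```python
import numpy as np

def t_product(A, B):
    """t-product: slice-wise multiplication in the Fourier domain."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ijk,jlk->ilk', Af, Bf), axis=2))

# One atom (a lateral slice of the dictionary) combined with a coefficient
# tube. Circularly shifting the coefficient tube circularly shifts the atom's
# contribution along the third mode, so shifted atoms are "the same" atom.
rng = np.random.default_rng(4)
atom = rng.standard_normal((16, 1, 8))           # one atom with 8 tube entries
code = np.zeros((1, 1, 8))
code[0, 0, 0] = 1.0                              # delta coefficient tube
shifted_code = np.roll(code, 2, axis=2)          # same atom, shifted code
y0 = t_product(atom, code)
y2 = t_product(atom, shifted_code)
print(np.allclose(np.roll(y0, 2, axis=2), y2))   # True: the shift lives in the code
```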

    SMDS-Net: Model Guided Spectral-Spatial Network for Hyperspectral Image Denoising

    Full text link
    Deep learning (DL) based hyperspectral image (HSI) denoising approaches directly learn the nonlinear mapping between observed noisy images and underlying clean images. They normally do not consider the physical characteristics of HSIs, which leaves them lacking the interpretability that is key to understanding their denoising mechanism. To tackle this problem, we introduce a novel model-guided interpretable network for HSI denoising. Specifically, fully considering the spatial redundancy, spectral low-rankness and spectral-spatial properties of HSIs, we first establish a subspace-based multi-dimensional sparse model. This model first projects the observed HSI into a low-dimensional orthogonal subspace and then represents the projected image with a multidimensional dictionary. The model is then unfolded into an end-to-end network named SMDS-Net, whose fundamental modules are seamlessly connected with the denoising procedure and optimization of the model. This gives SMDS-Net clear physical meaning, i.e., it learns the low-rankness and sparsity of HSIs. Finally, all key variables, including the dictionaries and thresholding parameters, are obtained by end-to-end training. Extensive experiments and comprehensive analysis confirm the denoising ability and interpretability of our method against state-of-the-art HSI denoising methods. Comment: The experimental settings have been updated
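
    The kind of model-guided update that gets unrolled into network layers can be sketched as one ISTA-style iteration: project the noisy HSI onto a spectral subspace, take a gradient step on a dictionary-coding data term, and apply soft-thresholding (the operation that becomes a layer with a learnable threshold). The names, shapes and step sizes below are illustrative assumptions and do not reproduce the SMDS-Net architecture.

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft-thresholding, the proximal step that an unfolded
    sparse model turns into a network layer with a learnable theta."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_iteration(Y, E, D, Z, step, theta):
    """One ISTA-style iteration of a subspace + sparse-dictionary model.

    Y: noisy HSI unfolded to (bands, pixels); E: (bands, k) orthogonal
    spectral subspace; D: (k, m) dictionary for the projected image;
    Z: (m, pixels) sparse codes. Only a schematic of the update that gets
    unrolled into layers, not the SMDS-Net modules themselves.
    """
    X = E.T @ Y                                   # project into the subspace
    grad = D.T @ (D @ Z - X)                      # gradient of the data term
    Z = soft_threshold(Z - step * grad, theta)    # proximal (sparsity) step
    return E @ (D @ Z), Z                         # back-projected estimate

# toy usage with random stand-ins for the learned quantities
rng = np.random.default_rng(5)
bands, pixels, k, m = 31, 400, 6, 24
E = np.linalg.qr(rng.standard_normal((bands, k)))[0]
D = rng.standard_normal((k, m))
Y = rng.standard_normal((bands, pixels))
Z = np.zeros((m, pixels))
for _ in range(10):
    X_hat, Z = unfolded_iteration(Y, E, D, Z, step=0.01, theta=0.05)
print(X_hat.shape, np.count_nonzero(Z) / Z.size)
```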

    A Low-rank Tensor Dictionary Learning Method for Multi-spectral Images Denoising

    Full text link
    As a third-order tensor, a multi-spectral image (MSI) has dozens of spectral bands, which deliver more information about real scenes. However, real MSIs are often corrupted by noise in the sensing process, which further deteriorates the performance of higher-level classification and recognition tasks. In this paper, we propose a Low-rank Tensor Dictionary Learning (LTDL) method for MSI denoising. Firstly, we extract blocks from the MSI and cluster them into groups. Then, instead of using an exactly low-rank model, we consider a nearly low-rank approximation, which is closer to the latent low-rank structure of the clean groups of real MSIs. In addition, we propose to learn a spatial dictionary and a spectral dictionary, which contain the spatial and spectral features, respectively, of the whole MSI and are shared among the different groups. Hence the LTDL method utilizes both the latent low-rank prior of each group and the correlation across groups via the shared dictionaries. Experiments on synthetic data validate the effectiveness of dictionary learning by LTDL. Experiments on real MSIs demonstrate the superior denoising performance of the proposed method in comparison to state-of-the-art methods.
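
    A rough sketch of the two ingredients combined above, using a matrix unfolding of one group: coefficients under shared (here random orthonormal) spatial and spectral dictionaries, with the core made "nearly low-rank" by soft-thresholding its singular values rather than truncating them outright. The LTDL objective and its optimization differ; this is only meant to make the idea concrete.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: a 'nearly low-rank' estimate that
    shrinks small singular values instead of truncating them."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def denoise_group(G, D_spat, D_spec, tau):
    """Denoise one group using dictionaries shared across all groups.

    G: (spatial_dim, bands) unfolding of a group; D_spat, D_spec: orthonormal
    spatial and spectral dictionaries. A rough illustration of combining a
    shared-dictionary representation with a nearly low-rank core.
    """
    core = D_spat.T @ G @ D_spec     # coefficients under the shared dictionaries
    core = svt(core, tau)            # nearly low-rank core
    return D_spat @ core @ D_spec.T

# toy usage with random orthonormal stand-ins for the learned dictionaries
rng = np.random.default_rng(6)
D_spat = np.linalg.qr(rng.standard_normal((64, 64)))[0]
D_spec = np.linalg.qr(rng.standard_normal((31, 31)))[0]
G_clean = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 31))
G_noisy = G_clean + 0.1 * rng.standard_normal(G_clean.shape)
G_hat = denoise_group(G_noisy, D_spat, D_spec, tau=1.5)
print(np.linalg.norm(G_hat - G_clean) / np.linalg.norm(G_clean))
```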

    Non-local Meets Global: An Integrated Paradigm for Hyperspectral Denoising

    Full text link
    Non-local low-rank tensor approximation has been developed as a state-of-the-art method for hyperspectral image (HSI) denoising. Unfortunately, as the number of spectral bands grows, the running time of these methods increases significantly while their denoising performance benefits little. In this paper, we claim that the HSI lies in a global spectral low-rank subspace, and that the spectral subspace of each group of full-band patches should lie within this global low-rank subspace. This motivates us to propose a unified spatial-spectral paradigm for HSI denoising. As the new model is hard to optimize, we further propose an efficient algorithm motivated by alternating minimization. It first learns a low-dimensional projection and the related reduced image from the noisy HSI; then, non-local low-rank denoising and iterative regularization are developed to refine the reduced image and the projection, respectively. Finally, experiments on both synthetic and real datasets demonstrate its superiority over other state-of-the-art HSI denoising methods.
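
    A minimal sketch of the alternation described above: estimate a global spectral subspace from the (partially denoised) HSI, form the reduced image, denoise it non-locally, back-project, and apply iterative regularization by mixing the noisy input back in. The non-local denoiser is left as a placeholder, and the subspace dimension, mixing weight and iteration count are assumptions.

```python
import numpy as np

def spectral_subspace(Y, k):
    """Estimate a global k-dimensional spectral subspace from the HSI.

    Y is the HSI unfolded to (bands, pixels); the leading left singular
    vectors give an orthogonal projection, and E.T @ Y is the reduced image
    that a non-local low-rank denoiser would then operate on.
    """
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U[:, :k]

# toy usage: alternate between subspace estimation and (placeholder) denoising
rng = np.random.default_rng(7)
bands, pixels, k = 31, 1024, 4
clean = rng.standard_normal((bands, k)) @ rng.standard_normal((k, pixels))
Y = clean + 0.2 * rng.standard_normal((bands, pixels))
X = Y.copy()
for it in range(3):
    E = spectral_subspace(X, k)
    R = E.T @ X                      # reduced image in the spectral subspace
    # ... non-local low-rank denoising of R would go here ...
    X = E @ R                        # back-projection
    X = 0.9 * X + 0.1 * Y            # iterative regularization: mix the noisy input back in
print(np.linalg.norm(X - clean) / np.linalg.norm(clean))
```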

    Blind Multi-spectral Image Decomposition by 3D Nonnegative Tensor Factorization

    Get PDF
    Alpha-divergence based nonnegative tensor factorization (NTF) is applied to blind multi-spectral image (MSI) decomposition. The matrix of spectral profiles and the matrix of spatial distributions of the materials present in the image are identified from the factors of the Tucker3 and PARAFAC models. NTF preserves local structure in the MSI that is lost, due to vectorization of the image, by nonnegative matrix factorization (NMF)- or independent component analysis (ICA)-based decompositions. Moreover, NTF based on the PARAFAC model is unique up to permutation and scale under mild conditions. To achieve the same, NMF- and ICA-based factorizations respectively require sparseness (orthogonality) and statistical independence constraints on the spatial distributions of the materials present in the MSI, constraints that generally do not hold in practice. We demonstrate the efficiency of the NTF-based factorization relative to NMF- and ICA-based factorizations on blind decomposition of an experimental MSI with known ground truth.
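
    For concreteness, here is a plain NumPy sketch of a nonnegative PARAFAC (CP) factorization by multiplicative updates, recovering nonnegative factor matrices from a 3-way tensor. The updates below minimize the Frobenius loss rather than the alpha-divergence used in the paper, and the Tucker3 variant is not shown; names, rank and iteration count are assumptions.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product, consistent with C-order unfoldings."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def ntf_parafac(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Nonnegative PARAFAC (CP) factorization by multiplicative updates.

    Returns A (I x R), B (J x R), C (K x R) with X approximately equal to
    sum_r A[:, r] o B[:, r] o C[:, r], all entries nonnegative.
    """
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.random((n, rank)) + 0.1 for n in (I, J, K))
    X1 = X.reshape(I, -1)                              # mode-1 unfolding
    X2 = np.transpose(X, (1, 0, 2)).reshape(J, -1)     # mode-2 unfolding
    X3 = np.transpose(X, (2, 0, 1)).reshape(K, -1)     # mode-3 unfolding
    for _ in range(n_iter):
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

# toy usage: recover nonnegative spatial (A, B) and spectral (C) factors
rng = np.random.default_rng(8)
A0, B0, C0 = rng.random((20, 3)), rng.random((20, 3)), rng.random((10, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = ntf_parafac(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```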

    Tensor Low Rank Modeling and Its Applications in Signal Processing

    Full text link
    Modeling a multidimensional signal as a tensor is more faithful than representing it as a collection of matrices. Tensor-based approaches can exploit the abundant spatial and temporal structure of the multidimensional signal. The backbone of this modeling is the mathematical foundation of tensor algebra. Linear-transform based tensor algebra furnishes low-complexity, high-performance algebraic structures suitable for the introspection of multidimensional signals. A comprehensive introduction to linear-transform based tensor algebra is provided from the signal processing viewpoint. The rank of a multidimensional signal is a valuable property that gives insight into its structural aspects. All natural multidimensional signals can be approximated by a low-rank signal without losing significant information. Low-rank approximation is beneficial in many signal processing applications such as denoising, missing sample estimation, resolution enhancement, classification, background estimation, object detection, deweathering and clustering, among others. Detailed case studies of the ways and means of low-rank modeling in the aforementioned signal processing applications are also presented.
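
    As one concrete instance of the low-rank approximation discussed above, the sketch below computes a truncated higher-order SVD (low multilinear-rank approximation) of a noisy tensor. Other notions of tensor rank (CP rank, tubal rank under the linear-transform based algebra) lead to different algorithms; the ranks and toy data here are assumptions.

```python
import numpy as np

def mode_product(X, M, mode):
    """Mode-n product: multiply the mode-`mode` fibers of X by the matrix M."""
    Xm = np.moveaxis(X, mode, 0)
    out = np.tensordot(M, Xm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def truncated_hosvd(X, ranks):
    """Low multilinear-rank approximation of X via truncated HOSVD."""
    factors, core = [], X
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U = np.linalg.svd(unfold, full_matrices=False)[0][:, :r]
        factors.append(U)
        core = mode_product(core, U.T, mode)      # project onto leading subspace
    for mode, U in enumerate(factors):
        core = mode_product(core, U, mode)        # reconstruct from the core
    return core

# toy usage: denoise a synthetic low multilinear-rank tensor
rng = np.random.default_rng(9)
G = rng.standard_normal((3, 3, 3))
U = [rng.standard_normal((n, 3)) for n in (30, 30, 10)]
clean = mode_product(mode_product(mode_product(G, U[0], 0), U[1], 1), U[2], 2)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
approx = truncated_hosvd(noisy, ranks=(3, 3, 3))
print(np.linalg.norm(approx - clean) / np.linalg.norm(clean))
```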