
    Nonnegative tensor CP decomposition of hyperspectral data

    New hyperspectral missions will collect huge amounts of hyperspectral data. Moreover, it is now possible to acquire time series and multiangular hyperspectral images. Processing and analyzing these large data collections will require common hyperspectral techniques to be adapted or reformulated. Tensor decomposition, a.k.a. multiway analysis, is a technique for decomposing multiway arrays, that is, hypermatrices with more than two dimensions (ways). Hyperspectral time series and multiangular acquisitions can be represented as 3-way tensors. Here, we apply Canonical Polyadic (CP) tensor decomposition techniques to the blind analysis of hyperspectral big data. To do so, we use a novel compression-based nonnegative CP decomposition. We show that the proposed methodology can be interpreted as multilinear blind spectral unmixing, a higher-order extension of the widely known spectral unmixing. In the proposed approach, the big hyperspectral tensor is decomposed into three sets of factors, which can be interpreted as spectral signatures, their spatial distributions, and their temporal/angular changes. We provide experimental validation on a study case of snow coverage in the French Alps during the snow season.
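    As a rough illustration of the kind of decomposition the abstract describes (not the paper's compression-based algorithm), the sketch below fits a nonnegative CP model to a synthetic 3-way tensor using plain Lee-Seung-style multiplicative updates; all sizes and variable names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: move axis `mode` first, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def nonneg_cp(T, rank, n_iter=500, eps=1e-12, seed=0):
    """Rank-R nonnegative CP via multiplicative updates (stays nonnegative)."""
    rng = np.random.default_rng(seed)
    factors = [rng.random((s, rank)) + 0.1 for s in T.shape]
    for _ in range(n_iter):
        for mode in range(T.ndim):
            others = [factors[m] for m in range(T.ndim) if m != mode]
            M = khatri_rao(*others)           # unfold(T, mode) ≈ factors[mode] @ M.T
            numer = unfold(T, mode) @ M
            denom = factors[mode] @ (M.T @ M) + eps
            factors[mode] *= numer / denom    # nonnegative numerator and denominator
    return factors

# Synthetic "space x wavelength x time" tensor of exact nonnegative rank 2
rng = np.random.default_rng(1)
A, B, C = (rng.random((n, 2)) for n in (8, 6, 5))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

F = nonneg_cp(T, rank=2)
rec = np.einsum('ir,jr,kr->ijk', *F)
rel_err = np.linalg.norm(T - rec) / np.linalg.norm(T)
```

    In the hyperspectral reading, the three factor matrices play the roles of spatial abundances, spectral signatures, and temporal/angular profiles.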

    Tensor Denoising via Amplification and Stable Rank Methods

    Tensors in the form of multilinear arrays are ubiquitous in data science applications. Captured real-world data, including video, hyperspectral images, and discretized physical systems, naturally occur as tensors and often come with attendant noise. Under the additive noise model, and with the assumption that the underlying clean tensor has low rank, many denoising methods have been created that use tensor decomposition to effect denoising through low-rank tensor approximation. However, all such decomposition methods require estimating the tensor rank, or related measures such as the tensor spectral and nuclear norms, all of which are NP-hard problems. In this work we leverage our previously developed framework of tensor amplification, which provides good approximations of the spectral and nuclear tensor norms, to denoise synthetic tensors of various sizes, ranks, and noise levels, along with real-world tensors derived from physiological signals. We also introduce two new notions of tensor rank, stable slice rank and stable X-rank, and new denoising methods based on their estimation. The experimental results show that in the low-rank context, tensor-based amplification provides comparable denoising performance in high signal-to-noise ratio (SNR) settings and superior performance in noisy (i.e., low-SNR) settings, while the stable X-rank method achieves superior denoising performance on the physiological signal data.
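    The "denoising through low-rank tensor approximation" idea can be sketched with a generic truncated-HOSVD projection; this baseline assumes the multilinear ranks are known and is not the amplification or stable-rank estimators the abstract introduces.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_denoise(T, ranks):
    """Project T onto its leading mode-wise singular subspaces (truncated HOSVD)."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, *U)   # compress
    return np.einsum('abc,ia,jb,kc->ijk', core, *U)  # expand back

rng = np.random.default_rng(0)
# Clean multilinear-rank-(2,2,2) tensor plus additive Gaussian noise
G = rng.standard_normal((2, 2, 2))
U0, U1, U2 = (np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (10, 9, 8))
clean = np.einsum('abc,ia,jb,kc->ijk', G, U0, U1, U2)
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

den = hosvd_denoise(noisy, ranks=(2, 2, 2))
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(den - clean)
```

    The projection discards the noise energy lying outside the estimated low-rank subspaces, which is why rank (or a proxy like the stable ranks above) must be estimated first.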

    Classification of hyperspectral images by tensor modeling and additive morphological decomposition

    Pixel-wise classification in high-dimensional multivariate images is investigated. The proposed method jointly exploits the spectral and spatial information provided in hyperspectral images. An additive morphological decomposition (AMD) based on morphological operators is proposed. AMD defines a scale-space decomposition for multivariate images without any loss of information. AMD is modeled as a tensor structure, and tensor principal component analysis is compared as a dimensionality-reduction algorithm against the classic approach. Experimental comparison shows that the proposed algorithm can provide better performance for pixel classification of hyperspectral images than many other well-known techniques.
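    A minimal sketch of the "scale-space decomposition without loss of information" idea, using iterated grayscale openings on one band; this simplified telescoping construction is only loosely inspired by AMD, not the paper's exact operator, and the window-based min/max filters and sizes are assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _filter(img, k, reduce_fn):
    """Sliding k-by-k min or max filter with replicated borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    return reduce_fn(sliding_window_view(padded, (k, k)), axis=(-2, -1))

def opening(img, k):
    """Grayscale opening: erosion (min filter) followed by dilation (max filter)."""
    return _filter(_filter(img, k, np.min), k, np.max)

def additive_decomposition(img, sizes=(3, 5, 7)):
    """Telescoping residues of openings at growing scales; parts sum back to img."""
    parts, current = [], img
    for k in sizes:
        opened = opening(current, k)
        parts.append(current - opened)  # detail removed at this scale
        current = opened
    parts.append(current)               # coarse residual
    return parts

rng = np.random.default_rng(0)
band = rng.random((16, 16))
parts = additive_decomposition(band)
stacked = np.stack(parts)               # a (scale x row x col) slice per band
```

    Stacking the per-scale components of every band is what yields the 3-way tensor structure on which a tensor PCA can then operate.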

    An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors

    This paper introduces an algorithm for the nonnegative matrix factorization-and-completion problem, which aims to find nonnegative low-rank matrices X and Y so that the product XY approximates a nonnegative data matrix M whose elements are partially known (to a certain accuracy). This problem aggregates two existing problems: (i) nonnegative matrix factorization, where all entries of M are given, and (ii) low-rank matrix completion, where nonnegativity is not required. By taking advantage of both nonnegativity and low-rankness, one can generally obtain better results than by using only one of the two properties. We propose to solve this non-convex constrained least-squares problem using an algorithm based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties of the algorithm and numerical simulation results are presented. Compared to a recent algorithm for nonnegative matrix factorization, the proposed algorithm produces factorizations of similar quality using only about half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the proposed algorithm yields overall better quality than two recent matrix-completion algorithms that do not exploit nonnegativity.
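    To make the problem setup concrete, here is a toy nonnegative factorization-and-completion fit using weighted multiplicative updates restricted to the observed entries; this simplified alternating scheme stands in for, and is not, the paper's alternating direction augmented Lagrangian algorithm.

```python
import numpy as np

def nmf_complete(M, W, rank, n_iter=1000, eps=1e-12, seed=0):
    """Fit X @ Y >= 0 to M on observed entries only (W is a 0/1 mask)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, rank)) + 0.1
    Y = rng.random((rank, n)) + 0.1
    for _ in range(n_iter):
        R = W * (X @ Y)                               # residual model on observed entries
        X *= ((W * M) @ Y.T) / (R @ Y.T + eps)        # multiplicative update for X
        R = W * (X @ Y)
        Y *= (X.T @ (W * M)) / (X.T @ R + eps)        # multiplicative update for Y
    return X, Y

rng = np.random.default_rng(1)
# Exact nonnegative rank-3 data with ~30% of entries hidden
M = rng.random((12, 3)) @ rng.random((3, 10))
W = (rng.random(M.shape) < 0.7).astype(float)

X, Y = nmf_complete(M, W, rank=3)
obs = W.astype(bool)
rel_obs = np.linalg.norm((M - X @ Y)[obs]) / np.linalg.norm(M[obs])
```

    The multiplicative form keeps X and Y nonnegative throughout, while the mask W confines the fit to known entries, so the product X @ Y also fills in the hidden ones.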