
    Classification of hyperspectral images by tensor modeling and additive morphological decomposition

    Pixel-wise classification in high-dimensional multivariate images is investigated. The proposed method jointly exploits the spectral and spatial information contained in hyperspectral images. An additive morphological decomposition (AMD) based on morphological operators is proposed: AMD defines a scale-space decomposition for multivariate images without any loss of information. The decomposition is modeled as a tensor structure, and tensor principal component analysis is compared against the classic approach as a dimensionality-reduction algorithm. Experimental comparison shows that the proposed algorithm can provide better pixel-classification performance on hyperspectral images than many other well-known techniques.
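    The core idea of an additive, lossless scale-space decomposition can be illustrated with a toy single-band sketch (not the paper's AMD; the operator sequence and scales below are our own choices for illustration): at each scale the image is split into a morphologically smoothed part and a residual, and summing all components recovers the input exactly.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def additive_decomposition(img, scales=(3, 5, 7)):
    """Toy additive scale-space decomposition: at each scale, split the
    image into a smoothed part (opening then closing) and a residual.
    By construction, the components sum back to the original image."""
    components = []
    current = img.astype(float)
    for s in scales:
        smoothed = grey_closing(grey_opening(current, size=s), size=s)
        components.append(current - smoothed)  # detail at this scale
        current = smoothed
    components.append(current)  # coarsest approximation
    return components

rng = np.random.default_rng(0)
img = rng.random((32, 32))
parts = additive_decomposition(img)
# Lossless by telescoping: details + coarsest part = input
print(np.allclose(sum(parts), img))  # True
```

    In a multi-band setting, stacking such components per pixel is what yields the tensor structure the abstract refers to.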

    Parameters Selection Of Morphological Scale-Space Decomposition For Hyperspectral Images Using Tensor Modeling

    Dimensionality reduction (DR) using tensor structures in morphological scale-space decomposition (MSSD) for hyperspectral images (HSI) has been investigated as a way to incorporate spatial information into DR. We present the results of a comprehensive investigation of two issues underlying DR in MSSD. First, the information contained in the MSSD is reduced using HOSVD, whose nonconvex formulation implies that in some cases a large number of local solutions can be found; in all our experiments, however, HOSVD reached a unique global solution in the parameter region suited to practical applications. Second, the scale parameters in MSSD are related to the size of connected components, and the influence of these scale parameters on DR and subsequent classification is studied.

    Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections

    In the framework of multidimensional Compressed Sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an Nth-order (I_1 × I_2 × ⋯ × I_N) data tensor X from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction from compressive linear measurements taken in only two selected modes, independently of the tensor order N. In addition, it is proved that, in the matrix case and in a particular case of 3rd-order tensors where the same 2D sensing operator is applied to all mode-3 slices, the proposed reconstruction X_τ is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation, where τ is a threshold parameter that controls the approximation error. Through analysis of the upper bound of the approximation error we show that, in the 2D case, an optimal value τ = τ_0 > 0 of the threshold parameter exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained with τ = 0, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e. it is non-iterative in contrast to all existing sparsity-based CS algorithms, thus providing super fast computations, even for large datasets. Comment: Submitted to IEEE Transactions on Signal Processing
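    The flavor of such a direct (non-iterative) formula can be seen in the matrix (2D) special case: with mode-wise measurements Y1 = A·X, Y2 = X·B and the doubly compressed core W = A·X·B, a low-rank X is recovered in closed form as Y2·W⁺·Y1. The sketch below (our notation, noiseless measurements, no thresholding, i.e. the τ = 0 regime) is an illustration of this identity, not the paper's full tensor algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
I1, I2, r, m = 60, 50, 5, 12   # sizes, rank, measurement dimension (m >= r)

# Rank-r ground truth
X = rng.standard_normal((I1, r)) @ rng.standard_normal((r, I2))

# Random sensing matrices applied along each mode
A = rng.standard_normal((m, I1))
B = rng.standard_normal((I2, m))

Y1 = A @ X        # mode-1 measurements
Y2 = X @ B        # mode-2 measurements
W = A @ X @ B     # doubly compressed core

# Direct reconstruction: X_hat = Y2 W^+ Y1 (exact when rank conditions hold)
X_hat = Y2 @ np.linalg.pinv(W) @ Y1
print(np.allclose(X_hat, X, atol=1e-6))  # True
```

    Note there is no iteration anywhere: one pseudoinverse and two matrix products, which is what makes this family of reconstructions fast even at scale.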

    Three-Way Tensor Decompositions: A Generalized Minimum Noise Subspace Based Approach

    Tensor decomposition has recently become a popular method of multi-dimensional data analysis in various applications. The main interest in tensor decomposition is for dimensionality reduction, approximation, or subspace purposes. However, the emergence of "big data" now gives rise to increased computational complexity for performing tensor decomposition. In this paper, motivated by the advantages of the generalized minimum noise subspace (GMNS) method recently proposed for array processing, we propose two algorithms for principal subspace analysis (PSA) and two algorithms for tensor decomposition, using parallel factor analysis (PARAFAC) and higher-order singular value decomposition (HOSVD). The proposed decomposition algorithms preserve several desired properties of PARAFAC and HOSVD while substantially reducing the computational complexity. The PSA and tensor-decomposition performance of the proposed algorithms was compared against state-of-the-art methods via numerical experiments. Experimental results indicate that the proposed algorithms are of practical value.
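    For readers unfamiliar with HOSVD, the baseline (non-accelerated) algorithm that methods like the above improve upon is compact: take the left singular vectors of each mode-n unfolding as factor matrices, and project the tensor onto them to obtain the core. A minimal NumPy sketch (the helper names are ours):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factors from the left singular vectors of each
    unfolding; core obtained by projecting T onto all factors."""
    U = []
    for n, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        U.append(u[:, :r])
    core = T
    for n, u in enumerate(U):
        # Mode-n product with u^T, keeping the contracted axis in place
        core = np.moveaxis(np.tensordot(core, u.T, axes=([n], [1])), -1, n)
    return core, U

rng = np.random.default_rng(2)
T = rng.standard_normal((10, 12, 14))
core, U = hosvd(T, (10, 12, 14))        # full multilinear ranks -> exact

# Reconstruct from the Tucker form (core x_1 U1 x_2 U2 x_3 U3)
T_hat = core
for n, u in enumerate(U):
    T_hat = np.moveaxis(np.tensordot(T_hat, u, axes=([n], [1])), -1, n)
print(np.allclose(T_hat, T))  # True
```

    The cost is dominated by the SVD of each unfolding, which is exactly what GMNS-style subspace tricks aim to cut down.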

    Hyperspectral phase imaging based on denoising in complex-valued eigensubspace

    A new denoising algorithm for hyperspectral complex-domain data has been developed and studied. The algorithm is based on the complex-domain block-matching 3D filter, including its 3D Wiener filtering stage, and is applied and tuned to work in a singular value decomposition (SVD) eigenspace of reduced dimension. The accuracy and quantitative advantage of the new algorithm are demonstrated in simulation tests and in the processing of experimental data. It is shown that the algorithm is effective and provides reliable results even for highly noisy data.
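    The "denoise in a reduced eigensubspace" structure can be sketched in a simplified real-valued form (our simplification: the abstract's BM3D/Wiener filtering of the eigen-images is replaced here by the subspace projection itself, which already suppresses noise lying outside the signal subspace):

```python
import numpy as np

def svd_subspace_denoise(cube, k):
    """Project an (H, W, B) data cube onto its k-dimensional spectral
    eigensubspace and map back. A full method would additionally filter
    each eigen-image (e.g. with a BM3D-type filter) before projecting back."""
    H, W, B = cube.shape
    Y = cube.reshape(-1, B)               # pixels x bands
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    E = Vt[:k]                            # k-dimensional spectral basis
    Z = Y @ E.T                           # eigen-images (reduced dimension)
    return (Z @ E).reshape(H, W, B)

rng = np.random.default_rng(3)
spectra = rng.standard_normal((4, 20))        # 4 underlying spectra
abund = rng.random((16, 16, 4))               # per-pixel abundances
clean = abund @ spectra                       # rank-4 cube, 20 bands
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

den = svd_subspace_denoise(noisy, k=4)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(den - clean)
print(err_den < err_noisy)  # True: out-of-subspace noise is removed
```

    Working in the reduced eigenspace also makes the expensive 3D filtering stage cheaper, since only k eigen-images are filtered instead of all B bands.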

    Mixed-Precision Random Projection for RandNLA on Tensor Cores

    Random projection can reduce the dimension of data while capturing its structure, and is a fundamental tool for machine learning, signal processing, and information retrieval, fields that today deal with large amounts of data. RandNLA (Randomized Numerical Linear Algebra) leverages random projection to reduce the computational complexity of low-rank tensor decomposition and of solving least-squares problems. Although the random projection itself is a simple matrix multiplication, its asymptotic computational complexity is typically larger than that of the other operations in a RandNLA algorithm, and various studies therefore propose ways to reduce it. We propose a fast mixed-precision random projection method on NVIDIA GPUs using Tensor Cores for single-precision tensors. We exploit the fact that the random matrix requires less precision, and develop a highly optimized matrix multiplication between FP32 and FP16 matrices -- SHGEMM (Single and Half-precision GEMM) -- on Tensor Cores, where the random matrix is stored in FP16. Our method can compute a randomized SVD 1.28 times faster and a random-projection higher-order SVD 1.75 times faster than baseline single-precision implementations while maintaining accuracy. Comment: PASC'2
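    The key observation, that the random factor tolerates low precision, can be emulated in NumPy without Tensor Cores or the SHGEMM kernel (this sketch is our CPU emulation, not the paper's implementation): store the random matrix in FP16, accumulate in FP32, and use the sketch inside a standard Halko-style randomized SVD.

```python
import numpy as np

def random_projection_mixed(A, k, rng):
    """Sketch A (m x n, FP32) to k columns with a random matrix stored in
    FP16, emulating mixed precision: low-precision randomness, FP32 math."""
    omega = rng.standard_normal((A.shape[1], k)).astype(np.float16)
    return A.astype(np.float32) @ omega.astype(np.float32)

def randomized_svd(A, rank, oversample=10, seed=0):
    """Halko-style randomized SVD built on the mixed-precision sketch."""
    rng = np.random.default_rng(seed)
    Y = random_projection_mixed(A, rank + oversample, rng)
    Q, _ = np.linalg.qr(Y)                      # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(4)
A = (rng.standard_normal((200, 8)) @ rng.standard_normal((8, 100))).astype(np.float32)
U, s, Vt = randomized_svd(A, rank=8)
rel_err = np.linalg.norm(U * s @ Vt - A) / np.linalg.norm(A)
print(rel_err < 1e-2)  # True: FP16 randomness barely affects accuracy
```

    The accuracy is insensitive to the precision of the random matrix because any generic (even coarsely quantized) projection captures the range of a low-rank matrix; the speedup in the paper comes from Tensor Cores executing the FP16-input multiply much faster than a pure FP32 GEMM.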