Hyperspectral image compression: adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding
Hyperspectral images have specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on optimizing a full wavelet compression system for hyperspectral images; each step of the compression algorithm is studied and optimized. First, an algorithm is defined to find the 3-D wavelet decomposition that is optimal in a rate-distortion sense. It is then shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of its most useful properties is that it permits zerotree algorithms: various tree structures relating the coefficients are compared, and two efficient zerotree coding methods (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas with different statistical properties.
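As a toy illustration of the anisotropic idea (more decomposition levels along the spectral axis than along the spatial ones), here is a minimal NumPy sketch using a Haar filter; the paper's actual filters, level counts, and coding stages are not reproduced:

```python
import numpy as np

def haar_1d(x, axis):
    """One level of an orthonormal 1-D Haar transform along the given axis."""
    x = np.moveaxis(x, axis, 0)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.moveaxis(np.concatenate([approx, detail], axis=0), 0, axis)

def anisotropic_decompose(cube, spatial_levels=1, spectral_levels=3):
    """Apply more Haar levels along the spectral axis (axis 0) than spatially,
    mimicking an anisotropic 3-D decomposition for a hyperspectral cube."""
    out = cube.copy()
    n_bands, n_rows, n_cols = cube.shape
    for lv in range(spectral_levels):
        n = n_bands >> lv
        out[:n] = haar_1d(out[:n], axis=0)
    for lv in range(spatial_levels):
        r, c = n_rows >> lv, n_cols >> lv
        out[:, :r] = haar_1d(out[:, :r], axis=1)
        out[:, :, :c] = haar_1d(out[:, :, :c], axis=2)
    return out

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 4, 4))    # toy (bands, rows, cols) cube
coeffs = anisotropic_decompose(cube)
# Orthonormal Haar steps preserve total energy.
print(np.allclose(np.sum(cube**2), np.sum(coeffs**2)))
```

A real codec would follow this transform with the zerotree significance passes (EZW or SPIHT) described in the abstract.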
Exploiting Structural Complexity for Robust and Rapid Hyperspectral Imaging
This paper presents several strategies for spectral de-noising of hyperspectral images and for hypercube reconstruction from a limited number of tomographic measurements. In particular, we show that the noise-free spectral data, when stacked across the spectral dimension, exhibits a low-rank structure, while under the same representation the spectral noise exhibits a banded structure. Motivated by this, we show that the de-noised spectral data, the unknown spectral noise, and the affected bands can be estimated simultaneously through a joint low-rank and simultaneous-sparse minimization, without prior knowledge of which bands are noisy. This result is novel for hyperspectral imaging applications. In addition, we show that imaging with Computed Tomography Imaging Spectrometers (CTIS) can be improved under limited-angle tomography by using low-rank penalization. In both cases we exploit recent results in the theory of low-rank matrix completion using nuclear norm minimization.
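The low-rank-plus-banded-noise observation can be illustrated with a toy NumPy sketch: singular value thresholding (the proximal operator of the nuclear norm) applied to a synthetic pixels-by-bands matrix whose noise is confined to a few bands. This is an illustrative stand-in, not the authors' joint low-rank and simultaneous-sparse solver:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

rng = np.random.default_rng(1)
# Rank-2 "clean" spectra (pixels x bands) plus noise confined to bands 5..7.
clean = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 30))
noise = np.zeros((100, 30))
noise[:, 5:8] = rng.normal(scale=2.0, size=(100, 3))   # banded noise
noisy = clean + noise
denoised = svt(noisy, tau=10.0)
# Shrinking small singular values suppresses the band-limited noise.
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Because the clean component carries much larger singular values than the banded noise, thresholding removes mostly noise energy at the cost of a small bias.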
Compressive Hyperspectral Imaging Using Progressive Total Variation
Compressed Sensing (CS) is well suited to remote acquisition of hyperspectral images for Earth observation, since it can exploit the strong spatial and spectral correlations, allowing the architecture of the onboard sensors to be simplified. Solutions proposed so far tend to decouple the spatial and spectral dimensions to reduce the complexity of the reconstruction, ignoring the fact that onboard sensors progressively acquire spectral rows rather than spectral channels. For this reason, we propose a novel progressive CS architecture based on separate sensing of spectral rows and joint reconstruction employing Total Variation. Experiments run on raw AVIRIS and AIRS images confirm the validity of the proposed system.
Comment: To be published in the ICASSP 2014 proceedings.
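For intuition, Total Variation reconstruction can be sketched on a 1-D toy problem: random Gaussian measurements of a piecewise-constant signal, recovered by gradient descent on a smoothed TV objective. This is a didactic stand-in for the paper's progressive architecture and solver, with all sizes and parameters chosen for illustration:

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of the smoothed TV penalty sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

rng = np.random.default_rng(2)
n, m = 64, 32
x_true = np.concatenate([np.zeros(24), np.ones(20), -np.ones(20)])
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random compressive measurements
y = A @ x_true

lam, step = 0.05, 0.02

def objective(x):
    return np.sum((A @ x - y) ** 2) + lam * np.sum(np.sqrt(np.diff(x) ** 2 + 1e-3))

x = np.zeros(n)
f0 = objective(x)
for _ in range(8000):
    x -= step * (2 * A.T @ (A @ x - y) + lam * tv_grad(x))
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(objective(x) < f0, rel_err < 1.0)
```

With only half as many measurements as unknowns, the TV prior favors the piecewise-constant structure and typically recovers the jumps well.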
Randomized Tensor Ring Decomposition and Its Application to Large-scale Data Reconstruction
Dimensionality reduction is an essential technique for multi-way large-scale data, i.e., tensors. Tensor ring (TR) decomposition has become popular due to its high representation ability and flexibility, but traditional TR decomposition algorithms suffer from high computational cost on large-scale data. In this paper, taking advantage of the recently proposed tensor random projection method, we propose two TR decomposition algorithms. By employing random projection on every mode of the large-scale tensor, the TR decomposition can be computed at a much smaller scale. Simulation experiments show that the proposed algorithms are considerably faster than traditional algorithms without loss of accuracy, and that they outperform other randomized algorithms in deep-learning dataset compression and hyperspectral image reconstruction experiments.
Comment: ICASSP submission.
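The mode-wise random projection step that shrinks the problem before any TR fitting can be illustrated as follows (toy sizes; the subsequent TR decomposition on the sketch is not shown):

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T along `mode` by matrix M (M applied to that axis)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 50, 60))        # stand-in for a large-scale tensor
sketch_sizes = (8, 8, 8)
Y = X
for k, r in enumerate(sketch_sizes):
    # Gaussian random projection applied to mode k.
    G = rng.normal(size=(r, X.shape[k])) / np.sqrt(r)
    Y = mode_product(Y, G, k)
print(Y.shape)   # the TR factors are then fitted on this much smaller sketch
```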
An Optimal HSI Image Compression using DWT and CP
The compression of hyperspectral images (HSIs) has recently become an important issue for remote sensing applications because of their volumetric data. An efficient method for hyperspectral image compression is presented. The proposed algorithm, based on the Discrete Wavelet Transform and CANDECOMP/PARAFAC decomposition (DWT-CP), exploits both the spectral and the spatial information in the images. The core idea is to apply CP to the DWT coefficients of the spectral bands of the HSI: the DWT effectively separates the HSI into different sub-images, and CP efficiently compacts the energy of those sub-images. We evaluate the proposed method on real HSIs and compare the results with well-known compression methods. The results show better performance than the existing methods PCA with JPEG 2000 and 3-D SPECK.
DOI: http://dx.doi.org/10.11591/ijece.v4i3.6326
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity of moving towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints matching data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via the basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction, and classification. We also cover computational aspects and point out how ideas from compressed sensing and scientific computing may be used to address the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework as an efficient and promising tool for modern signal processing, data analysis, and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, tensor train.
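As a concrete example of the Tucker model mentioned in the abstract, a truncated higher-order SVD (HOSVD) fits in a few lines of NumPy; this is a generic sketch on a synthetic tensor of exact multilinear rank, not tied to any specific application in the article:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T along `mode` by matrix M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: Tucker core plus orthonormal factor matrices U_k."""
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for k, U in enumerate(factors):
        core = mode_product(core, U.T, k)
    return core, factors

rng = np.random.default_rng(4)
# Build a tensor with exact multilinear rank (2, 3, 2).
G = rng.normal(size=(2, 3, 2))
Us = [rng.normal(size=(s, r)) for s, r in zip((10, 12, 14), (2, 3, 2))]
T = G
for k, U in enumerate(Us):
    T = mode_product(T, U, k)
core, facs = hosvd(T, ranks=(2, 3, 2))
T_hat = core
for k, U in enumerate(facs):
    T_hat = mode_product(T_hat, U, k)
# When the multilinear rank is exact, the truncated HOSVD is lossless.
print(np.allclose(T, T_hat))
```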