22 research outputs found

    Quality criteria benchmark for hyperspectral imagery

    Hyperspectral data have attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression in particular will have a role to play, provided its impact on remote sensing applications remains insignificant. Assessing data quality requires distortion measures relevant to end-user applications. Quality criteria are also of major interest for the design and development of new sensors, where they help define requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered: some are traditionally used in image and video coding and are adapted here to hyperspectral images, while others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and level of the degradation affecting hyperspectral data.
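
    The abstract does not enumerate the five selected criteria, so the sketch below is only a minimal illustration of two distortion measures commonly used in this setting: a global SNR and the mean spectral angle (SAM). The function names, array layout, and epsilon guard are our own assumptions, not the paper's definitions.

        # A minimal sketch (not the paper's selected criteria): two distortion
        # measures commonly used for hyperspectral quality assessment.
        import numpy as np

        def mean_sam(reference: np.ndarray, degraded: np.ndarray) -> float:
            """Mean spectral angle (radians) between two (rows, cols, bands) cubes."""
            ref = reference.reshape(-1, reference.shape[-1]).astype(float)
            deg = degraded.reshape(-1, degraded.shape[-1]).astype(float)
            cos = np.sum(ref * deg, axis=1) / (
                np.linalg.norm(ref, axis=1) * np.linalg.norm(deg, axis=1) + 1e-12)
            return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

        def snr_db(reference: np.ndarray, degraded: np.ndarray) -> float:
            """Global signal-to-noise ratio of the degraded cube, in dB."""
            err = np.sum((reference.astype(float) - degraded.astype(float)) ** 2)
            sig = np.sum(reference.astype(float) ** 2)
            return float(10.0 * np.log10(sig / (err + 1e-12)))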

    Estimating the number of endmembers in hyperspectral images using the normal compositional model and a hierarchical Bayesian algorithm

    This paper studies a semi-supervised Bayesian unmixing algorithm for hyperspectral images. The algorithm is based on the normal compositional model recently introduced by Eismann and Stein, which assumes that each pixel of the image is modeled as a linear combination of an unknown number of pure materials, called endmembers. However, contrary to the classical linear mixing model, these endmembers are assumed to be random in order to model the uncertainty in their knowledge. This paper proposes to estimate the mixture coefficients of the normal compositional model (referred to as abundances), as well as their number, using a reversible-jump Bayesian algorithm. The performance of the proposed methodology is evaluated via simulations conducted on synthetic and real AVIRIS images.
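
    As a minimal generative sketch of the normal compositional model (not the paper's reversible-jump sampler): each pixel mixes its own Gaussian realization of every endmember. The dimensions, the Dirichlet abundance draw, and the shared variance sigma2 are illustrative assumptions.

        # Illustrative NCM forward model: endmembers are random per pixel,
        # unlike the fixed spectra of the classical linear mixing model.
        import numpy as np

        rng = np.random.default_rng(0)
        L, R, P = 50, 3, 100                          # bands, endmembers, pixels
        means = rng.uniform(0.1, 0.9, size=(R, L))    # mean endmember spectra
        sigma2 = 1e-3                                 # assumed endmember variance

        abund = rng.dirichlet(np.ones(R), size=P)     # positive, sum-to-one
        pixels = np.empty((P, L))
        for p in range(P):
            # Each pixel mixes its own random realization of every endmember.
            e = means + np.sqrt(sigma2) * rng.standard_normal((R, L))
            pixels[p] = abund[p] @ e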

    Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery

    This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints, which are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with an unknown number of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
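
    For illustration only, the sketch below shows the linear mixing model and a simple constrained inversion; the sum-to-one constraint is approximated with a weighted augmented row (a common fully-constrained least-squares trick). This is not the paper's hierarchical Bayesian/Gibbs approach, and the endmember matrix and noise level are made up.

        # Linear mixing model y = M a + n with positivity and (approximate)
        # additivity enforced via non-negative least squares on an
        # augmented system -- a stand-in for the paper's Bayesian inversion.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        L, R = 50, 3
        M = rng.uniform(0.1, 0.9, (L, R))        # known endmember spectra
        a_true = np.array([0.6, 0.3, 0.1])       # positive, sums to one
        y = M @ a_true + 0.01 * rng.standard_normal(L)

        delta = 10.0                             # weight on the sum-to-one row
        A_aug = np.vstack([M, delta * np.ones((1, R))])
        y_aug = np.concatenate([y, [delta]])
        a_hat, _ = nnls(A_aug, y_aug)            # non-negative least squares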

    Joint Bayesian endmember extraction and linear unmixing for hyperspectral imagery

    This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The unknown endmember spectra are estimated jointly with the abundances by deriving the posterior distribution of both sets of parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for non-negativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower-dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution; it generates samples distributed according to that posterior and estimates the unknown parameters from the generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images.
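
    As a hedged illustration of the final step only, the snippet below shows how estimates are typically formed from Gibbs-sampler output: discard a burn-in and average the remaining draws (the MMSE estimate). The chain here is stubbed with random Dirichlet draws standing in for the paper's actual sampler; all sizes are arbitrary.

        # Forming estimates from MCMC output: posterior mean after burn-in.
        import numpy as np

        rng = np.random.default_rng(2)
        n_iter, burn_in, R = 2000, 500, 3
        # Placeholder for the chain of abundance draws a^(t) ~ p(a | y, ...).
        chain = rng.dirichlet(np.ones(R), size=n_iter)

        a_mmse = chain[burn_in:].mean(axis=0)    # posterior-mean estimate
        a_std = chain[burn_in:].std(axis=0)      # a simple uncertainty measure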

    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been successfully used on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images, studying and optimizing each step of the compression algorithm. First, an algorithm is defined to find the 3-D wavelet decomposition that is optimal in a rate-distortion sense. It is then shown that a specific fixed decomposition has almost the same performance while being preferable in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition using the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
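
    A minimal sketch of the anisotropic idea, assuming PyWavelets and an arbitrary choice of filter and level counts (the paper's rate-distortion-optimized decomposition is not reproduced): transform more deeply along the spectral axis than along the spatial axes.

        # Anisotropic 3-D decomposition sketch: 5 spectral levels, 2 spatial.
        import numpy as np
        import pywt

        cube = np.random.rand(64, 64, 128)   # rows x cols x bands (synthetic)

        # Deeper 1-D decomposition along the spectral axis ...
        spectral = pywt.wavedec(cube, 'bior4.4', level=5, axis=2)
        # ... then a shallower 2-D decomposition on each spectral subband.
        coeffs = [pywt.wavedec2(sub, 'bior4.4', level=2, axes=(0, 1))
                  for sub in spectral]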

    Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral images

    Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with their corresponding proportions (or abundances). Endmember extraction algorithms can be employed to recover the spectral signatures, while abundances are estimated in an inversion step. Recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances. This paper proposes to define the MRF sites using similarity regions. These regions are built using a self-complementary area filter that stems from mathematical morphology; this kind of filter divides the original image into flat zones in which the underlying pixels share the same spectral values. Once the MRF has been established, a hierarchical Bayesian algorithm is proposed to estimate the abundances, the class labels, the noise variance, and the corresponding hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the posterior distribution of the unknown parameters and hyperparameters. Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of the algorithm.
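
    The paper defines MRF sites over similarity regions rather than single pixels; as a pixel-level illustration of the underlying spatial prior only, the sketch below computes the conditional label probabilities of a 4-connected Potts MRF. The granularity parameter beta is an assumed value.

        # Potts-style MRF conditional: a site's label probability grows with
        # the number of neighbours sharing that label.
        import numpy as np

        def label_conditional(labels, i, j, n_classes, beta=1.2):
            """P(z_ij = k | neighbours) for a 4-connected Potts MRF."""
            rows, cols = labels.shape
            neigh = [labels[r, c]
                     for r, c in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                     if 0 <= r < rows and 0 <= c < cols]
            energy = np.array([beta * sum(n == k for n in neigh)
                               for k in range(n_classes)])
            p = np.exp(energy - energy.max())    # stabilized softmax
            return p / p.sum()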

    Content-Based Hyperspectral Image Compression Using a Multi-Depth Weighted Map With Dynamic Receptive Field Convolution

    In content-based image compression, the importance map guides the bit allocation based on its ability to represent the importance of image contents. In this paper, we improve the representational power of the importance map using a Squeeze-and-Excitation (SE) block, and propose a multi-depth structure to reconstruct non-important channel information at low bit rates. Furthermore, Dynamic Receptive Field convolution (DRFc) is introduced to improve the ability of normal convolution to extract edge information, so as to increase the weight of edge content in the importance map and improve the reconstruction quality of edge regions. Results indicate that our proposed method can extract an importance map with clear edges and fewer artifacts, providing clear advantages for bit allocation in content-based image compression. Compared with typical compression methods, the proposed method greatly improves Peak Signal-to-Noise Ratio (PSNR), structural similarity (SSIM), and spectral angle (SAM) on three public datasets, and produces a much better visual result with sharp edges and fewer artifacts. In particular, it reduces the SAM by 42.8% compared to a recent state-of-the-art method at the same low bit rate (0.25 bpp) on the KAIST dataset.
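
    As a point of reference, the sketch below is a standard Squeeze-and-Excitation block of the kind the abstract says is used to strengthen the importance map; the reduction ratio and layer layout are the usual defaults, not necessarily the paper's configuration.

        # Standard SE block: squeeze (global average pool) then excite
        # (learned per-channel rescaling).
        import torch
        import torch.nn as nn

        class SEBlock(nn.Module):
            def __init__(self, channels: int, reduction: int = 16):
                super().__init__()
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                b, c, _, _ = x.shape
                w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool
                return x * w.view(b, c, 1, 1)     # excite: per-channel rescale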

    Bayesian Nonparametric Unmixing of Hyperspectral Images

    Hyperspectral imaging is an important tool in remote sensing, allowing for accurate analysis of vast areas. Due to low spatial resolution, a pixel of a hyperspectral image rarely represents a single material, but rather a mixture of different spectra. Hyperspectral unmixing (HSU) aims at estimating the pure spectra present in the scene of interest, referred to as endmembers, and their fractions in each pixel, referred to as abundances. Many HSU algorithms have been proposed to date, based on either geometrical or statistical models. While most methods assume that the number of endmembers present in the scene is known, there is little work on estimating this number from the observed data. In this work, we propose a Bayesian nonparametric framework that jointly estimates the number of endmembers, the endmembers themselves, and their abundances by using the Indian Buffet Process as a prior for the endmembers. Simulation results and experiments on real data demonstrate the effectiveness of the proposed algorithm, yielding results comparable with state-of-the-art methods while reliably inferring the number of endmembers. In scenarios with strong noise, where other algorithms provide only poor results, the proposed approach tends to slightly overestimate the number of endmembers; the additional endmembers, however, often simply represent noisy replicas of present endmembers and can easily be merged in a post-processing step.
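
    As a minimal sketch of the prior at the heart of the method, the snippet below draws a binary feature-assignment matrix from the Indian Buffet Process; columns play the role of candidate endmembers, and the concentration alpha and matrix size are illustrative values.

        # IBP prior sampling: customer i takes an existing dish k with
        # probability m_k / i, plus Poisson(alpha / i) new dishes.
        import numpy as np

        def sample_ibp(n: int, alpha: float, rng) -> np.ndarray:
            """Draw a binary matrix Z (n x K) from the IBP; K is random."""
            counts = []                      # m_k: how often dish k was taken
            rows = []
            for i in range(1, n + 1):
                row = [rng.random() < m / i for m in counts]   # existing dishes
                k_new = rng.poisson(alpha / i)                 # new dishes
                row += [True] * k_new
                counts = [m + t for m, t in zip(counts, row)] + [1] * k_new
                rows.append(row)
            Z = np.zeros((n, len(counts)), dtype=bool)
            for i, row in enumerate(rows):
                Z[i, :len(row)] = row
            return Z

        Z = sample_ibp(20, alpha=2.0, rng=np.random.default_rng(3))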

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for compression onboard spacecraft thanks to its low computational complexity, modest memory requirements, and ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller achieves excellent accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
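
    As a hedged illustration of quantization inside the prediction loop, the sketch below implements near-lossless DPCM with a uniform quantizer whose step bounds the per-sample error by emax. The trivial previous-sample predictor is a stand-in for the CCSDS-123 predictor, and the paper's rate-control quantizer selection is not shown.

        # Near-lossless DPCM: quantize the prediction residual with step
        # 2*emax + 1 so the reconstruction error never exceeds emax.
        import numpy as np

        def encode(samples: np.ndarray, emax: int):
            step = 2 * emax + 1
            recon_prev = 0
            indices = []
            for s in samples:
                pred = recon_prev                 # placeholder predictor
                q = int(np.round((int(s) - pred) / step))
                indices.append(q)                 # entropy-code these
                recon_prev = pred + q * step      # decoder-side reconstruction
            return indices

        def decode(indices, emax: int):
            step = 2 * emax + 1
            recon_prev = 0
            out = []
            for q in indices:
                recon_prev = recon_prev + q * step
                out.append(recon_prev)
            return np.array(out)

        x = np.array([10, 12, 15, 14, 20])
        assert np.max(np.abs(decode(encode(x, 2), 2) - x)) <= 2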