2,986 research outputs found

    Quality criteria benchmark for hyperspectral imagery

    Hyperspectral data have attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression in particular will have a role to play, provided its impact on remote sensing applications remains insignificant. To assess data quality, suitable distortion measures relevant to end-user applications are required. Quality criteria are also of major interest for the conception and development of new sensors, to define their requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered. Some are traditionally used in image and video coding and are adapted here to hyperspectral images; others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and the level of the degradation affecting hyperspectral data.
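As a minimal illustration of the kind of criteria discussed (assuming a simple (rows, cols, bands) cube layout; these are standard measures, not necessarily the paper's selected five), the sketch below computes the SNR and the mean spectral angle between an original and a degraded cube:

```python
import numpy as np

# Two common hyperspectral quality measures: SNR of the degradation and
# the Spectral Angle Mapper (SAM) between per-pixel spectra.
# Cube layout assumed here: (rows, cols, bands).

def snr_db(original, degraded):
    """Signal-to-noise ratio of the degradation, in dB."""
    noise = original - degraded
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

def mean_sam(original, degraded, eps=1e-12):
    """Mean spectral angle (radians) between corresponding pixel spectra."""
    o = original.reshape(-1, original.shape[-1])
    d = degraded.reshape(-1, degraded.shape[-1])
    cos = np.sum(o * d, axis=1) / (
        np.linalg.norm(o, axis=1) * np.linalg.norm(d, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(0)
cube = rng.uniform(100.0, 1000.0, size=(16, 16, 32))
noisy = cube + rng.normal(0.0, 5.0, size=cube.shape)  # simulated degradation
print(f"SNR = {snr_db(cube, noisy):.1f} dB, mean SAM = {mean_sam(cube, noisy):.4f} rad")
```

SNR captures the overall energy of the degradation, while SAM is sensitive to changes in the shape of each spectrum, which is why a combination of criteria, rather than a single one, is needed to characterize the impact on applications.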

    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity. Some wavelet-based compression algorithms have been successfully used in hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm is defined to find the optimal 3-D wavelet decomposition in a rate-distortion sense. Then, it is shown that a specific fixed decomposition has almost the same performance while being more practical in terms of complexity. This decomposition significantly improves on the classical isotropic decomposition. One of its most useful properties is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas presenting different statistical properties.
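The anisotropic decomposition idea can be sketched as follows. This is an illustrative Haar-based toy, not the paper's optimized decomposition: a different number of single-level splits is applied along each axis, with more levels along the spectral axis.

```python
import numpy as np

# Anisotropic separable decomposition sketch, assumed layout (rows, cols, bands):
# each axis gets its own number of Haar splits, e.g. 1 spatial level per
# spatial axis and 3 levels along the spectral axis.

def haar_step(x, axis):
    """One orthonormal Haar split along `axis`: returns (approx, detail)."""
    x = np.moveaxis(x, axis, 0)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return np.moveaxis(approx, 0, axis), np.moveaxis(detail, 0, axis)

def anisotropic_levels(cube, levels=(1, 1, 3)):
    """Apply `levels[a]` Haar splits along each axis a, collecting details."""
    approx, details = cube, []
    for axis, n in enumerate(levels):
        for _ in range(n):
            approx, d = haar_step(approx, axis)
            details.append(d)
    return approx, details

rng = np.random.default_rng(1)
cube = rng.normal(size=(8, 8, 32))
approx, details = anisotropic_levels(cube)
print(approx.shape)  # spectral axis halved three times: (4, 4, 4)
```

Because each split is orthonormal, the total energy of the approximation plus all detail subbands equals that of the input cube; in a real coder, the zerotree structure is then built over these subbands.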

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy, near-lossless, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
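The near-lossless feedback loop the abstract refers to can be sketched with a toy previous-pixel predictor. The paper builds on the CCSDS-123 predictor; the predictor and parameters here are illustrative only.

```python
import numpy as np

# Closed-loop DPCM sketch: the quantizer sits inside the prediction loop,
# so encoder and decoder stay in sync, and a uniform quantizer of
# half-width m bounds the per-sample reconstruction error by m
# (the near-lossless guarantee).

def dpcm_near_lossless(samples, m):
    """Encode/decode a 1-D integer signal; returns (indices, reconstruction)."""
    step = 2 * m + 1
    indices, recon = [], []
    prev = 0.0  # decoder starts from the same state
    for s in samples:
        residual = s - prev
        q = int(np.round(residual / step))   # quantizer index (what gets coded)
        prev = prev + q * step               # decoder-side reconstruction
        indices.append(q)
        recon.append(prev)
    return np.array(indices), np.array(recon)

rng = np.random.default_rng(2)
signal = np.cumsum(rng.integers(-3, 4, size=200)).astype(float)
idx, rec = dpcm_near_lossless(signal, m=2)
print(np.max(np.abs(signal - rec)))  # never exceeds m = 2
```

A rate controller in this setting would choose m (and hence the quantizer step) per spatial/spectral region, trading the error bound against the entropy of the index stream to hit the target rate.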

    A novel semi-fragile forensic watermarking scheme for remote sensing images

    A semi-fragile watermarking scheme for multiple-band images is presented. We propose to embed a mark into remote sensing images by applying a tree-structured vector quantization approach to the pixel signatures, instead of processing each band separately. The signature of the multispectral or hyperspectral image is used to embed the mark in order to detect any significant modification of the original image. The image is segmented into three-dimensional blocks, and a tree-structured vector quantizer is built for each block. These trees are manipulated using an iterative algorithm until the resulting block satisfies a required criterion, which establishes the embedded mark. The method is shown to preserve the mark under lossy compression (above a given threshold) while, at the same time, detecting possibly forged blocks and their position in the whole image.
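For intuition only, the sketch below embeds one mark bit per block by quantizing the block mean, a basic quantization index modulation scheme used here as a much simpler stand-in for the paper's tree-structured vector quantization:

```python
import numpy as np

# Illustrative semi-fragile embedding: each block's mean is moved to an
# even (bit 0) or odd (bit 1) multiple of a step delta. Small distortions
# (e.g. mild lossy compression) leave the bit intact; a large local edit
# garbles it, flagging the block as possibly forged.

def embed_bit(block, bit, delta=8.0):
    """Shift the block so its mean lands on an even/odd multiple of delta."""
    mean = block.mean()
    q = np.round(mean / delta)
    if int(q) % 2 != bit:
        q += 1.0
    return block + (q * delta - mean)

def extract_bit(block, delta=8.0):
    return int(np.round(block.mean() / delta)) % 2

rng = np.random.default_rng(3)
block = rng.uniform(0, 255, size=(8, 8))
marked = embed_bit(block, 1)
print(extract_bit(marked))  # recovers the embedded bit
```

The semi-fragile trade-off is visible in the choice of delta: a larger step survives stronger compression but also tolerates larger (possibly malicious) edits before detection.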

    Editorial Special Issue on Enhancement Algorithms, Methodologies and Technology for Spectral Sensing

    This editorial introduces the special issue on enhancement algorithms, methodologies, and technology for spectral sensing, and serves as a valuable reference for researchers and technologists interested in the evolving state of the art and the emerging science and technology base associated with spectral-based sensing and monitoring problems. The issue is particularly relevant to those seeking new and improved solutions for detecting chemical, biological, radiological, and explosive threats on land, at sea, and in the air.

    JP3D compression of solar data-cubes: photospheric imaging and spectropolarimetry

    Hyperspectral imaging is a ubiquitous technique in solar physics observations, and recent advances in solar instrumentation have enabled us to acquire and record data at an unprecedented rate. The huge amount of data that will be archived by the upcoming solar observatories presses us to compress the data in order to reduce storage space and transfer times. The correlation present over all dimensions of solar data-sets (spatial, temporal, and spectral) suggests the use of a 3-D wavelet decomposition to achieve higher compression rates. In this work, we evaluate the performance of the recent JPEG2000 Part 10 standard, known as JP3D, for the lossless compression of several types of solar data-cubes. We explore the differences in: a) the compressibility of broad-band versus narrow-band time-sequences, and of Stokes I versus V profiles in spectropolarimetric data-sets; b) compressing data in [x, y, λ] packages at different times versus data in [x, y, t] packages at different wavelengths; c) compressing a single large data-cube versus several smaller data-cubes; d) compressing data that is under-sampled versus super-sampled with respect to the diffraction cut-off.
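A crude way to probe such compressibility questions is to compare the entropy of prediction residuals along the different cube axes. The sketch below is an illustrative proxy for "which axis carries the most correlation", not the JP3D codec itself:

```python
import numpy as np

# Zeroth-order entropy of integer first differences along an axis is a
# rough proxy for how well a predictor or wavelet exploits correlation
# along that axis: the smoother the axis, the fewer bits per sample.

def residual_entropy_bits(cube, axis):
    """Entropy (bits/sample) of first differences along `axis`."""
    diffs = np.diff(cube, axis=axis).ravel()
    _, counts = np.unique(diffs, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(4)
# Synthetic cube, strongly correlated along axis 2 (a random walk per pixel):
smooth = np.cumsum(rng.integers(-1, 2, size=(16, 16, 64)), axis=2)
for ax in range(3):
    print(f"axis {ax}: {residual_entropy_bits(smooth, ax):.2f} bits/sample")
```

On real solar cubes, running such a probe on [x, y, λ] versus [x, y, t] packagings gives a quick first estimate of which ordering a 3-D codec should favor.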

    Graph Laplacian for Image Anomaly Detection

    The Reed-Xiaoli detector (RXD) is recognized as the benchmark algorithm for image anomaly detection; however, it has known limitations, namely its reliance on the image following a multivariate Gaussian model, the estimation and inversion of a high-dimensional covariance matrix, and the inability to effectively include spatial awareness in its evaluation. In this work, a novel graph-based solution to the image anomaly detection problem is proposed; leveraging the graph Fourier transform, we are able to overcome some of RXD's limitations while at the same time reducing computational cost. Tests over both hyperspectral and medical images, using both synthetic and real anomalies, show that the proposed technique obtains significant performance gains over state-of-the-art algorithms. Published in Machine Vision and Applications (Springer).
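For reference, the classic global RX score mentioned above can be sketched in a few lines: the Mahalanobis distance of each pixel spectrum from the image mean. The small regularization term is an implementation detail added here, and the covariance inversion is exactly the cost the graph-based approach aims to avoid.

```python
import numpy as np

# Global RX (Reed-Xiaoli) anomaly detector sketch for a cube of shape
# (rows, cols, bands): high scores mark pixels whose spectra deviate
# from the background distribution.

def rx_scores(cube):
    x = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    centered = x - mu
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return d2.reshape(cube.shape[:2])

rng = np.random.default_rng(5)
cube = rng.normal(0.0, 1.0, size=(32, 32, 10))
cube[5, 7] += 8.0  # inject an anomalous spectrum at one pixel
scores = rx_scores(cube)
peak = np.unravel_index(np.argmax(scores), scores.shape)
print(peak)  # location of the injected anomaly
```

Note that the score depends only on per-pixel spectra, which makes RXD blind to spatial context, one of the limitations the abstract highlights.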