
    Content-Based Hyperspectral Image Compression Using a Multi-Depth Weighted Map With Dynamic Receptive Field Convolution

    In content-based image compression, the importance map guides bit allocation according to how well it represents the importance of the image contents. In this paper, we improve the representational power of the importance map using a Squeeze-and-Excitation (SE) block, and propose a multi-depth structure to reconstruct non-important channel information at low bit rates. Furthermore, Dynamic Receptive Field convolution (DRFc) is introduced to improve the ability of ordinary convolution to extract edge information, thereby increasing the weight of edge content in the importance map and improving the reconstruction quality of edge regions. Results indicate that the proposed method extracts an importance map with clear edges and fewer artifacts, providing clear advantages for bit-rate allocation in content-based image compression. Compared with typical compression methods, the proposed method greatly improves Peak Signal-to-Noise Ratio (PSNR), structural similarity (SSIM), and spectral angle (SAM) on three public datasets, and produces much better visual results with sharp edges and fewer artifacts. In particular, it reduces SAM by 42.8% compared to the most recent state-of-the-art method at the same low bit rate (0.25 bpp) on the KAIST dataset.
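
    To make the channel re-weighting concrete, the following is a minimal sketch of a Squeeze-and-Excitation block in PyTorch; the channel count and reduction ratio are illustrative assumptions, not the paper's settings.

        # Minimal Squeeze-and-Excitation (SE) block: learn one weight per
        # channel from globally pooled statistics and re-scale the channels.
        import torch
        import torch.nn as nn

        class SEBlock(nn.Module):
            def __init__(self, channels: int, reduction: int = 16):
                super().__init__()
                self.squeeze = nn.AdaptiveAvgPool2d(1)  # global spatial average
                self.excite = nn.Sequential(
                    nn.Linear(channels, channels // reduction),  # bottleneck
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),  # per-channel weights in (0, 1)
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                b, c, _, _ = x.shape
                w = self.squeeze(x).view(b, c)       # (B, C) channel descriptors
                w = self.excite(w).view(b, c, 1, 1)  # (B, C, 1, 1) weights
                return x * w                         # re-weight each channel

        # Example: re-weight a 64-channel feature map without changing its shape.
        out = SEBlock(64)(torch.randn(2, 64, 32, 32))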

    Undersampled Hyperspectral Image Reconstruction Based on Surfacelet Transform

    Hyperspectral imaging is a crucial technique for military and environmental monitoring. However, limited hardware resources severely constrain the transmission and storage of the huge amount of data in hyperspectral images. This limitation can potentially be overcome by compressive sensing (CS), which allows images to be reconstructed from undersampled measurements with low error. Sparsity and incoherence are two essential requirements for CS. In this paper, we introduce the surfacelet, a directional multiresolution transform for 3D data, to sparsify hyperspectral images. In addition, Gram-Schmidt orthogonalization is applied to the CS random encoding matrix, and two-dimensional and three-dimensional orthogonal CS random encoding matrices as well as a patch-based CS encoding scheme are designed. The proposed surfacelet-based hyperspectral image reconstruction problem is solved by a fast iterative shrinkage-thresholding algorithm (FISTA). Experiments demonstrate that reconstruction of spectral lines and spatial images is significantly better with the proposed method than with conventional three-dimensional wavelets, and that increasing the randomness of the encoding matrix further improves the quality of the hyperspectral data. The patch-based CS encoding strategy can handle large data volumes because data in different patches can be sampled independently.
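
    The FISTA solver at the core of the reconstruction can be sketched for the generic problem min_x 0.5*||y - Ax||^2 + lam*||x||_1. Since the surfacelet transform has no standard Python implementation, the sketch below assumes the signal is sparse directly; the matrix sizes and lam are illustrative assumptions.

        # FISTA: accelerated proximal gradient for the l1-regularized
        # least-squares problem that underlies CS reconstruction.
        import numpy as np

        def fista(A, y, lam=0.05, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                g = z - A.T @ (A @ z - y) / L    # gradient step at the momentum point
                x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                z = x_new + (t - 1.0) / t_new * (x_new - x)  # Nesterov momentum
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256)) / 8.0    # undersampled random encoding matrix
        x_true = np.zeros(256)
        x_true[rng.choice(256, 10, replace=False)] = 1.0
        x_hat = fista(A, A @ x_true, lam=0.02)      # recover the sparse signal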

    Lossless hyperspectral image compression using binary tree based decomposition

    A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store this huge volume of data, we argue that a fundamental shift is required from existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG) to "residual"-based approaches using a predictive coder that exploits band-wise correlation for better compression performance. Moreover, as HS images are used for detection or classification, they need to be kept in their original form; lossy schemes can trim off seemingly uninteresting data along with the compression, data that may be important for specific analysis purposes. A lossless HS coder is therefore required that exploits spatial-spectral redundancy using predictive residual coding. Each spectral band of an HS image can be treated as an individual frame of a video to impose inter-band prediction. In this paper, we propose a binary-tree-based lossless predictive HS coding scheme that arranges the residual frame into an integer residual bitmap. The high spatial correlation in the HS residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context-based arithmetic coding. On a standard HS data set, the proposed lossless predictive coding achieves compression ratios in the range of 1.92 to 7.94. We compare the proposed method with mainstream lossless coders (JPEG-LS and lossless HEVC); against JPEG-LS, HEVC Intra, and HEVC Main, the proposed technique reduces bit rate by 35%, 40%, and 6.79%, respectively, by exploiting the spatial correlation in the predicted HS residuals.
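
    The inter-band predictive step can be illustrated in a few lines of Python: each band is predicted from the previous one, and only the integer residual frames would be passed on to the binary-tree decomposition and arithmetic coder (not reproduced here). The synthetic cube is an illustrative stand-in for real HS data.

        # Band-wise prediction: strong spectral correlation makes most
        # residual values near zero, leaving large homogeneous blocks.
        import numpy as np

        def band_residuals(cube: np.ndarray) -> np.ndarray:
            """cube: (bands, rows, cols) integers -> integer residual frames."""
            c = cube.astype(np.int32)
            res = c.copy()
            res[1:] -= c[:-1]        # band i predicted by band i-1
            return res               # res[0] is the first band, coded as-is

        rng = np.random.default_rng(0)
        cube = rng.integers(0, 4096, size=(8, 64, 64), dtype=np.uint16)  # 12-bit data
        res = band_residuals(cube)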

    Backdoor Attacks for Remote Sensing Data with Wavelet Transform

    Recent years have witnessed the great success of deep learning algorithms in the geoscience and remote sensing realm. Nevertheless, the security and robustness of deep learning models deserve special attention when addressing safety-critical remote sensing tasks. In this paper, we provide a systematic analysis of backdoor attacks on remote sensing data, considering both scene classification and semantic segmentation tasks. While most existing backdoor attack algorithms rely on visible triggers such as square patches with well-designed patterns, we propose a novel wavelet-transform-based attack (WABA) method, which achieves invisible attacks by injecting the trigger image into the poisoned image in the low-frequency domain. In this way, the high-frequency information in the trigger image is filtered out during the attack, resulting in stealthy data poisoning. Despite its simplicity, the proposed method can significantly fool current state-of-the-art deep learning models with a high attack success rate. We further analyze how different trigger images and the hyper-parameters of the wavelet transform influence the performance of the proposed method. Extensive experiments on four benchmark remote sensing datasets demonstrate the effectiveness of the proposed method for both scene classification and semantic segmentation tasks, highlighting the importance of designing advanced backdoor defense algorithms to address this threat in remote sensing scenarios. The code is available online at https://github.com/ndraeger/waba.
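
    A hedged sketch of the low-frequency injection idea (not the authors' exact code): blend the trigger's approximation wavelet coefficients into those of the clean image while keeping the clean image's detail coefficients, so the trigger's high frequencies are discarded. The wavelet choice and blending factor alpha are assumptions.

        # WABA-style invisible trigger: mix images in the low-frequency
        # (approximation) subband of a 2-D discrete wavelet transform.
        import numpy as np
        import pywt

        def inject_trigger(img, trigger, alpha=0.2, wavelet="haar"):
            cA_i, details_i = pywt.dwt2(img, wavelet)      # clean low/high frequencies
            cA_t, _ = pywt.dwt2(trigger, wavelet)          # trigger low frequencies only
            cA_mix = (1.0 - alpha) * cA_i + alpha * cA_t   # blend low-frequency subbands
            return pywt.idwt2((cA_mix, details_i), wavelet)

        rng = np.random.default_rng(0)
        clean, trigger = rng.random((128, 128)), rng.random((128, 128))
        poisoned = inject_trigger(clean, trigger)          # visually close to `clean`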

    Nonlocal tensor sparse representation and low-rank regularization for hyperspectral image compressive sensing reconstruction

    Hyperspectral image compressive sensing reconstruction (HSI-CSR) is an important problem in remote sensing and has recently been investigated increasingly through sparsity-prior-based approaches. However, most available HSI-CSR methods consider the sparsity prior in the spatial and spectral vector domains by vectorizing hyperspectral cubes along a certain dimension. Moreover, most previous works pay little attention to the underlying nonlocal structure in the spatial domain of the HSI. In this paper, we propose a nonlocal tensor sparse and low-rank regularization (NTSRLR) approach, which encodes the essential structured sparsity of an HSI and exploits its advantages for the HSI-CSR task. Specifically, we study how to use the l1-based sparsity of the core tensor and the tensor nuclear norm as tensor sparse and low-rank regularizers, respectively, to describe the nonlocal spatial-spectral correlation hidden in an HSI. To solve the resulting minimization problem, we design a fast implementation strategy based on the alternating direction method of multipliers (ADMM). Experimental results on various HSI datasets verify that the proposed HSI-CSR algorithm significantly outperforms existing state-of-the-art CSR techniques for HSI recovery.
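
    Inside ADMM, the two regularizers reduce to closed-form proximal steps that can be sketched in a few lines: entrywise soft-thresholding for the l1-sparse core tensor and singular-value thresholding for the nuclear norm (shown on a matrix unfolding for brevity). The thresholds and array sizes are illustrative assumptions.

        # The two proximal operators used by the ADMM sub-steps.
        import numpy as np

        def soft_threshold(x, tau):
            """Prox of tau*||.||_1: shrink every entry toward zero."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def svt(M, tau):
            """Prox of tau*||.||_* (nuclear norm): shrink the singular values."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(0)
        G_sparse = soft_threshold(rng.standard_normal((4, 4, 4)), 0.5)  # core tensor
        M_lowrank = svt(rng.standard_normal((6, 8)), 1.0)               # mode unfolding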

    Upper Bound of Real Log Canonical Threshold of Tensor Decomposition and its Application to Bayesian Inference

    Tensor decomposition is now being used for data analysis, information compression, and knowledge recovery. However, the mathematical properties of tensor decomposition are not yet fully clarified because it is a singular learning machine. In this paper, we give an upper bound on the real log canonical threshold (RLCT) of tensor decomposition using an algebraic geometrical method and theoretically derive its Bayesian generalization error. We also examine its mathematical properties through numerical experiments.
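
    For context, the RLCT matters because of a standard relation in singular learning theory (a known result, not specific to this paper): the expected Bayesian generalization error over n samples satisfies

        \mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),

    where \lambda is the RLCT, so an upper bound on \lambda bounds the leading term of the generalization error.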

    Robust Manifold Nonnegative Tucker Factorization for Tensor Data Representation

    Nonnegative Tucker Factorization (NTF) minimizes the Euclidean distance or Kullback-Leibler divergence between the original data and its low-rank approximation, and it often suffers from gross corruption or outliers and neglects the manifold structure of the data. In particular, NTF suffers from rotational ambiguity: solutions with and without rotation transformations are equally good in the sense of yielding the same maximum likelihood. In this paper, we propose three Robust Manifold NTF algorithms that handle outliers by incorporating structural knowledge about them. These algorithms first apply a half-quadratic optimization algorithm to transform the problem into a general weighted NTF in which the weights are influenced by the outliers. We then introduce the correntropy-induced metric, the Huber function, and the Cauchy function, respectively, as weighting functions to handle the outliers. Finally, we introduce a manifold regularization to overcome the rotational ambiguity of NTF. We compare the proposed method with a number of representative references covering the major branches of NTF on a variety of real-world image databases. Experimental results illustrate the effectiveness of the proposed method under two evaluation metrics (accuracy and NMI).
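
    The half-quadratic step can be sketched briefly: a robust loss such as Huber's yields, at each iteration, per-entry weights that shrink the influence of large residuals, turning the robust problem into a weighted NTF. The cutoff c and the residual values are illustrative assumptions.

        # Half-quadratic weights for the Huber loss: weight 1 inside
        # |r| <= c, and c/|r| outside, so outliers are down-weighted.
        import numpy as np

        def huber_weights(residuals, c=1.345):
            r = np.abs(residuals)
            return np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))

        w = huber_weights(np.array([0.1, -0.5, 2.0, -8.0]))
        # -> [1.0, 1.0, 0.6725, 0.168125]; the outlier at -8.0 barely
        #    influences the subsequent weighted Tucker factor updates.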

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In resource-restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.