
    Limitations of Principal Component Analysis for Dimensionality-Reduction for Classification of Hyperspectral Data

    It is a popular practice in the remote-sensing community to apply principal component analysis (PCA) to a higher-dimensional feature space to achieve dimensionality reduction. Several factors have led to the popularity of PCA, including its simplicity, ease of use, availability in popular remote-sensing packages, and optimality in terms of mean square error. These advantages have prompted the remote-sensing research community to overlook many limitations of PCA when it is used as a dimensionality-reduction tool for classification and target-detection applications. This thesis addresses the limitations of PCA when used as a dimensionality-reduction technique for extracting discriminating features from hyperspectral data. Theoretical and experimental analyses are presented to demonstrate that PCA is not necessarily an appropriate feature-extraction method for high-dimensional data when the objective is classification or target recognition. The influence of data-distribution characteristics such as within-class covariance, between-class covariance, and correlation on the PCA transformation is analyzed. The classification accuracies obtained using PCA features are compared with those obtained using other feature-extraction methods, such as variants of the Karhunen-Loève transform and greedy search algorithms in the spectral and wavelet domains. Experimental analyses are conducted for both two-class and multi-class cases. The classification accuracies obtained from higher-order PCA components are also compared with those of features extracted from different regions of the spectrum. This comparative study establishes that PCA may not be an appropriate tool for dimensionality reduction of certain hyperspectral data distributions when the objective is classification or target recognition.
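
    As a minimal illustration of the argument above (a synthetic scikit-learn sketch, not the thesis experiments), the snippet below builds a two-class dataset whose class separation lies along a low-variance direction; PCA reduced to one component discards that direction, while a supervised projection such as LDA keeps it. The data, classifier, and component counts are illustrative assumptions.

```python
# Synthetic illustration: PCA keeps the highest-variance axis, which here is
# not the discriminative one, so 1-D PCA features classify at chance level
# while a 1-D supervised (LDA) projection preserves the class separation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Large shared (within-class) variance along axis 0; class separation along axis 1.
class0 = rng.normal([0.0, -1.0], [10.0, 0.5], size=(n, 2))
class1 = rng.normal([0.0, +1.0], [10.0, 0.5], size=(n, 2))
X = np.vstack([class0, class1])
y = np.r_[np.zeros(n), np.ones(n)]

clf = LogisticRegression()
for name, proj in [("PCA(1)", PCA(n_components=1)),
                   ("LDA(1)", LinearDiscriminantAnalysis(n_components=1))]:
    Z = proj.fit_transform(X, y) if name.startswith("LDA") else proj.fit_transform(X)
    acc = cross_val_score(clf, Z, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy ~ {acc:.2f}")
# Expected: PCA retains the high-variance, non-discriminative axis (near-chance
# accuracy), whereas the supervised projection retains the separating axis.
```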

    Fuzzy transform for high-resolution satellite images compression

    Many compression methods have been developed to date, especially for very high-resolution satellite images, which, because of the massive amount of information they contain, require compression for more efficient storage and transmission. This paper modifies Perfilieva's fuzzy transform, using a pseudo-exponential function, to compress very high-resolution satellite images. We found that such images can be compressed by the F-transform with a pseudo-exponential membership function. The compressed images have good quality, with PSNR values ranging from about 59 to 66 dB. However, the process is rather time-consuming, with an average of 187.1954 seconds needed to compress one image. The quality of these compressed images is better than that achieved by standard compression methods such as CCSDS and wavelet-based coding, but inferior in terms of time consumption.
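
    For readers unfamiliar with the F-transform, the sketch below runs a basic 2D direct/inverse F-transform compression cycle in NumPy. The exponential-shaped basic function is only an illustrative stand-in for the paper's pseudo-exponential membership function, whose exact form is not reproduced here; the test image, node counts, and width are assumptions.

```python
# Minimal 2D F-transform compression/decompression sketch: the direct
# transform stores an n_rows x n_cols grid of components instead of the full
# image, and the inverse transform reconstructs an approximation from them.
import numpy as np

def basic_functions(length, n_nodes, width):
    """Exponential-shaped memberships on n_nodes centres, normalised so they
    sum to 1 at every pixel (a Ruspini-like fuzzy partition)."""
    x = np.arange(length)
    centres = np.linspace(0, length - 1, n_nodes)
    A = np.exp(-np.abs(x[None, :] - centres[:, None]) / width)  # (n_nodes, length)
    return A / A.sum(axis=0, keepdims=True)

def ft_compress(img, n_rows, n_cols, width=8.0):
    Ay = basic_functions(img.shape[0], n_rows, width)   # vertical memberships
    Ax = basic_functions(img.shape[1], n_cols, width)   # horizontal memberships
    num = Ay @ img @ Ax.T                                # weighted sums per node pair
    den = Ay.sum(axis=1)[:, None] * Ax.sum(axis=1)[None, :]
    return num / den, Ay, Ax                             # direct F-transform components

def ft_decompress(F, Ay, Ax):
    return Ay.T @ F @ Ax                                 # inverse F-transform

yy, xx = np.mgrid[0:256, 0:256]
img = 128 + 100 * np.sin(xx / 40.0) * np.cos(yy / 50.0)  # smooth synthetic image
F, Ay, Ax = ft_compress(img, 64, 64)                     # 16:1 sample reduction
rec = ft_decompress(F, Ay, Ax)
psnr = 10 * np.log10(255.0 ** 2 / np.mean((img - rec) ** 2))
print(f"components: {F.shape}, reconstruction PSNR: {psnr:.1f} dB")
```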

    Multilevel split regression wavelet analysis for lossless compression of remote sensing data

    Spectral redundancy is a key element to be exploited in the compression of remote sensing data. Combined with an entropy encoder, it can achieve competitive lossless coding performance. One of the latest techniques to decorrelate the spectral signal is regression wavelet analysis (RWA). RWA applies a wavelet transform in the spectral domain and estimates the detail coefficients from the approximation coefficients using linear regression. RWA was originally coupled with JPEG 2000. This letter introduces a novel coding approach in which RWA is coupled with the predictor of the CCSDS-123.0-B-1 standard and a lightweight contextual arithmetic coder. In addition, we propose a smart strategy to select the number of RWA decomposition levels that maximizes coding performance. Experimental results indicate that, on average, the obtained coding gains vary between 0.1 and 1.35 bits per pixel per component compared with other state-of-the-art coding techniques.
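
    A minimal one-level sketch of the RWA idea (not the reference implementation) is given below: a spectral Haar split into approximation and detail bands, a least-squares regression predicting every detail band from the pixel's approximation vector, and residuals that would then be passed to the entropy coder. The toy data and band count are assumptions.

```python
# One-level regression wavelet analysis sketch in the spectral domain:
# Haar low/high split across bands, then linear regression of the details
# from the approximations; only the residuals need to be entropy coded.
import numpy as np

def rwa_forward(cube):
    """cube: (n_pixels, n_bands) with an even number of bands."""
    even, odd = cube[:, 0::2], cube[:, 1::2]
    approx = (even + odd) / 2.0                               # spectral low-pass
    detail = even - odd                                       # spectral high-pass
    A = np.hstack([approx, np.ones((approx.shape[0], 1))])    # add intercept column
    W, *_ = np.linalg.lstsq(A, detail, rcond=None)            # regression weights
    residual = detail - A @ W                                 # what remains to code
    return approx, residual, W

def rwa_inverse(approx, residual, W):
    A = np.hstack([approx, np.ones((approx.shape[0], 1))])
    detail = residual + A @ W
    cube = np.empty((approx.shape[0], 2 * approx.shape[1]))
    cube[:, 0::2] = approx + detail / 2.0
    cube[:, 1::2] = approx - detail / 2.0
    return cube

rng = np.random.default_rng(0)
endmembers = rng.normal(size=(3, 16)).cumsum(axis=1)          # 3 smooth toy spectra
abund = rng.dirichlet(np.ones(3), size=1000)                  # per-pixel abundances
cube = abund @ endmembers + 0.01 * rng.normal(size=(1000, 16))
approx, residual, W = rwa_forward(cube)
detail_var = (cube[:, 0::2] - cube[:, 1::2]).var()
print(f"detail variance {detail_var:.4f} -> residual variance {residual.var():.6f}")
print("max reconstruction error:", np.abs(rwa_inverse(approx, residual, W) - cube).max())
```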

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders because of the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme suited to onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm enables lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy of the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
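
    The general idea of such a framework, choosing a quantizer per region so that the overall rate meets a target while distortion stays low, can be sketched with a standard Lagrangian allocation: each block of prediction residuals picks the step size minimising D + λR, and λ is bisected until the estimated total rate hits the target. The sketch below is illustrative only, with rate approximated by the empirical entropy of the quantized residuals; it is not the algorithm proposed in the paper.

```python
# Lagrangian quantizer selection with bisection on lambda to reach a target rate.
import numpy as np

STEPS = [1, 2, 4, 8, 16, 32]              # candidate uniform quantizer step sizes

def block_rd(block, step):
    q = np.round(block / step)
    dist = np.mean((block - q * step) ** 2)                # MSE distortion
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -(p * np.log2(p)).sum()                         # bits/sample (entropy estimate)
    return rate, dist

def allocate(blocks, lam):
    choices, rates = [], []
    for b in blocks:
        best = min(((d + lam * r, r, s) for s in STEPS
                    for r, d in [block_rd(b, s)]), key=lambda t: t[0])
        choices.append(best[2]); rates.append(best[1])
    return choices, float(np.mean(rates))

def rate_control(blocks, target_rate, iters=30):
    lo, hi = 1e-4, 1e6                                     # bisection bounds on lambda
    for _ in range(iters):
        lam = np.sqrt(lo * hi)
        _, rate = allocate(blocks, lam)
        if rate > target_rate: lo = lam                    # too many bits: penalise rate more
        else: hi = lam
    return allocate(blocks, np.sqrt(lo * hi))

rng = np.random.default_rng(0)
blocks = [rng.laplace(scale=s, size=1024) for s in rng.uniform(1, 20, 64)]
steps, rate = rate_control(blocks, target_rate=2.0)
print(f"achieved ~ {rate:.2f} bits/sample with step sizes {sorted(set(steps))}")
```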

    Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation

    Remote sensing (RS) image retrieval is of great significance for geological information mining. Over the past two decades, a large amount of research on this task has been carried out, mainly focusing on three core issues: feature extraction, similarity metrics, and relevance feedback. Because of the complexity and diversity of ground objects in high-resolution remote sensing (HRRS) images, there is still room for improvement in current retrieval approaches. In this paper, we analyze the three core issues of RS image retrieval and provide a comprehensive review of existing methods. Furthermore, with the goal of advancing the state-of-the-art in HRRS image retrieval, we focus on the feature-extraction issue and investigate how powerful deep representations can be used to address this task. We conduct a systematic investigation of the factors that may affect the performance of deep features. By optimizing each factor, we obtain remarkable retrieval results on publicly available HRRS datasets. Finally, we explain the experimental findings in detail and draw conclusions from our analysis. Our work can serve as a guide for research on content-based RS image retrieval.
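
    A minimal content-based retrieval sketch in the spirit of such deep-feature approaches is shown below: pooled features from a pretrained CNN, L2-normalised and ranked by cosine similarity. The PyTorch/torchvision backbone, layer, and preprocessing are illustrative assumptions, not the configuration evaluated in the paper, and a recent torchvision release is assumed.

```python
# Deep-feature retrieval sketch: pretrained ResNet-50 pooled features,
# unit-normalised, ranked by cosine similarity against a query image.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()                         # keep the 2048-d pooled feature
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    feats = backbone(batch)
    return nn.functional.normalize(feats, dim=1)    # unit norm -> cosine similarity

def retrieve(query_path, database_paths, top_k=5):
    db = embed(database_paths)                      # (N, 2048) database features
    q = embed([query_path])                         # (1, 2048) query feature
    scores = (q @ db.T).squeeze(0)                  # cosine similarities
    ranked = scores.argsort(descending=True)[:top_k]
    return [(database_paths[i], float(scores[i])) for i in ranked]
```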

    Data compression in remote sensing applications

    A survey of current data compression techniques used to reduce the amount of data in remote sensing applications is provided. The survey is far from complete, reflecting the substantial activity in this area; its purpose is to exemplify the different approaches being taken rather than to provide an exhaustive list of the various proposed methods.

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by the remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ considerably, so practical implementation aspects have to be taken into account. The collection of Special Issue papers on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute extremely large data arrays with rich information that can be retrieved for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have also become very important. Finally, attempts to apply compressive sensing approaches to remote sensing image processing, with positive outcomes, are observed. We hope that readers will find our book useful and interesting.

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, referred to as the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demands of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared with selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we use the sparse coefficients to train support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared with a number of similar or representative methods. The results show that our approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for exploiting spectral correlations in hyperspectral images. In addition, we show that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
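
    A rough sketch of the classification pipeline described above, under assumed settings (dictionary size, sparsity level, filter width) rather than the thesis configuration: a spectral dictionary learned from the image pixels, OMP sparse codes as features, a local mean filter over the coefficient maps for spatial context, and an SVM trained on the filtered codes.

```python
# Sparse-coefficient features with spatial smoothing, followed by an SVM.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.svm import SVC

def sparse_features(cube, n_atoms=64, n_nonzero=5, win=5):
    """cube: (H, W, B) hyperspectral image -> (H*W, n_atoms) contextual codes."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       batch_size=256, random_state=0).fit(pixels)
    coder = SparseCoder(dictionary=dico.components_,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(pixels).reshape(h, w, n_atoms)
    # Local mean filter on each coefficient map injects spatial context.
    smoothed = np.stack([uniform_filter(codes[:, :, k], size=win)
                         for k in range(n_atoms)], axis=-1)
    return smoothed.reshape(-1, n_atoms)

# Usage sketch: cube, labels and the train/test index arrays are assumed to be
# loaded elsewhere (e.g. from a labelled hyperspectral scene).
# feats = sparse_features(cube)
# clf = SVC(kernel="rbf").fit(feats[train_idx], labels[train_idx])
# print("test accuracy:", clf.score(feats[test_idx], labels[test_idx]))
```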

    A Real Time Image Processing Subsystem: GEZGIN

    In this study, a real-time image processing subsystem, GEZGIN, which is currently being developed for BILSAT-1, a 100 kg class micro-satellite, is presented. BILSAT-1 is being constructed under a technology transfer agreement between TÜBITAK-BILTEN (Turkey) and SSTL (UK) and is planned to be placed into a 650 km sun-synchronous orbit in Summer 2003. GEZGIN is one of the two Turkish R&D payloads to be hosted on BILSAT-1. One of the missions of BILSAT-1 is to construct a Digital Elevation Model of Turkey using both multi-spectral and panchromatic imagers. Because of the limited down-link bandwidth and on-board storage capacity, a real-time image compression scheme is highly advantageous for the mission. GEZGIN has evolved as an implementation of the image-compression tasks needed for efficient utilization of both the down-link and the on-board storage. The image processing on GEZGIN includes capturing 4-band multi-spectral images of 2048x2048 8-bit pixels, compressing them simultaneously with the new industry-standard JPEG2000 algorithm, and forwarding the compressed multi-spectral image to the Solid State Data Recorders (SSDR) of BILSAT-1 for storage and down-link transmission. The mission definition, together with the orbital parameters, imposes a 6.5-second constraint on real-time image compression. GEZGIN meets this constraint by exploiting the parallelism among image processing units and assigning compute-intensive tasks to dedicated hardware. The proposed hardware also allows full reconfigurability of all processing units.
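
    As a ground-software analogue of the task GEZGIN performs in dedicated hardware, the sketch below compresses the four 2048x2048 8-bit bands of a frame with JPEG 2000 in parallel and checks the elapsed time against the 6.5-second budget. It assumes the glymur binding to OpenJPEG is installed; the output file names, worker count, and random test frame are illustrative.

```python
# Parallel JPEG 2000 compression of a 4-band multi-spectral frame with a
# wall-clock check against the mission's 6.5 s real-time budget.
import time
import numpy as np
import glymur
from concurrent.futures import ThreadPoolExecutor

BUDGET_S = 6.5

def compress_band(args):
    index, band = args
    path = f"band_{index}.jp2"          # hypothetical output file name
    glymur.Jp2k(path, data=band)        # JPEG 2000 encode of one band to disk
    return path

def compress_frame(frame):
    """frame: (4, 2048, 2048) uint8 multi-spectral capture."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:   # one worker per band
        paths = list(pool.map(compress_band, enumerate(frame)))
    elapsed = time.perf_counter() - start
    print(f"compressed {len(paths)} bands in {elapsed:.2f} s "
          f"({'within' if elapsed <= BUDGET_S else 'over'} the {BUDGET_S} s budget)")
    return paths

frame = np.random.default_rng(0).integers(0, 256, (4, 2048, 2048), dtype=np.uint8)
compress_frame(frame)
```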