279 research outputs found

    Machine Learning for Robust Understanding of Scene Materials in Hyperspectral Images

    The major challenges in hyperspectral (HS) imaging and data analysis are expensive sensors, the high dimensionality of the signal, limited ground truth, and spectral variability. This dissertation develops and analyzes machine learning based methods to address these problems. In the first part, we examine one of the most important HS data analysis tasks: vegetation parameter estimation. We present two Gaussian process based approaches for improving the accuracy of vegetation parameter retrieval when ground truth is limited and/or spectral variability is high. The first is the adoption of covariance functions based on well-established metrics, such as spectral angle and spectral correlation, which are known to be better measures of similarity for spectral data. The second is the joint modeling of related vegetation parameters by multitask Gaussian processes, so that the prediction accuracy of the vegetation parameter of interest can be improved with the aid of related parameters for which a larger set of ground truth is available. The efficacy of the proposed methods is demonstrated by comparing them against state-of-the-art approaches on three real-world HS datasets and one synthetic dataset. In the second part, we demonstrate how Bayesian optimization can be applied to jointly tune the different components of hyperspectral data analysis frameworks for better performance. Experimental validation is provided on a spatial-spectral classification framework consisting of a classifier and a Markov random field. In the third part, we investigate whether high dimensional HS spectra can be reconstructed from low dimensional multispectral (MS) signals, which can be obtained from much cheaper, lower spectral resolution sensors.
A novel end-to-end convolutional residual neural network architecture is proposed that can simultaneously optimize both the MS bands and the transformation used to reconstruct HS spectra from MS signals by analyzing a large quantity of HS data. The learned bands can be implemented in sensor hardware, and the learned transformation can be incorporated in the data processing pipeline to build a low-cost hyperspectral data collection system. Using a diverse set of real-world datasets, we show how the proposed approach of optimizing the MS bands along with the transformation, rather than only optimizing the transformation with fixed bands as in previous studies, can drastically increase the reconstruction accuracy. Additionally, we investigate the prospects of using the reconstructed HS spectra for land cover classification.
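As a rough sketch of the first idea in the abstract above, a Gaussian process can use a covariance function built on the spectral angle between spectra rather than Euclidean distance. The squared-exponential-on-angle form, the length scale, and the noise level below are illustrative assumptions; the dissertation's exact covariance functions are not specified in the abstract.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sa_kernel(X1, X2, length_scale=0.5, variance=1.0):
    """Covariance matrix from a squared-exponential kernel on spectral angle."""
    K = np.empty((len(X1), len(X2)))
    for i, x in enumerate(X1):
        for j, y in enumerate(X2):
            a = spectral_angle(x, y)
            K[i, j] = variance * np.exp(-0.5 * (a / length_scale) ** 2)
    return K

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """Standard GP regression posterior mean with the spectral-angle kernel."""
    K = sa_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = sa_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Toy data: three spectra with scalar vegetation-parameter targets.
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.1]])
y = np.array([0.2, 0.8, 0.5])
pred = gp_predict(X, y, X)  # near-interpolation at the training points
```

Because the kernel depends only on the angle between spectra, it is insensitive to overall brightness scaling, which is one reason spectral angle is considered a better similarity measure for spectral data.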

    Impact of Feature Representation on Remote Sensing Image Retrieval

    Remote sensing images are acquired using specialized platforms and sensors, and are classified as aerial, multispectral, and hyperspectral images. Multispectral and hyperspectral images are represented by large spectral vectors compared to ordinary Red, Green, Blue (RGB) images. Hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval consists mainly of feature representation as the first step and finding images similar to a query image as the second step. Feature representation plays an important part in the performance of the retrieval process. This research focuses on the impact of the feature representation of remote sensing images on retrieval performance. The study shows that more discriminative features are needed to improve the performance of the remote sensing image retrieval process.
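The two-step retrieval process described above can be illustrated with a minimal sketch: represent each image by a feature vector, then rank the archive by similarity to the query. The cosine-similarity choice and the toy feature vectors are assumptions for illustration only, not the features studied in this work.

```python
import numpy as np

def retrieve(query, archive, k=3):
    """Rank archive feature vectors by cosine similarity to the query.

    Returns the indices of the top-k matches and their similarity scores.
    """
    q = query / np.linalg.norm(query)
    A = archive / np.linalg.norm(archive, axis=1, keepdims=True)
    sims = A @ q                      # cosine similarity to every archive item
    order = np.argsort(-sims)[:k]     # best matches first
    return order, sims[order]

# Toy archive of 2-D feature vectors; a real system would use
# spectral or learned features with many more dimensions.
archive = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
query = np.array([1.0, 0.0])
order, sims = retrieve(query, archive)
```

How well this ranking works depends entirely on how discriminative the feature representation is, which is the point the abstract makes.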

    X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

    This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation imagery, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, is openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, the ability of these modalities to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment, poor discriminative information, and the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module. It learns to transfer more discriminative information from a small-scale hyperspectral image (HSI) into a classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well because it propagates labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement over several state-of-the-art methods.
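The label propagation module is described only at a high level in the abstract, but classical graph label propagation gives a flavor of the idea: labeled nodes spread their labels to unlabeled neighbors over an affinity graph. The normalized-affinity update rule below is a generic stand-in; X-ModalNet's updatable graph built from high-level network features is not reproduced here.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, n_iter=50):
    """Classical graph label propagation.

    W : (n, n) symmetric affinity matrix over graph nodes.
    Y : (n, c) one-hot labels; all-zero rows are unlabeled nodes.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y, where S is the
    symmetrically normalized affinity matrix, then returns argmax labels.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy graph: nodes {0, 1} and {2, 3} form two components; nodes 0 and 2
# carry the only labels, and propagation labels their neighbors.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
labels = propagate_labels(W, Y)  # unlabeled nodes inherit cluster labels
```

Making the graph "updatable", as the paper does, means the affinities themselves are recomputed from learned features as training progresses, rather than being fixed up front.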