9 research outputs found

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    Full text link
    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, features from different domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus low-dimensional representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is both effective and efficient.
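    As a rough illustration of the idea of selecting and extracting features with a single projection, the sketch below concatenates spectral and spatial features and fits a row-sparse linear map with scikit-learn's MultiTaskLasso: rows driven to zero drop original features (selection), while the surviving rows project pixels into a low-dimensional consensus space (extraction). This is a minimal sketch under that assumption, not the authors' algorithm; all data, dimensions, and the alpha value are synthetic placeholders, and a recent scikit-learn (>= 1.2) is assumed for the OneHotEncoder sparse_output argument.

    import numpy as np
    from sklearn.linear_model import MultiTaskLasso
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for per-pixel spectral and spatial feature blocks.
    n_pixels, n_spectral, n_spatial = 600, 100, 40
    X_spectral = rng.normal(size=(n_pixels, n_spectral))
    X_spatial = rng.normal(size=(n_pixels, n_spatial))
    # Synthetic labels driven by a few spectral and spatial features, so that
    # joint selection has genuine structure to recover.
    signal = X_spectral[:, 0] + X_spectral[:, 1] - X_spatial[:, 0]
    y = np.digitize(signal, np.quantile(signal, [0.33, 0.66]))

    # Concatenate the multi-domain features and standardize the joint vector.
    X = StandardScaler().fit_transform(np.hstack([X_spectral, X_spatial]))
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    Ytr = OneHotEncoder(sparse_output=False).fit_transform(ytr.reshape(-1, 1))

    # Row-sparse projection W: zero rows discard original features (selection);
    # the surviving rows map pixels into a low-dimensional subspace (extraction).
    proj = MultiTaskLasso(alpha=0.05, max_iter=5000).fit(Xtr, Ytr)
    W = proj.coef_.T                            # shape: (n_features, n_classes)
    selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)

    print(f"selected {selected.size} of {X.shape[1]} original features: {selected}")
    print("1-NN test accuracy in the learned subspace:",
          KNeighborsClassifier(1).fit(Xtr @ W, ytr).score(Xte @ W, yte))

    The mixed-norm penalty acts on whole rows of the projection, which is what makes the selection operate on original features rather than on individual coefficients.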

    Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification

    Get PDF
    The representation power of convolutional neural network (CNN) models for hyperspectral image (HSI) analysis is in practice limited by the available amount of labeled samples, which is often insufficient to sustain deep networks with many parameters. We propose a novel approach that boosts the network representation power with a two-stream 2-D CNN architecture. The proposed method simultaneously extracts the spectral features and the local and global spatial features with two 2-D CNNs and makes use of channel correlations to identify the most informative features. Moreover, we propose a layer-specific regularization and a smooth normalization fusion scheme to adaptively learn the fusion weights for the spectral-spatial features from the two parallel streams. An important asset of our model is the simultaneous training of the feature extraction, fusion, and classification processes with the same cost function. Experimental results on several hyperspectral data sets demonstrate the efficacy of the proposed method compared with state-of-the-art methods in the field.
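    The minimal PyTorch sketch below conveys only the two-stream flavor: one stream uses 1x1 convolutions to model per-pixel spectral structure, the other uses 3x3 convolutions for spatial context, and a learnable softmax-normalized weight pair fuses the two feature vectors before a shared classifier trained with a single cross-entropy cost. Patch size, channel widths, and the fusion form are assumptions; the paper's channel-correlation analysis and layer-specific regularization are not reproduced.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoStreamHSI(nn.Module):
        """Two parallel 2-D CNN streams fused with softmax-normalized weights."""

        def __init__(self, n_bands=100, n_classes=9, feat_dim=64):
            super().__init__()
            # Spectral stream: 1x1 convolutions mix bands without spatial context.
            self.spectral = nn.Sequential(
                nn.Conv2d(n_bands, 128, kernel_size=1), nn.ReLU(),
                nn.Conv2d(128, feat_dim, kernel_size=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            # Spatial stream: 3x3 convolutions capture neighborhood structure.
            self.spatial = nn.Sequential(
                nn.Conv2d(n_bands, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(128, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            # Learnable fusion logits, softmax-normalized so the weights sum to one.
            self.fusion_logits = nn.Parameter(torch.zeros(2))
            self.classifier = nn.Linear(feat_dim, n_classes)

        def forward(self, patch):               # patch: (batch, n_bands, H, W)
            f_spec = self.spectral(patch)       # (batch, feat_dim)
            f_spat = self.spatial(patch)        # (batch, feat_dim)
            w = F.softmax(self.fusion_logits, dim=0)
            return self.classifier(w[0] * f_spec + w[1] * f_spat)

    # One cost function trains extraction, fusion weights, and the classifier jointly.
    model = TwoStreamHSI()
    patches = torch.randn(8, 100, 9, 9)         # eight dummy 9x9 patches, 100 bands
    labels = torch.randint(0, 9, (8,))
    loss = nn.CrossEntropyLoss()(model(patches), labels)
    loss.backward()
    print(float(loss))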

    Generic Online Learning for Partial Visible & Dynamic Environment with Delayed Feedback

    Get PDF
    Reinforcement learning (RL) has been applied to robotics and many other domains in which a system must learn in real time while interacting with a dynamic environment. In most studies the state-action space, a key part of RL, is predefined. The integration of RL with deep learning has nevertheless taken a tremendous leap forward, solving novel, challenging problems such as mastering the board game of Go. The environment surrounding the agent may not be fully visible, the environment can change over time, and the feedback the agent receives for its actions can arrive with a fluctuating delay. In this paper, we propose a Generic Online Learning (GOL) system for such environments. GOL is based on RL with a hierarchical structure that forms abstract features over time and adapts toward optimal solutions. The proposed method has been applied to load balancing in 5G cloud random access networks. Simulation results show that GOL successfully achieves the system objectives of reducing cache misses and communication load, while incurring only limited system overhead in terms of the number of high-level patterns needed. We believe that the proposed GOL architecture is significant for future online learning in dynamic, partially visible environments and would be very useful for many autonomous control systems.
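    The abstract does not give GOL's internals, so the sketch below only illustrates the delayed-feedback aspect with plain tabular Q-learning: (state, action) decisions are buffered until their rewards arrive, at which point the standard temporal-difference update is applied. The environment, delay model, and hyperparameters are all invented for this toy example and are not taken from the paper.

    import random
    from collections import defaultdict, deque

    class DelayedFeedbackQAgent:
        """Tabular Q-learning that tolerates rewards arriving several steps late."""

        def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.q = defaultdict(float)          # Q-values keyed by (state, action)
            self.actions = list(actions)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.pending = deque()               # decisions still waiting for feedback

        def act(self, state):
            # Epsilon-greedy choice; remember the decision until its reward arrives.
            if random.random() < self.epsilon:
                action = random.choice(self.actions)
            else:
                action = max(self.actions, key=lambda a: self.q[(state, a)])
            self.pending.append((state, action))
            return action

        def feedback(self, reward, next_state):
            # Apply the standard Q update to the oldest still-pending decision.
            state, action = self.pending.popleft()
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

    # Toy environment: 4 states on a ring, reward 1 for action 1, feedback 2 steps late.
    agent = DelayedFeedbackQAgent(actions=[0, 1])
    delayed, state = deque(), 0
    for _ in range(200):
        action = agent.act(state)
        next_state = (state + action) % 4
        delayed.append((float(action), next_state))
        if len(delayed) > 2:                     # environment reports rewards with a lag
            agent.feedback(*delayed.popleft())
        state = next_state
    print({a: round(agent.q[(0, a)], 2) for a in agent.actions})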

    A Hybrid DBN and CRF Model for Spectral-Spatial Classification of Hyperspectral Images

    Full text link

    Spectral-Spatial Neural Networks and Probabilistic Graph Models for Hyperspectral Image Classification

    Get PDF
    Pixel-wise hyperspectral image (HSI) classification has been actively studied since it shares similar characteristics with related computer vision tasks, including image classification, object detection, and semantic segmentation, but also possesses inherent differences. The research surrounding HSI classification sheds light on an approach to bridge computer vision and remote sensing. Modern deep neural networks dominate and repeatedly set new records in all image recognition challenges, largely due to their excellence in extracting discriminative features through multi-layer nonlinear transformation. However, three challenges hinder the direct adoption of convolutional neural networks (CNNs) for HSI classification. First, typical HSIs contain hundreds of spectral channels that encode abundant pixel-wise spectral information, leading to the curse of dimensionality. Second, HSIs usually have relatively small numbers of annotated pixels for training along with large numbers of unlabeled pixels, resulting in the problem of generalization. Third, the scarcity of annotations and the complexity of HSI data induce noisy classification maps, which are a common issue in various types of remotely sensed data interpretation. Recent studies show that taking the data attributes into account when designing the fundamental components of deep neural networks can improve their representational capacity and thus help these models achieve better recognition performance. To the best of our knowledge, no research has exploited this finding or proposed corresponding models for supervised HSI classification given enough labeled HSI data. In cases of limited labeled HSI samples for training, conditional random fields (CRFs) are an effective graph model to impose data-agnostic constraints upon the intermediate outputs of trained discriminators. Although CRFs have been widely used to enhance HSI classification performance, the integration of deep learning and probabilistic graph models in the framework of semi-supervised learning remains an open question. To this end, this thesis presents supervised spectral-spatial residual networks (SSRNs) and semi-supervised generative adversarial network (GAN)-based models that account for the characteristics of HSIs, making three main contributions. First, spectral and spatial convolution layers are introduced to learn representative HSI features for supervised learning models. Second, generative adversarial networks (GANs) composed of spectral/spatial convolution and transposed-convolution layers are proposed to take advantage of adversarial training using limited amounts of labeled data for semi-supervised learning. Third, fully-connected CRFs are adopted to impose smoothness constraints on the predictions of the trained discriminators of GANs to enhance HSI classification performance. Empirical evidence acquired by experimental comparison to state-of-the-art models validates the effectiveness and generalizability of the SSRN, SS-GAN, and GAN-CRF models.
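    As a hedged sketch of the spectral-spatial layer idea (in the spirit of SSRN, not a faithful reproduction), the PyTorch snippet below stacks a residual block whose 3-D convolution kernels extend only along the spectral axis with one whose kernels extend only along the spatial axes. Channel counts, kernel sizes, and the patch dimensions are illustrative assumptions, and neither the GAN components nor the CRF post-processing are shown.

    import torch
    import torch.nn as nn

    class SpectralResBlock(nn.Module):
        """Residual block whose 3-D kernels span only the spectral (band) axis."""

        def __init__(self, channels=24, k=7):
            super().__init__()
            pad = (k // 2, 0, 0)
            self.body = nn.Sequential(
                nn.Conv3d(channels, channels, (k, 1, 1), padding=pad),
                nn.BatchNorm3d(channels), nn.ReLU(),
                nn.Conv3d(channels, channels, (k, 1, 1), padding=pad),
                nn.BatchNorm3d(channels))

        def forward(self, x):                   # x: (batch, channels, bands, H, W)
            return torch.relu(x + self.body(x))

    class SpatialResBlock(nn.Module):
        """Residual block whose 3-D kernels span only the two spatial axes."""

        def __init__(self, channels=24, k=3):
            super().__init__()
            pad = (0, k // 2, k // 2)
            self.body = nn.Sequential(
                nn.Conv3d(channels, channels, (1, k, k), padding=pad),
                nn.BatchNorm3d(channels), nn.ReLU(),
                nn.Conv3d(channels, channels, (1, k, k), padding=pad),
                nn.BatchNorm3d(channels))

        def forward(self, x):
            return torch.relu(x + self.body(x))

    # A 7x7 patch with 100 bands, lifted to 24 feature channels by a stem convolution,
    # then refined by one spectral and one spatial residual block.
    stem = nn.Conv3d(1, 24, kernel_size=(7, 1, 1), padding=(3, 0, 0))
    patch = torch.randn(2, 1, 100, 7, 7)
    features = SpatialResBlock()(SpectralResBlock()(stem(patch)))
    print(features.shape)                       # torch.Size([2, 24, 100, 7, 7])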
