836 research outputs found

    State-of-the-art and gaps for deep learning on limited training data in remote sensing

    Deep learning usually requires big data, with respect to both volume and variety. However, most remote sensing applications only have limited training data, of which a small subset is labeled. Herein, we review three state-of-the-art approaches in deep learning to combat this challenge. The first topic is transfer learning, in which some aspects of one domain, e.g., features, are transferred to another domain. The next is unsupervised learning, e.g., autoencoders, which operate on unlabeled data. The last is generative adversarial networks, which can generate realistic-looking data that can fool both a deep learning network and a human observer. The aim of this article is to raise awareness of this dilemma, to direct the reader to existing work, and to highlight current gaps that need solving.
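    For a concrete sense of the transfer-learning approach the survey describes, the PyTorch sketch below fine-tunes only the classification head of an ImageNet-pretrained ResNet-18 on a small labelled remote-sensing dataset; the class count, optimiser, and training step are illustrative assumptions, not the survey's implementation.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Hypothetical setup: a small labelled remote-sensing dataset with 10 scene classes.
    NUM_CLASSES = 10

    # Load an ImageNet-pretrained backbone and freeze its feature extractor,
    # so only the new classification head is trained on the limited data.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer with one sized for the target task.
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One fine-tuning step on a batch from the small labelled set."""
        optimizer.zero_grad()
        loss = criterion(backbone(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    Freezing the backbone keeps the number of trainable parameters small, which is the usual motivation when only a handful of labelled remote-sensing samples are available.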

    DALF: An AI Enabled Adversarial Framework for Classification of Hyperspectral Images

    Hyperspectral image classification is a very complex and challenging process. With deep neural networks such as Convolutional Neural Networks (CNNs) combined with explicit dimensionality reduction, the capability of the classifier is greatly increased; however, the problem of insufficient training samples remains. In this paper, we address this problem by proposing an Artificial Intelligence (AI) based framework named Deep Adversarial Learning Framework (DALF), which exploits a deep autoencoder for dimensionality reduction and a Generative Adversarial Network (GAN) for generating new Hyperspectral Imaging (HSI) samples that are verified by a discriminator in a non-cooperative game setting, alongside a classifier. A Convolutional Neural Network (CNN) is used for both the generator and the discriminator, while the classifier role is played by a Support Vector Machine (SVM) and a Neural Network (NN). An algorithm named Generative Model based Hybrid Approach for HSI Classification (GMHA-HSIC), which drives the functionality of the proposed framework, is also presented. The success of DALF in accurate classification depends largely on the regular synthesis and labelling of spectra. Synthetic samples produced through an iterative process and verified by the discriminator yield useful spectra. By training the GAN together with the associated deep learning models, the framework improves classification performance. Our experimental results reveal that the proposed framework has the potential to improve the state of the art, besides providing an effective data augmentation strategy.
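    As a rough, hypothetical sketch of how the components named in the abstract could be wired together (not the authors' DALF code), the PyTorch snippet below uses simple fully connected stand-ins for the autoencoder, generator, and discriminator; all layer widths and the 200-band / 30-dimensional-code sizes are assumptions.

    import torch
    import torch.nn as nn

    # Assumed sizes for illustration: 200 spectral bands compressed to a 30-dim code.
    N_BANDS, CODE_DIM, NOISE_DIM = 200, 30, 16

    class SpectralAutoencoder(nn.Module):
        """Compresses per-pixel spectra (the dimensionality-reduction role)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(N_BANDS, 64), nn.ReLU(), nn.Linear(64, CODE_DIM))
            self.decoder = nn.Sequential(nn.Linear(CODE_DIM, 64), nn.ReLU(), nn.Linear(64, N_BANDS))
        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    class Generator(nn.Module):
        """Maps random noise to synthetic spectral codes (new HSI samples)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, CODE_DIM))
        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        """Scores whether a spectral code is real (encoded) or synthetic (generated)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(CODE_DIM, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
        def forward(self, code):
            return self.net(code)

    # Synthetic codes that pass the discriminator could be decoded and added to the
    # labelled pool before training a downstream classifier such as an SVM or NN.
    gen, disc = Generator(), Discriminator()
    fake_codes = gen(torch.randn(8, NOISE_DIM))
    print(disc(fake_codes).shape)   # torch.Size([8, 1])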

    Open set learning with augmented category by exploiting unlabelled data (open-LACU)

    Considering the nature of unlabelled data, it is common for partially labelled training datasets to contain samples that belong to novel categories. Although these so-called observed novel categories exist in the training data, they do not belong to any of the training labels. In contrast, open-set settings define novel categories as those unobserved during training but present during testing. This research is the first to generalise between observed and unobserved novel categories within a new learning policy called open-set learning with augmented category by exploiting unlabelled data, or open-LACU. This study conducts a high-level review of novelty detection so as to differentiate between research fields that concern observed novel categories and those that concern unobserved novel categories. Open-LACU is then introduced as a synthesis of the relevant fields, maintaining the advantages of each within a single learning policy. We are currently finalising the first open-LACU network, which will be combined with this pre-print for publication.
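    The abstract does not spell out the open-LACU mechanics (the network is still being finalised), but one hypothetical way to realise an augmented category on unlabelled data is a confidence-threshold pseudo-labelling rule, sketched below in PyTorch; the class count and threshold are invented purely for illustration.

    import torch
    import torch.nn.functional as F

    # Hypothetical illustration only: K known training labels plus one augmented
    # category reserved for observed-but-unlabelled novel samples.
    K = 5                      # number of labelled (known) categories
    AUGMENTED = K              # index of the extra "novel" category
    THRESHOLD = 0.9            # assumed confidence cut-off

    def pseudo_label_unlabelled(logits_known: torch.Tensor) -> torch.Tensor:
        """Assign each unlabelled sample either to a confident known class or to the
        augmented novel category; logits_known has shape (batch, K)."""
        probs = F.softmax(logits_known, dim=1)
        conf, pred = probs.max(dim=1)
        return torch.where(conf >= THRESHOLD, pred, torch.full_like(pred, AUGMENTED))

    logits = torch.randn(6, K)
    print(pseudo_label_unlabelled(logits))   # entries in {0..K-1} or AUGMENTED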

    Hyperspectral Image Classification -- Traditional to Deep Models: A Survey for Future Prospects

    Hyperspectral Imaging (HSI) has been extensively utilized in many real-life applications because it benefits from the detailed spectral information contained in each pixel. Notably, the complex characteristics of HSI data, i.e., the nonlinear relation between the captured spectral information and the corresponding object, make accurate classification challenging for traditional methods. In recent years, Deep Learning (DL) has been established as a powerful feature extractor that effectively addresses the nonlinear problems arising in a number of computer vision tasks. This has prompted the deployment of DL for HSI classification (HSIC), which has shown good performance. This survey presents a systematic overview of DL for HSIC and compares state-of-the-art strategies on the topic. Primarily, we encapsulate the main challenges of traditional machine learning for HSIC and then introduce the advantages of DL in addressing these problems. The survey breaks down state-of-the-art DL frameworks into spectral-feature, spatial-feature, and joint spatial-spectral-feature approaches to systematically analyse their achievements and future research directions for HSIC. Moreover, we consider the fact that DL requires a large number of labelled training examples, whereas acquiring such a number for HSIC is challenging in terms of time and cost. Therefore, this survey discusses some strategies to improve the generalisation performance of DL methods, which can provide some future guidelines.
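    To make the spectral / spatial / spatial-spectral distinction concrete, the PyTorch sketch below is a minimal 3D CNN that convolves jointly over the spectral and spatial axes of an HSI patch; the patch size, band count, and class count are illustrative assumptions, not taken from the survey.

    import torch
    import torch.nn as nn

    # Assumed input: patches of shape (bands=30, height=9, width=9) around each pixel,
    # after some band-reduction step; the class count is illustrative.
    NUM_CLASSES = 16

    class SpectralSpatialCNN(nn.Module):
        """Minimal 3D CNN that extracts joint spectral-spatial features."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=(7, 3, 3)),   # spectral depth 7, spatial 3x3
                nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=(5, 3, 3)),
                nn.ReLU(),
            )
            self.classifier = nn.LazyLinear(NUM_CLASSES)   # infers the flattened size
        def forward(self, x):                              # x: (batch, 1, 30, 9, 9)
            return self.classifier(self.features(x).flatten(1))

    model = SpectralSpatialCNN()
    dummy = torch.randn(4, 1, 30, 9, 9)
    print(model(dummy).shape)   # torch.Size([4, 16])

    A purely spectral model would instead apply 1D convolutions along the band axis of a single pixel, and a purely spatial model would apply 2D convolutions to a few selected bands; the 3D kernel above covers both axes at once.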