
    GETNET: A General End-to-end Two-dimensional CNN Framework for Hyperspectral Image Change Detection

    Change detection (CD) is an important application of remote sensing, which provides timely change information about the large-scale Earth surface. With the emergence of hyperspectral imagery, CD technology has been greatly advanced, as hyperspectral data with high spectral resolution are capable of detecting finer changes than traditional multispectral imagery. Nevertheless, the high dimensionality of hyperspectral data makes it difficult to implement traditional CD algorithms. In addition, endmember abundance information at the subpixel level is often not fully utilized. In order to better handle the high-dimensionality problem and exploit abundance information, this paper presents a General End-to-end Two-dimensional CNN (GETNET) framework for hyperspectral image change detection (HSI-CD). The main contributions of this work are threefold: 1) a mixed-affinity matrix that integrates subpixel representations is introduced to mine more cross-channel gradient features and fuse multi-source information; 2) a 2-D CNN is designed to learn discriminative features effectively from multi-source data at a higher level and to enhance the generalization ability of the proposed CD algorithm; 3) a new HSI-CD data set is designed for the objective comparison of different methods. Experimental results on real hyperspectral data sets demonstrate that the proposed method outperforms most state-of-the-art approaches.
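    To make the pixel-pair-to-image idea concrete, the sketch below builds a per-pixel affinity matrix as the outer product of the two spectral vectors and feeds it to a small 2-D CNN for binary change/no-change classification. This is a simplified stand-in for GETNET's mixed-affinity matrix (which also folds in subpixel abundances); the architecture, band count, and random inputs are illustrative assumptions, not the authors' configuration.

    # Hedged sketch: per-pixel affinity matrices classified by a tiny 2-D CNN.
    import torch
    import torch.nn as nn

    def affinity_matrix(x_t1: torch.Tensor, x_t2: torch.Tensor) -> torch.Tensor:
        """Outer product of one pixel's spectra at times t1 and t2 -> (bands, bands)."""
        return torch.outer(x_t1, x_t2)

    class TinyChangeCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(16 * 4 * 4, 2)  # change vs. no-change

        def forward(self, x):                 # x: (N, 1, bands, bands)
            return self.classifier(self.features(x).flatten(1))

    # toy usage with random spectra (198 bands chosen arbitrarily)
    bands = 198
    pix_t1, pix_t2 = torch.rand(bands), torch.rand(bands)
    inp = affinity_matrix(pix_t1, pix_t2)[None, None]   # (1, 1, bands, bands)
    logits = TinyChangeCNN()(inp)
    print(logits.shape)                                  # torch.Size([1, 2])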

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the cost in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken numerous artificial intelligence (AI)-related tasks. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the HS imaging process and the complexity and redundancy of high-dimensional HS signals. Compared to convex models, non-convex modeling, which is capable of characterizing more complex real scenes and providing model interpretability technically and theoretically, has proven to be a feasible way to reduce the gap between challenging HS vision tasks and currently advanced intelligent data processing models.
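    As a small illustration of the non-convex modeling theme, the sketch below solves a sparse spectral unmixing problem with an l0-type constraint via iterative hard thresholding, the non-convex counterpart of the convex l1/ISTA step. The spectral library, pixel spectrum, and sparsity level are random placeholders of my own choosing, not from the paper.

    # Hedged sketch: non-convex sparse unmixing by iterative hard thresholding (IHT).
    import numpy as np

    rng = np.random.default_rng(0)
    bands, n_endmembers, k_true = 100, 40, 3
    E = rng.random((bands, n_endmembers))                # placeholder spectral library
    a_true = np.zeros(n_endmembers)
    a_true[rng.choice(n_endmembers, k_true, replace=False)] = rng.random(k_true)
    y = E @ a_true + 0.01 * rng.standard_normal(bands)   # observed pixel spectrum

    def iht(E, y, k, n_iter=500):
        """Gradient step followed by keeping only the k largest abundances (non-convex)."""
        a = np.zeros(E.shape[1])
        step = 1.0 / np.linalg.norm(E, 2) ** 2
        for _ in range(n_iter):
            a = a + step * E.T @ (y - E @ a)     # gradient step on the data-fit term
            idx = np.argsort(np.abs(a))[:-k]     # zero all but the k largest entries
            a[idx] = 0.0
        return a

    a_hat = iht(E, y, k_true)
    print("support recovered:", set(np.flatnonzero(a_hat)) == set(np.flatnonzero(a_true)))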

    Advances in Hyperspectral Image Classification Methods for Vegetation and Agricultural Cropland Studies

    Hyperspectral data are becoming more widely available via sensors on airborne and unmanned aerial vehicle (UAV) platforms, as well as proximal platforms. While space-based hyperspectral data continue to be limited in availability, multiple spaceborne Earth-observing missions on traditional platforms are scheduled for launch, and companies are experimenting with small satellites for constellations to observe the Earth, as well as for planetary missions. Land cover mapping via classification is one of the most important applications of hyperspectral remote sensing and will increase in significance as time series of imagery are more readily available. However, while the narrow bands of hyperspectral data provide new opportunities for chemistry-based modeling and mapping, challenges remain. Hyperspectral data are high dimensional, and many bands are highly correlated or irrelevant for a given classification problem. For supervised classification methods, the quantity of training data is typically limited relative to the dimension of the input space. The resulting Hughes phenomenon, often referred to as the curse of dimensionality, increases the potential for unstable parameter estimates, overfitting, and poor generalization of classifiers. This is particularly problematic for parametric approaches such as Gaussian maximum likelihood-based classifiers that have been the backbone of pixel-based multispectral classification methods. This issue has motivated investigation of alternatives, including regularization of the class covariance matrices, ensembles of weak classifiers, development of feature selection and extraction methods, adoption of nonparametric classifiers, and exploration of methods to exploit unlabeled samples via semi-supervised and active learning. Data sets are also quite large, motivating computationally efficient algorithms and implementations. This chapter provides an overview of recent advances in classification methods for mapping vegetation using hyperspectral data. Three data sets that are used in the hyperspectral classification literature (e.g., Botswana Hyperion satellite data and AVIRIS airborne data over both Kennedy Space Center and Indian Pines) are described in Section 3.2 and used to illustrate methods described in the chapter. An additional high-resolution hyperspectral data set acquired by a SpecTIR sensor on an airborne platform over the Indian Pines area is included to exemplify the use of new deep learning approaches, and a multiplatform example of airborne hyperspectral data is provided to demonstrate transfer learning in hyperspectral image classification. Classical approaches for supervised and unsupervised feature selection and extraction are reviewed in Section 3.3. In particular, nonlinearities exhibited in hyperspectral imagery have motivated development of nonlinear feature extraction methods in manifold learning, which are outlined in Section 3.3.1.4. Spatial context is also important in classification of both natural vegetation with complex textural patterns and large agricultural fields with significant local variability within fields. Approaches to exploit spatial features at both the pixel level (e.g., co-occurrence-based texture and extended morphological attribute profiles [EMAPs]) and integration of segmentation approaches (e.g., HSeg) are discussed in this context in Section 3.3.2. Recently, classification methods that leverage nonparametric methods originating in the machine learning community have grown in popularity.
    An overview of both widely used and newly emerging approaches, including support vector machines (SVMs), Gaussian mixture models, and deep learning based on convolutional neural networks, is provided in Section 3.4. Strategies to exploit unlabeled samples, including active learning and metric learning, which combine feature extraction and augmentation of the pool of training samples in an active learning framework, are outlined in Section 3.5. Integration of image segmentation with classification to accommodate the spatial coherence typically observed in vegetation is also explored, including as an integrated active learning system. Exploitation of multisensor strategies for augmenting the pool of training samples is investigated via a transfer learning framework in Section 3.5.1.2. Finally, we look to the future, considering opportunities soon to be provided by new paradigms, as hyperspectral sensing is becoming common at multiple scales from ground-based and airborne autonomous vehicles to manned aircraft and space-based platforms.
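    The classical pipeline the chapter reviews, feature extraction followed by a nonparametric classifier to blunt the Hughes phenomenon, can be sketched in a few lines. The example below uses PCA and an RBF-kernel SVM with scikit-learn on randomly generated spectra standing in for labeled pixels; the matrix names, component count, and SVM parameters are illustrative assumptions rather than settings from the chapter.

    # Hedged sketch: feature extraction (PCA) + SVM classification of labeled spectra.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_pixels, n_bands, n_classes = 300, 200, 4
    X = rng.random((n_pixels, n_bands))        # placeholder labeled spectra
    y = rng.integers(0, n_classes, n_pixels)   # placeholder class labels

    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=20),                  # reduce 200 bands to 20 features
        SVC(kernel="rbf", C=10.0, gamma="scale"),
    )
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean CV accuracy:", scores.mean())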

    X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

    This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal Earth observation imagery, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, is openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, its ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information, as well as the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, which learn to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement over several state-of-the-art methods.
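    The graph-based label propagation X-ModalNet relies on can be illustrated with an off-the-shelf transductive learner: labels spread over a neighborhood graph built from feature vectors, so samples marked as unlabeled receive pseudo-labels. The sketch below uses scikit-learn's LabelSpreading on random placeholder features rather than the network's learned activations, and the split sizes are arbitrary assumptions.

    # Hedged sketch: propagating labels over a kNN graph of feature vectors.
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(2)
    feats = rng.random((500, 64))          # stand-in for high-level deep features
    labels = rng.integers(0, 5, 500)
    masked = labels.copy()
    masked[100:] = -1                      # only the first 100 samples keep their labels

    model = LabelSpreading(kernel="knn", n_neighbors=10)
    model.fit(feats, masked)
    pseudo = model.transduction_           # propagated labels for all samples
    print("pseudo-label agreement on the unlabeled part:",
          (pseudo[100:] == labels[100:]).mean())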

    A Novel Filter Approach for Band Selection and Classification of Hyperspectral Remotely Sensed Images Using Normalized Mutual Information and Support Vector Machines

    Band selection is a challenging task in the classification of hyperspectral remotely sensed images (HSI). This results from the high spectral resolution, the large number of output classes, and the limited number of training samples. To this end, this paper introduces a new filter approach for dimension reduction and classification of hyperspectral images using information theory (normalized mutual information) and support vector machines (SVM). The method selects a minimal subset of the most informative and relevant bands from the input datasets for better classification efficiency. We applied the proposed algorithm to two well-known benchmark datasets gathered by NASA's AVIRIS sensor over Indiana and the Salinas Valley in the USA. The experimental results were assessed with different evaluation metrics widely used in this area. The comparison with state-of-the-art methods shows that our method achieves good performance with a reduced number of selected bands in reasonable time. Keywords: Dimension reduction, Hyperspectral images, Band selection, Normalized mutual information, Classification, Support vector machines. Comment: http://www.scopus.com/inward/record.url?eid=2-s2.0-85056469155&partnerID=MN8TOAR
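    A filter-style band selector in this spirit can be sketched as follows: rank each band by its normalized mutual information with the class labels (after a coarse quantization of the band values), keep the top-k bands, and score them with an SVM. The data, bin count, and k are placeholder assumptions, and this simple ranking ignores the redundancy between selected bands that the paper's method also has to deal with.

    # Hedged sketch: NMI-ranked band selection followed by an SVM evaluation.
    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.random((400, 200))                 # (pixels, bands), placeholder data
    y = rng.integers(0, 6, 400)                # placeholder class labels

    def nmi_per_band(X, y, n_bins=10):
        """Quantize each band and measure its NMI with the labels."""
        scores = []
        for b in range(X.shape[1]):
            edges = np.quantile(X[:, b], np.linspace(0, 1, n_bins + 1)[1:-1])
            scores.append(normalized_mutual_info_score(y, np.digitize(X[:, b], edges)))
        return np.array(scores)

    k = 20
    selected = np.argsort(nmi_per_band(X, y))[-k:]          # indices of the top-k bands
    acc = cross_val_score(SVC(kernel="rbf"), X[:, selected], y, cv=5).mean()
    print(f"{k} selected bands, CV accuracy: {acc:.3f}")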

    Learning to Propagate Labels on Graphs: An Iterative Multitask Regression Framework for Semi-supervised Hyperspectral Dimensionality Reduction

    Hyperspectral dimensionality reduction (HDR), an important preprocessing step prior to high-level data analysis, has been garnering growing attention in the remote sensing community. Although a variety of methods, both unsupervised and supervised, have been proposed for this task, the discriminative ability of the resulting feature representations remains limited due to the lack of a powerful tool that effectively exploits labeled and unlabeled data in the HDR process. A semi-supervised HDR approach, called iterative multitask regression (IMR), is proposed in this paper to address this need. IMR aims to learn a low-dimensional subspace by jointly considering the labeled and unlabeled data, and by bridging the learned subspace with two regression tasks: labels and pseudo-labels initialized by a given classifier. More significantly, IMR dynamically propagates labels on a learnable graph and progressively refines the pseudo-labels, yielding a well-conditioned feedback system. Experiments conducted on three widely used hyperspectral image datasets demonstrate that the dimension-reduced features learned by the proposed IMR framework are superior, in terms of classification or recognition accuracy, to those of related state-of-the-art HDR approaches.
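    The alternating flavor of this idea can be caricatured in a few lines: fit a linear subspace by ridge-regressing spectra onto one-hot targets (true labels for labeled pixels, pseudo-labels elsewhere), then refresh the pseudo-labels from the current projection, and repeat. This is a heavily simplified stand-in under assumed data shapes; the actual IMR additionally learns a graph for label propagation, which is omitted here.

    # Hedged sketch: alternating subspace regression and pseudo-label refinement.
    import numpy as np

    rng = np.random.default_rng(4)
    n, d, c = 600, 100, 5
    X = rng.random((n, d))                      # placeholder spectra
    y = rng.integers(0, c, n)                   # placeholder labels
    labeled = np.arange(100)                    # only 100 samples are labeled

    def one_hot(labels, c):
        return np.eye(c)[labels]

    pseudo = rng.integers(0, c, n)              # crude pseudo-label initialization
    pseudo[labeled] = y[labeled]
    for _ in range(5):
        Y = one_hot(pseudo, c)
        # ridge regression: W = (X^T X + lam I)^-1 X^T Y
        W = np.linalg.solve(X.T @ X + 1.0 * np.eye(d), X.T @ Y)
        Z = X @ W                               # low-dimensional projection (n, c)
        centroids = np.vstack([Z[pseudo == k].mean(0) for k in range(c)])
        dists = ((Z[:, None, :] - centroids[None]) ** 2).sum(-1)
        pseudo = dists.argmin(1)                # refreshed pseudo-labels
        pseudo[labeled] = y[labeled]            # true labels stay fixed

    print("projected feature shape:", Z.shape)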

    Efficient Nonlinear Dimensionality Reduction for Pixel-wise Classification of Hyperspectral Imagery

    Classification, target detection, and compression are all important tasks in analyzing hyperspectral imagery (HSI). Because of the high dimensionality of HSI, it is often useful to identify low-dimensional representations of HSI data that can be used to make analysis tasks tractable. Traditional linear dimensionality reduction (DR) methods are not adequate due to the nonlinear distribution of HSI data. Many nonlinear DR methods that are successful in the general data processing domain, such as Local Linear Embedding (LLE) [1], Isometric Feature Mapping (ISOMAP) [2] and Kernel Principal Components Analysis (KPCA) [3], run very slowly and require large amounts of memory when applied to HSI. For example, applying KPCA to the 512×217 pixel, 204-band Salinas image using a modern desktop computer (AMD FX-6300 Six-Core Processor, 32 GB memory) requires more than 5 days of computing time and 28 GB of memory. In this thesis, we propose two different algorithms for significantly improving the computational efficiency of nonlinear DR without adversely affecting the performance of the classification task: Simple Linear Iterative Clustering (SLIC) superpixels and semi-supervised deep autoencoder networks (SSDAN). SLIC is a very popular algorithm developed for computing superpixels in RGB images that can easily be extended to HSI. Each superpixel includes hundreds or thousands of pixels based on spatial and spectral similarities and is represented by the mean spectrum and spatial position of all of its component pixels. Since the number of superpixels is much smaller than the number of pixels in the image, they can be used as input for nonlinear DR, which significantly reduces the required computation time and memory versus providing all of the original pixels as input. After nonlinear DR is performed using superpixels as input, an interpolation step can be used to obtain the embedding of each original image pixel in the low-dimensional space. To illustrate the power of using superpixels in an HSI classification pipeline, we conduct experiments on three widely used and publicly available hyperspectral images: Indian Pines, Salinas, and Pavia. The experimental results for all three images demonstrate that for moderately sized superpixels, the overall accuracy of classification using superpixel-based nonlinear DR matches and sometimes exceeds the overall accuracy of classification using pixel-based nonlinear DR, with a computational speed that is two to three orders of magnitude faster. Even though superpixel-based nonlinear DR shows promise for HSI classification, it does have disadvantages. First, it is costly to perform out-of-sample extensions. Second, it does not generalize to other types of data that might not have spatial information. Third, the original input pixels cannot be approximately recovered, as is possible in many DR algorithms. In order to overcome these difficulties, a new autoencoder network, SSDAN, is proposed. It is a fully connected semi-supervised autoencoder network that performs nonlinear DR in a manner that enables class information to be integrated. Features learned from an SSDAN will be similar to those computed via traditional nonlinear DR, and features from the same class will be close to each other. Once the network is well trained on training data, test data can easily be mapped to the low-dimensional embedding.
    Any kind of data can be used to train an SSDAN, and the decoder portion of the SSDAN can easily recover the initial input with reasonable loss. Experimental results on pixel-based classification of the Indian Pines, Salinas, and Pavia images show that SSDANs can approximate the overall accuracy of nonlinear DR while significantly improving computational efficiency. We also show that transfer learning can be used to fine-tune the features of a trained SSDAN for a new HSI dataset. Finally, experimental results on HSI compression show a trade-off between the Overall Accuracy (OA) of extracted features and the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image.
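    The superpixel shortcut described above can be sketched directly: run SLIC on the hyperspectral cube, average the spectra inside each superpixel, apply kernel PCA to the much smaller set of superpixel means, and assign each pixel the embedding of its superpixel. The cube below is random, the SLIC and kernel PCA parameters are arbitrary assumptions, and scikit-image >= 0.19 is assumed for the channel_axis argument.

    # Hedged sketch: SLIC superpixels as a cheap front end for nonlinear DR.
    import numpy as np
    from skimage.segmentation import slic
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(5)
    cube = rng.random((64, 64, 50)).astype(np.float32)   # (rows, cols, bands) placeholder

    labels = slic(cube, n_segments=200, compactness=0.1, start_label=0, channel_axis=-1)
    n_sp = labels.max() + 1
    means = np.vstack([cube[labels == s].mean(0) for s in range(n_sp)])  # (n_sp, bands)

    embedding_sp = KernelPCA(n_components=10, kernel="rbf").fit_transform(means)
    embedding_px = embedding_sp[labels]                  # broadcast back to every pixel
    print(embedding_px.shape)                            # (64, 64, 10)

    A refinement step (e.g., interpolating between neighboring superpixels instead of copying the superpixel embedding) would be needed to approach the per-pixel embeddings discussed in the thesis.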

    Development and Applications of Machine Learning Methods for Hyperspectral Data

    Hyperspectral remote sensing of the Earth relies on data from passive optical sensors mounted on platforms such as satellites and unmanned aerial vehicles. Hyperspectral data contain information for identifying materials and for monitoring environmental variables such as soil texture, soil moisture, chlorophyll a, and land cover. Data analysis methods are required to extract information from hyperspectral data. A powerful tool in the analysis of hyperspectral data is machine learning, a subfield of artificial intelligence. Machine learning methods can model nonlinear relationships and scale with increasing amounts of data. Every dataset and every machine learning method brings new challenges that require innovative solutions. The goal of this thesis is the development and application of machine learning methods for hyperspectral remote sensing data. The thesis presents studies addressing three major challenges: (I) datasets that contain only a few data points with associated output data, (II) the limited potential of non-deep machine learning methods on hyperspectral data, and (III) differences between the distributions of the training and test datasets. The studies on challenge (I) lead to the development and publication of a framework of self-organizing maps (SOMs) for unsupervised, supervised, and semi-supervised learning. The SOM is applied to a hyperspectral dataset for the (semi-)supervised regression of soil moisture and outperforms a standard machine learning method. The SOM framework shows reasonable performance in the (semi-)supervised classification of land cover and offers additional visualization capabilities that improve the understanding of the underlying dataset. In the studies addressing challenge (II), three innovative one-dimensional convolutional neural network (CNN) architectures are developed. The CNNs are applied to a freely available hyperspectral dataset for soil texture classification, and their performance is compared with two existing CNN approaches and a random forest. The two most important findings can be summarized as follows: first, the CNN approaches perform significantly better than the applied non-deep random forest approach; second, adding information about hyperspectral band numbers to the input layer of a CNN improves the class-wise performance. The studies on challenge (III) are based on a dataset acquired in 2019 across five different measurement areas in Peru. The differences between the measurement areas are analyzed with qualitative methods and with unsupervised machine learning methods such as principal component analysis and autoencoders. Based on these results, a supervised regression of soil moisture is performed for different combinations of measurement areas. In addition, the dataset is augmented with Monte Carlo methods to investigate the effects of dataset distribution shift on the regression.
    The applied SOM regressor is relatively robust to the noise of the soil moisture sensor and performs well on small datasets, while the applied random forest performs best on the full dataset. The distribution shift makes this regression task difficult; some combinations of measurement areas form a considerably more meaningful training dataset than others. Overall, the presented studies addressing the three major challenges show promising results. Finally, the thesis outlines how the developed machine learning methods can be further improved in future research.
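    Related to challenge (II), the sketch below shows a small one-dimensional CNN that classifies soil texture from a single spectrum, with a normalized band index appended as a second input channel as a simplified nod to the finding that band-number information at the input layer helps. The layer sizes, class count, and random inputs are illustrative assumptions, not the architectures developed in the thesis.

    # Hedged sketch: 1-D CNN over a spectrum plus a band-index channel.
    import torch
    import torch.nn as nn

    class Spectral1DCNN(nn.Module):
        def __init__(self, n_classes: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(32, n_classes),
            )

        def forward(self, spectra):                       # spectra: (N, bands)
            n, bands = spectra.shape
            band_idx = torch.linspace(0, 1, bands).expand(n, bands)
            x = torch.stack([spectra, band_idx], dim=1)   # (N, 2, bands)
            return self.net(x)

    model = Spectral1DCNN()
    logits = model(torch.rand(8, 125))                    # 8 spectra with 125 bands
    print(logits.shape)                                   # torch.Size([8, 4])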