844 research outputs found

    Improved Hyperspectral Image Testing Using Synthetic Imagery and Factorial Designed Experiments

    Get PDF
The goal of any remote sensing system is to gather data about the geography it images. To gain knowledge of the earth's landscape, post-processing algorithms are developed to extract information from the collected data. These algorithms may classify the various ground covers in a scene, identify specific targets of interest, or detect anomalies in an image. After an algorithm is designed comes the difficult task of testing and evaluating its performance. Traditionally, algorithms are tested using sets of extensively ground-truthed test images. However, the scarcity of well-characterized test data sets, and the significant cost and time required to assemble them, limit this approach. This thesis uses a synthetic image generation model together with a factorial designed experiment to create a family of images with which to rigorously test the performance of hyperspectral algorithms. The factorial design allowed the joint effects of the sensor's view angle, time of day, atmospheric visibility, and target size to be studied with respect to algorithm performance. A head-to-head performance comparison of the two tested spectral processing algorithms was also made.
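The factorial structure described above is easy to picture as an enumeration of every combination of factor levels, each defining one synthetic image to generate. A minimal sketch (the factor names come from the abstract; the specific levels are illustrative assumptions, not the thesis's actual settings):

```python
from itertools import product

# The four factors studied in the abstract; levels here are placeholders.
factors = {
    "view_angle_deg": [0, 30, 60],
    "time_of_day": ["morning", "noon"],
    "visibility_km": [5, 23],
    "target_size_m": [1, 3],
}

# Full-factorial design: one run (one synthetic image) per combination.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 3 * 2 * 2 * 2 = 24 image configurations
```

A full-factorial design like this is what lets joint (interaction) effects of the factors on algorithm performance be estimated, rather than only one-factor-at-a-time effects.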

    A 3-stage Spectral-spatial Method for Hyperspectral Image Classification

    Full text link
Hyperspectral images often have hundreds of spectral bands of different wavelengths captured by aircraft or satellites that record land coverage. Identifying detailed classes of pixels becomes feasible due to the enhancement in spectral and spatial resolution of hyperspectral images. In this work, we propose a novel framework that utilizes both spatial and spectral information for classifying pixels in hyperspectral images. The method consists of three stages. In the first stage, the pre-processing stage, the Nested Sliding Window algorithm is used to reconstruct the original data by enhancing the consistency of neighboring pixels, and then Principal Component Analysis is used to reduce the dimension of the data. In the second stage, Support Vector Machines are trained to estimate the pixel-wise probability map of each class using the spectral information from the images. Finally, a smoothed total variation model is applied to smooth the class probability vectors by ensuring spatial connectivity in the images. We demonstrate the superiority of our method against three state-of-the-art algorithms on six benchmark hyperspectral data sets with 10 to 50 training labels for each class. The results show that our method gives the best overall accuracy. In particular, the gain in accuracy increases as the number of labeled pixels decreases, so the method is especially advantageous for problems with small training sets. This is of great practical significance, since expert annotations are often expensive and difficult to collect.
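The three-stage flow above can be sketched end to end on synthetic data. This is a simplified illustration, not the authors' implementation: the Nested Sliding Window reconstruction is omitted and the smoothed total-variation model is replaced by a plain 3x3 box smoothing of the probability maps, which captures the same spatial-connectivity intent in the crudest possible way. All sizes and data are made up.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
H, W, B, K = 8, 8, 20, 2                     # image height/width, bands, classes
# Synthetic scene: top half is class 0, bottom half is class 1.
labels = (np.arange(H)[:, None] >= H // 2).astype(int) * np.ones((H, W), int)
cube = rng.normal(labels[..., None] * 2.0, 1.0, size=(H, W, B))

# Stage 1 (pre-processing): PCA dimension reduction of the spectra.
pixels = cube.reshape(-1, B)
feats = PCA(n_components=5, random_state=0).fit_transform(pixels)

# Stage 2: SVM trained on a few labeled pixels yields per-class probabilities.
train = rng.choice(H * W, size=30, replace=False)
svm = SVC(probability=True, random_state=0).fit(feats[train], labels.ravel()[train])
prob = svm.predict_proba(feats).reshape(H, W, K)

# Stage 3 (stand-in for the TV model): smooth probabilities spatially, then argmax.
pad = np.pad(prob, ((1, 1), (1, 1), (0, 0)), mode="edge")
smooth = sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
pred = smooth.argmax(axis=2)
```

The point of stage 3 is that a pixel's label is decided from its smoothed probability vector, so isolated misclassifications surrounded by confident neighbors get corrected.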

    A Locally Adaptable Iterative RX Detector

    Get PDF
We present an unsupervised anomaly detection method for hyperspectral imagery (HSI) based on data characteristics inherent in HSI. A locally adaptive technique of iteratively refining the well-known RX detector (LAIRX) is developed. The technique is motivated by the need for better first- and second-order statistic estimation via avoidance of anomaly presence. Overall, experiments show favorable Receiver Operating Characteristic (ROC) curves when compared to a global anomaly detector based upon the Support Vector Data Description (SVDD) algorithm, the conventional RX detector, and decomposed versions of the LAIRX detector. Furthermore, the utilization of parallel and distributed processing allows fast processing time, making LAIRX applicable in an operational setting.
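The conventional RX detector that LAIRX refines scores each pixel by its squared Mahalanobis distance to the background mean and covariance; LAIRX's contribution is estimating those statistics locally and iteratively excluding suspected anomalies. A minimal sketch of the global baseline on synthetic data (sizes and the planted anomaly are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.normal(0.0, 1.0, size=(500, 10))   # 500 pixels x 10 spectral bands
pixels[0] += 8.0                                 # plant one anomalous signature

# Global background statistics (LAIRX would estimate these locally,
# iteratively excluding high-scoring pixels to avoid contamination).
mu = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))

diff = pixels - mu
rx = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distance
print(rx.argmax())  # → 0: the planted anomaly scores highest
```

The "avoidance of anomaly presence" motivation is visible even here: the planted anomaly inflates the global mean and covariance, which is exactly the contamination an iterative, locally adaptive refinement is meant to reduce.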

    Hyperspectral Imagery Target Detection Using Improved Anomaly Detection and Signature Matching Methods

    Get PDF
This research extends the field of hyperspectral target detection by developing autonomous anomaly detection and signature matching methodologies that reduce false alarms relative to existing benchmark detectors and are practical for use in an operational environment. The proposed anomaly detection methodology adapts multivariate outlier detection algorithms for use with hyperspectral datasets containing tens of thousands of non-homogeneous, high-dimensional spectral signatures. In so doing, the limitations of existing, non-robust anomaly detectors are identified, an autonomous clustering methodology is developed to divide an image into homogeneous background materials, and competing multivariate outlier detection methods are evaluated for their ability to uncover hyperspectral anomalies. To arrive at a final detection algorithm, robust parameter design methods are employed to determine parameter settings that achieve good detection performance over a range of hyperspectral images and targets, thereby removing the burden of these decisions from the user. The final anomaly detection algorithm is tested against existing local and global anomaly detectors and is shown to achieve superior detection accuracy when applied to a diverse set of hyperspectral images. The proposed signature matching methodology employs image-based atmospheric correction techniques in an automated process to transform a target reflectance signature library into a set of image signatures. This set of signatures is combined with an existing linear filter to form a target detector that is shown to perform as well as or better than detectors that rely on complicated, information-intensive atmospheric correction schemes. The performance of the proposed methodology is assessed using a range of target materials in both woodland and desert hyperspectral scenes.
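The robust multivariate outlier step described above can be illustrated with a standard robust estimator, the Minimum Covariance Determinant: fit robust mean and covariance on the (clustered, hence roughly homogeneous) background, then flag signatures whose robust Mahalanobis distance exceeds a chi-squared cutoff. This is a generic sketch of the technique class on synthetic data, not the thesis's specific algorithm; the 0.999 chi-squared quantile is a common convention, not its tuned threshold.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
background = rng.normal(0.0, 1.0, size=(300, 5))   # one homogeneous background cluster
anomalies = rng.normal(6.0, 1.0, size=(5, 5))      # a few anomalous signatures
data = np.vstack([background, anomalies])

# Robust location/scatter: MCD down-weights the outliers that would
# contaminate an ordinary mean/covariance estimate.
mcd = MinCovDet(random_state=0).fit(data)
d2 = mcd.mahalanobis(data)                         # squared robust distances

flagged = np.flatnonzero(d2 > chi2.ppf(0.999, df=5))
```

Running this per background cluster, as the autonomous clustering stage enables, is what makes an outlier test like this meaningful on non-homogeneous scenes.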

    Efficient object tracking in WAAS data streams

    Get PDF
Wide area airborne surveillance (WAAS) systems are a new class of remote sensing imagers with many military and civilian applications. These systems are characterized by long loiter times (extended imaging time over fixed target areas) and large-footprint target areas. These characteristics complicate moving-object detection and tracking due to the large image size and the high number of moving objects. This thesis evaluates existing object detection and tracking algorithms with WAAS data and provides enhancements to the processing chain that decrease processing time and increase tracking accuracy. Decreases in processing time are needed to perform real-time or near-real-time tracking, either on the WAAS sensor platform or in ground station processing centers. Increased tracking accuracy benefits both real-time users and forensic (off-line) users. The original contribution of this thesis increases tracking efficiency and accuracy by breaking a WAAS scene into hierarchical areas of interest (AOIs) and by using hyperspectral cueing.

    Towards the Mitigation of Correlation Effects in the Analysis of Hyperspectral Imagery with Extension to Robust Parameter Design

    Get PDF
Standard anomaly detectors and classifiers assume data to be uncorrelated and homogeneous, neither of which is inherent in Hyperspectral Imagery (HSI). To address this detection difficulty, a new method termed Iterative Linear RX (ILRX) uses a line of pixels, which gives it an advantage over RX in that it mitigates some of the effects of correlation due to spatial proximity, while the iterative adaptation from Iterative RX (IRX) simultaneously eliminates outliers. In this research, two practices that are often ignored when detecting or classifying anomalies are shown to improve algorithm performance: applying anomaly detectors to remove potential anomalies from the mean vector and covariance matrix estimates, and addressing non-homogeneity through cluster analysis. Global anomaly detectors require the user to provide various parameters to analyze an image. These user-defined settings can be thought of as control variables, and certain properties of the imagery can be employed as noise variables. The presence of these separate factors suggests the use of Robust Parameter Design (RPD) to locate optimal settings for an algorithm. This research extends the standard RPD model to include three-factor interactions. These new models are then applied to the Autonomous Global Anomaly Detector (AutoGAD) to demonstrate improved setting combinations.
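The modeling extension above amounts to adding a three-factor interaction column to the usual RPD response-surface design matrix and fitting by least squares. A toy sketch with three synthetic factors (the factors, data, and coefficients are made up; the thesis's actual model and AutoGAD settings are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=(200, 3))         # three coded factors
x1, x2, x3 = x.T

# Synthetic response with a genuine three-factor interaction (coefficient 3.0).
y = 1.0 + 2 * x1 - x2 + 0.5 * x3 + 1.5 * x1 * x2 + 3.0 * x1 * x2 * x3 \
    + rng.normal(0, 0.01, 200)

# Design matrix: intercept, main effects, all two-factor interactions,
# and the three-factor interaction the extended RPD model adds.
X = np.column_stack([np.ones(200), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta[-1] recovers the planted 3.0
```

If the three-factor term were dropped from `X`, its effect would be absorbed into the residual and the located "optimal" settings could be wrong, which is the motivation for extending the standard model.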

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Full text link
Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
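Two of the pre-processing steps named above, per-band normalization and chipping a large scene into fixed-size tiles, are mechanical enough to sketch directly. Scene size, band count, and chip size below are arbitrary illustrative choices:

```python
import numpy as np

# A synthetic 4-band scene standing in for an Earth Observation image.
scene = np.random.default_rng(4).uniform(0, 4000, size=(256, 256, 4))

# Per-band standardization: zero mean, unit variance in each band.
norm = (scene - scene.mean(axis=(0, 1))) / scene.std(axis=(0, 1))

# Chipping: cut the scene into non-overlapping 64x64 tiles for training.
chip = 64
chips = [norm[i:i + chip, j:j + chip]
         for i in range(0, scene.shape[0], chip)
         for j in range(0, scene.shape[1], chip)]
print(len(chips))  # 16 chips, each of shape (64, 64, 4)
```

In practice the normalization statistics are computed on the training split only and reused at inference time, and chips are often overlapped to mitigate edge effects; both refinements are omitted here for brevity.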

    Quantitative Mapping of Soil Property Based on Laboratory and Airborne Hyperspectral Data Using Machine Learning

    Get PDF
Soil visible and near-infrared spectroscopy provides a non-destructive, rapid and low-cost approach to quantify various soil physical and chemical properties based on their reflectance in the spectral range of 400–2500 nm. With an increasing number of large-scale soil spectral libraries established across the world and new space-borne hyperspectral sensors, there is a need to explore methods to extract informative features from reflectance spectra and produce accurate soil spectroscopic models using machine learning. Features generated from regional or large-scale soil spectral data play a key role in the quantitative spectroscopic model for soil properties. The Land Use/Land Cover Area Frame Survey (LUCAS) soil library was used in this study to explore PLS-derived components and fractal features generated from soil spectra. The gradient-boosting method performed well when coupled with the extracted features for the estimation of several soil properties. Transfer learning based on convolutional neural networks (CNNs) was proposed to make the model developed from laboratory data transferable to airborne hyperspectral data. The soil clay map was successfully derived using HyMap imagery and the fine-tuned CNN model developed from LUCAS mineral soils, as deep learning has the potential to learn transferable features that generalise from the source domain to the target domain. External environmental factors such as the presence of vegetation constrain the application of imaging spectroscopy. The reflectance data can be transformed into a vegetation-suppressed domain with a forced invariance approach, the performance of which was evaluated in an agricultural area using CASI airborne hyperspectral data.
However, the relationship between vegetation and the acquired spectra is complicated, and more effort should be put into removing the effects of external factors to make the model transferable from one sensor to another.

    Evaluation of hierarchical segmentation for natural vegetation: a case study of the Tehachapi Mountains, California

    Get PDF
Two critical limitations of hyperspatial imagery are high image variance and large data size. Although object-based analysis with a multi-scale framework for diverse object sizes is the solution, more data sources and large amounts of testing at high cost are required. In this study, I used tree density segmentation as the key element of a three-level hierarchical vegetation framework for reducing those costs, and a three-step procedure was used to evaluate its effects. A two-step procedure, involving environmental stratification and the random walker algorithm, was used for tree density segmentation. I determined whether variation in tone and texture could be reduced within environmental strata, and whether tree density segmentations could be labeled by species associations. At the final level, two tree density segmentations were partitioned into smaller subsets using eCognition in order to label individual species or tree stands in two test areas of two tree densities, and the Z values of Moran's I were used to evaluate whether image objects have mean values different from nearby segmentations, as a measure of segmentation accuracy. The two-step procedure was able to delineate tree density segments and label species types robustly, compared to previous hierarchical frameworks. However, eCognition was not able to produce detailed, reasonable image objects with optimal scale parameters for species labeling. This hierarchical vegetation framework is applicable for fine-scale, time-series vegetation mapping to develop baseline data for evaluating climate change impacts on vegetation at low cost using widely available data and a personal laptop. Dissertation/Thesis, M.A. Geography 201
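The Moran's I statistic used above to score segmentations measures spatial autocorrelation: positive values mean neighboring cells have similar values. A minimal sketch on a 3x3 grid with rook (edge-sharing) adjacency; the grid values are synthetic, and the study's Z-value computation on real segments is not reproduced here:

```python
import numpy as np

# Small grid with a cluster of high values in the top-left corner.
grid = np.array([[1., 1., 0.],
                 [1., 1., 0.],
                 [0., 0., 0.]])
n = grid.size
x = grid.ravel() - grid.mean()             # deviations from the mean

# Rook-adjacency weight matrix: w[i, j] = 1 if cells share an edge.
W = np.zeros((n, n))
for r in range(3):
    for c in range(3):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < 3 and 0 <= cc < 3:
                W[r * 3 + c, rr * 3 + cc] = 1.0

# Moran's I = (n / sum of weights) * (x' W x) / (x' x).
I = (n / W.sum()) * (x @ W @ x) / (x @ x)
print(round(I, 2))  # → 0.35: positive, reflecting the clustered pattern
```

To turn I into the Z values the study relies on, one compares it against its expectation under spatial randomness, E[I] = -1/(n-1), and divides by the corresponding standard deviation.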