
    Using high resolution optical imagery to detect earthquake-induced liquefaction: the 2011 Christchurch earthquake

    Using automated supervised methods with satellite and aerial imagery for liquefaction mapping is a promising step toward providing detailed, region-scale maps of liquefaction extent immediately after an earthquake. The accuracy of these methods depends on the quantity and quality of training samples and on the number of available spectral bands. Digitizing a large number of high-quality training samples from an event may not be feasible in the timeframe required for rapid response, because the training pixels for each class should be typical and accurately represent the spectral diversity of that class. To perform automated classification for liquefaction detection, we therefore need to understand how to build an optimal and accurate training dataset. Using multispectral optical imagery from the 22 February 2011 Christchurch earthquake, we investigate the effects of the quantity of high-quality training pixels and the number of spectral bands on the performance of a pixel-based, parametric, supervised maximum likelihood classifier for liquefaction detection. We find that liquefaction surface effects are bimodal in spectral signature, owing to differences in water content, and should therefore be classified as either wet liquefaction or dry liquefaction. Using 5-fold cross-validation, we evaluate the performance of the classifier on training datasets of 50, 100, 500, 2000, and 4000 pixels. We also investigate the effect of additional spectral information, once by adding only the near-infrared (NIR) band to the visible red, green, and blue (RGB) bands, and once by using all eight available spectral bands of the WorldView-2 satellite imagery. We find that the classifier achieves high accuracies (75%–95%) with the 2000-pixel dataset and the RGB+NIR bands; increasing to 4000 pixels and/or eight spectral bands may therefore not be worth the additional time and cost. We also evaluate the classifier on aerial imagery with the same number of training pixels and either RGB or RGB+NIR bands, and find that accuracies are higher with satellite imagery given the same number of training pixels and the same spectral information. The classifier identifies dry liquefaction with higher user accuracy than wet liquefaction across all evaluated scenarios. To improve classification performance for wet liquefaction, we also investigate adding geospatial information in the form of building footprints. We find that using a building footprint mask to remove buildings from the classification process increases wet liquefaction user accuracy by roughly 10%.
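
    A minimal sketch of the experiment described above, not the authors' pipeline: scikit-learn's QuadraticDiscriminantAnalysis with uniform class priors acts as a per-class Gaussian maximum likelihood classifier, evaluated with 5-fold cross-validation over several training-set sizes and band subsets. The `pixels` and `labels` arrays, the band indices, and the class layout are hypothetical placeholders rather than the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_classes, n_bands = 4, 8                        # e.g. classes: wet/dry liquefaction, roads, vegetation
labels = np.repeat(np.arange(n_classes), 1000)   # 1000 placeholder training pixels per class
pixels = rng.normal(size=(labels.size, n_bands)) + 0.5 * labels[:, None]  # toy spectra

# Approximate WorldView-2 band positions (0-indexed): blue=1, green=2, red=4, NIR1=6.
band_subsets = {"RGB": [1, 2, 4], "RGB+NIR": [1, 2, 4, 6], "all 8": list(range(n_bands))}
sizes = [50, 100, 500, 2000, 4000]
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

def stratified_subset(y, n, rng):
    """Draw ~n indices with equal counts per class so 5-fold CV stays valid at small sizes."""
    per_class = max(n // n_classes, 5)
    return np.concatenate([rng.choice(np.where(y == c)[0], size=per_class, replace=False)
                           for c in np.unique(y)])

for name, bands in band_subsets.items():
    for n in sizes:
        idx = stratified_subset(labels, n, rng)
        # Uniform priors + per-class covariances = Gaussian maximum likelihood classification.
        mlc = QuadraticDiscriminantAnalysis(priors=np.full(n_classes, 1 / n_classes),
                                            reg_param=1e-3)
        acc = cross_val_score(mlc, pixels[np.ix_(idx, bands)], labels[idx], cv=cv).mean()
        print(f"{name:8s} n={n:5d}  mean CV accuracy = {acc:.3f}")
```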

    Evaluation of Machine Learning Algorithms for Lake Ice Classification from Optical Remote Sensing Data

    The topic of lake ice cover mapping from satellite remote sensing data has gained interest in recent years, since it allows the extent of lake ice and the dynamics of ice phenology to be monitored over large areas. Mapping lake ice extent can record the loss of perennial ice cover for lakes located in the High Arctic. Moreover, ice phenology dates retrieved from lake ice maps are useful for assessing long-term trends and variability in climate, particularly because of their sensitivity to changes in near-surface air temperature. However, existing knowledge-driven (threshold-based) retrieval algorithms for lake ice-water classification that use top-of-the-atmosphere (TOA) reflectance products do not perform well under large solar zenith angles, which result in low TOA reflectance. Machine learning (ML) techniques have received considerable attention in the remote sensing field over the past several decades, but they have not yet been applied to lake ice classification from optical remote sensing imagery. This research therefore evaluates the capability of ML classifiers to enhance lake ice mapping using multispectral optical remote sensing data (the MODIS L1B (TOA) product). Chapter 3, the main manuscript of this thesis, presents an investigation of four ML classifiers (multinomial logistic regression, MLR; support vector machine, SVM; random forest, RF; gradient boosting trees, GBT) for lake ice classification. Results are reported for 17 lakes located in the Northern Hemisphere that differ in area, altitude, freezing frequency, and ice cover duration. According to the overall accuracy assessment using random k-fold cross-validation (k = 100), all ML classifiers produced classification accuracies above 94%, and RF and GBT exceeded 98%. Moreover, the RF and GBT algorithms provided a more visually accurate depiction of lake ice cover under challenging conditions (i.e., high solar zenith angles, black ice, and thin cloud cover). The two tree-based classifiers provided the most robust spatial transferability over the 17 lakes and performed consistently well across three ice seasons, better than the other classifiers. Moreover, RF was insensitive to the choice of hyperparameters compared to the other three classifiers. The results demonstrate that RF and GBT have great potential for accurately mapping lake ice cover globally over long time series. Additionally, a case study applying a convolutional neural network (CNN) model for ice classification in Great Slave Lake, Canada, is presented in Appendix A. Eighteen images acquired during the ice season of 2009-2010 were used in this study. The proposed CNN produced 98.03% accuracy on the testing dataset; however, the accuracy dropped to 90.13% on an independent (out-of-sample) validation dataset. The testing accuracy demonstrates the strong learning performance of the proposed CNN, while the drop on the validation dataset indicates overfitting; a follow-up investigation would be needed to improve its performance. This thesis investigated the capability of ML algorithms (both pixel-based and spatial-based) for lake ice classification from the MODIS L1B product. Overall, ML techniques showed promising performance for lake ice cover mapping from optical remote sensing data. The tree-based (pixel-based) classifiers exhibited the potential to produce accurate lake ice classification at large scales over long time series. In addition, more work would be of benefit for improving the application of CNNs to lake ice cover mapping from optical remote sensing imagery.
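
    The classifier comparison in Chapter 3 can be sketched as follows. This is an illustrative outline rather than the thesis code: synthetic `features` and `labels` stand in for MODIS L1B training pixels, and 5 folds are used instead of the k = 100 cross-validation reported in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=5000)                          # placeholder ice / water / cloud labels
features = rng.normal(size=(5000, 6)) + 0.3 * labels[:, None]   # placeholder TOA band values

classifiers = {
    "MLR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", C=1.0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBT": GradientBoostingClassifier(n_estimators=200, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # thesis uses k = 100
for name, clf in classifiers.items():
    scores = cross_val_score(clf, features, labels, cv=cv)
    print(f"{name}: mean accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```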

    Introducing artificial data generation in active learning for land use/land cover classification

    Fonseca, J., Douzas, G., & Bacao, F. (2021). Increasing the effectiveness of active learning: Introducing artificial data generation in active learning for land use/land cover classification. Remote Sensing, 13(13), 1-20. [2619]. https://doi.org/10.3390/rs13132619
    In remote sensing, Active Learning (AL) has become an important technique for collecting informative ground truth data “on-demand” for supervised classification tasks. Despite its effectiveness, it still relies heavily on user interaction, which makes it both expensive and time-consuming to implement. Most of the current literature focuses on optimizing AL by modifying the selection criteria and the classifiers used. Although improvements in these areas will result in more effective data collection, the use of artificial data sources to reduce human–computer interaction remains unexplored. In this paper, we introduce a new component to the typical AL framework, the data generator, a source of artificial data that reduces the amount of user-labeled data required in AL. The proposed AL framework is implemented using Geometric SMOTE as the data generator. We compare the new AL framework to the original one using similar acquisition functions and classifiers over three AL-specific performance metrics in seven benchmark datasets. We show that this modification of the AL framework significantly reduces the cost and time requirements for a successful AL implementation in all of the datasets used in the experiment.
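
    A minimal sketch of the augmented active-learning loop the paper proposes, not the authors' implementation: an uncertainty-based acquisition function queries labels, and a data generator enriches each labeled set with synthetic samples before retraining. The paper uses Geometric SMOTE as the generator; plain SMOTE from imbalanced-learn stands in for it here, and `X_pool`/`y_pool` are hypothetical placeholders playing the role of the human oracle.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(2)
y_pool = rng.integers(0, 5, size=3000)                         # placeholder LULC labels
X_pool = rng.normal(size=(3000, 10)) + 0.4 * y_pool[:, None]   # placeholder pixel features

labeled = list(rng.choice(len(y_pool), size=100, replace=False))  # initial seed set
clf = RandomForestClassifier(n_estimators=100, random_state=0)

for _ in range(10):                                            # active-learning iterations
    X_lab, y_lab = X_pool[labeled], y_pool[labeled]
    # Data-generator step: enrich the scarce labeled set with synthetic samples.
    X_aug, y_aug = SMOTE(k_neighbors=2, random_state=0).fit_resample(X_lab, y_lab)
    clf.fit(X_aug, y_aug)

    # Acquisition function: least-confidence uncertainty sampling over the unlabeled pool.
    unlabeled = np.setdiff1d(np.arange(len(y_pool)), labeled)
    confidence = clf.predict_proba(X_pool[unlabeled]).max(axis=1)
    query = unlabeled[np.argsort(confidence)[:20]]             # 20 least confident pixels
    labeled.extend(query.tolist())                             # the "oracle" supplies their labels
```

    In a real deployment the queried labels would come from an annotator rather than from a pre-labeled pool, and the generator, acquisition function, and classifier would each be swapped for the ones studied in the paper.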

    A novel spectral-spatial co-training algorithm for the transductive classification of hyperspectral imagery data

    The automatic classification of hyperspectral data is made complex by several factors, such as the high cost of true sample labeling coupled with the high number of spectral bands, as well as the spatial correlation of the spectral signature. In this paper, a transductive collective classifier is proposed to deal with all of these factors in hyperspectral image classification. The transductive inference paradigm allows us to reduce the inference error for the given set of unlabeled data, as sparsely labeled pixels are learned by accounting for both labeled and unlabeled information. The collective inference paradigm allows us to manage the spatial correlation between spectral responses of neighboring pixels, as interacting pixels are labeled simultaneously. In particular, the innovative contributions of this study include: (1) the design of an application-specific co-training schema that uses both spectral information and spatial information, iteratively extracted at the object (set of pixels) level via collective inference; (2) the formulation of a spatial-aware example selection schema that accounts for the spatial correlation of predicted labels to augment training sets during iterative learning; and (3) the investigation of a diversity class criterion that allows us to speed up co-training classification. Experimental results validate the accuracy and efficiency of the proposed spectral-spatial, collective, co-training strategy.
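
    For illustration, a generic two-view co-training loop is sketched below; it omits the paper's collective inference at the object level and its spatial-aware example selection, and simply lets a spectral-view and a spatial-view classifier pseudo-label the most confidently predicted unlabeled pixels for each other. All arrays and parameter values are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
y = rng.integers(0, 4, size=2000)                          # placeholder class labels
X_spec = rng.normal(size=(2000, 30)) + 0.3 * y[:, None]    # spectral view (band values)
X_spat = rng.normal(size=(2000, 8)) + 0.3 * y[:, None]     # spatial view (e.g. neighborhood statistics)

labeled = rng.choice(len(y), size=100, replace=False).tolist()   # sparsely labeled pixels
unlabeled = [i for i in range(len(y)) if i not in set(labeled)]
y_work = y.copy()                                          # true labels plus pseudo-labels

clf_spec = LogisticRegression(max_iter=1000)
clf_spat = LogisticRegression(max_iter=1000)

for _ in range(5):                                         # co-training iterations
    clf_spec.fit(X_spec[labeled], y_work[labeled])
    clf_spat.fit(X_spat[labeled], y_work[labeled])
    if not unlabeled:
        break
    # Each view pseudo-labels its most confidently predicted unlabeled pixels.
    for clf, X in ((clf_spec, X_spec), (clf_spat, X_spat)):
        confidence = clf.predict_proba(X[unlabeled]).max(axis=1)
        picked = [unlabeled[i] for i in np.argsort(confidence)[-20:]]
        y_work[picked] = clf.predict(X[picked])            # assign pseudo-labels
        labeled.extend(picked)
        unlabeled = [i for i in unlabeled if i not in set(picked)]
```

    The paper's strategy differs in that pseudo-labels are selected with spatial awareness and assigned to sets of interacting pixels at once, but the iterative exchange of confident predictions between the spectral and spatial views follows the same co-training idea.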