    Image Classification of Marine-Terminating Outlet Glaciers using Deep Learning Methods

    A wealth of research has focused on elucidating the key controls on mass loss from the Greenland and Antarctic ice sheets in response to climate forcing, specifically in relation to the drivers of marine-terminating outlet glacier change. Despite the burgeoning availability of medium-resolution satellite data, the manual methods traditionally used to monitor change in marine-terminating outlet glaciers from satellite imagery are time-consuming and can be subjective, especially where a mélange of icebergs and sea ice exists at the terminus. To address this, recent advances in deep learning applied to image processing have created a new frontier in the field of automated delineation of glacier termini. However, at this stage, there remains a paucity of research on the use of deep learning for pixel-level semantic image classification of outlet glacier environments. This project develops and tests a two-phase deep learning approach based on a well-established convolutional neural network (CNN) called VGG16 for automated classification of Sentinel-2 satellite images. The novel workflow, termed CNN-Supervised Classification (CSC), was originally developed for fluvial settings but is adapted here to produce multi-class outputs for test imagery of glacial environments containing marine-terminating outlet glaciers in eastern Greenland. Results show mean F1 scores of up to 95% for in-sample test imagery and 93% for out-of-sample test imagery, with significant improvements over traditional pixel-based methods such as band ratio techniques. This demonstrates the robustness of the deep learning workflow for automated classification despite the complex characteristics of the imagery. Future research could focus on integrating deep learning classification workflows with platforms such as Google Earth Engine (GEE) to classify imagery more efficiently and produce datasets for a range of glacial applications without the need for substantial prior experience in coding or deep learning.
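
    As a rough illustration of the transfer-learning step such a workflow relies on, the sketch below builds a VGG16-based patch classifier in Keras. The class list, patch size, and classification head are assumptions for the example, not the published CSC implementation.

```python
# Minimal sketch of the patch-classification phase of a CSC-style
# workflow: VGG16 convolutional features with a small dense head.
# Class names and patch size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5   # e.g. glacier ice, melange, open water, rock, snow (assumed)
PATCH = 50        # patch size in pixels (assumed)

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(PATCH, PATCH, 3))
base.trainable = False  # transfer learning: keep ImageNet features frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_patches, train_labels, epochs=10)  # hypothetical arrays
```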

    Deep Learning For Feature Tracking In Optically Complex Waters

    Environmental monitoring and early warning of water quality from space is now feasible at unprecedented spatial and temporal resolution following the latest generation of satellite sensors. The transformation of this data through classification into labelled, tracked event information is of critical importance to offer a searchable dataset. Advances in image recognition techniques through Deep Learning research have been successfully applied to satellite remote sensing data, and Deep Learning approaches that leverage optical satellite data are now being developed for remotely sensed multi- and hyperspectral reflectance. The combination of spectral with spatial feature-extracting Deep Learning networks promises a significant improvement in the accuracy of classifiers using remotely sensed data. This project aims to re-tool and optimise spectral-spatial Convolutional Neural Networks originally developed for land classification as a novel approach to identifying and labelling dynamic features in waterbodies, such as algal blooms and sediment plumes, as captured by high-resolution satellite sensors.
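
    The poster gives no implementation detail, but a spectral-spatial CNN of the kind it describes might look like the sketch below, where the full multispectral band stack enters as input channels and 2-D convolutions supply the spatial context. Band count, class set, and layer sizes are assumptions.

```python
# Illustrative spectral-spatial CNN: spectral information arrives as
# input channels, spatial context comes from the 2-D convolutions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_BANDS = 13   # e.g. a Sentinel-2 MSI band stack (assumed)
N_CLASSES = 3  # e.g. clear water, algal bloom, sediment plume (assumed)

model = models.Sequential([
    layers.Input(shape=(32, 32, N_BANDS)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```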

    Feature Learning for Multispectral Satellite Imagery Classification Using Neural Architecture Search

    Automated classification of remote sensing data is an integral tool for earth scientists, and deep learning has proven very successful at solving such problems. However, building deep learning models to process the data requires expert knowledge of machine learning. We introduce DELTA, a software toolkit to bridge this technical gap and make deep learning easily accessible to earth scientists. Visual feature engineering is a critical part of the machine learning lifecycle, and hence is a key area that will be automated by DELTA. Hand-engineered features can perform well, but they require a cross-functional team with expertise in both machine learning and the specific problem domain, which is costly in both researcher time and labor. The problem is more acute with multispectral satellite imagery, which requires considerable computational resources to process. To automate the feature learning process, a neural architecture search samples the space of asymmetric and symmetric autoencoders using evolutionary algorithms. Since denoising autoencoders have been shown to perform well for feature learning, the autoencoders are trained on various levels of noise, and the features generated by the best-performing autoencoders are evaluated according to their performance on image classification tasks. The resulting features are demonstrated to be effective for Landsat-8 flood mapping, as well as on the benchmark datasets CIFAR10 and SVHN.
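
    For illustration, one fixed denoising autoencoder of the kind such a search would sample is sketched below; the architecture, noise level, and band count are assumptions, not DELTA's searched result.

```python
# One candidate denoising autoencoder: the input is corrupted with
# Gaussian noise during training and the network reconstructs the
# clean patch; the bottleneck then serves as learned features.
import tensorflow as tf
from tensorflow.keras import layers, models

BANDS = 8           # e.g. a Landsat-8 band stack (assumed)
NOISE_STDDEV = 0.1  # one of the "various levels of noise" (assumed)

inputs = layers.Input(shape=(32, 32, BANDS))
noisy = layers.GaussianNoise(NOISE_STDDEV)(inputs)  # active only in training
x = layers.Conv2D(32, 3, activation="relu", padding="same")(noisy)
x = layers.MaxPooling2D()(x)
code = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D()(code)
decoded = layers.Conv2D(BANDS, 3, activation="sigmoid",  # assumes [0, 1] reflectances
                        padding="same")(x)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(patches, patches, epochs=20)  # targets are the clean patches
encoder = models.Model(inputs, code)  # reusable feature extractor
```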

    Performance comparison of deep learning models applied for satellite image classification

    Satellite image classification is important for applications that involve the distribution of human activities. Such information helps governments determine the best places to expand cities while avoiding problems related to natural disasters or legal constraints. Currently, few agencies are in charge of image classification, and the area to cover is enormous; automating this process is therefore necessary, as performing the task manually would take far too long. Moreover, the detection and classification algorithms used before machine learning (ML) have not shown good results when classifying this specific sort of image, whereas recent approaches to image classification using Convolutional Neural Networks (CNNs) have proven quite accurate. In this research, we analyse the performance of four different CNN architectures used for satellite image classification. We use a dataset provided in 2017 by IARPA, named IARPA fMoW, which contains more than two thousand images belonging to 62 classes, already separated into training and validation sets. The solution was implemented in Python using the Keras and TensorFlow libraries. The research was divided into two parts: hyperparameter optimization and evaluation of the architectures' results. For the first part, we used only seven classes from a sample of the dataset (the sample is three hundred times smaller than the complete dataset). The architectures are trained on these seven classes of this small dataset to determine the best hyperparameters. After selecting the hyperparameters, the architectures are trained with the complete sample. The evaluation is based on visual examination with the help of TensorBoard and scikit-learn metrics. All the architectures showed accuracies near 90% over the training dataset sample. The architecture with the best result was ResNet-152, with an accuracy of 99% over the training dataset sample. The accuracy over the validation dataset will become important after training the architectures with the complete dataset, which will be performed in future work.
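
    As a sketch of the kind of Keras/TensorFlow setup described, the snippet below fine-tunes ResNet-152 on the reduced seven-class subset and logs to TensorBoard; the learning rate, input size, and log directory are assumptions, not the study's exact configuration.

```python
# Hypothetical hyperparameter trial: ResNet-152 on the seven-class
# tuning subset, with TensorBoard logging for visual evaluation.
import tensorflow as tf

NUM_CLASSES = 7  # the reduced class subset used for tuning

base = tf.keras.applications.ResNet152(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # candidate value
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

tb = tf.keras.callbacks.TensorBoard(log_dir="logs/resnet152_lr1e-4")
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[tb])
```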

    Classification and Segmentation of Blooms and Plumes from High Resolution Satellite Imagery Using Deep Learning

    Recent launches of high-resolution satellite sensors mean Earth Observation data are available at an unprecedented spatial and temporal scale. As data collection intensifies, our ability to inspect and investigate individual scenes for harmful algal or cyanobacterial blooms becomes limited, particularly for global monitoring. Algal blooms and river plumes are visible to trained experts in Red-Green-Blue composites of high-resolution satellite imagery. Therefore, computer-assisted detection and classification of these events would provide invaluable information to experts and the general public on everyday water use. Advances in image recognition through Deep Learning techniques offer solutions that can accurately detect, classify and segment objects across thousands of images in a fraction of the time a human operator would require, while inspecting these images in much greater detail. Deep Learning techniques that jointly leverage spectral-spatial data are well characterised as a solution to land classification problems and have been shown to be accurate when using multi- or hyperspectral data such as those collected by the Sentinel-2 MultiSpectral Instrument. This work builds on state-of-the-art natural image segmentation algorithms to evaluate the capability of Deep Learning to detect and outline the presence of algal blooms or river plumes in Sentinel-2 MSI data. The challenges in applying these approaches lie in the availability of suitable training and benchmark data, the use of atmospheric correction, and the design of neural network architectures that utilise spectral data. Current Deep Learning network architectures are evaluated to establish the best approaches to leveraging spectral and spatial data in the context of water classification. Several spectral data configurations are used to evaluate competency and suitability for generalisation to other optical satellite sensor configurations. The impact of the atmospheric correction technique applied to the data is explored to establish the most reliable data for use during training and the requirements for pre-processing data pipelines. Finally, a training dataset and an associated Deep Learning method are presented for use in future work relating to water-contents classification.
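
    To make the spectral-configuration comparison concrete, the sketch below parameterises an encoder-decoder segmenter by its input band stack, so the same model can be built for an RGB-only or a full-MSI configuration; the depth and class set are assumptions, not one of the architectures actually benchmarked.

```python
# Band-count-parameterised encoder-decoder for pixel-wise water
# classification; the class set (background, bloom, plume) is assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_segmenter(n_bands, n_classes=3):
    inputs = layers.Input(shape=(None, None, n_bands))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D()(x)
    return models.Model(inputs,
                        layers.Conv2D(n_classes, 1, activation="softmax")(x))

rgb_model = build_segmenter(3)    # RGB-composite configuration
msi_model = build_segmenter(13)   # full Sentinel-2 MSI band stack
```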

    Quantifying Seagrass Distribution in Coastal Water With Deep Learning Models

    Coastal ecosystems are critically affected by seagrass, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification based on 8-band satellite imagery. Specifically, we implemented a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines whether seagrass is present in the image through classification; if it is, the model then quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach to transfer knowledge from the deep models trained at one location to perform seagrass quantification at a different location. We evaluate the proposed methods on three WorldView-2 satellite images taken over coastal areas in Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning for seagrass quantification significantly improved the results compared to directly applying the trained models to new locations.
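
    The joint classification-plus-regression idea can be sketched with a plain two-headed CNN, as below; the paper's DCN uses capsule layers, so this stand-in, along with the patch size and loss weighting, is purely illustrative.

```python
# Multi-task stand-in for the DCN: one head classifies seagrass
# presence, the other regresses seagrass coverage; both are trained
# jointly. Patch size, bands, and loss weights are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(5, 5, 8))  # small 8-band WorldView-2 patches (assumed)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)

presence = layers.Dense(1, activation="sigmoid", name="presence")(x)
coverage = layers.Dense(1, name="coverage")(x)

model = models.Model(inputs, [presence, coverage])
model.compile(optimizer="adam",
              loss={"presence": "binary_crossentropy", "coverage": "mse"},
              loss_weights={"presence": 1.0, "coverage": 1.0})
```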

    WorldView-2 Satellite Image Classification using U-Net Deep Learning Model

    Land cover maps are important documents for local governments to perform urban planning and management. A field survey using measuring instruments can produce an accurate land cover map; however, this method is time-consuming, expensive, and labor-intensive. A number of researchers have proposed using remote sensing, which generates land cover maps from optical satellite images with various statistical classification procedures. Recently, artificial intelligence (AI) technology, such as deep learning, has been used in multiple fields, including satellite image classification, with satisfactory results. In this study, a WorldView-2 image of Terangun in Aceh Province, acquired on August 2, 2016, was classified using a commonly used deep-learning-based model, namely U-Net. Eight classes were used in the experiment: building, road, open land (such as green open space, bare land, grass, or low vegetation), river, farm, field, aquaculture pond, and garden. For comparison, three conventional classification methods (maximum likelihood, random forest, and support vector machine) were also applied. A land cover map provided by the government was used as a reference to evaluate the accuracy of the land cover maps generated by the classification methods. The results for 100 randomly selected pixels revealed that U-Net obtained an overall accuracy of 72% and a kappa of 0.585, whereas the overall and kappa accuracies for the maximum likelihood, random forest, and support vector machine methods were 49% and 0.148, 59% and 0.392, and 67% and 0.511, respectively. Therefore, U-Net outperformed the three conventional classification methods in classifying the image.
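
    A compact U-Net sketch for the eight-band, eight-class problem follows; the depth, filter counts, and tile size are assumptions rather than the configuration used in the study.

```python
# Small U-Net: one downsampling step, one skip connection, pixel-wise
# softmax over the eight land cover classes.
import tensorflow as tf
from tensorflow.keras import layers, models

def small_unet(n_bands=8, n_classes=8):
    inputs = layers.Input(shape=(256, 256, n_bands))
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    u1 = layers.UpSampling2D()(c2)
    m1 = layers.Concatenate()([u1, c1])  # skip connection
    c3 = layers.Conv2D(32, 3, activation="relu", padding="same")(m1)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c3)
    return models.Model(inputs, outputs)

model = small_unet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```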