6 research outputs found

    Automatic Flood Detection in Sentinel-2 Images Using Deep Convolutional Neural Networks

    The early and accurate detection of floods from satellite imagery can aid rescue planning and the assessment of geophysical damage. Automatic identification of water in satellite images has historically relied on hand-crafted functions, but these often do not provide the accuracy and robustness needed for early and accurate flood detection. To overcome these limitations, we investigate a tiered methodology that combines water-index-like features with a deep convolutional neural network for flood identification, evaluated on the MediaEval 2019 flood dataset. Our method builds on existing deep neural network architectures, in particular the VGG16 network. Specifically, we explored different water-indexing techniques and proposed a water index function based on the Green/SWIR and Blue/NIR bands, used in combination with VGG16. Our experiments show that this approach outperformed all other water-index techniques when combined with the VGG16 network for detecting floods in images.
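The abstract names the band pairs (Green/SWIR and Blue/NIR) but not the exact index formula. A minimal sketch of how such band-ratio water indices are typically computed, assuming standard normalized-difference forms (MNDWI-like for Green/SWIR) and a simple average as the combination rule — both assumptions, not the paper's published method:

```python
import numpy as np

def normalized_diff(band_a, band_b, eps=1e-9):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return (a - b) / (a + b + eps)

def water_index(green, swir, blue, nir):
    """Illustrative combined index built from Green/SWIR (MNDWI-like)
    and Blue/NIR normalized differences. The averaging here is a
    placeholder; the paper's exact formulation is not in the abstract."""
    mndwi = normalized_diff(green, swir)  # water reflects green, absorbs SWIR
    bni = normalized_diff(blue, nir)      # water reflects blue, absorbs NIR
    return 0.5 * (mndwi + bni)

# Toy reflectance values: pixel 0 is water, pixel 1 is vegetation
green = np.array([0.30, 0.08]); swir = np.array([0.02, 0.25])
blue  = np.array([0.25, 0.05]); nir  = np.array([0.04, 0.40])
idx = water_index(green, swir, blue, nir)
mask = idx > 0.0  # simple threshold; water pixels score high, land scores low
```

In a tiered pipeline of this kind, the index map would then be stacked with (or fed alongside) the image bands as input to the CNN stage.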

    Flood Detection Using Multi-Modal and Multi-Temporal Images: A Comparative Study

    Natural disasters such as flooding can severely affect human life and property. To provide effective rescue through emergency response teams, an accurate assessment of the flooded area is needed after the event. Traditionally, obtaining an accurate estimate of a flooded area requires substantial human resources. In this paper, we compare several traditional machine-learning approaches to flood detection, including the multi-layer perceptron (MLP), support vector machine (SVM), and deep convolutional neural network (DCNN), with recent domain-adaptation-based approaches on a multi-modal and multi-temporal image dataset. Specifically, we used SPOT-5 and RADAR images from the flood event that occurred in November 2000 in Gloucester, UK. Experimental results show that the domain-adaptation-based approach, semi-supervised domain adaptation (SSDA) with 20 labeled data samples, achieved slightly better values for the area under the precision-recall (PR) curve (AUC) of 0.9173 and an F1 score of 0.8846 than the traditional machine-learning approaches. Moreover, SSDA required much less labor for ground-truth labeling and is therefore recommended in practice.
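The comparison above is reported in terms of PR-AUC and F1. As a reference for how the F1 figure is derived from a binary flood mask, here is a minimal sketch of precision, recall, and F1 from pixel-wise true/false positives — an illustrative helper, not the paper's evaluation code:

```python
import numpy as np

def precision_recall_f1(pred, truth):
    """Precision, recall, and F1 for binary flood masks (1 = flooded)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = int(np.sum(pred & truth))    # correctly flagged flood pixels
    fp = int(np.sum(pred & ~truth))   # dry pixels flagged as flood
    fn = int(np.sum(~pred & truth))   # missed flood pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 6 pixels
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
p, r, f1 = precision_recall_f1(pred, truth)
```

PR-AUC extends this by sweeping the decision threshold over the model's scores and integrating precision against recall.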

    Disaster Analysis using Satellite Image Data with Knowledge Transfer and Semi-Supervised Learning Techniques

    With the increasing frequency of disasters and crisis situations such as floods, earthquakes, and hurricanes, the need to handle such situations efficiently through disaster response and humanitarian relief has grown. Disasters are largely unpredictable with respect to their impact on people and property. Moreover, the dynamic and varied nature of disasters makes it difficult to predict their impact accurately enough to prepare responses in advance [104]. It is also notable that the economic loss due to natural disasters has increased in recent years; this, along with the pure humanitarian need, is one of the reasons to research innovative approaches to mitigating and managing disaster operations efficiently [1].

    Convolutional Neural Networks for Water Segmentation Using Sentinel-2 Red, Green, Blue (RGB) Composites and Derived Spectral Indices

    Near-real-time water segmentation with medium-resolution satellite imagery plays a critical role in water management. Automated water segmentation of satellite imagery has traditionally been achieved using spectral indices. Spectral water segmentation is limited by environmental factors and requires human expertise to be applied effectively. In recent years, the use of convolutional neural networks (CNNs) for water segmentation has been successful on high-resolution satellite imagery, but to a lesser extent on medium-resolution imagery. Existing studies have been limited to geographically localized datasets, and reported metrics have been benchmarked against a limited range of spectral indices. This study seeks to determine whether a single CNN based on Red, Green, Blue (RGB) image classification can effectively segment water on a global scale and outperform traditional spectral methods. Additionally, this study evaluates the extent to which smaller datasets of very complex patterns (e.g., harbour megacities) can be used to improve globally applicable CNNs within a specific region. Multispectral imagery from the European Space Agency's Sentinel-2 satellite (10 m spatial resolution) was sourced. Test sites were selected in Florida, New York, and Shanghai to represent a globally diverse range of waterbody typologies. Region-specific spectral water segmentation algorithms were developed at each test site to serve as benchmarks of spectral-index performance. DeepLabV3-ResNet101 was trained on 33,311 semantically labelled true-colour samples. The resulting model was retrained on three smaller subsets of the data, specific to New York, Shanghai, and Florida. CNN predictions reached a maximum mean intersection over union of 0.986 and an F1 score of 0.983. At the Shanghai test site, the CNN's predictions outperformed the spectral benchmark, primarily due to the CNN's ability to process contextual features at multiple scales. In all test cases, retraining the networks on localized subsets of the dataset improved segmentation predictions in the corresponding region. The CNNs presented are suitable for cloud-based deployment and could contribute to the wider use of satellite imagery for water management.
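The headline metric above is mean intersection over union (mIoU). A minimal sketch of how mIoU is typically computed for a two-class (water/non-water) segmentation map — the study's actual evaluation code is not given, so this is an assumed standard formulation:

```python
import numpy as np

def mean_iou(pred, truth, num_classes=2):
    """Mean intersection-over-union across classes for a label map.
    Classes absent from both prediction and ground truth are skipped
    so they do not distort the average."""
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (truth == c)
        union = int(np.sum(p | t))
        if union == 0:
            continue  # class not present; skip
        ious.append(int(np.sum(p & t)) / union)
    return float(np.mean(ious))

# Toy 2x3 label maps: 1 = water, 0 = land
pred  = np.array([[0, 0, 1], [1, 1, 1]])
truth = np.array([[0, 0, 1], [0, 1, 1]])
miou = mean_iou(pred, truth)
```

Per-class IoU penalizes both false positives and false negatives in a single ratio, which is why it is preferred over plain pixel accuracy for imbalanced water/land scenes.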

    Improving Transfer Learning for Use in Multi-Spectral Data

    Recently, NASA as well as the European Space Agency have made observational satellite images public. The main reason for opening them to the public is to foster research among university students and corporations alike. Sentinel is a programme of the European Space Agency that plans to launch a series of seven satellites into low Earth orbit for observing land and sea patterns. Recently, huge datasets have been made public by the Sentinel programme. Many advancements have been made in the field of computer vision in the last decade. Krizhevsky, Sutskever & Hinton (2012) revolutionized the field of image analysis by training deep neural nets, introducing the idea of using convolutions to obtain high accuracy on ImageNet ILSVRC, a coloured-image dataset of more than one million images. The convolutional neural network (CNN) architecture has undergone much improvement since then. One CNN model, known as ResNet or the Residual Network architecture (He, Zhang, Ren & Sun, 2015), has seen mass acceptance in particular owing to its processing speed and high accuracy. ResNet is widely used for applying the features it learned on ImageNet ILSVRC tasks to other image classification or object detection tasks. This concept, in the domain of deep learning, is known as transfer learning: a classifier is trained on a bigger, more complex task, and the learning is then transferred to a smaller, more specific task. Transfer learning can often lead to good performance on new, smaller tasks, and this approach has given state-of-the-art results in several problem domains of image classification and even object detection (Dai, Li, He, & Sun, 2016). The real problem is that not all problems in the computer vision field involve regular RGB images, i.e., images consisting of only the Red, Green, and Blue bands.
    For example, in medical image analysis most images belong to the greyscale colour space, while most remote sensing images collected by satellites span multispectral bands of light. Transferring features learned from ImageNet ILSVRC tasks to these fields might give higher accuracy than training from scratch, but it is a fundamentally mismatched approach. Thus, there is a need to create network models that can learn from single-channel or multispectral images and can transfer features seamlessly to similar domains with smaller datasets. This thesis presents a study in multispectral image analysis using multiple ways of transferring features. In this study, transfer learning of features is performed using one ResNet50 model trained on RGB images and another ResNet50 model trained on greyscale images alone. The dataset used to pretrain these models is a combination of images from ImageNet (Deng, Dong, Socher, Li, Li, & Fei-Fei, 2009) and EuroSAT (Helber, Bischke, Dengel, & Borth, 2017). The idea behind choosing ResNet50 is that it has performed very well in image processing and transfer learning and has outperformed the other traditional techniques, while still not being computationally prohibitive to train in the context of this work. An attempt is made to classify different land-cover classes in multispectral images taken by the Sentinel-2A satellite. A key challenge of the dataset used here is its small number of samples, which means a CNN classifier trained from scratch on so few samples will be highly inaccurate and overfitted. This thesis focuses on improving the accuracy of this classifier using transfer learning, and performance is measured after fine-tuning the baseline ResNet50 models described above.
    The experimental results show that fine-tuning the greyscale (single-channel) ResNet50 model improves accuracy somewhat more than fine-tuning the RGB-trained ResNet50 model, though neither achieved outstanding results owing to limited computational power and the small dataset available for training a large computer vision network like ResNet50. This work is a contribution towards improving classification in the domain of multispectral images, typically collected by satellites. There is currently no baseline model that can be used to transfer features to single-channel or multispectral domains, as exists for the RGB image field. The contribution of this work is to build such a classifier for the multispectral domain and to extend the state of the art in such computer vision domains.
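A central practical step in transferring an RGB-pretrained network to multispectral input is adapting the first convolution layer, which expects 3 input channels, to accept more bands. One common heuristic (an assumption here, not the thesis's exact procedure) is to replicate the channel-mean kernel across the new channels and rescale so the expected activation magnitude is preserved, sketched below in plain numpy:

```python
import numpy as np

def inflate_first_conv(rgb_weights, n_channels):
    """Adapt a pretrained first-layer conv kernel from 3 (RGB) input
    channels to n_channels (e.g. 13 Sentinel-2 bands).
    rgb_weights: array of shape (out_ch, 3, kH, kW).
    Returns an array of shape (out_ch, n_channels, kH, kW)."""
    # Average over the RGB input channels, keep a channel axis of size 1
    mean_kernel = rgb_weights.mean(axis=1, keepdims=True)
    # Replicate the mean kernel across all new input channels
    inflated = np.repeat(mean_kernel, n_channels, axis=1)
    # Rescale so a constant input produces the same response as before
    return inflated * (3.0 / n_channels)

rng = np.random.default_rng(0)
w_rgb = rng.standard_normal((64, 3, 7, 7))  # ResNet50-style 7x7 first conv
w_ms = inflate_first_conv(w_rgb, 13)
```

The rescaling keeps the per-output-channel kernel sum unchanged, so early-layer statistics stay close to the pretrained regime; the remaining layers transfer untouched and only the classifier head needs replacing for the new land-cover classes.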