
    Multi-modal Deep Learning Approach for Flood Detection

    In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts typically contain textual metadata and/or visual information, and we exploit both to detect floods. The model combines a Convolutional Neural Network, which extracts visual features, with a bidirectional Long Short-Term Memory network, which extracts semantic features from the textual metadata. We validate the method on images extracted from Flickr that contain both visual information and metadata, and compare the results obtained using both modalities, visual information only, and metadata only. This work was done in the context of the MediaEval Multimedia Satellite Task.
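    The fusion idea the abstract describes can be made concrete with a minimal PyTorch sketch: a small CNN encodes the image, a bidirectional LSTM encodes the tokenized metadata, and the concatenated features feed a binary flood/no-flood classifier. All layer sizes and names (FloodFusionNet, vocab_size, etc.) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FloodFusionNet(nn.Module):
    """Hypothetical sketch of a CNN + biLSTM fusion model, not the paper's exact network."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64):
        super().__init__()
        # Visual branch: a toy CNN standing in for the paper's feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Textual branch: embedding + bidirectional LSTM over metadata tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Fusion head: concatenated visual (32) and textual (2 * hidden) features.
        self.head = nn.Linear(32 + 2 * hidden, 2)

    def forward(self, image, tokens):
        v = self.cnn(image)                     # (B, 32)
        _, (h, _) = self.bilstm(self.embed(tokens))
        t = torch.cat([h[-2], h[-1]], dim=1)    # final forward/backward states
        return self.head(torch.cat([v, t], dim=1))

model = FloodFusionNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randint(0, 10000, (4, 20)))
```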

    Flood Detection Using Multi-Modal and Multi-Temporal Images: A Comparative Study

    Natural disasters such as flooding can severely affect human life and property. To provide rescue through an emergency response team, an accurate assessment of the flooded area is needed after the event. Traditionally, obtaining an accurate estimate of a flooded area requires substantial human resources. In this paper, we compare several traditional machine-learning approaches for flood detection, including the multi-layer perceptron (MLP), support vector machine (SVM), and deep convolutional neural network (DCNN), with recent domain adaptation-based approaches on a multi-modal and multi-temporal image dataset. Specifically, we use SPOT-5 and RADAR images from the flood event that occurred in November 2000 in Gloucester, UK. Experimental results show that the domain adaptation-based approach, semi-supervised domain adaptation (SSDA) with 20 labeled data samples, achieved slightly better area under the precision-recall (PR) curve (AUC of 0.9173) and F1 score (0.8846) than the traditional machine-learning approaches. While the performance gain was slight, SSDA required much less labor for ground-truth labeling and is therefore recommended in practice.
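    The evaluation protocol implied here (per-pixel classifiers scored by PR-AUC and F1) can be sketched with scikit-learn. The feature matrix X (standing in for stacked SPOT-5/RADAR pixel values) and the labels y below are synthetic placeholders, and the SSDA method itself is not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import average_precision_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                 # toy multi-temporal band values per pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy "flooded" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("MLP", MLPClassifier(max_iter=500)),
                  ("SVM", SVC(probability=True))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    # average_precision_score approximates the area under the PR curve.
    print(name,
          "PR-AUC=%.4f" % average_precision_score(y_te, proba),
          "F1=%.4f" % f1_score(y_te, clf.predict(X_te)))
```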

    Semi-supervised Convolutional Neural Networks for Flood Mapping using Multi-modal Remote Sensing Data

    When floods hit populated areas, quick detection of flooded areas is crucial for the initial response by local governments, residents, and volunteers. Space-borne polarimetric synthetic aperture radar (PolSAR) is an authoritative data source for flood mapping, since it can be acquired immediately after a disaster, even at night or in cloudy weather. Conventionally, much domain-specific heuristic knowledge has been applied to PolSAR flood mapping, but performance still suffers from confusing pixels caused by irregular reflections of radar waves. Optical images are another data source for detecting flooded areas due to their high spectral correlation with open water surfaces; however, they are often affected by time of day and severe weather conditions (e.g., cloud cover). This paper presents a convolutional neural network (CNN) based multi-modal approach that exploits the advantages of both PolSAR and optical images for flood mapping. First, reference training data are retrieved from optical images by manual annotation. Since clouds may appear in the optical image, only areas with a clear view of flooded or non-flooded ground are annotated. Then, a semi-supervised polarimetric-features-aided CNN performs flood mapping using PolSAR data. The proposed model can not only handle learning with incomplete ground truth but also leverage a large portion of unlabelled pixels. Moreover, it takes advantage of expert knowledge of scattering interpretation by incorporating polarimetric features as input. Experimental results are given for the flood event that occurred in Sendai, Japan, on 12 March 2011. The experiments show that our framework can map flooded areas with high accuracy (F1 = 96.12) and outperforms conventional flood mapping methods.
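    One concrete way to "handle learning with incomplete ground truth", as the abstract describes, is to mask out the pixels that were hidden by cloud in the optical reference when computing the segmentation loss. The sketch below shows that mechanism only; the paper's polarimetric-feature CNN and its semi-supervised term are not reproduced, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

UNLABELED = -1  # pixels with no clear flooded/non-flooded annotation

# Toy fully-convolutional segmenter over stacked polarimetric feature channels.
segmenter = nn.Sequential(
    nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),            # 2 classes: flooded / non-flooded
)

# ignore_index drops UNLABELED pixels from the supervised loss entirely,
# so cloud-covered regions contribute no (possibly wrong) gradient.
criterion = nn.CrossEntropyLoss(ignore_index=UNLABELED)

pol_features = torch.randn(2, 5, 64, 64)   # stand-in for PolSAR-derived inputs
labels = torch.randint(0, 2, (2, 64, 64))
labels[:, :, :20] = UNLABELED              # e.g. a cloud-covered strip
loss = criterion(segmenter(pol_features), labels)
loss.backward()
```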

    A deep multi-modal neural network for informative Twitter content classification during emergencies

    People start posting tweets containing text, images, and videos as soon as a disaster hits an area. Analysis of these disaster-related tweets can help humanitarian response organizations make better decisions and prioritize their tasks. Finding the informative content that can aid decision-making within the massive volume of Twitter content is difficult and requires a system to filter out the informative content. In this paper, we present a multi-modal approach to identify disaster-related informative content from Twitter streams using text and images together. Our approach is based on long short-term memory (LSTM) and VGG-16 networks and shows significant improvement in performance, as evident from validation results on seven different disaster-related datasets. The F1-score ranged from 0.74 to 0.93 when tweet text and images were used together, whereas with tweet text alone it ranged from 0.61 to 0.92. These results show that the proposed multi-modal system performs significantly better at identifying disaster-related informative social media content.
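    A minimal sketch of the LSTM + VGG-16 fusion the abstract names: VGG-16 convolutional features for the tweet image, an LSTM over the tweet tokens, and a concatenated informative/not-informative head. The layer sizes, the class name TweetFusion, and leaving the backbone unweighted here are our assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class TweetFusion(nn.Module):
    """Hypothetical LSTM + VGG-16 fusion sketch for tweet classification."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=128):
        super().__init__()
        backbone = vgg16(weights=None)          # pretrained weights in practice
        self.vgg_features = backbone.features   # conv stack -> (B, 512, 7, 7)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(512 + hidden, 2)  # informative vs. not

    def forward(self, image, tokens):
        img = self.pool(self.vgg_features(image)).flatten(1)  # (B, 512)
        _, (h, _) = self.lstm(self.embed(tokens))
        return self.head(torch.cat([img, h[-1]], dim=1))

model = TweetFusion()
out = model(torch.randn(2, 3, 224, 224), torch.randint(0, 20000, (2, 30)))
```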

    Cross Modal Distillation for Flood Extent Mapping

    The increasing intensity and frequency of floods are among the many consequences of our changing climate. In this work, we explore ML techniques that improve the flood detection module of an operational early flood warning system. Our method exploits an unlabelled dataset of paired multi-spectral and Synthetic Aperture Radar (SAR) imagery to reduce the labeling requirements of a purely supervised learning method. Prior works have used unlabelled data by creating weak labels from it; however, our experiments show that such a model still ends up learning the labeling mistakes in those weak labels. Motivated by knowledge distillation and semi-supervised learning, we use a teacher to train a student with the help of a small hand-labelled dataset and a large unlabelled dataset. Unlike the conventional self-distillation setup, we propose a cross-modal distillation framework that transfers supervision from a teacher trained on the richer modality (multi-spectral images) to a student model trained on SAR imagery. The trained models are then tested on the Sen1Floods11 dataset. Our model outperforms the Sen1Floods11 baseline model trained on weakly labeled SAR imagery by an absolute margin of 6.53% Intersection-over-Union (IoU) on the test split.
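    A hedged sketch of the cross-modal distillation objective described above: a teacher segmenter that sees multi-spectral imagery produces soft flood probabilities, and a SAR-only student is trained to match them on the unlabelled pairs (the ordinary supervised term on the small hand-labelled set is omitted). The toy architectures, channel counts, and temperature are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_segmenter(in_ch):
    """Toy per-pixel flood/non-flood segmenter; stands in for the real models."""
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 2, 1))

teacher = make_segmenter(in_ch=13)   # e.g. multi-spectral bands (assumed count)
student = make_segmenter(in_ch=2)    # e.g. SAR VV/VH channels (assumed count)

ms, sar = torch.randn(2, 13, 64, 64), torch.randn(2, 2, 64, 64)
T = 2.0  # softening temperature (assumed)

with torch.no_grad():                # teacher is fixed during distillation
    soft_targets = F.softmax(teacher(ms) / T, dim=1)

# Student matches the teacher's soft per-pixel distribution via KL divergence.
log_probs = F.log_softmax(student(sar) / T, dim=1)
distill_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
distill_loss.backward()
```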

    Disaster Analysis using Satellite Image Data with Knowledge Transfer and Semi-Supervised Learning Techniques

    With the increasing frequency of disasters and crisis situations such as floods, earthquakes, and hurricanes, the need to handle these situations efficiently through disaster response and humanitarian relief has grown. Disasters are largely unpredictable in their impact on people and property, and their dynamic and varied nature makes it difficult to predict their impact accurately enough to prepare responses in advance [104]. It is also notable that the economic loss due to natural disasters has increased in recent years; this, along with the pure humanitarian need, is one reason to research innovative approaches to mitigating and managing disaster operations efficiently [1].