    AI-Based Flood Event Understanding and Quantifying Using Online Media and Satellite Data

    In this paper we study the problem of flood detection and quantification using online media and satellite data. We present three approaches: two based on neural networks and a third based on the combination of different bands of satellite images. This work aims to detect floods and to provide relevant information about the flood situation, such as the water level and the extent of the flooded regions, as specified in the three subtasks, for each of which we propose a specific solution.
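    The band-combination approach described above is commonly realized with a water index such as the Normalized Difference Water Index (NDWI), computed from the green and near-infrared bands. The sketch below illustrates that idea; the abstract does not specify which bands or threshold the authors used, so the index choice and threshold here are assumptions.

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Water Index from green and near-infrared bands."""
    return (green - nir) / (green + nir + eps)

def flood_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Pixels with NDWI above the (assumed) threshold are treated as water."""
    return ndwi(green, nir) > threshold

# Toy 2x2 scene: left column water-like (high green, low NIR),
# right column land-like (low green, high NIR).
green = np.array([[0.8, 0.2], [0.7, 0.1]])
nir   = np.array([[0.1, 0.6], [0.2, 0.5]])
mask = flood_mask(green, nir)
```

    The extent of the flooded region then follows directly from the mask, e.g. as the fraction of water pixels times the per-pixel ground area.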

    Multimodal Prediction based on Graph Representations

    This paper proposes a learning model, based on rank-fusion graphs, for general applicability in multimodal prediction tasks, such as multimodal regression and image classification. Rank-fusion graphs encode information from multiple descriptors and retrieval models, and can thus capture underlying relationships between modalities, samples, and the collection itself. The solution is based on encoding multiple ranks for a query (or test sample), defined according to different criteria, into a graph. The generated graph is then projected into an induced vector space, creating fusion vectors that target broader generality and efficiency. A fusion vector estimator is then built to infer whether a multimodal input object belongs to a class or not. Our method produces a fusion model better than early-fusion and late-fusion alternatives. Experiments performed on multiple multimodal and visual datasets, with several descriptors and retrieval models, demonstrate that the learning model is highly effective for different prediction scenarios involving visual, textual, and multimodal features, yielding better effectiveness than state-of-the-art methods.
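    The core step of encoding several per-model rankings into one fused representation can be sketched as follows. The abstract does not give the exact graph-projection formula, so this sketch substitutes a simple reciprocal-rank weighting as a stand-in for the projection step; the function and constant `k` are illustrative assumptions.

```python
def fusion_vector(rank_lists, collection, k=60):
    """Combine several rankings into one vector of fused scores per item.

    rank_lists: one ranked item-id list per descriptor/retrieval model.
    Reciprocal-rank weighting stands in for the paper's graph projection.
    """
    scores = {item: 0.0 for item in collection}
    for ranking in rank_lists:
        for pos, item in enumerate(ranking):
            scores[item] += 1.0 / (k + pos + 1)
    return [scores[item] for item in collection]

collection  = ["a", "b", "c"]
ranks_text  = ["a", "b", "c"]   # ranking from a textual descriptor
ranks_image = ["b", "a", "c"]   # ranking from a visual descriptor
vec = fusion_vector([ranks_text, ranks_image], collection)
```

    Items ranked highly by several descriptors accumulate larger fused scores, which is what lets a downstream estimator exploit agreement between modalities.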

    Automatic detection of passable roads after floods in remote sensed and social media data

    This paper addresses the problem of flood classification and flood-aftermath detection based on both social media and satellite imagery. Automatic detection of disasters such as floods is still a very challenging task. The focus lies on identifying passable routes or roads during floods. Two novel solutions are presented, which were developed for two corresponding tasks at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification of images providing evidence for road passability and (ii) differentiation and detection of passable and non-passable roads in images from two complementary sources of information. For the first challenge, we mainly rely on object- and scene-level features extracted through multiple deep models pre-trained on the ImageNet and Places datasets. The object- and scene-level features are then combined using early, late and double fusion techniques. To identify whether or not it is possible for a vehicle to pass a road in satellite images, we rely on Convolutional Neural Networks and a transfer learning-based classification approach. The evaluation of the proposed methods is carried out on the large-scale datasets provided for the benchmark competition. The results demonstrate significant improvement in performance over recent state-of-the-art approaches.
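    The difference between the early- and late-fusion schemes mentioned above can be shown concretely: early fusion concatenates the feature vectors from the two backbones before classification, while late fusion combines their per-class predictions. The feature dimensions and the random stand-in data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image features from two pre-trained backbones
# (e.g. object features from an ImageNet model, scene features from a Places model).
object_feats = rng.random((4, 8))    # 4 images, 8-dim object descriptor
scene_feats  = rng.random((4, 16))   # 4 images, 16-dim scene descriptor

# Early fusion: concatenate descriptors before feeding one classifier.
early = np.concatenate([object_feats, scene_feats], axis=1)  # shape (4, 24)

# Late fusion: average the per-model class probabilities instead.
probs_object = rng.random((4, 2)); probs_object /= probs_object.sum(1, keepdims=True)
probs_scene  = rng.random((4, 2)); probs_scene  /= probs_scene.sum(1, keepdims=True)
late = (probs_object + probs_scene) / 2
```

    Double fusion, as the name suggests, applies both: it fuses the concatenated-feature classifier's output with the individual models' outputs at the decision level.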

    Multi-modal Deep Learning Approach for Flood Detection

    In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, which we use to detect floods. The model is based on a Convolutional Neural Network, which extracts the visual features, and a bidirectional Long Short-Term Memory network, which extracts the semantic features from the textual metadata. We validate the method on images extracted from Flickr that contain both visual information and metadata, and compare the results obtained using both modalities, visual information only, or metadata only. This work was done in the context of the MediaEval Multimedia Satellite Task.
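    The two-branch architecture above typically ends in a fusion head that concatenates both embeddings and applies a classifier. The sketch below shows only that head; the branch outputs are random stand-ins (the real model would produce them with the CNN and the biLSTM), and the embedding sizes and the single linear layer are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two branch outputs.
visual_vec = rng.random(128)   # hypothetical CNN image embedding
text_vec   = rng.random(64)    # hypothetical biLSTM metadata embedding

# Fusion head: concatenate both modalities, then a linear layer + sigmoid
# yields the probability that the post depicts a flood.
joint = np.concatenate([visual_vec, text_vec])
w = rng.standard_normal(joint.size) * 0.01
b = 0.0
p_flood = 1.0 / (1.0 + np.exp(-(joint @ w + b)))
```

    Dropping either branch (zeroing one embedding) gives the single-modality baselines the paper compares against.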

    BMC@MediaEval 2017 multimedia satellite task via regression random forest

    © 2017 Author/owner(s). In the MediaEval 2017 Multimedia Satellite Task, we propose an approach based on regression random forest that can extract valuable information from a few images and their corresponding metadata. The experimental results show that, when processing social media images, the proposed method performs well in circumstances where the image features are low-level and the number of training samples is relatively small. Additionally, when the low-level color features of satellite images are too ambiguous to analyze, random forest is also an effective way to detect flooded areas.
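    A regression random forest averages the predictions of decision trees fit on bootstrap samples of the training data. The toy sketch below, using depth-1 trees on a single hypothetical "color feature", shows that mechanism under stated assumptions; it is a stand-in, not the paper's implementation.

```python
import numpy as np

def fit_stump(x, y):
    """Fit a depth-1 regression tree: one threshold, two leaf means."""
    best = (np.inf, None, y.mean(), y.mean())
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left-leaf mean, right-leaf mean)

def forest_predict(x_train, y_train, x_new, n_trees=25, seed=0):
    """Average bootstrapped stumps -- a toy regression random forest."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(x_train), len(x_train))
        t, lmean, rmean = fit_stump(x_train[idx], y_train[idx])
        preds.append(lmean if (t is not None and x_new <= t) else rmean)
    return float(np.mean(preds))

# Toy 1-D data: a hypothetical color feature vs. flood severity.
x = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([0.0, 0.1, 0.0, 1.0, 0.9, 1.0])
```

    Because each tree sees a different bootstrap sample, the averaged prediction is robust even with few, noisy training examples, which matches the small-sample setting the abstract describes.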