8 research outputs found
Conditional Progressive Generative Adversarial Network for satellite image generation
Image generation and image completion are rapidly evolving fields, thanks to
machine learning algorithms that are able to realistically replace missing
pixels. However, generating large high-resolution images with a high level of
detail presents important computational challenges. In this work, we
formulate the image generation task as completion of an image where one of
four corner tiles is missing. We then extend this approach to iteratively build
larger images with the same level of detail. Our goal is to obtain a scalable
methodology to generate high resolution samples typically found in satellite
imagery data sets. We introduce a conditional progressive Generative
Adversarial Network (GAN) that generates the missing tile in an image, using
as input three initial adjacent tiles encoded in a latent vector by a
Wasserstein auto-encoder. We focus on a set of images used by the United
Nations Satellite Centre (UNOSAT) to train flood detection tools, and validate
the quality of synthetic images in a realistic setup.
Comment: Published at the SyntheticData4ML NeurIPS workshop
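The tile-completion scheme described above can be sketched as follows. The Wasserstein auto-encoder and the conditional progressive GAN are replaced here by a trivial averaging stub (a hypothetical stand-in, since the trained models are not part of this abstract), so only the mosaic bookkeeping is real:

```python
import numpy as np

def generate_corner(tl, tr, bl):
    # Placeholder for the paper's pipeline: a Wasserstein auto-encoder would
    # encode the three known tiles into a latent vector, and a conditional
    # progressive GAN would decode the missing bottom-right tile from it.
    # Averaging the neighbours keeps this sketch runnable without a model.
    return (tl + tr + bl) / 3.0

def complete(tl, tr, bl):
    """Assemble a full 2x2 mosaic from three known tiles plus the
    generated fourth corner."""
    br = generate_corner(tl, tr, bl)
    return np.block([[tl, tr], [bl, br]])

def grow(image, steps):
    """Iterative scheme from the abstract: the completed mosaic becomes the
    top-left tile of the next, larger completion, doubling the image side at
    every step (here the flanking tiles are generated the same way)."""
    for _ in range(steps):
        tr = generate_corner(image, image, image)  # stand-in generations
        bl = generate_corner(image, image, image)
        image = complete(image, tr, bl)
    return image

tile = np.ones((4, 4))
mosaic = complete(tile, tile, tile)
print(mosaic.shape)           # (8, 8)
print(grow(mosaic, 2).shape)  # (32, 32)
```

Each iteration doubles the image side while every generation step only ever produces one tile-sized array, which is the scalability argument made in the abstract.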
PulseSatellite: A tool using human-AI feedback loops for satellite image analysis in humanitarian contexts
Humanitarian response to natural disasters and conflicts can be assisted by
satellite image analysis. In a humanitarian context, very specific satellite
image analysis tasks must be done accurately and in a timely manner to provide
operational support. We present PulseSatellite, a collaborative satellite image
analysis tool which leverages neural network models that can be retrained
on the fly and adapted to specific humanitarian contexts and geographies. We
present two case studies, in mapping shelters and floods respectively, that
illustrate the capabilities of PulseSatellite.
Comment: 2 pages, 2 figures
TriggerCit: Early Flood Alerting using Twitter and Geolocation - A Comparison with Alternative Sources
Rapid impact assessment in the immediate aftermath of a natural disaster is
essential to provide adequate information to international organisations, local
authorities, and first responders. Social media can support emergency response
with evidence-based content posted by citizens and organisations during ongoing
events. In the paper, we propose TriggerCit: an early flood alerting tool with
a multilingual approach focused on timeliness and geolocation. The paper
focuses on assessing the reliability of the approach as a triggering system,
comparing it with alternative sources for alerts, and evaluating the quality
and amount of complementary information gathered. Geolocated visual evidence
extracted from Twitter by TriggerCit was analysed in two case studies on floods
in Thailand and Nepal in 2021.
Comment: 12 pages. Keywords: Social Media, Disaster management, Early Alerting
Deep Learning for Rapid Landslide Detection using Synthetic Aperture Radar (SAR) Datacubes
With climate change predicted to increase the likelihood of landslide events,
there is a growing need for rapid landslide detection technologies that help
inform emergency responses. Synthetic Aperture Radar (SAR) is a remote sensing
technique that can provide measurements of affected areas independent of
weather or lighting conditions. Usage of SAR, however, is hindered by the
domain knowledge necessary for its pre-processing steps, and its
interpretation requires expert knowledge. We provide simplified, pre-processed,
machine-learning ready SAR datacubes for four globally located landslide events
obtained from several Sentinel-1 satellite passes before and after a landslide
triggering event together with segmentation maps of the landslides. From this
dataset, using the Hokkaido, Japan datacube, we study the feasibility of
SAR-based landslide detection with supervised deep learning (DL). Our results
demonstrate that DL models can be used to detect landslides from SAR data,
achieving an Area under the Precision-Recall curve exceeding 0.7. We find that
additional satellite visits enhance detection performance, but that early
detection is possible when SAR data is combined with terrain information from a
digital elevation model. This can be especially useful for time-critical
emergency interventions. Code is made publicly available at
https://github.com/iprapas/landslide-sar-unet.
Comment: Accepted in the NeurIPS 2022 workshop on Tackling Climate Change with
Machine Learning. Authors Vanessa Boehm, Wei Ji Leong, Ragini Bal Mahesh,
Ioannis Prapas contributed equally as researchers for the Frontier
Development Lab (FDL) 202
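A minimal sketch of what a machine-learning-ready SAR datacube of this kind might look like, assuming toy dimensions and band layout (the actual datacubes in the linked repository differ in size and content):

```python
import numpy as np

# Hypothetical shapes: the real chips at
# https://github.com/iprapas/landslide-sar-unet are larger and carry
# additional metadata.
H, W = 64, 64        # spatial extent of one chip
n_passes = 4         # Sentinel-1 visits before and after the trigger event

rng = np.random.default_rng(0)
sar = rng.random((n_passes, 2, H, W))  # VV and VH backscatter per pass
dem = rng.random((1, H, W))            # terrain from a digital elevation model

# Flatten (pass, polarisation) into channels and append the DEM, giving a
# single input cube; a per-pixel segmentation label completes the sample.
cube = np.concatenate([sar.reshape(-1, H, W), dem], axis=0)
label = np.zeros((H, W), dtype=np.uint8)  # 1 = landslide pixel
print(cube.shape)  # (9, 64, 64)
```

Stacking the DEM as an extra channel is what makes early detection from few SAR visits plausible, since terrain information is available before any post-event acquisition.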
Fully Convolutional Neural Network for Rapid Flood Segmentation in Synthetic Aperture Radar Imagery
Rapid response to natural hazards, such as floods, is essential to mitigate loss of life and reduce suffering. For emergency response teams, access to timely and accurate data is critical. Satellite imagery offers a rich source of information which can be analysed to help determine regions affected by a disaster. Much remote sensing flood analysis is semi-automated, with time-consuming manual components requiring hours to complete. In this study, we present a fully automated approach to the rapid flood mapping currently carried out by many non-governmental, national and international organisations. We design a Convolutional Neural Network (CNN) based method which isolates the flooded pixels in freely available Copernicus Sentinel-1 Synthetic Aperture Radar (SAR) imagery, requiring no optical bands and minimal pre-processing. We test a variety of CNN architectures and train our models on flood masks generated using a combination of classical semi-automated techniques and extensive manual cleaning and visual inspection. Our methodology reduces the time required to develop a flood map by 80%, while achieving strong performance over a wide range of locations and environmental conditions. Given the open-source data and the minimal image cleaning required, this methodology can also be integrated into end-to-end pipelines for more timely and continuous flood monitoring.
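Per-pixel segmentation quality for flood masks like these is commonly scored with intersection-over-union; the abstract does not name its exact metric, so the following is an illustrative sketch rather than the paper's evaluation:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union between a predicted flood mask and a
    reference mask (both boolean arrays of the same shape)."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0

pred = np.array([[1, 1], [0, 0]], dtype=bool)   # model output
truth = np.array([[1, 0], [0, 0]], dtype=bool)  # reference flood mask
print(iou(pred, truth))  # 0.5
```

A metric like this makes the "strong performance over a wide range of locations" claim measurable on a per-scene basis.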
Dual-Tasks Siamese Transformer Framework for Building Damage Assessment
Accurate and fine-grained information about the extent of damage to buildings
is essential for humanitarian relief and disaster response. However, as the
most commonly used architecture in remote sensing interpretation tasks,
Convolutional Neural Networks (CNNs) have limited ability to model the
non-local relationships between pixels. Recently, the Transformer
architecture, first proposed for modeling long-range dependencies in natural
language processing, has shown promising results in computer vision tasks.
Considering the frontier
advances of Transformer architecture in the computer vision field, in this
paper, we present the first attempt at designing a Transformer-based damage
assessment architecture (DamFormer). In DamFormer, a siamese Transformer
encoder is first constructed to extract non-local and representative deep
features from input multitemporal image-pairs. Then, a multitemporal fusion
module is designed to fuse information for downstream tasks. Finally, a
lightweight dual-tasks decoder aggregates multi-level features for final
prediction. To the best of our knowledge, it is the first time that such a deep
Transformer-based network is proposed for multitemporal remote sensing
interpretation tasks. The experimental results on the large-scale damage
assessment dataset xBD demonstrate the potential of the Transformer-based
architecture.
Comment: IGARSS 202
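The three-stage design described above (siamese encoding, multitemporal fusion, dual-task prediction) can be illustrated with a toy linear sketch; all weights, dimensions, and the concatenation-based fusion are hypothetical stand-ins, not DamFormer's actual modules:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))  # shared encoder weights (toy linear "encoder")

def encode(img):
    # Siamese design: the *same* weights W embed both acquisitions, so
    # pre- and post-disaster features live in a comparable space.
    return img @ W

pre, post = rng.random((16, 3)), rng.random((16, 3))  # 16 "pixels", 3 bands
f_pre, f_post = encode(pre), encode(post)

# Multitemporal fusion: concatenation is one simple fusion choice; the
# paper's fusion module is more elaborate.
fused = np.concatenate([f_pre, f_post], axis=1)

# Dual-task heads: building localisation and damage grading share the fused
# features (weights here are random stand-ins, not a trained model).
loc_head = rng.standard_normal((16,))
dmg_head = rng.standard_normal((16, 4))
building_score = fused @ loc_head  # per-pixel building logit
damage_logits = fused @ dmg_head   # per-pixel damage-class logits
print(fused.shape, damage_logits.shape)  # (16, 16) (16, 4)
```

The key property the sketch preserves is weight sharing between the two temporal branches, which is what makes pre/post features directly comparable for change assessment.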