Assimilation of probabilistic flood maps from SAR data into a coupled hydrologic–hydraulic forecasting model: a proof of concept
Coupled hydrologic and hydraulic models are powerful tools for simulating streamflow and water levels along the riverbed and in the floodplain. However, input data, model parameters, initial conditions, and model structure are all sources of uncertainty that affect the reliability and accuracy of flood forecasts. Assimilation of satellite-based synthetic aperture radar (SAR) observations into a flood forecasting model is generally used to reduce such uncertainties. In this context, we evaluated how sequential assimilation of flood extent derived from SAR data can help improve flood forecasts. In particular, we carried out twin experiments based on a synthetically generated dataset with controlled uncertainty. Two assimilation methods are explored and compared: the sequential importance sampling method (standard method) and an enhanced variant in which a tempering coefficient is used to inflate the posterior probability and reduce degeneracy (adapted method). The experimental results show that the assimilation of SAR probabilistic flood maps significantly improves the predictions of streamflow and water elevation, thereby confirming the effectiveness of the data assimilation framework. In addition, the assimilation significantly reduces the spatially averaged root mean square error of water levels with respect to the case without assimilation, and the critical success index of predicted flood extent maps is significantly increased. While the standard method proves more accurate in estimating water levels and streamflow at the assimilation time step, the adapted method enables a more persistent improvement of the forecasts. In other words, although the tempering coefficient reduces the degeneracy problem, the accuracy of the model simulation at the assimilation time step itself is lower than that of the standard method.
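To make the tempering idea concrete, here is a minimal sketch of one sequential importance sampling weight update with a tempering exponent, assuming Gaussian-style log-likelihoods; it is not the authors' implementation, and the particle count, likelihood values, and the exponent `alpha` are illustrative assumptions. Raising the likelihood to a power `alpha < 1` flattens (inflates) the posterior, which keeps more particles with non-negligible weight and so reduces degeneracy, at the cost of a less sharply corrected analysis at the assimilation time step.

```python
import numpy as np

def sis_update(weights, log_likelihoods, alpha=1.0):
    """One sequential importance sampling weight update.

    weights         : prior particle weights (sum to 1)
    log_likelihoods : log p(observation | particle) for each particle
    alpha           : tempering exponent in (0, 1]; alpha < 1 flattens the
                      likelihood and mitigates weight degeneracy
    """
    log_w = np.log(weights) + alpha * log_likelihoods
    log_w -= log_w.max()          # subtract max for numerical stability
    new_w = np.exp(log_w)
    return new_w / new_w.sum()

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2); values near 1 signal severe degeneracy."""
    return 1.0 / np.sum(weights ** 2)

# Toy comparison: 1000 particles with synthetic log-likelihoods
rng = np.random.default_rng(0)
w0 = np.full(1000, 1.0 / 1000)
ll = rng.normal(loc=-5.0, scale=4.0, size=1000)

w_standard = sis_update(w0, ll, alpha=1.0)   # standard SIS update
w_tempered = sis_update(w0, ll, alpha=0.3)   # tempered (adapted) update
print(effective_sample_size(w_standard), effective_sample_size(w_tempered))
```

The tempered update yields a larger effective sample size than the standard one, which is the mechanism behind the more persistent forecast improvement described above.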
Multi3Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery
We propose a novel approach for rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network. Our model significantly expedites the generation of satellite imagery-based flood maps, crucial for first responders and local authorities in the early stages of flood events. By incorporating multitemporal satellite imagery, our model allows for rapid and accurate post-disaster damage assessment and can be used by governments to better coordinate medium- and long-term financial assistance programs for affected areas. The network consists of multiple streams of encoder-decoder architectures that extract spatiotemporal information from medium-resolution images and spatial information from high-resolution images before fusing the resulting representations into a single medium-resolution segmentation map of flooded buildings. We compare our model to state-of-the-art methods for building footprint segmentation as well as to alternative fusion approaches for the segmentation of flooded buildings and find that our model performs best on both tasks. We also demonstrate that our model produces highly accurate segmentation maps of flooded buildings using only publicly available medium-resolution data instead of significantly more detailed but sparsely available very high-resolution data. We release the first open-source dataset of fully preprocessed and labeled multiresolution, multispectral, and multitemporal satellite images of disaster sites along with our source code.
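The following is a minimal sketch of the multi-stream encoder-decoder fusion idea described in the abstract, not the released Multi3Net code; the module names, channel counts, depth of the branches, and the fusion-by-concatenation choice are all illustrative assumptions. Each stream processes one sensor/resolution (e.g. SAR, medium-resolution optical, very-high-resolution optical) after resampling to a common medium-resolution grid, and the fused features are mapped to a single segmentation map.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Tiny encoder-decoder branch; stands in for one sensor/resolution stream."""
    def __init__(self, in_ch, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class MultiStreamFusion(nn.Module):
    """Fuse per-stream feature maps into one segmentation map (illustrative)."""
    def __init__(self, stream_channels, n_classes=2, feat_ch=32):
        super().__init__()
        self.streams = nn.ModuleList(EncoderDecoder(c, feat_ch) for c in stream_channels)
        self.head = nn.Conv2d(feat_ch * len(stream_channels), n_classes, 1)

    def forward(self, inputs):
        # Inputs are assumed to be co-registered on a common medium-resolution grid.
        feats = [stream(x) for stream, x in zip(self.streams, inputs)]
        return self.head(torch.cat(feats, dim=1))

# Toy forward pass: SAR pre/post (2 ch), multispectral pre/post (6 ch), VHR optical (3 ch)
model = MultiStreamFusion(stream_channels=[2, 6, 3])
x = [torch.randn(1, c, 128, 128) for c in (2, 6, 3)]
print(model(x).shape)   # -> torch.Size([1, 2, 128, 128])
```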