Computational complexity has been the bottleneck to applying physically-based
simulations to large urban areas at high spatial resolution for efficient and
systematic flood analyses and risk assessments. To address the issue of long
computational times, this paper proposes that the prediction of maximum water
depth rasters can be treated as an image-to-image translation problem, in which
the results are generated from input elevation rasters using information
learned from data rather than by running simulations, significantly
accelerating the prediction process. The proposed approach was implemented with
a deep convolutional neural network trained on flood simulation data of 18
designed hyetographs over three selected catchments. Multiple tests with both
designed and real rainfall events show that the neural network produces flood
predictions in only 0.5 % of the time required by physically-based approaches,
with promising accuracy and generalization ability. The proposed neural network
can also potentially be applied to different but related problems, including
flood prediction for urban layout planning.
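The core idea, a learned raster-to-raster mapping from elevation to maximum water depth, can be illustrated with a toy convolution pass. This is a minimal sketch only: the raster size, kernel, and values below are illustrative assumptions, not the paper's trained architecture, which stacks many such layers with weights fitted to simulated flood depths.

```python
import numpy as np

def conv2d(raster, kernel):
    """Valid-mode 2D convolution: slide the kernel over the raster."""
    kh, kw = kernel.shape
    h, w = raster.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(raster[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 8x8 elevation raster (metres above a datum).
elevation = np.random.default_rng(0).uniform(0.0, 10.0, (8, 8))

# A 3x3 averaging kernel stands in for learned weights; a trained
# network would apply many such filters, followed by nonlinearities.
kernel = np.full((3, 3), 1.0 / 9.0)

depth_like = conv2d(elevation, kernel)
print(depth_like.shape)  # (6, 6)
```

One forward pass like this, repeated through the layers of a trained network, replaces an entire hydrodynamic simulation at inference time, which is where the reported speed-up comes from.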