Using a Grid-Enabled Wireless Sensor Network for Flood Management
Flooding is an increasing problem. As a result, there is a need to deploy more sophisticated sensor networks to detect and react to flooding. This paper outlines a demonstration that illustrates our proposed solution to this problem, involving embedded wireless hardware, component-based middleware, and overlay networks.
On the use of global flood forecasts and satellite-derived inundation maps for flood monitoring in data-sparse regions
Early flood warning and real-time monitoring systems play a key role in flood risk reduction and disaster response decisions. Global-scale flood forecasting and satellite-based flood detection systems are currently operating; however, their reliability for decision-making applications needs to be assessed. In this study, we performed comparative evaluations of several operational global flood forecasting and flood detection systems, using 10 major flood events recorded over 2012-2014. Specifically, we evaluated the spatial extent and temporal characteristics of flood detections from the Global Flood Detection System (GFDS) and the Global Flood Awareness System (GloFAS). Furthermore, we compared the GFDS flood maps with those from NASA’s two Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results reveal that: 1) general agreement was found between the GFDS and MODIS flood detection systems, 2) large differences exist in the spatio-temporal characteristics of the GFDS detections and GloFAS forecasts, and 3) quantitative validation of global flood disasters in data-sparse regions is highly challenging. Overall, satellite remote sensing provides near-real-time flood information that is useful for risk management. We highlight the known limitations of global flood detection and forecasting systems, and propose ways forward to improve the reliability of large-scale flood monitoring tools.
Global meteorological drought – Part 2: Seasonal forecasts
Global seasonal forecasts of meteorological drought using the standardized precipitation index (SPI) are produced using two data sets as initial conditions, the Global Precipitation Climatology Centre (GPCC) data set and the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis (ERAI), and two seasonal forecasts of precipitation, the most recent ECMWF seasonal forecast system and climatology-based ensemble forecasts. The forecast evaluation focuses on the periods where precipitation deficits are likely to have higher drought impacts, and the results are summarized over different regions of the world. Verification of the forecasts with lead time indicated that, generally for all regions, the smallest reduction in skill was found for (i) long lead times using ERAI or GPCC for monitoring and (ii) short lead times using ECMWF or climatological seasonal forecasts. The memory effect of initial conditions was found to be 1 month of lead time for the SPI-3, 4 months for the SPI-6, and 6 (or more) months for the SPI-12. Results show that dynamical forecasts of precipitation provide added value, with skill at least equal to and often above that of climatological forecasts. Furthermore, it is very difficult to improve on climatological forecasts at long lead times. Our results also speak to the recent question of whether seasonal forecasting of global drought onset is essentially a stochastic forecasting problem. Results are presented regionally and globally and point to several regions in the world where drought onset forecasting is feasible and skilful.
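The SPI referred to above transforms precipitation accumulations into standard-normal deviates. A minimal sketch, using a nonparametric (plotting-position) variant rather than the gamma-distribution fit typically used operationally, and synthetic monthly data rather than GPCC or ERAI input:

```python
# Sketch of a standardized precipitation index (SPI) computation.
# Assumptions: a nonparametric empirical CDF stands in for the usual
# gamma fit, and the input is a synthetic monthly series, not real data.
from statistics import NormalDist
import numpy as np

def spi(precip, scale=3):
    """SPI of a monthly precipitation series at the given accumulation scale."""
    # n-month rolling accumulation (e.g. SPI-3 uses 3-month sums)
    accum = np.convolve(precip, np.ones(scale), mode="valid")
    # empirical plotting-position CDF of the accumulations
    ranks = accum.argsort().argsort() + 1
    cdf = ranks / (len(accum) + 1)
    # map cumulative probabilities to standard-normal deviates
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in cdf])

rng = np.random.default_rng(42)
monthly_p = rng.gamma(shape=2.0, scale=30.0, size=240)  # 20 years, mm/month
spi3 = spi(monthly_p, scale=3)
print(spi3.mean())  # close to 0 by construction; negative values flag drought
```

Values below about -1 would indicate moderate drought in the usual SPI classification.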
Global-scale evaluation of 22 precipitation datasets using gauge observations and hydrological modeling
Abstract. We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000–2016. Thirteen non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76 086 gauges worldwide. Another nine gauge-corrected datasets were evaluated using hydrological modeling, by calibrating the HBV conceptual model against streamflow records for each of 9053 small to medium-sized (<50 000 km²) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR) and the satellite- and reanalysis-based CHIRP V2.0 dataset, the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified, and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely gauged or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed the one indirectly incorporating gauge data through another multi-source dataset (PERSIANN-CDR V1R1).
Our results highlight large differences in estimation accuracy, and hence the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite-, and reanalysis-based P estimates.
Global forecasting of thermal health hazards: the skill of probabilistic predictions of the Universal Thermal Climate Index (UTCI)
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single ‘deterministic’ forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that, despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts using the example of the 2010 heat wave in Russia.
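One standard way to quantify the skill of a probabilistic forecast of this kind (the paper's own skill measure may differ) is the Brier score for a threshold-exceedance event, e.g. "UTCI exceeds a heat-stress threshold". A minimal sketch with illustrative probabilities and outcomes:

```python
# Sketch of the Brier score for probabilistic event forecasts.
# The probabilities and 0/1 outcomes below are made up for illustration,
# not taken from the UTCI forecast system described in the abstract.
import numpy as np

def brier(prob, outcome):
    """Mean squared error of forecast probabilities against 0/1 outcomes."""
    prob, outcome = np.asarray(prob, float), np.asarray(outcome, float)
    return np.mean((prob - outcome) ** 2)

probs = np.array([0.9, 0.2, 0.7, 0.1])  # ensemble-derived event probabilities
obs = np.array([1, 0, 1, 0])            # event occurred (1) or not (0)
print(brier(probs, obs))  # lower is better; 0 is a perfect forecast
```

A deterministic forecast is the special case where every probability is 0 or 1, which is why a well-calibrated probabilistic forecast can only match or beat it on this score in expectation.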
Assessing heat-related health risk in Europe via the Universal Thermal Climate Index (UTCI)
In this work the potential of the Universal Thermal Climate Index (UTCI) as a heat-related health risk indicator in Europe is demonstrated. The UTCI is a bioclimate index that uses a multi-node human heat balance model to represent the heat stress induced in the human body by meteorological conditions. Using 38 years of meteorological reanalysis data, UTCI maps were computed to assess the thermal bioclimate of Europe for the summer season. Patterns of heat stress conditions and non-thermal-stress regions are identified across Europe. An increase in heat stress of up to 1°C is observed during recent decades. Correlation with mortality data from 17 European countries revealed that the relationship between the UTCI and death counts depends on the bioclimate of the country, and that death counts increase in conditions of moderate and strong stress, i.e. when the UTCI is above 26°C and 32°C, respectively. The UTCI’s ability to represent mortality patterns is demonstrated for the 2003 European heatwave. These findings confirm the importance of the UTCI as a bioclimatic index that is able to both capture the thermal bioclimatic variability of Europe and relate that variability to its effects on human health.
ERA-Interim/Land: a global land surface reanalysis data set
ERA-Interim/Land is a global land surface reanalysis data set covering the period 1979–2010. It describes the evolution of soil moisture, soil temperature and snowpack. ERA-Interim/Land is the result of a single 32-year simulation with the latest ECMWF (European Centre for Medium-Range Weather Forecasts) land surface model, driven by meteorological forcing from the ERA-Interim atmospheric reanalysis and precipitation adjustments based on the monthly GPCP v2.1 (Global Precipitation Climatology Project) product. The horizontal resolution is about 80 km and the time frequency is 3-hourly. ERA-Interim/Land includes a number of parameterization improvements in the land surface scheme with respect to the original ERA-Interim data set, which makes it more suitable for climate studies involving land water resources. The quality of ERA-Interim/Land is assessed by comparison with ground-based and remote sensing observations. In particular, estimates of soil moisture, snow depth, surface albedo, turbulent latent and sensible fluxes, and river discharges are verified against a large number of site measurements. ERA-Interim/Land provides a globally integrated and coherent estimate of soil moisture and snow water equivalent, which can also be used for the initialization of numerical weather prediction and climate models.
The credibility challenge for global fluvial flood risk analysis
Quantifying flood hazard is an essential component of resilience planning, emergency response, and mitigation, including insurance. Traditionally undertaken at catchment and national scales, efforts have recently intensified to estimate flood risk globally, to better allow consistent and equitable decision making. Global flood hazard models are now a practical reality, thanks to improvements in numerical algorithms, global datasets, computing power, and coupled modelling frameworks. Outputs of these models are vital for consistent quantification of global flood risk and for projecting the impacts of climate change. However, the urgency of these tasks means that outputs are being used as soon as they are made available and before such methods have been adequately tested. To address this, we compare multi-probability flood hazard maps for Africa from six global models and show wide variation in their flood hazard, economic loss and exposed population estimates, which has serious implications for model credibility. While there is around 30-40% agreement in flood extent, our results show that even at continental scales there are significant differences in hazard magnitude and spatial pattern between models, notably in deltas, arid/semi-arid zones and wetlands. This study is an important step towards a better understanding of modelling global flood hazard, which is urgently required for both current risk and climate change projections.
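Pairwise flood-extent agreement of the kind quoted above is commonly measured with an intersection-over-union score (the critical success index) of binary flood masks. A minimal sketch; the toy grids are illustrative, not output from any of the six models compared, and the paper's exact agreement metric may differ:

```python
# Sketch of flood-extent agreement between two models as the critical
# success index (CSI, i.e. intersection over union of flooded cells).
# The 2x3 grids are hypothetical, not real model output.
import numpy as np

def csi(map_a, map_b):
    """Intersection-over-union of two boolean flood-extent grids."""
    a, b = np.asarray(map_a, bool), np.asarray(map_b, bool)
    inter = np.logical_and(a, b).sum()  # cells flooded in both models
    union = np.logical_or(a, b).sum()   # cells flooded in either model
    return inter / union if union else np.nan

model_a = np.array([[1, 1, 0], [0, 1, 0]], bool)
model_b = np.array([[1, 0, 0], [0, 1, 1]], bool)
print(csi(model_a, model_b))  # 0.5: 2 cells agree out of 4 flooded in either
```

A CSI near 0.3-0.4, as reported above, means fewer than half the cells either model floods are flooded by both, even before hazard magnitude is compared.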
OpenIFS@home version 1: a citizen science project for ensemble weather and climate forecasting
Weather forecasts rely heavily on general circulation models of the atmosphere and other components of the Earth system. National meteorological and hydrological services and intergovernmental organizations, such as the European Centre for Medium-Range Weather Forecasts (ECMWF), provide routine operational forecasts on a range of spatio-temporal scales by running these models at high resolution on state-of-the-art high-performance computing systems. Such operational forecasts are very demanding in terms of computing resources. To facilitate the use of a weather forecast model for research and training purposes outside the operational environment, ECMWF provides a portable version of its numerical weather forecast model, OpenIFS, for use by universities and other research institutes on their own computing systems.
In this paper, we describe a new project (OpenIFS@home) that combines OpenIFS with a citizen science approach to involve the general public in helping conduct scientific experiments. Volunteers from across the world can run OpenIFS@home on their computers at home, and the results of these simulations can be combined into large forecast ensembles. The infrastructure of such distributed computing experiments is based on our experience and expertise with the climateprediction.net (https://www.climateprediction.net/, last access: 1 June 2021) and weather@home systems.
In order to validate this first use of OpenIFS in a volunteer computing framework, we present results from ensembles of forecast simulations of Tropical Cyclone Karl from September 2016, studied during the NAWDEX field campaign. This cyclone underwent extratropical transition and intensified in mid-latitudes to give rise to an intense jet streak near Scotland and heavy rainfall over Norway. For the validation we use a 2000-member ensemble of OpenIFS run on the OpenIFS@home volunteer framework and a smaller ensemble of the size of operational forecasts, using ECMWF's 2016 forecast model run on the ECMWF supercomputer with the same horizontal resolution as OpenIFS@home. We present ensemble statistics that illustrate the reliability and accuracy of the OpenIFS@home forecasts and discuss the use of large ensembles in the context of forecasting extreme events.
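Basic ensemble statistics of the kind mentioned above can be sketched as follows; the verifying value and member distribution are synthetic stand-ins, not OpenIFS@home output:

```python
# Sketch of elementary ensemble statistics: ensemble mean, spread, and
# the error of the mean against a verifying value. The 2000 members are
# drawn from a synthetic distribution, standing in for OpenIFS runs.
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0                                  # hypothetical verifying value
members = truth + rng.normal(0.0, 2.0, 2000)  # hypothetical 2000-member ensemble

ens_mean = members.mean()
ens_spread = members.std(ddof=1)              # ensemble standard deviation
abs_error = abs(ens_mean - truth)

# For a statistically reliable ensemble, spread and error of the mean
# match on average; here the spread is ~2 while the mean's error is small
# because the members are centered on the truth by construction.
print(round(ens_spread, 1), round(abs_error, 2))
```

With 2000 members, percentile-based probabilities and tail statistics for extreme events are far less noisy than with an operational-sized ensemble of around 50 members, which is the motivation for large volunteer-computed ensembles.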
Daily evaluation of 26 precipitation datasets using Stage-IV gauge-radar data for the CONUS
New precipitation (P) datasets are released regularly, following innovations in weather forecasting models, satellite retrieval methods, and multi-source merging techniques. Using the conterminous US as a case study, we evaluated the performance of 26 gridded (sub-)daily P datasets to obtain insight into the merit of these innovations. The evaluation was performed at a daily timescale for the period 2008–2017 using the Kling–Gupta efficiency (KGE), a performance metric combining correlation, bias, and variability. As a reference, we used the high-resolution (4 km) Stage-IV gauge-radar P dataset. Among the three KGE components, the P datasets performed worst overall in terms of correlation (related to event identification). In terms of improving KGE scores for these datasets, improved P totals (affecting the bias score) and improved distribution of P intensity (affecting the variability score) are of secondary importance. Among the 11 gauge-corrected P datasets, the best overall performance was obtained by MSWEP V2.2, underscoring the importance of applying daily gauge corrections and accounting for gauge reporting times. Several uncorrected P datasets outperformed gauge-corrected ones. Among the 15 uncorrected P datasets, the best performance was obtained by the ERA5-HRES fourth-generation reanalysis, reflecting the significant advances in earth system modeling during the last decade. The (re)analyses generally performed better in winter than in summer, while the opposite was the case for the satellite-based datasets. IMERGHH V05 performed substantially better than TMPA-3B42RT V7, attributable to the many improvements implemented in the IMERG satellite P retrieval algorithm. IMERGHH V05 outperformed ERA5-HRES in regions dominated by convective storms, while the opposite was observed in regions of complex terrain. The ERA5-EDA ensemble average exhibited higher correlations than the ERA5-HRES deterministic run, highlighting the value of ensemble modeling. The WRF regional convection-permitting climate model showed considerably more accurate P totals over the mountainous west and performed best among the uncorrected datasets in terms of variability, suggesting there is merit in using high-resolution models to obtain climatological P statistics. Our findings provide some guidance to choose the most suitable P dataset for a particular application.
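The KGE used above has a standard closed form combining exactly the three components the abstract names. A minimal sketch, assuming the 2009 formulation with the standard-deviation ratio as the variability term (a later variant uses the ratio of coefficients of variation instead):

```python
# Sketch of the Kling-Gupta efficiency (KGE, 2009 formulation):
# combines correlation (r), a bias ratio (beta), and a variability
# ratio (alpha) into one score, where 1 is a perfect simulation.
import numpy as np

def kge(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # correlation component
    beta = sim.mean() / obs.mean()    # bias component (mean ratio)
    alpha = sim.std() / obs.std()     # variability component (std ratio)
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

obs = np.array([1.0, 3.0, 2.0, 4.0, 0.5])  # illustrative daily P, mm
print(kge(obs, obs))        # a perfect simulation scores 1
print(kge(obs * 1.2, obs))  # penalized for a 20 % wet bias
```

Because the three terms are combined in a Euclidean distance, a dataset with perfect totals but poor day-to-day correlation still scores badly, which is why the correlation component dominates the ranking reported above.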