
    Inundation scenarios for flood damage evaluation in polder areas

    We present an approach for flood damage simulation based on the creation of a comparatively large number of inundation scenarios for a polder area, using a high-resolution digital elevation model. In particular, the method can be used for detailed scenario studies of the impact of future socioeconomic and climatic developments on flood risk. The approach is applied to a case-study area in the south of the Netherlands along the river Meuse. Its advantage is that a large number of potential flood events can be created relatively fast, without hydrodynamic calculations, and that it can be applied to high-resolution elevation models and to large areas. The large number of flood scenarios and the high horizontal resolution reduce at least part of the uncertainties encountered in flood loss modelling. Loss modelling at a low horizontal resolution (100 m) overestimates losses by up to 22% for high-density urban areas and underestimates losses for infrastructure by 100%, compared with the high resolution (25 m). Loss modelling at 5-m horizontal resolution shows that aggregate losses may be overestimated by some 4.3%, compared to the 25-m resolution. The generation of a large variety of inundation scenarios provides a basis for constructing loss-probability curves. The calculated range and expected values of damages compare reasonably well with earlier independent estimates.
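    The core grid computation sketched in the abstract (inundation depth from a DEM and a water level, fed into a depth-damage curve) might look as follows. This is a minimal illustration, not the authors' implementation: the toy elevation grid, the linear depth-damage curve, and the damage density are all invented for the example.

```python
import numpy as np

def inundation_depth(elevation, water_level):
    """Depth grid for one scenario: water level minus terrain, clipped at zero."""
    return np.maximum(water_level - elevation, 0.0)

def scenario_loss(elevation, water_level, max_damage, cell_area=25.0 ** 2):
    """Loss for one flood scenario using a simple linear depth-damage curve:
    damage fraction = min(depth / 2 m, 1).  Curve and parameters are illustrative."""
    depth = inundation_depth(elevation, water_level)
    fraction = np.minimum(depth / 2.0, 1.0)
    return float(np.sum(fraction * max_damage * cell_area))

# Toy 2x2 DEM at 25-m resolution (elevations in m) and a uniform
# maximum damage density (currency units per m^2) -- hypothetical values.
dem = np.array([[1.0, 2.0], [0.5, 3.0]])
loss = scenario_loss(dem, water_level=2.0, max_damage=10.0)
```

    Generating many scenarios then amounts to sweeping `water_level` over a range of values per dike-ring compartment, which is what makes the approach cheap compared with hydrodynamic simulation.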

    Potential of semi-structural and non-structural adaptation strategies to reduce future flood risk: Case study for the Meuse

    Flood risk throughout Europe has increased in the last few decades, and is projected to increase further owing to continued development in flood-prone areas and climate change. In recent years, studies have shown that adequate undertaking of semi-structural and non-structural measures can considerably decrease the costs of floods for households. However, there is little insight into how such measures can decrease the risk beyond the local level, now and in the future. To gain such insights, a modelling framework using the Damagescanner model with land-use and inundation maps for 2000 and 2030 was developed and applied to the Meuse river basin, in the region of Limburg, in the southeast of the Netherlands. The research suggests that annual flood risk may increase by up to 185% by 2030 compared with 2000, as a result of combined land-use and climate changes. The independent contributions of climate change and land-use change to the simulated increase are 108% and 37%, respectively. The risk-reduction capacity of spatial zoning measures, which are meant to limit and regulate developments in flood-prone areas, is between 25% and 45%. Mitigation factors applied to assess the potential impact of three mitigation strategies (dry-proofing, wet-proofing, and the combination of dry- and wet-proofing) in residential areas show that these strategies have a risk-reduction capacity of between 21% and 40%, depending on their rate of implementation. Combining spatial zoning and mitigation measures could reduce the total increase in risk by up to 60%. Policy implications of these results are discussed; they focus on the undertaking of effective mitigation measures, and on possible ways to increase their implementation by households.
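    The "annual flood risk" and "mitigation factor" quantities used above are standard flood-risk concepts: expected annual damage is the integral of the loss-probability curve, and a mitigation factor scales the damage of protected land-use classes. A minimal sketch, with entirely hypothetical event probabilities, damages, and factor values:

```python
def expected_annual_damage(events):
    """Trapezoidal integration of the loss-probability curve.
    events: (annual exceedance probability, damage) pairs, sorted by
    decreasing probability."""
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(events, events[1:]):
        ead += (p1 - p2) * (d1 + d2) / 2.0
    return ead

# Hypothetical flood events: (annual exceedance probability, damage in MEUR)
baseline = [(0.1, 10.0), (0.01, 100.0), (0.001, 400.0)]

# Illustrative mitigation factor: e.g. wet-proofed residential areas
# retain 70% of their unprotected damage.
mitigation_factor = 0.7
mitigated = [(p, d * mitigation_factor) for p, d in baseline]

ead_base = expected_annual_damage(baseline)
ead_mit = expected_annual_damage(mitigated)
```

    Comparing `ead_base` and `ead_mit` under current and future land-use/climate inputs is the kind of calculation behind the percentage changes reported in the abstract.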

    Long-term development and effectiveness of private flood mitigation measures: An analysis for the German part of the river Rhine

    Flood mitigation measures implemented by private households have become an important component of contemporary integrated flood risk management in Germany and many other countries. Despite the growing responsibility of private households to contribute to flood damage reduction by means of private flood mitigation measures, knowledge on the long-term development of such measures, which indicates changes in vulnerability over time, and on their effectiveness, is still scarce. To gain further insights into the long-term development, current implementation level and effectiveness of private flood mitigation measures, empirical data from 752 flood-prone households along the German part of the Rhine are presented. It is found that four types of flood mitigation measures developed gradually over time among flood-prone households, with severe floods being important triggers for accelerated implementation. At present, a large share of respondents still has not implemented a single flood mitigation measure, despite the high exposure of the surveyed households to floods. The records of households' flood damage to contents and structure during two consecutive flood events with similar hazard characteristics in 1993 and 1995 show that improved preparedness of the population led to substantially reduced damage during the latter event. Regarding the efficiency of contemporary integrated flood risk management, it is concluded that additional policies are required in order to further increase the level of preparedness of the flood-prone population. This especially concerns households in areas that are less frequently affected by flood events.

    Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration

    In this paper, we consider the challenge of maximizing an unknown function f whose evaluations are noisy and costly to acquire. An iterative procedure uses the previous measurements to actively select the next evaluation of f that is predicted to be the most useful. We focus on the case where the function can be evaluated in parallel with batches of fixed size, and analyze the benefit over the purely sequential procedure in terms of cumulative regret. We introduce the Gaussian Process Upper Confidence Bound and Pure Exploration algorithm (GP-UCB-PE), which combines the UCB strategy and pure exploration within the same batch of evaluations along the parallel iterations. We prove theoretical upper bounds on the regret with batches of size K for this procedure, which show an improvement of order sqrt(K) for fixed iteration cost over purely sequential versions. Moreover, the multiplicative constants involved are dimension-free. We also confirm empirically the efficiency of GP-UCB-PE on real and synthetic problems compared to state-of-the-art competitors.
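    The batch construction described above can be sketched with a small zero-mean GP in NumPy: the first point of each batch maximizes the UCB score, and the remaining K-1 points greedily maximize the posterior standard deviation (pure exploration). Kernel, length scale, beta, and the test function are illustrative choices, not the paper's experimental setup; the sketch exploits the fact that GP posterior variance depends only on the chosen inputs, not on the fantasized outcomes.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X_obs, y_obs, X_cand, noise=1e-4):
    """Posterior mean and std at candidate points for a zero-mean GP."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = np.diag(rbf(X_cand, X_cand) - Ks @ Kinv @ Ks.T)
    return mu, np.sqrt(np.maximum(var, 0.0))

def select_batch(X_obs, y_obs, X_cand, K_batch, beta=2.0):
    """One GP-UCB-PE batch: first point by UCB, the rest by pure exploration
    (greedy max posterior std, conditioning on the points already chosen)."""
    mu, sigma = gp_posterior(X_obs, y_obs, X_cand)
    batch_idx = [int(np.argmax(mu + beta * sigma))]
    X_f, y_f = X_obs.copy(), y_obs.copy()
    for _ in range(K_batch - 1):
        x_new = X_cand[batch_idx[-1]:batch_idx[-1] + 1]
        mu_new, _ = gp_posterior(X_f, y_f, x_new)   # fantasize the GP mean
        X_f = np.vstack([X_f, x_new])
        y_f = np.append(y_f, mu_new)
        _, sigma = gp_posterior(X_f, y_f, X_cand)
        batch_idx.append(int(np.argmax(sigma)))
    return X_cand[batch_idx]

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(5, 1))
y_obs = np.sin(3 * X_obs[:, 0])
X_cand = np.linspace(0, 1, 50).reshape(-1, 1)
batch = select_batch(X_obs, y_obs, X_cand, K_batch=3)
```

    Each batch is then evaluated in parallel, the K noisy observations are added to (X_obs, y_obs), and the procedure repeats.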

    On the Prior Sensitivity of Thompson Sampling

    The empirically successful Thompson Sampling algorithm for stochastic bandits has drawn much interest in understanding its theoretical properties. One important benefit of the algorithm is that it allows domain knowledge to be conveniently encoded as a prior distribution to balance exploration and exploitation more effectively. While it is generally believed that the algorithm's regret is low (high) when the prior is good (bad), little is known about the exact dependence. In this paper, we fully characterize the algorithm's worst-case dependence of regret on the choice of prior, focusing on a special yet representative case. These results also provide insights into the general sensitivity of the algorithm to the choice of priors. In particular, with p being the prior probability mass of the true reward-generating model, we prove O(sqrt(T/p)) and O(sqrt((1-p)T)) regret upper bounds for the bad- and good-prior cases, respectively, as well as matching lower bounds. Our proofs rely on the discovery of a fundamental property of Thompson Sampling and make heavy use of martingale theory, both of which appear novel in the literature, to the best of our knowledge. Comment: Appears in the 27th International Conference on Algorithmic Learning Theory (ALT), 201
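    The setting above — Thompson Sampling with a prior over a finite set of candidate reward-generating models — can be sketched as follows. This is an illustrative toy for Bernoulli arms, not the paper's exact construction; the models, priors, and horizon are invented for the example.

```python
import random

def thompson_sampling(models, prior, arms, T, seed=0):
    """Thompson Sampling over a finite model class for Bernoulli arms.
    models: candidate reward-probability vectors (one per model);
    prior:  prior probability mass on each model;
    arms:   true reward probabilities.  Returns the total reward."""
    rng = random.Random(seed)
    weights = list(prior)
    total = 0
    for _ in range(T):
        m = rng.choices(range(len(models)), weights=weights)[0]  # sample a model
        a = max(range(len(arms)), key=lambda i: models[m][i])    # best arm under it
        r = 1 if rng.random() < arms[a] else 0
        total += r
        # Bayesian update: P(model) proportional to P(reward | model, arm) * P(model)
        for j, model in enumerate(models):
            weights[j] *= model[a] if r else 1.0 - model[a]
        s = sum(weights)
        weights = [w / s for w in weights]
    return total

true_model = [0.9, 0.1]   # arm 0 is the best arm
wrong_model = [0.1, 0.9]  # a model that ranks the arms the other way
# Good prior: mass p = 0.9 on the true model; bad prior: p = 0.1.
good = thompson_sampling([true_model, wrong_model], [0.9, 0.1], true_model, T=200)
bad = thompson_sampling([true_model, wrong_model], [0.1, 0.9], true_model, T=200)
```

    The paper's bounds concern exactly how the regret of such a procedure scales with the prior mass p placed on the true model.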

    Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits

    In this paper, we study the problem of estimating the mean values of all the arms uniformly well in the multi-armed bandit setting. If the variances of the arms were known, one could design an optimal sampling strategy by pulling the arms proportionally to their variances. However, since the distributions are not known in advance, we need to design adaptive sampling strategies that select an arm at each round based on the previously observed samples. We describe two strategies based on pulling the arms proportionally to an upper bound on their variances, and derive regret bounds for these strategies on the excess estimation error compared to the optimal allocation. We show that the performance of these allocation strategies depends not only on the variances of the arms but also on the full shape of their distributions.
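    The allocation idea above can be sketched as follows: pull the arm whose upper confidence bound on variance, divided by its pull count, is largest, so that high-variance arms receive proportionally more samples. The confidence-bound constant, initialization, and arm distributions are illustrative assumptions, not the paper's exact algorithms.

```python
import math
import random

def active_variance_allocation(arm_samplers, T, seed=0):
    """Allocate T pulls so that all arm means are estimated uniformly well:
    at each round, pull the arm maximizing (variance UCB) / (pull count).
    A sketch of the upper-confidence-bound allocation strategy."""
    rng = random.Random(seed)
    K = len(arm_samplers)
    samples = [[] for _ in range(K)]
    for k in range(K):                 # initialize with two pulls per arm
        for _ in range(2):
            samples[k].append(arm_samplers[k](rng))
    for _ in range(2 * K, T):
        scores = []
        for k in range(K):
            n = len(samples[k])
            mean = sum(samples[k]) / n
            var = sum((x - mean) ** 2 for x in samples[k]) / (n - 1)
            ucb_var = var + math.sqrt(2 * math.log(T) / n)  # optimistic variance
            scores.append(ucb_var / n)
        k = max(range(K), key=lambda i: scores[i])
        samples[k].append(arm_samplers[k](rng))
    return [len(s) for s in samples]

# Two Gaussian arms: high variance (sd = 2.0) vs low variance (sd = 0.1).
counts = active_variance_allocation(
    [lambda r: r.gauss(0, 2.0), lambda r: r.gauss(0, 0.1)], T=200)
```

    The high-variance arm ends up with far more pulls, mimicking the variance-proportional allocation that would be optimal if the variances were known.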

    Structural basis for membrane attack complex inhibition by CD59

    CD59 is an abundant immuno-regulatory receptor that protects human cells from damage during complement activation. Here we show how the receptor binds complement proteins C8 and C9 at the membrane to prevent insertion and polymerization of membrane attack complex (MAC) pores. We present cryo-electron microscopy structures of two inhibited MAC precursors known as C5b8 and C5b9. We discover that in both complexes, CD59 binds the pore-forming β-hairpins of C8 to form an intermolecular β-sheet that prevents membrane perforation. While bound to C8, CD59 deflects the cascading C9 β-hairpins, rerouting their trajectory into the membrane. Preventing insertion of C9 restricts structural transitions of subsequent monomers and indirectly halts MAC polymerization. We combine our structural data with cellular assays and molecular dynamics simulations to explain how the membrane environment impacts the dual roles of CD59 in controlling pore formation of MAC, and as a target of bacterial virulence factors that hijack CD59 to lyse human cells.

    Bayesian Best-Arm Identification for Selecting Influenza Mitigation Strategies

    Pandemic influenza has the epidemic potential to kill millions of people. While various preventive measures exist (inter alia, vaccination and school closures), deciding on strategies that lead to their most effective and efficient use remains challenging. To this end, individual-based epidemiological models are essential to assist decision makers in determining the best strategy to curb epidemic spread. However, individual-based models are computationally intensive, and it is therefore pivotal to identify the optimal strategy using a minimal number of model evaluations. Additionally, as epidemiological modelling experiments need to be planned, a computational budget needs to be specified a priori. Consequently, we present a new sampling technique to optimize the evaluation of preventive strategies using fixed-budget best-arm identification algorithms. We use epidemiological modelling theory to derive knowledge about the reward distribution, which we exploit using Bayesian best-arm identification algorithms (i.e., Top-two Thompson sampling and BayesGap). We evaluate these algorithms in a realistic experimental setting and demonstrate that it is possible to identify the optimal strategy using only a limited number of model evaluations, i.e., 2 to 3 times faster compared to the uniform sampling method, the predominant technique used for epidemiological decision making in the literature. Finally, we contribute and evaluate a statistic for Top-two Thompson sampling to inform decision makers about the confidence of an arm recommendation.
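    The fixed-budget loop described above — each "arm" being one preventive strategy, each "pull" one expensive model run — can be sketched for Bernoulli rewards with Beta posteriors. This is an illustrative sketch, not the paper's implementation: the arm probabilities, budget, resampling cap, and the beta parameter of Top-two Thompson sampling are assumed values.

```python
import random

def top_two_thompson(successes, failures, beta=0.5, rng=None):
    """One arm selection of Top-two Thompson sampling for Bernoulli arms:
    sample a best arm from the Beta posteriors; with probability 1 - beta,
    resample until a different (challenger) arm wins."""
    rng = rng or random.Random()
    def sample_best():
        draws = [rng.betavariate(s + 1, f + 1)
                 for s, f in zip(successes, failures)]
        return max(range(len(draws)), key=lambda i: draws[i])
    first = sample_best()
    if rng.random() < beta:
        return first
    for _ in range(100):               # bounded resampling for the challenger
        challenger = sample_best()
        if challenger != first:
            return challenger
    return first

def run_bai(arm_probs, budget, beta=0.5, seed=0):
    """Fixed-budget best-arm identification loop: spend the budget with
    Top-two Thompson sampling, then recommend the arm with the highest
    posterior mean."""
    rng = random.Random(seed)
    K = len(arm_probs)
    succ, fail = [0] * K, [0] * K
    for _ in range(budget):
        a = top_two_thompson(succ, fail, beta, rng)
        if rng.random() < arm_probs[a]:  # stand-in for one model evaluation
            succ[a] += 1
        else:
            fail[a] += 1
    return max(range(K), key=lambda i: (succ[i] + 1) / (succ[i] + fail[i] + 2))

# Three hypothetical strategies whose model runs "succeed" with these rates.
best = run_bai([0.2, 0.5, 0.8], budget=300)
```

    In the paper's setting, the Bernoulli draw is replaced by an individual-based epidemic simulation whose outcome is scored, and the recommendation confidence is reported alongside the chosen arm.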