67 research outputs found

    Machine-learning blends of geomorphic descriptors: value and limitations for flood hazard assessment across large floodplains

    Recent literature shows several examples of simplified approaches that perform flood hazard (FH) assessment and mapping across large geographical areas on the basis of fast-computing geomorphic descriptors. These approaches may consider a single index (univariate) or use a set of indices simultaneously (multivariate). What is the potential and accuracy of multivariate approaches relative to univariate ones? Can we effectively use these methods for extrapolation purposes, i.e., FH assessment outside the region used for setting up the model? Our study addresses these open problems by considering two separate issues: (1) mapping flood-prone areas and (2) predicting the expected water depth for a given inundation scenario. We blend seven geomorphic descriptors through decision tree models trained on target FH maps, referring to a large study area (∼10⁵ km²). We discuss the potential of multivariate approaches relative to the performance of a selected univariate model and on the basis of multiple extrapolation experiments, where models are tested outside their training region. Our results show that multivariate approaches may (a) significantly enhance flood-prone area delineation (accuracy: 92%) relative to univariate ones (accuracy: 84%), (b) provide accurate predictions of expected inundation depths (determination coefficient ∼0.7), and (c) produce encouraging results in extrapolation.
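
    As a minimal sketch of how such a multivariate blend can be set up with off-the-shelf decision trees: the table descriptors.csv, the seven descriptor names, and the flooded/water_depth targets below are all hypothetical placeholders standing in for pixel-wise geomorphic indices and a reference hazard map, not the data used in the study.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
    from sklearn.metrics import accuracy_score, r2_score

    # Hypothetical descriptor names standing in for the seven geomorphic indices.
    DESCRIPTORS = ["hand", "slope", "dist_to_stream", "curvature",
                   "upslope_area", "elevation_diff", "flow_path_length"]

    df = pd.read_csv("descriptors.csv")       # one row per DEM pixel (hypothetical file)
    train, test = train_test_split(df, test_size=0.3, random_state=0)

    # (1) Flood-prone area delineation: binary classification of each pixel.
    clf = DecisionTreeClassifier(max_depth=8, random_state=0)
    clf.fit(train[DESCRIPTORS], train["flooded"])
    print("delineation accuracy:",
          accuracy_score(test["flooded"], clf.predict(test[DESCRIPTORS])))

    # (2) Expected water depth for the inundation scenario: regression on flooded pixels.
    wet_train = train[train["flooded"] == 1]
    wet_test = test[test["flooded"] == 1]
    reg = DecisionTreeRegressor(max_depth=8, random_state=0)
    reg.fit(wet_train[DESCRIPTORS], wet_train["water_depth"])
    print("depth R^2:",
          r2_score(wet_test["water_depth"], reg.predict(wet_test[DESCRIPTORS])))

    An extrapolation experiment in the spirit of the abstract would replace the random split with a split by sub-region, training the trees in one area and testing them in a disjoint one.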

    Prediction of streamflow regimes over large geographical areas: interpolated flow–duration curves for the Danube region

    Flow–duration curves (FDCs) are essential to support decisions on water resources management, and their regionalization is fundamental for the assessment of ungauged basins. In comparison with calibrated rainfall–runoff models, statistical methods provide data-driven estimates representing a useful benchmark. The objective of this work is the interpolation of FDCs from ~500 discharge gauging stations in the Danube region. To this aim we use total negative deviation top-kriging (TNDTK), as multi-regression models are shown to be unsuitable for representing FDCs across all durations and sites. TNDTK shows high accuracy for the entire Danube region, with overall Nash-Sutcliffe efficiency values, computed in a leave-p-out cross-validation scheme (p equal to one site, one third of the sites, and half of the sites), all above 0.88. A reliability measure based on the kriging variance is attached to each interpolated FDC at ~4000 prediction nodes. The GIS layer of regionalized FDCs is made available for broader use in the region.
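
    As a rough illustration of the regionalization and scoring workflow (not of TNDTK itself, whose weights come from top-kriging over the drainage network), the sketch below interpolates a dimensionless FDC at an ungauged site from nearby gauges using simple inverse-distance weights as a stand-in, and evaluates it with the Nash-Sutcliffe efficiency used in the leave-p-out cross-validation; all coordinates and quantiles are made up.

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency between observed and predicted quantiles."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    durations = np.array([0.05, 0.20, 0.50, 0.80, 0.95])   # exceedance probabilities

    # Donor gauges: planar coordinates and dimensionless FDCs (flow / mean flow).
    donors = {
        "A": {"xy": (10.0, 20.0), "fdc": np.array([3.1, 1.6, 0.8, 0.4, 0.2])},
        "B": {"xy": (60.0, 15.0), "fdc": np.array([2.7, 1.5, 0.9, 0.5, 0.3])},
        "C": {"xy": (35.0, 70.0), "fdc": np.array([3.6, 1.8, 0.7, 0.3, 0.1])},
    }
    target_xy, target_mean_flow = (30.0, 30.0), 12.0        # "ungauged" site, m3/s

    # Inverse-distance weights (a stand-in for the top-kriging weights of TNDTK).
    weights = np.array([1.0 / np.hypot(d["xy"][0] - target_xy[0],
                                       d["xy"][1] - target_xy[1])
                        for d in donors.values()])
    weights /= weights.sum()

    dimless = sum(w * d["fdc"] for w, d in zip(weights, donors.values()))
    predicted_fdc = dimless * target_mean_flow   # back-transform to m3/s

    # In cross-validation the site's own empirical curve is held out and compared:
    observed_fdc = np.array([38.0, 20.0, 10.0, 5.5, 2.8])   # synthetic "truth"
    print("predicted quantiles:", np.round(predicted_fdc, 1))
    print("NSE:", round(nse(observed_fdc, predicted_fdc), 3))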

    Streamflow data availability in Europe: a detailed dataset of interpolated flow-duration curves

    For about 24 000 river basins across Europe, we provide a continuous representation of the streamflow regime in terms of empirical flow-duration curves (FDCs), which are key signatures of the hydrological behaviour of a catchment and are widely used for supporting decisions on water resource management as well as for assessing hydrologic change. In this study, FDCs are estimated by means of the geostatistical procedure termed total negative deviation top-kriging (TNDTK), starting from the empirical FDCs made available by the Joint Research Centre of the European Commission (DG-JRC) for about 3000 discharge measurement stations across Europe. Consistent with previous studies, TNDTK is shown to provide high accuracy for the entire study area, although with a degree of reliability that varies significantly across the study area. To convey this information site by site, each estimated FDC is accompanied by indicators of the accuracy and reliability of the large-scale geostatistical prediction. The dataset is freely available at the PANGAEA open-access library (Data Publisher for Earth & Environmental Science) at https://doi.org/10.1594/PANGAEA.938975 (Persiano et al., 2021b).
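
    For reference, the sketch below builds the kind of empirical FDC the dataset is based on: daily flows sorted in decreasing order and paired with Weibull plotting positions, from which common signatures such as Q95 and Q50 can be read. The synthetic daily series is only a placeholder for a real gauge record.

    import numpy as np

    def empirical_fdc(daily_flows):
        """Return (exceedance probability, flow) pairs for an empirical FDC."""
        q = np.sort(np.asarray(daily_flows))[::-1]      # flows in decreasing order
        n = q.size
        p = np.arange(1, n + 1) / (n + 1.0)             # Weibull plotting positions
        return p, q

    rng = np.random.default_rng(42)
    daily_q = rng.lognormal(mean=2.0, sigma=0.9, size=3 * 365)   # ~3 years, m3/s

    p, q = empirical_fdc(daily_q)

    # Typical signatures read off the curve: the flow exceeded 95% of the time
    # (a common low-flow index) and the median flow.
    q95 = np.interp(0.95, p, q)
    q50 = np.interp(0.50, p, q)
    print(f"Q95 = {q95:.2f} m3/s, Q50 = {q50:.2f} m3/s")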

    Safer_RAIN: A DEM-based hierarchical filling-&-spilling algorithm for pluvial flood hazard assessment and mapping across large urban areas

    The increase in frequency and intensity of extreme precipitation events caused by the changing climate (e.g., cloudbursts, rainstorms, heavy rainfall, hail, heavy snow), combined with the high population density and concentration of assets, makes urban areas particularly vulnerable to pluvial flooding. Hence, assessing their vulnerability under current and future climate scenarios is of paramount importance. Detailed hydrologic-hydraulic numerical modeling is resource intensive and therefore scarcely suitable for performing consistent hazard assessments across large urban settlements. Given the steadily increasing availability of LiDAR (Light Detection And Ranging) high-resolution DEMs (Digital Elevation Models), several studies have highlighted the potential of fast-processing DEM-based methods, such as the Hierarchical Filling-&-Spilling or Puddle-to-Puddle Dynamic Filling-&-Spilling Algorithms (abbreviated herein as HFSAs). We develop a fast-processing HFSA, named Safer_RAIN, that enables mapping of pluvial flooding in large urban areas by accounting for spatially distributed rainfall input and infiltration processes through a pixel-based Green-Ampt model. We present the first applications of the algorithm to two case studies in Northern Italy. Safer_RAIN output is compared against ground evidence and detailed output from a two-dimensional (2D) hydrologic and hydraulic numerical model (overall index of agreement between Safer_RAIN and the 2D benchmark model: sensitivity and specificity up to 71% and 99%, respectively), highlighting the potential and limitations of the proposed algorithm for identifying pluvial flood-hazard hotspots across large urban environments.
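
    To make the filling-and-spilling idea concrete, the sketch below runs a standard priority-flood depression fill on a toy DEM and derives the maximum ponding depth per pixel. Safer_RAIN itself goes further (a hierarchy of nested depressions, a finite spatially distributed rainfall volume, pixel-based Green-Ampt infiltration), none of which is reproduced here; the grid and all numbers are illustrative.

    import heapq
    import numpy as np

    def priority_flood_fill(dem):
        """Return the depression-filled DEM via a standard priority-flood pass."""
        nrows, ncols = dem.shape
        filled = np.full_like(dem, np.inf, dtype=float)
        heap = []
        # Seed the queue with the DEM border: water can always drain off the edge.
        for r in range(nrows):
            for c in range(ncols):
                if r in (0, nrows - 1) or c in (0, ncols - 1):
                    filled[r, c] = dem[r, c]
                    heapq.heappush(heap, (dem[r, c], r, c))
        while heap:
            level, r, c = heapq.heappop(heap)
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < nrows and 0 <= nc < ncols and np.isinf(filled[nr, nc]):
                    # A cell can only drain through neighbours at least this high.
                    filled[nr, nc] = max(dem[nr, nc], level)
                    heapq.heappush(heap, (filled[nr, nc], nr, nc))
        return filled

    dem = np.array([[5.0, 5.0, 5.0, 5.0, 5.0],
                    [5.0, 3.0, 3.0, 4.0, 5.0],
                    [5.0, 3.0, 2.0, 4.0, 4.0],   # the 4.0 on the right rim is the spill point
                    [5.0, 4.0, 4.0, 4.0, 5.0],
                    [5.0, 5.0, 5.0, 5.0, 5.0]])  # toy DEM with one depression (m)

    max_depth = priority_flood_fill(dem) - dem   # maximum ponding depth per pixel (m)
    print(max_depth)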

    Lower Bounds for Encrypted Multi-Maps and Searchable Encryption in the Leakage Cell Probe Model

    Encrypted multi-maps (EMMs) enable clients to outsource the storage of a multi-map to a potentially untrusted server while maintaining the ability to perform operations in a privacy-preserving manner. EMMs are an important primitive as they are an integral building block for many practical applications such as searchable encryption and encrypted databases. In this work, we formally examine the tradeoffs between privacy and efficiency for EMMs. Currently, all known dynamic EMMs with constant overhead reveal whether two operations are performed on the same key or not, which we denote as the global key-equality pattern. In our main result, we present strong evidence that leakage of the global key-equality pattern is inherent for any dynamic EMM construction with O(1) efficiency. In particular, we consider the slightly smaller leakage of the decoupled key-equality pattern, where key-equality leakage between update and query operations is decoupled and the adversary only learns whether two operations of the same type are performed on the same key or not. We show that any EMM with at most decoupled key-equality pattern leakage incurs Ω(log n) overhead in the leakage cell probe model. This is tight, as there exist ORAM-based constructions of EMMs with logarithmic slowdown that leak no more than the decoupled key-equality pattern (and, in fact, much less). Furthermore, we present stronger lower bounds: encrypted multi-maps that leak at most the decoupled key-equality pattern but are able to perform one of either the update or query operations in plaintext still require Ω(log n) overhead. Finally, we extend our lower bounds to show that dynamic, response-hiding searchable encryption schemes must also incur Ω(log n) overhead even when one of either the document updates or searches may be performed in plaintext.
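
    A stdlib-only toy sketch of the leakage notion itself (not of any construction analyzed in the paper): when each multi-map key is mapped to a deterministic server-side label, the typical way to get constant-overhead lookups, the server can tell whenever two operations touch the same key, i.e., it learns the global key-equality pattern. The ToyEMM class, its PRF label, and the transcript bookkeeping are hypothetical illustration; value encryption is only a placeholder.

    import hmac, hashlib

    class ToyEMM:
        def __init__(self, secret_key: bytes):
            self.k = secret_key
            self.server_store = {}          # label -> list of (placeholder) ciphertexts
            self.server_transcript = []     # what the server observes

        def _label(self, key: str) -> str:
            # Deterministic PRF label: equal keys produce equal labels (the leakage).
            return hmac.new(self.k, key.encode(), hashlib.sha256).hexdigest()

        def update(self, key: str, value: str):
            lbl = self._label(key)
            ct = f"Enc({value})"            # placeholder; a real EMM encrypts values
            self.server_store.setdefault(lbl, []).append(ct)
            self.server_transcript.append(("update", lbl))

        def query(self, key: str):
            lbl = self._label(key)
            self.server_transcript.append(("query", lbl))
            return self.server_store.get(lbl, [])

    emm = ToyEMM(secret_key=b"0" * 32)
    emm.update("alice", "doc1")
    emm.update("bob", "doc7")
    emm.update("alice", "doc3")
    emm.query("alice")

    # The server never sees "alice" or "bob", but the repeated label reveals that
    # operations 1, 3 and 4 concern the same key: the global key-equality pattern.
    for op_type, lbl in emm.server_transcript:
        print(op_type, lbl[:12], "...")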

    Lower Bounds for Multi-Server Oblivious RAMs

    In this work, we consider the construction of oblivious RAMs (ORAMs) in a setting with multiple servers where the adversary may corrupt a subset of the servers. We present an Ω(log n) overhead lower bound for any k-server ORAM that limits any PPT adversary to distinguishing advantage at most 1/4k when only one server is corrupted. In other words, if one insists on negligible distinguishing advantage, then multi-server ORAMs cannot be faster than single-server ORAMs, even with polynomially many servers of which only one unknown server is corrupted. Our results apply to ORAMs that may err with probability at most 1/128, as well as to scenarios where the adversary corrupts larger subsets of servers. We also extend our lower bounds to other important data structures including oblivious stacks, queues, deques, priority queues and search trees.
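
    For intuition on what "overhead" means here, the sketch below shows the textbook linear-scan baseline: it hides the access pattern perfectly but pays O(n) physical accesses per logical operation. The lower bound above says that even with multiple servers this cost cannot be driven below Ω(log n) while keeping negligible distinguishing advantage. The LinearScanORAM class is purely illustrative and not a construction from the paper.

    class LinearScanORAM:
        def __init__(self, n: int):
            self.cells = [0] * n
            self.physical_accesses = []          # what an observing server sees

        def access(self, op: str, index: int, value=None):
            result = None
            # Touch every cell regardless of which index is requested, so the
            # observed pattern (0, 1, ..., n-1) is identical for all accesses.
            for i in range(len(self.cells)):
                self.physical_accesses.append(i)
                cur = self.cells[i]              # always read
                if i == index:
                    result = cur
                    if op == "write":
                        cur = value
                self.cells[i] = cur              # always write back
            return result

    oram = LinearScanORAM(n=8)
    oram.access("write", 3, value=42)
    print(oram.access("read", 3))                # -> 42
    # Both operations produced identical physical access patterns of length n.
    print(oram.physical_accesses[:8] == oram.physical_accesses[8:])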

    Cryptology and Network Security

    Third International Conference Security in Communication Networks

    This book contains the papers accepted for publication, after a peer-review process, at the Third International Conference on Security in Communication Networks, held in Amalfi (SA), Italy, on September 12-13, 2002. Topics covered include digital signatures, zero-knowledge proof systems, secret sharing schemes, and cryptanalysis.