
    Evaluation of Earth Dam Leakage Considering the Uncertainty in Soil Hydraulic Parameters

    Analysis of earth dams is generally conducted in three stages: stability, deformability, and water penetration (seepage) analysis. Insufficient attention to leakage, one of the most important issues, leads to erosion and loss of slope stability. The aim of the current paper is to analyze earth dam leakage with respect to the existing uncertainty in the soil hydraulic parameters. In this research, the Monte Carlo (MC) method was used to generate realizations of the soil hydraulic parameters. Using these parameters, leakage through the Alborz earth dam was analyzed with the SEEP/W model, which is based on the finite element method. Depending on the hydraulic conditions of the core soil, the total head, pore water pressure, and water flux in the core region change. The results indicate that the uncertainty in the hydraulic parameters of the Alborz earth dam is significant, so risk is an important consideration for this dam. Application of the proposed methodology to the estimation of leakage from the Alborz earth dam in Mazandaran province demonstrates its efficiency and accuracy in predicting leakage flow in earth dams under possible changes in the soil hydraulic parameters. Moreover, it was found that the quantity of seepage increases considerably when the dam has no core; the core is therefore essential for reducing seepage through the earth dam.
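    The workflow described above can be sketched in a few lines: draw Monte Carlo realizations of the uncertain hydraulic parameters and push each one through a seepage model. The snippet below is a minimal illustration only; the lognormal parameters, the geometry, and the Darcy-type surrogate are assumptions standing in for the SEEP/W finite-element analysis used in the paper.

```python
# Minimal sketch: Monte Carlo propagation of an uncertain hydraulic
# conductivity into a simplified seepage estimate. All numbers are
# illustrative placeholders, not the Alborz dam data.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Hypothetical lognormal distribution for the core's saturated
# hydraulic conductivity k (m/s).
k_core = rng.lognormal(mean=np.log(1e-8), sigma=0.5, size=n_samples)

# Hypothetical geometry: head difference across the core and an
# effective flow-path length, per metre of dam length.
head_diff_m = 30.0      # upstream minus downstream head
path_length_m = 40.0    # effective seepage path through the core
area_m2 = 1.0           # flow area per metre of dam length

# Darcy's law as a crude surrogate for the FE seepage solution:
# q = k * i * A, with hydraulic gradient i = dh / L.
q = k_core * (head_diff_m / path_length_m) * area_m2

print(f"mean seepage per metre of dam: {q.mean():.3e} m^3/s")
print(f"95th percentile:               {np.percentile(q, 95):.3e} m^3/s")
```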

    Methodology for estimating the probability of failure by sliding in concrete gravity dams in the context of risk analysis

    Dam safety assessment based on risk analysis methodologies demands quantification of the risk of the dam-reservoir system. This means that, for a given initial state of the system and for the several failure modes considered, it is necessary to estimate the probability of the load events and the conditional probability of the system's response for a given load event, as well as to estimate the consequences for the environment of the resulting system response. This paper focuses on the second of these probabilities, that is, quantifying the conditional probability of response of the system for a given load event, for the specific case of concrete gravity dams. Dam-reservoir systems exhibit complex behavior, which has traditionally been tackled through simplifications in the formulation of the models and the adoption of safety factors. The purpose of the methodology described in this paper is to improve the estimation of the conditional probability of response of the dam-reservoir system for concrete gravity dams by using complex behavior models based on numerical simulation techniques together with reliability techniques of different levels of precision, including Level 3 reliability techniques based on Monte Carlo simulation. The paper includes an example of application of the proposed methodology to a Spanish concrete gravity dam, considering the failure mode of sliding along the rock-concrete interface. In the context of risk analysis, the results obtained for the conditional probability of failure support several conclusions regarding their validity and safety implications, which acquire significant relevance due to the innovative nature of the study performed.
    Altarejos García, L.; Escuder Bueno, I.; Serrano Lombillo, A. J.; Gómez de Membrillera Ortuño, M. (2012). Methodology for estimating the probability of failure by sliding in concrete gravity dams in the context of risk analysis. Structural Safety, 34(1), 1-13. https://doi.org/10.1016/j.strusafe.2012.01.001
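    As a rough illustration of what a Level 3 reliability calculation for the sliding failure mode looks like, the sketch below estimates a conditional probability of failure by crude Monte Carlo over a simple shear-friction limit state at the rock-concrete interface. All distributions and load values are hypothetical placeholders, not the data of the Spanish case study.

```python
# Minimal sketch: Monte Carlo (Level 3) estimate of the conditional
# probability of sliding along a rock-concrete interface for one fixed
# load event. Parameter values and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Random strength parameters at the interface (hypothetical).
cohesion_kpa = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n)  # kPa
phi_deg = rng.normal(loc=45.0, scale=4.0, size=n)                    # friction angle

# Deterministic loads for the given load event (per metre of dam).
base_width_m = 60.0
weight_kn = 50_000.0          # self-weight of the monolith
uplift_kn = 15_000.0          # uplift resultant
hydro_thrust_kn = 28_000.0    # horizontal hydrostatic thrust

# Shear-friction limit state: g <= 0 means sliding failure.
resisting = (cohesion_kpa * base_width_m
             + (weight_kn - uplift_kn) * np.tan(np.radians(phi_deg)))
g = resisting - hydro_thrust_kn

p_failure = np.mean(g <= 0.0)
print(f"conditional probability of sliding failure: {p_failure:.2e}")
```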

    Stochastic simulation methods for structural reliability under mixed uncertainties

    Uncertainty quantification (UQ) has been widely recognized as one of the most important, yet challenging, tasks in both structural engineering and system engineering, and current research mainly concerns the proper treatment of different types of uncertainties, resulting either from natural randomness or from lack of information, in all related sub-problems of UQ such as uncertainty characterization, uncertainty propagation, sensitivity analysis, model updating, model validation, risk and reliability analysis, etc. It has been widely accepted that these uncertainties can be grouped as either aleatory or epistemic, depending on whether they are reducible. To deal with this challenge, many non-traditional uncertainty characterization models have been developed; these models can be grouped as either imprecise probability models (e.g., the probability-box model, evidence theory, the second-order probability model, and the fuzzy probability model) or non-probabilistic models (e.g., interval/convex models and fuzzy set theory). This thesis concerns the efficient numerical propagation of these three kinds of uncertainty characterization models; for simplicity, the precise probability model, the distributional probability-box model, and the interval model are taken as examples. The target is to develop efficient numerical algorithms for learning the functional behavior of the probabilistic responses (e.g., response moments and failure probability) with respect to the epistemic parameters of the model inputs, which is especially useful for making reliable decisions even when the available information on the model inputs is imperfect. To achieve this target, the thesis presents three main developments for improving Non-intrusive Imprecise Stochastic Simulation (NISS), a general methodological framework for propagating imprecise probability models with only one stochastic simulation. The first development generalizes the NISS methods to problems whose inputs include both imprecise probability models and non-probabilistic models. The algorithm is established by combining Bayes' rule and kernel density estimation, and the sensitivity indices of the epistemic parameters are produced as by-products. The NASA Langley UQ challenge is then successfully solved using the generalized NISS method. The second development injects classical line sampling into the NISS framework so as to substantially improve the efficiency of the algorithm for rare failure event analysis; two strategies, based on different interpretations of line sampling, are developed. The first strategy is based on hyperplane approximations, while the second strategy is derived from one-dimensional integrals. Both strategies can be regarded as post-processing of classical line sampling, and the results show that their resultant NISS estimators have different performance. The third development aims at further improving the efficiency of line sampling and its suitability for highly nonlinear problems, targeting complex structures and systems where one deterministic simulation may take hours. To do this, an active learning strategy based on Gaussian process regression is embedded into the line sampling procedure to accurately estimate the intersection point on each sample line with only a small number of deterministic simulations.
    The above three developments have largely improved the suitability and efficiency of the NISS methods, especially for real-world engineering applications. The efficiency and effectiveness of these developments are clearly illustrated with toy examples and sufficiently demonstrated by real-world test examples in system engineering, civil engineering, and mechanical engineering.
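    The second and third developments rest on classical line sampling, which the sketch below illustrates on a toy two-dimensional limit state in standard normal space: each sample line is searched along an assumed important direction for its crossing of the limit state, and the per-line failure probabilities are averaged. The limit-state function, the important direction, and the sample sizes are illustrative assumptions, not the thesis's NISS or active-learning machinery.

```python
# Minimal sketch of classical line sampling for a rare-event failure
# probability in standard normal space.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def g(u):
    # Toy limit state in 2D standard normal space; g <= 0 is failure.
    return 3.5 - u[0] - 0.1 * u[1] ** 2

alpha = np.array([1.0, 0.0])   # assumed important direction (unit vector)
n_lines = 200

pf_lines = []
for _ in range(n_lines):
    u = rng.standard_normal(2)
    u_perp = u - (u @ alpha) * alpha            # project out the important direction
    # Root search along the line u_perp + c * alpha for the crossing g = 0.
    c_star = brentq(lambda c: g(u_perp + c * alpha), -10.0, 10.0)
    pf_lines.append(norm.cdf(-c_star))          # per-line failure probability

print(f"line sampling estimate of P_f: {np.mean(pf_lines):.3e}")
```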

    Specific heat anomaly in a supercooled liquid with amorphous boundary conditions

    We study the specific heat of a model supercooled liquid confined in a spherical cavity with amorphous boundary conditions. We find that the equilibrium specific heat has a cavity-size-dependent peak as a function of temperature. The cavity allows us to perform a finite-size scaling (FSS) analysis, which indicates that the peak persists at a finite temperature in the thermodynamic limit. We attempt to collapse the data onto an FSS curve according to different theoretical scenarios, obtaining reasonable results in two cases: a "not-so-simple" liquid with nonstandard values of the exponents α and ν, and random first-order theory, with two different length scales.
    Comment: Includes Supplemental Material.
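    The FSS collapse mentioned in the abstract amounts to rescaling the specific-heat curves measured in cavities of different size so that they fall onto a single master curve. The sketch below shows the mechanics of such a collapse on synthetic data; the functional form, the critical temperature, and the exponents α and ν are placeholders, not the paper's fitted values.

```python
# Minimal sketch of a finite-size-scaling collapse of specific-heat data
# from cavities of different radius R. All numbers are synthetic.
import numpy as np

Tc, alpha, nu = 0.45, 0.4, 0.7     # assumed critical point and exponents

def synthetic_specific_heat(T, R):
    # Fake data obeying C ~ R^(alpha/nu) * f((T - Tc) * R^(1/nu)).
    x = (T - Tc) * R ** (1.0 / nu)
    return R ** (alpha / nu) * np.exp(-x ** 2)

T = np.linspace(0.3, 0.6, 61)
for R in (4.0, 8.0, 16.0):
    C = synthetic_specific_heat(T, R)
    # Scaling variables: if the exponents are right, these curves collapse.
    x = (T - Tc) * R ** (1.0 / nu)
    y = C / R ** (alpha / nu)
    print(f"R = {R:4.1f}: max of collapsed curve = {y.max():.3f}")
```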

    Truncating Trajectories in Monte Carlo Reinforcement Learning

    In Reinforcement Learning (RL), an agent acts in an unknown environment to maximize the expected cumulative discounted sum of an external reward signal, i.e., the expected return. In practice, in many tasks of interest, such as policy optimization, the agent usually spends its interaction budget by collecting episodes of fixed length within a simulator (i.e., Monte Carlo simulation). However, given the discounted nature of the RL objective, this data collection strategy might not be the best option. Indeed, the rewards collected in early simulation steps weigh exponentially more than future rewards. Taking a cue from this intuition, in this paper we design an a priori budget allocation strategy that leads to the collection of trajectories of different lengths, i.e., truncated trajectories. The proposed approach provably minimizes the width of the confidence intervals around the empirical estimates of the expected return of a policy. After discussing the theoretical properties of our method, we use our trajectory truncation mechanism to extend the Policy Optimization via Importance Sampling (POIS; Metelli et al., 2018) algorithm. Finally, we conduct a numerical comparison between our algorithm and POIS: the results are consistent with our theory and show that an appropriate truncation of the trajectories can succeed in improving performance.
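    The intuition that early, heavily weighted rewards should dominate the interaction budget can be illustrated with a toy estimator: for a fixed total number of environment steps, shorter (truncated) episodes yield more independent return samples and hence narrower confidence intervals, while the discount keeps the truncation bias small. The environment, budget, and horizons below are illustrative assumptions, not the paper's allocation rule.

```python
# Minimal sketch: fixed interaction budget, episodes truncated at different
# horizons. Shorter episodes give more samples (smaller standard error)
# while the discount keeps the truncation bias small.
import numpy as np

rng = np.random.default_rng(7)
gamma = 0.9
budget = 10_000              # total environment steps available

def rollout(horizon):
    # Toy stochastic rewards; stands in for a simulator plus a fixed policy.
    rewards = rng.normal(loc=1.0, scale=1.0, size=horizon)
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def estimate_return(horizon):
    n_episodes = budget // horizon
    returns = [rollout(horizon) for _ in range(n_episodes)]
    return np.mean(returns), np.std(returns) / np.sqrt(n_episodes)

for horizon in (1000, 200, 50):
    mean, stderr = estimate_return(horizon)
    print(f"horizon {horizon:4d}: estimate = {mean:6.2f} +/- {stderr:.2f}")
```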

    Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as ‘deterministic components’ or ‘trends’ even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly because of the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on applying null hypothesis significance tests (NHSTs) for slowly varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, minima, etc.). However, the reliability of this application has seldom been explored in detail. This paper discusses the misuse, misinterpretation, and logical flaws of NHSTs for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of the NHST rationale and of basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even though the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling. We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of record. Moreover, even though adjustment procedures accounting for correlation have been developed, some are insufficient or applied only to some tests, while others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can change dramatically if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
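    The abstract's warning about underestimated correlation structures can be made concrete with a small experiment: applying the classical Mann-Kendall test to trend-free but autocorrelated AR(1) series flags far more than the nominal 5% of them as "trending". The series length, AR(1) coefficient, and number of replicates below are illustrative assumptions, not the paper's CONUS data.

```python
# Minimal sketch: classical Mann-Kendall test applied to trend-free AR(1)
# series, showing how ignored autocorrelation inflates rejection rates.
import numpy as np
from scipy.stats import norm

def mann_kendall_z(x):
    # S statistic and its no-ties variance, with continuity correction.
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

rng = np.random.default_rng(3)
n_series, n_years, rho = 500, 70, 0.4
rejections = 0
for _ in range(n_series):
    e = rng.standard_normal(n_years)
    x = np.empty(n_years)
    x[0] = e[0]
    for t in range(1, n_years):
        x[t] = rho * x[t - 1] + e[t]            # AR(1) with zero trend
    z = mann_kendall_z(x)
    if 2 * (1 - norm.cdf(abs(z))) < 0.05:       # two-sided test at 5%
        rejections += 1

print(f"fraction of trend-free AR(1) series flagged as trending: {rejections / n_series:.2f}")
```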
