How to effectively compute the reliability of a thermal-hydraulic nuclear passive system
The reliability of a thermal-hydraulic (T-H) passive system of a nuclear power plant can be computed by (i) Monte Carlo (MC) sampling of the uncertainties in the system model and parameters, (ii) computing, for each sample, the system response with a mechanistic T-H code, and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. The objective of this work is to provide operative guidelines for effectively handling the computation of the reliability of a nuclear passive system. Two directions of computational efficiency are considered: on one side, efficient Monte Carlo Simulation (MCS) techniques are indicated as a means of performing robust estimations with a limited number of samples; in particular, the Subset Simulation (SS) and Line Sampling (LS) methods are identified as most valuable. On the other side, fast-running surrogate regression models (also called response surfaces or meta-models) are indicated as a valid replacement for the long-running T-H codes; in particular, the use of bootstrapped Artificial Neural Networks (ANNs) is shown to have interesting potential, including for uncertainty propagation. The recommendations drawn are supported by the results obtained in an illustrative application from the literature.
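As a rough illustration of the three steps above, here is a minimal Monte Carlo sketch in Python; the `th_model` function and the safety threshold are hypothetical stand-ins for a mechanistic T-H code and a real safety limit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a mechanistic T-H code: maps two uncertain
# inputs to a scalar system response (e.g., a peak temperature).
def th_model(x):
    return 1000.0 + 80.0 * x[..., 0] + 50.0 * x[..., 1] ** 2

SAFETY_THRESHOLD = 1200.0   # illustrative failure limit on the response
N = 10_000

# (i) Sample the uncertainties of the model inputs.
samples = rng.normal(loc=0.0, scale=1.0, size=(N, 2))

# (ii) Compute the system response for each sample.
responses = th_model(samples)

# (iii) Compare with the safety threshold to estimate the failure probability.
failures = responses > SAFETY_THRESHOLD
p_failure = failures.mean()

# Standard error of the estimator: small probabilities need many
# (expensive) code runs, which motivates SS, LS and surrogate models.
std_err = np.sqrt(p_failure * (1.0 - p_failure) / N)
```

The standard-error line makes the motivation for variance-reduction techniques concrete: halving the error requires four times as many code runs.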
INTEGRATED DETERMINISTIC AND PROBABILISTIC SAFETY ANALYSIS: CONCEPTS, CHALLENGES, RESEARCH DIRECTIONS
Integrated deterministic and probabilistic safety analysis (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, such as nuclear, aerospace and process systems, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, and the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these challenges and discuss the related implications in terms of research perspectives.
Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks
The functional failure probability of a thermal-hydraulic (T-H) passive system can be estimated by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response with a mechanistic T-H code for each sample. The computational effort associated with this approach can be prohibitive because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running original T-H code with a fast-running empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used to perform, in an acceptable computational time, the numerous system response calculations needed for accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is built here to quantify, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices.
The bootstrapped ANN regression model, built in a short computational time on few (e.g., 100) data examples, is shown to be capable of providing point estimates that are both reliable (very near the true values of the quantities of interest) and robust (the confidence intervals are satisfactorily narrow around the true values of the quantities of interest).
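As a sketch of the bootstrap idea, the following Python code trains a small ensemble of one-hidden-layer networks on bootstrap resamples of a limited data set and uses the ensemble spread as a confidence interval. The 1-D `th_code` function is a hypothetical stand-in for the long-running T-H code, and the hand-rolled network is an illustrative simplification of a real ANN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D stand-in for the long-running T-H code.
def th_code(x):
    return np.sin(2.0 * x) + 0.5 * x

# Limited-size training set (e.g., 100 code runs).
X = rng.uniform(-2.0, 2.0, size=100)
y = th_code(X)

def train_ann(x, t, hidden=16, epochs=2000, lr=0.05, seed=0):
    """Train a tiny one-hidden-layer tanh network by full-batch
    gradient descent on the mean squared error; returns a predictor."""
    r = np.random.default_rng(seed)
    w1 = r.normal(0.0, 1.0, size=hidden); b1 = np.zeros(hidden)
    w2 = r.normal(0.0, 0.1, size=hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(np.outer(x, w1) + b1)             # (n, hidden)
        err = h @ w2 + b2 - t                         # (n,)
        gw2 = h.T @ err / len(x); gb2 = err.mean()
        gh = np.outer(err, w2) * (1.0 - h ** 2)       # back-prop to layer 1
        gw1 = (gh * x[:, None]).mean(axis=0); gb1 = gh.mean(axis=0)
        w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2
    return lambda xq: np.tanh(np.outer(xq, w1) + b1) @ w2 + b2

# Bootstrap ensemble: resample the data with replacement, retrain.
B = 10
ensemble = []
for b in range(B):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(train_ann(X[idx], y[idx], seed=b))

# The spread across the ensemble quantifies the (model) uncertainty
# introduced by the empirical approximation of the system response.
X_new = np.linspace(-2.0, 2.0, 50)
preds = np.stack([f(X_new) for f in ensemble])        # (B, 50)
mean = preds.mean(axis=0)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)    # 95% bootstrap CI
```

In practice, any quantity estimated from the surrogate (e.g., a failure probability) can be recomputed once per bootstrap replicate, and its percentile interval reported alongside the point estimate.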
Nuclear Power - System Simulations and Operation
At the onset of the 21st century, we are searching for reliable and sustainable energy sources with the potential to support growing economies developing at accelerated rates, with technology advances improving quality of life and becoming available to larger and larger populations. The quest for robust, sustainable energy supplies meeting these constraints leads us to nuclear power technology. Today's nuclear reactors are safe and highly efficient energy systems that offer electricity and a multitude of co-generation energy products ranging from potable water to heat for industrial applications. The catastrophic earthquake and tsunami events in Japan resulted in a nuclear accident that forced us to rethink our approach to nuclear safety and requirements, and fostered growing interest in designs that can withstand natural disasters and avoid catastrophic consequences. This book is one in a series of books on nuclear power published by InTech. It consists of ten chapters on system simulations and operational aspects. The book does not aim at complete or broad coverage; instead, the included chapters shine light on existing challenges, solutions and approaches. The authors hope to share ideas and findings so that new ideas and directions can be developed focusing on the operational characteristics of nuclear power plants. The consistent thread throughout all chapters is the "system-thinking" approach, synthesizing the information and ideas provided. The book targets everyone with an interest in system simulations and nuclear power operational aspects: students, researchers and practitioners.
Approaching Dynamic PSA within CANDU 6 NPP
This dissertation outline presents the applications that are the subject of the work and lays out its content.
Chapter 1 reviews the main concepts of conventional PSA, gives a short history of Dynamic PSA (DPSA) and presents a non-exhaustive DPSA state of the art, with recent and future developments.
Chapter 2 presents the first application of the thesis, which introduces the Integrated Dynamic Decision Analysis (IDDA) code, the main tool used in approaching Dynamic PSA.
Starting from a description that reflects the level of knowledge about the system, the IDDA code is able to develop all the scenarios of events compatible with the description received, both in terms of logical construction and of probabilistic coherence. By describing the system configuration and operation in a logically consistent manner, all the information is worked out by the code and made available to the analyst as results in terms of system unavailability, minimal cut sets and the associated uncertainty. The code also allows the association of different consequences that could be of interest to the analyst. The consequences can be of any type, such as economic cost or equipment outage time; for instance, an outage time can be assigned to certain components of the system and the "expected risk" then calculated. The association of consequences provides the inputs for a sound decision-making process.
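The association of consequences described above amounts to a probability-weighted sum over event sequences; a minimal sketch, with entirely hypothetical sequence probabilities and outage times:

```python
# Hypothetical event sequences from a logic-probabilistic model:
# (probability of the sequence per demand, outage time in hours).
scenarios = [
    (1.0e-3,  48.0),   # single train unavailable, quickly repaired
    (5.0e-4, 120.0),   # train unavailable with delayed recovery
    (2.0e-5, 720.0),   # extended outage of the whole system
]

# The "expected risk" is the probability-weighted sum of consequences.
expected_risk = sum(p * c for p, c in scenarios)  # expected outage hours
```

Any consequence measure (cost, dose, downtime) can be substituted for the outage time without changing the structure of the calculation.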
Chapter 3 presents the core applications of the present work. Their purpose is the coupling between the logic-probabilistic model of the system or plant and the associated phenomenology of the primary heat transport system of a generic CANDU 6 NPP.
The first application is the coupling between the logic-probabilistic model of the Emergency Water Supply (EWS) system and the associated phenomenology of the primary heat transport system of a CANDU 6 NPP. The plant transient considered is the total Loss of Main Feed-water, with or without the coincident failure of the EWS system.
The second application considers a CANDU 6 Station Blackout, i.e., the loss of all AC power sources on the site, as the plant transient condition. The transient scenarios consider the possibility of recovering the offsite grid and the use of mobile diesel generators to mitigate the accident consequences. The purpose is to challenge the plant design and response and to check whether severe accident conditions are reached. The plant response is challenged over short and long periods of time.
The IDDA code allows interfacing the logic-probabilistic model of the system with the plant response in time, i.e., with the evolution in time of the plant process variables. This makes it possible to derive sequences of possible events related by cause-consequence reasoning, each one giving rise to a scenario with its own development and consequences. This yields knowledge not only of which sequences of events take place, but also of the real environment in which they take place.
Associating the system sequences that lead to system unavailability on demand with the resulting phenomenology proves to be a useful tool for the decision-making process, both in the design phase and over the entire power plant lifetime.
Chapter 4 presents future applications that could be developed with the present Dynamic PSA approach. A particular application could be the optimization or development of robust plant emergency operating procedures. It consists of coupling the logic-probabilistic model of the plant configurations corresponding to the Emergency Operating Procedures (EOPs) with the associated phenomenology of the primary heat transport systems, taking the plant safety systems into account.
The application could highlight those situations where the plant fails either because of hardware failures or because of system dynamics, and furthermore reveal those situations where a change of hardware state drives the process variables out of the admissible system domain.
A timeline should be created for the process variables characterizing the plant state; it should reveal the time windows that operators have at their disposal for intervention in order to avoid potentially catastrophic conditions. Weak points in the EOPs could then be identified and resolutions provided for their improvement, on the basis of sensitivity analyses.
Chapter 5 presents the conclusions and insights of the work and outlines possible improvements to the proposed methodology.
A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters
This paper proposes a hierarchical, multi-resolution framework for the
identification of model parameters and their spatial variability from noisy
measurements of the response or output. Such parameters are frequently
encountered in PDE-based models and correspond to quantities such as density or
pressure fields, elasto-plastic moduli and internal variables in solid
mechanics, conductivity fields in heat diffusion problems, permeability fields
in fluid flow through porous media etc. The proposed model has all the
advantages of traditional Bayesian formulations such as the ability to produce
measures of confidence for the inferences made and providing not only
predictive estimates but also quantitative measures of the predictive
uncertainty. In contrast to existing approaches it utilizes a parsimonious,
non-parametric formulation that favors sparse representations and whose
complexity can be determined from the data. The proposed framework is
non-intrusive and makes use of a sequence of forward solvers operating at
various resolutions. As a result, inexpensive, coarse solvers are used to
identify the most salient features of the unknown field(s) which are
subsequently enriched by invoking solvers operating at finer resolutions. This
leads to significant computational savings particularly in problems involving
computationally demanding forward models, but also improvements in accuracy. It
relies on a novel, adaptive scheme based on Sequential Monte Carlo sampling,
which is embarrassingly parallelizable and circumvents the slow-mixing issues
encountered in Markov Chain Monte Carlo schemes.
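A minimal tempered Sequential Monte Carlo sketch on a toy 1-D inference problem may help fix ideas; the jitter move is a crude stand-in for a proper MCMC rejuvenation kernel, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D problem standing in for the spatially-varying field:
# infer a parameter theta from noisy observations y = theta + noise.
y_obs = np.array([1.9, 2.1, 2.0])
sigma = 0.5

def log_likelihood(theta):
    # theta: (N,) particle values; broadcast against the observations.
    resid = y_obs[None, :] - theta[:, None]
    return -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2

N = 2000
particles = rng.normal(0.0, 3.0, size=N)   # draws from the prior
log_w = np.zeros(N)

# Anneal from prior (beta=0) to posterior (beta=1); each increment
# reweights, resamples and perturbs the particles -- every step is
# embarrassingly parallel across particles.
betas = np.linspace(0.0, 1.0, 6)
for b_prev, b_next in zip(betas[:-1], betas[1:]):
    log_w += (b_next - b_prev) * log_likelihood(particles)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
    particles = particles[idx] + rng.normal(0.0, 0.1, size=N)  # jitter move
    log_w = np.zeros(N)                              # weights reset by resampling

posterior_mean = particles.mean()
```

In the multi-resolution setting of the paper, the tempering ladder would additionally move through forward solvers of increasing resolution, so that the cheap coarse solvers do most of the work of locating the posterior mass.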
Evaluating enhanced hydrological representations in Noah LSM over transition zones: an ensemble-based approach to model diagnostics
This work introduces diagnostic methods for land surface model (LSM) evaluation that enable developers to identify structural shortcomings in model parameterizations by evaluating model 'signatures' (characteristic temporal and spatial patterns of behavior) in feature, cost-function, and parameter spaces. The ensemble-based methods allow researchers to draw conclusions about hypotheses and model realism that are independent of parameter choice. I compare the performance and physical realism of three versions of Noah LSM (a benchmark standard version [STD], a dynamic-vegetation enhanced version [DV], and a groundwater-enabled one [GW]) in simulating high-frequency near-surface states and land-to-atmosphere fluxes in situ and over a catchment at high resolution in the U.S. Southern Great Plains, a transition zone between humid and arid climates. Only at more humid sites do the more conceptually realistic, hydrologically enhanced LSMs (DV and GW) ameliorate biases in the estimation of root-zone moisture change and evaporative fraction. Although the improved simulations support the hypothesis that groundwater and vegetation processes shape fluxes in transition zones, further assessment of the timing and partitioning of the energy and water cycles indicates that improvements to the movement of water within the soil column are needed. Distributed STD and GW underestimate the contribution of baseflow and simulate too-flashy streamflow. This work challenges common practices and assumptions in LSM development and offers researchers more stringent model evaluation methods. I show that, because of equifinality, ad-hoc evaluation using single parameter sets provides insufficient information for choosing among competing parameterizations, for addressing hypotheses under uncertainty, or for guiding model development. Posterior distributions of physically meaningful parameters differ between models and sites, and relationships between the parameters themselves change.
'Plug and play' of modules and partial calibration likely introduce error and should be re-examined. Even though LSMs are 'physically based,' model parameters are effective and scale-, site- and model-dependent. Parameters are not functions of soil or vegetation type alone: they likely depend in part on climate and cannot be assumed to be transferable between sites with similar physical characteristics. By helping bridge the gap between model identification and model development, this research contributes to the continued improvement of our understanding and modeling of environmental processes.
The maturing concept of estimating project cost contingency: A review
Contingency is a ubiquitous component of project cost estimating. This paper provides a review of the literature pertaining to the estimation of project cost contingency. It describes the flaws of the traditional percentage method for estimating project cost contingency and sets out more robust estimation methods: regression analysis, Monte Carlo simulation and artificial neural networks. In particular, the application of regression analysis to predicting project cost contingency is reviewed in detail as a prelude to the author's research into the development and testing of a regression model for forecasting project cost contingency for engineering construction projects.
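The Monte Carlo alternative to a flat percentage can be sketched in a few lines; the work packages and their uncertainties below are hypothetical, and the contingency is read off a chosen percentile (here P80) of the simulated total cost.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical work packages: (base estimate, uncertainty as std. dev.),
# here modeled with independent normal cost distributions.
packages = [(1_000_000, 100_000),
            (  450_000,  90_000),
            (  250_000,  60_000)]

N = 50_000
total = np.zeros(N)
for base, sd in packages:
    total += rng.normal(base, sd, size=N)

base_estimate = sum(base for base, _ in packages)

# Contingency set so the budget covers 80% of simulated outcomes (P80),
# rather than a traditional flat percentage of the base estimate.
p80 = np.percentile(total, 80)
contingency = p80 - base_estimate
```

Correlations between packages, skewed distributions and discrete risk events can all be added to the simulation, which is precisely what the percentage method cannot capture.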