72 research outputs found

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Generalized asset integrity games

Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries. This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures that admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset increases monotonically with this integrity score, which represents the proximity to ideal conditions. The present work considers the situation between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score. Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step towards domain-oblivious architectures for complex asset governance and anytime planning.
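The thesis does not reproduce code here, but the integrity-score idea rests on the standard Cilibrasi-Vitanyi formula NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is a compressed length. Below is a minimal sketch using zlib; the byte-string serialization of the graph models and the mapping score = 1 - NCD are illustrative assumptions, not details taken from the thesis.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length in bytes."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def integrity_score(live_model: bytes, reference_model: bytes) -> float:
    # Hypothetical mapping: NCD is a distance in roughly [0, 1], so
    # proximity to the reference configuration is taken as 1 - NCD.
    # The thesis only states that utility grows monotonically with it.
    return 1.0 - ncd(live_model, reference_model)

# Usage: serialize live and reference graph models (e.g., edge lists).
live = b"a-b;b-c;c-d;d-a"
ref = b"a-b;b-c;c-d;d-a;a-c"
print(f"integrity = {integrity_score(live, ref):.3f}")
```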

    Failure Probability Estimation and Detection of Failure Surfaces via Adaptive Sequential Decomposition of the Design Domain

We propose an algorithm for the optimal adaptive selection of points from the design domain of input random variables that are needed for an accurate estimation of failure probability and the determination of the boundary between safe and failure domains. The method is particularly useful when each evaluation of the performance function g(x) is very expensive and the function can be characterized as highly nonlinear, noisy, or even discrete-state (e.g., binary). In such cases, only a limited number of calls is feasible, and gradients of g(x) cannot be used. The input design domain is progressively segmented by an expanding, adaptively refined, mesh-like lock-free geometrical structure. The proposed triangulation-based approach effectively combines the features of simulation and approximation methods. The algorithm performs two independent tasks: (i) the estimation of probabilities through an ingenious combination of deterministic cubature rules and the application of the divergence theorem, and (ii) the sequential extension of the experimental design with new points. The sequential selection of points from the design domain for future evaluation of g(x) is carried out through a new learning function, which maximizes the instantaneous information gain in terms of the probability classification of the local region. The extension may be halted at any time, e.g., when sufficiently accurate estimates are obtained. Owing to the exact geometric representation of the input domain, the algorithm is most effective for problems of low dimension, not exceeding eight. The method can handle random vectors with correlated non-Gaussian marginals. The estimation accuracy can be improved by employing a smooth surrogate model. Finally, we define new factors of global sensitivity to failure based on the entire failure surface weighted by the density of the input random vector. (42 pages, 24 figures)
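The quantity being estimated is the failure probability p_f = P[g(X) <= 0], the integral of the input density over the failure domain. The sketch below is a plain Monte Carlo baseline for that quantity, not the paper's adaptive triangulation scheme; the toy performance function g and the convention that g(x) <= 0 denotes failure are illustrative assumptions. It shows why the paper's method matters: the baseline needs a million calls to g, which is infeasible when g is an expensive simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x: np.ndarray) -> np.ndarray:
    # Hypothetical performance function on a 2-D standard-normal input;
    # failure is conventionally g(x) <= 0.
    return 3.0 - x[:, 0] - x[:, 1]

# Crude Monte Carlo estimate of p_f = P[g(X) <= 0]. This spends a huge
# number of g-calls, which is exactly what the adaptive method avoids.
n = 1_000_000
x = rng.standard_normal((n, 2))
fail = (g(x) <= 0.0).astype(float)
p_f = fail.mean()
se = fail.std(ddof=1) / np.sqrt(n)  # standard error of the estimate
print(f"p_f ~ {p_f:.3e} +/- {se:.1e}")
```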

    Measuring farmers’ risk and uncertainty attitudes: an interval prospect experiment

Attitudes to risk have generated a lot of attention over the years due to their vital importance in the decision-making processes that are necessary for life and livelihoods. Attitudes towards uncertainty have received less attention, even though arguably the most important decisions are made under uncertainty rather than risk. In addition, many studies modelling attitudes to risk have adopted experiments that place a significant cognitive burden on respondents. Crucially, they are also framed in ways that do not reflect everyday problems. Specifically, the most common way of eliciting attitudes is to ask decision makers to choose between discrete monetary lotteries with known probabilities attached to the payoffs. Yet, arguably, the vast majority of choices that people make in their day-to-day lives concern continuous non-monetary outcomes.

To address these gaps, this thesis investigates responses to continuous 'prospects' across different conditions (risk and uncertainty), contexts (monetary and time) and content domains (gain, loss and mixed). Further, this thesis examines the link between attitudes to risk/uncertainty and mental-health-related factors, and the effect of attitudes to risk and uncertainty on farmers' decisions, both for themselves and for others. The thesis uses both non-parametric methods, relating to the patterns that characterise participants' choices and their determinants, and parametric models based upon cumulative prospect theory (CPT) as it extends to continuous prospects. The data were gathered using lab-in-field experiments in which Nigerian farmers chose between pairs of prospects with continuous distributions, which were not exclusively monetary in nature. Attitudes towards risk, as opposed to uncertainty, were elicited by specifying that all outcomes over the specified interval were 'equally likely' (thus a uniform probability density). Uncertainty was specified by indicating to farmers that one outcome within the specified interval would be realised, but without specifying an associated probability density.

Key findings are that attitudes differ across conditions, contexts and content domains. Using continuous prospects, respondents did not treat equally likely outcomes as 'equally likely' and appeared to warp the cumulative probability distribution in a manner consistent with CPT. However, there were behaviours that are difficult to reconcile with CPT: for example, the preferences of many respondents could only be modelled using "extreme curvature" of the value function. This was induced by what we term negligible gain avoidance (i.e., avoiding prospects with a zero lower bound in the gain domain) and negligible loss seeking (i.e., preferring prospects with a zero upper bound in the loss domain). Neither CPT, salience theory, heuristics, nor the other theories examined in this study could alone explain these behaviours. Results from investigating the effect of bipolar disorder tendencies (BD) on risk attitudes show that BD significantly affects the shape of the value and probability weighting functions, and that farmers with BD are more likely to make random choices. Other results show that risk aversion for losses increases participation in off-farm income-generating activities, and that farmers' likelihood of engaging in specific types of off-farm activities is determined by their risk and uncertainty attitudes.
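For readers unfamiliar with the parametric side, the sketch below evaluates a continuous uniform-interval prospect under the standard CPT functional forms (Tversky and Kahneman, 1992) by discretizing the interval and applying rank-dependent decision weights. The parameter values are the conventional published estimates, not those fitted in the thesis, and the discretization is an assumption standing in for the thesis's continuous extension.

```python
import numpy as np

# Standard CPT parameters (Tversky & Kahneman, 1992).
ALPHA, BETA = 0.88, 0.88   # value-function curvature (gains / losses)
LAMBDA = 2.25              # loss aversion
GAMMA, DELTA = 0.61, 0.69  # probability weighting (gains / losses)

def value(x):
    """Power value function with loss aversion."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = x[pos] ** ALPHA
    out[~pos] = -LAMBDA * (-x[~pos]) ** BETA
    return out

def w(p, c):
    """Inverse-S probability weighting of cumulative probability p."""
    return p ** c / (p ** c + (1 - p) ** c) ** (1 / c)

def cpt_uniform(lo, hi, n=1000):
    """CPT value of a prospect uniform on [lo, hi], via discretization.
    Gains get weights from the weighted decumulative distribution,
    losses from the weighted cumulative distribution."""
    xs = np.linspace(lo, hi, n)
    p = np.full(n, 1.0 / n)
    cum = np.cumsum(p)           # P[X <= x_i]
    dec = 1.0 - cum + p          # P[X >= x_i]
    pi = np.where(
        xs >= 0,
        w(dec, GAMMA) - w(dec - p, GAMMA),   # gains
        w(cum, DELTA) - w(cum - p, DELTA),   # losses
    )
    return float(np.sum(pi * value(xs)))

# Usage: a mixed prospect uniform on [-50, 100] vs. a sure 20.
print(f"CPT(U[-50,100]) = {cpt_uniform(-50, 100):.2f}")
print(f"CPT(sure 20)    = {cpt_uniform(20, 20, n=1):.2f}")
```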

    Proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering

These are the online proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering (WITMSE), which was held in the Trippenhuis, Amsterdam, in August 2012.

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 1: Theory

In nuclear reactor system design and safety analysis, the Best Estimate plus Uncertainty (BEPU) methodology requires that computer model output uncertainties be quantified in order to prove that the investigated design stays within acceptance criteria. "Expert opinion" and "user self-evaluation" have been widely used to specify computer model input uncertainties in previous uncertainty, sensitivity and validation studies. Inverse Uncertainty Quantification (UQ) is the process of inversely quantifying input uncertainties from experimental data, so that such ad hoc specifications of the input uncertainty information can be replaced by more precise ones. In this paper, we use Bayesian analysis to establish the inverse UQ formulation, with systematic and rigorously derived metamodels constructed by Gaussian Process (GP). Due to incomplete or inaccurate underlying physics, as well as numerical approximation errors, computer models always have discrepancy/bias in representing reality, which can cause over-fitting if neglected in the inverse UQ process. The model discrepancy term is accounted for in our formulation through the "model updating equation". We provide a detailed introduction to, and comparison of, the full and modular Bayesian approaches for inverse UQ, and point out their limitations when extrapolated to the validation/prediction domain. Finally, we propose an improved modular Bayesian approach that avoids extrapolating the model discrepancy learnt from the inverse UQ domain to the validation/prediction domain. (27 pages, 10 figures)
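The "model updating equation" in this line of work takes the Kennedy-O'Hagan form y_exp(x) = y_m(x, theta) + delta(x) + eps, where theta are the calibration inputs, delta(x) is the model discrepancy, and eps is measurement noise. The sketch below is a minimal illustration of calibrating theta by random-walk Metropolis; the toy model, the flat prior, and the crude lumping of delta(x) into an inflated noise variance are assumptions for brevity, whereas the paper models both the code output and delta(x) with Gaussian Processes.

```python
import numpy as np

rng = np.random.default_rng(1)

def y_model(x, theta):
    # Hypothetical cheap stand-in for the reactor code / GP metamodel.
    return theta * np.sin(x)

# Synthetic "experiments": true theta = 1.3 plus a small discrepancy.
x_obs = np.linspace(0.2, 3.0, 15)
sigma_eps = 0.05
y_obs = 1.3 * np.sin(x_obs) + 0.1 * x_obs + rng.normal(0, sigma_eps, 15)

def log_post(theta, sigma_delta=0.15):
    """Log-posterior up to a constant: flat prior on (0, 5), Gaussian
    likelihood with delta(x) crudely absorbed into the variance."""
    if not (0.0 < theta < 5.0):
        return -np.inf
    var = sigma_eps ** 2 + sigma_delta ** 2
    r = y_obs - y_model(x_obs, theta)
    return -0.5 * np.sum(r ** 2) / var

# Random-walk Metropolis over the single calibration parameter theta.
theta, samples = 1.0, []
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5_000:])               # discard burn-in
print(f"theta posterior: {post.mean():.3f} +/- {post.std():.3f}")
```

Note that ignoring the 0.1*x discrepancy entirely (sigma_delta = 0) would over-tighten the posterior around a biased theta, which is the over-fitting risk the abstract describes.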