
    Using graphical models and multi-attribute utility theory for probabilistic uncertainty handling in large systems, with application to nuclear emergency management

    Although many decision-making problems involve uncertainty, uncertainty handling within large decision support systems (DSSs) is challenging. One domain where uncertainty handling is critical is emergency response management, in particular nuclear emergency response, where decision making takes place in an uncertain, dynamically changing environment. Assimilation and analysis of data can help to reduce these uncertainties, but it is critical to do this in an efficient and defensible way. After briefly introducing the structure of a typical DSS for nuclear emergencies, the paper sets up a theoretical structure that enables a formal Bayesian decision analysis to be performed for environments like this within a DSS architecture. In such probabilistic DSSs, many of the input conditional probability distributions are provided by different sets of experts overseeing different aspects of the emergency. These probabilities are then used by the decision maker (DM) to find her optimal decision. We demonstrate in this paper that, unless due care is taken in such a composite framework, coherence and rationality may be compromised in a sense made explicit below. The technology we describe here builds a framework around which Bayesian data updating can be performed in a modular way, ensuring both coherence and efficiency, and provides sufficient unambiguous information to enable the DM to discover her expected-utility-maximizing policy.
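
    As a rough illustration of the decision step this abstract describes, the sketch below (not the authors' software; the actions, outcomes, probabilities and utilities are invented for illustration) chains two expert-supplied conditional probability tables and selects the action that maximises expected utility.

```python
# Minimal sketch of the expected-utility step in a modular probabilistic DSS:
# expert panels supply conditional probabilities, the DM supplies a utility,
# and the recommended policy maximises expected utility.
# All names and numbers are illustrative assumptions, not the paper's model.
import itertools

# Expert-supplied conditional probabilities P(release | action) and
# P(exposure | release) for a toy two-stage model.
p_release = {"evacuate": {"high": 0.1, "low": 0.9},
             "shelter":  {"high": 0.3, "low": 0.7}}
p_exposure = {"high": {"severe": 0.6, "mild": 0.4},
              "low":  {"severe": 0.1, "mild": 0.9}}

# DM's multi-attribute utility, here collapsed to a single number per outcome.
utility = {("evacuate", "severe"): 0.2, ("evacuate", "mild"): 0.7,
           ("shelter", "severe"): 0.0, ("shelter", "mild"): 1.0}

def expected_utility(action):
    """Marginalise over the chained expert models to score one action."""
    return sum(p_release[action][r] * p_exposure[r][e] * utility[(action, e)]
               for r, e in itertools.product(("high", "low"), ("severe", "mild")))

best = max(p_release, key=expected_utility)
print({a: round(expected_utility(a), 3) for a in p_release}, "->", best)
```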

    Connecting two theories of imprecise probability


    Approximate Models and Robust Decisions

    Decisions based partly or solely on predictions from probabilistic models may be sensitive to model misspecification. Statisticians are taught from an early stage that "all models are wrong", but little formal guidance exists on how to assess the impact of model approximation on decision making, or how to proceed when optimal actions appear sensitive to model fidelity. This article presents an overview of recent developments across different disciplines to address this. We review diagnostic techniques, including graphical approaches and summary statistics, to help highlight decisions made through minimised expected loss that are sensitive to model misspecification. We then consider formal methods for decision making under model misspecification, quantifying the stability of optimal actions under perturbations of the model within a neighbourhood of model space. This neighbourhood is defined in one of two ways: first, in a strong sense, via an information (Kullback-Leibler) divergence ball around the approximating model; or, alternatively, via a nonparametric model extension, again centred at the approximating model, used to 'average out' over possible misspecifications. This is presented in the context of recent work in the robust control, macroeconomics and financial mathematics literature. We adopt a Bayesian approach throughout, although the methods are agnostic to this position.
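
    The Kullback-Leibler neighbourhood idea can be illustrated numerically. The sketch below uses an assumed toy setup (the loss function, scenario model and radius values are not from the article): it evaluates the worst-case expected loss over { q : KL(q || p) <= eps } via the standard dual representation sup_q E_q[loss] = min_{t>0} t*eps + t*log E_p[exp(loss/t)], and checks whether the minimising action moves as the neighbourhood grows.

```python
# Toy sketch of KL-neighbourhood robustness (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

rng = np.random.default_rng(0)
scenarios = rng.normal(0.0, 1.0, size=10_000)   # draws from approximating model p
actions = np.linspace(-1.0, 1.0, 21)            # candidate actions

def loss(a, x):
    return (a - x) ** 2 + 0.1 * abs(a)          # illustrative loss function

def robust_loss(a, eps):
    """Worst-case expected loss over the KL ball of radius eps around p."""
    l = loss(a, scenarios)
    if eps == 0.0:
        return l.mean()
    # Dual objective: t*eps + t*log E_p[exp(l/t)], minimised over t > 0.
    dual = lambda t: t * eps + t * (logsumexp(l / t) - np.log(l.size))
    return minimize_scalar(dual, bounds=(1e-2, 1e3), method="bounded").fun

for eps in (0.0, 0.1, 0.5):
    best = min(actions, key=lambda a: robust_loss(a, eps))
    print(f"eps={eps}: robust-optimal action {best:+.2f}, "
          f"worst-case loss {robust_loss(best, eps):.3f}")
```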

    On coherent immediate prediction: connecting two theories of imprecise probability

    We give an overview of two approaches to probability theory in which lower and upper probabilities, rather than precise probabilities, are used: Walley's behavioural theory of imprecise probabilities, and Shafer and Vovk's game-theoretic account of probability. We show that the two theories are more closely related than would be suspected at first sight, and we establish a correspondence between them that (i) has an interesting interpretation, and (ii) allows us to freely import results from one theory into the other. Our approach leads to an account of immediate prediction in the framework of Walley's theory, and we prove an interesting and quite general version of the weak law of large numbers.
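
    For readers unfamiliar with the basic objects being compared, the following sketch (the credal set and the gamble are invented for illustration) computes lower and upper previsions of a gamble as the lower and upper envelopes of expectations over a finite set of candidate probability mass functions, the elementary construction behind Walley's behavioural theory.

```python
# Lower/upper previsions of a gamble as envelopes of expectations over a
# finite credal set. All numbers are illustrative assumptions.
import numpy as np

gamble = np.array([1.0, -0.5, 2.0])          # reward attached to each outcome

# A finite credal set: each row is one admissible probability mass function.
credal_set = np.array([[0.5, 0.3, 0.2],
                       [0.2, 0.5, 0.3],
                       [0.3, 0.3, 0.4]])

expectations = credal_set @ gamble
lower_prevision = expectations.min()          # supremum acceptable buying price
upper_prevision = expectations.max()          # infimum acceptable selling price
print(f"lower = {lower_prevision:.3f}, upper = {upper_prevision:.3f}")
```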

    Portfolio selection models: A review and new directions

    Modern Portfolio Theory (MPT) is based upon the classical Markowitz model, which uses variance as a risk measure. A generalization of this approach leads to mean-risk models, in which a return distribution is characterized by the expected value of return (desired to be large) and a risk value (desired to be kept small). Portfolio choice is made by solving an optimization problem in which the portfolio risk is minimized and a desired level of expected return is specified as a constraint. The need to penalize different undesirable aspects of the return distribution led to the proposal of alternative risk measures, notably those penalizing only the downside (adverse) part of the return distribution and not the upside (potential). These downside risk considerations form the basis of Post-Modern Portfolio Theory (PMPT). Examples of such risk measures are lower partial moments, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We revisit these risk measures and the resulting mean-risk models. We discuss alternative models for portfolio selection, their choice criteria, and the evolution of MPT to PMPT, which incorporates utility maximization and stochastic dominance.
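
    A small numerical sketch of the mean-risk viewpoint follows; the assets, scenario model and candidate weights are assumed purely for illustration. It estimates expected return, VaR and CVaR of two candidate portfolios from simulated return scenarios, the two quantities a mean-CVaR model trades off.

```python
# Mean-risk comparison of two candidate portfolios using empirical VaR/CVaR.
# All assets, distributions and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
# Simulated scenario returns for 3 assets (rows = scenarios).
returns = rng.multivariate_normal(mean=[0.06, 0.04, 0.02],
                                  cov=np.diag([0.04, 0.02, 0.005]),
                                  size=20_000)

def var_cvar(weights, alpha=0.95):
    """Empirical VaR and CVaR of portfolio losses at confidence level alpha."""
    losses = -(returns @ weights)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()      # mean loss beyond the VaR threshold
    return var, cvar

candidates = {"aggressive": np.array([0.8, 0.1, 0.1]),
              "defensive":  np.array([0.2, 0.3, 0.5])}
for name, w in candidates.items():
    mean_ret = (returns @ w).mean()
    var, cvar = var_cvar(w)
    print(f"{name:10s} mean={mean_ret:.3f}  VaR95={var:.3f}  CVaR95={cvar:.3f}")
```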