
    Updating beliefs with incomplete observations

    Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete. This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and expectations, as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. We apply the new approach to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule. In the special case of Bayesian networks constructed using expert knowledge, we provide an exact algorithm for classification based on our updating rule, which has linear-time complexity for a class of networks wider than polytrees. This result is then extended to the more general framework of credal networks, where computations are often much harder than with Bayesian nets. Using an example, we show that our rule appears to provide a solid basis for reliable updating with incomplete observations, when no strong assumptions about the incompleteness mechanism are justified.
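    To make the flavour of this concrete, here is a minimal sketch of updating under a vacuous model of the incompleteness mechanism. The joint prior, the class and observation names, and the helper functions are all invented for illustration; this is not the authors' algorithm. The point is only that refusing to model how observations get coarsened turns a single posterior into a lower-upper envelope over all compatible completions.

```python
# A minimal illustration (hypothetical numbers and names): with a
# vacuous model of the incompleteness mechanism, every completion of
# an incomplete observation is possible, so the posterior becomes an
# envelope over the completions.

# Joint prior P(class, observation) over complete observations.
prior = {
    ("disease", "positive"): 0.08,
    ("disease", "negative"): 0.02,
    ("healthy", "positive"): 0.09,
    ("healthy", "negative"): 0.81,
}

def conditional(cls, obs):
    """P(cls | obs) computed from the joint prior."""
    marginal = sum(p for (c, o), p in prior.items() if o == obs)
    return prior[(cls, obs)] / marginal

def conservative_posterior(cls, compatible_obs):
    """Lower and upper posterior for cls, given only that the complete
    observation lies somewhere in compatible_obs."""
    values = [conditional(cls, o) for o in compatible_obs]
    return min(values), max(values)

# The test outcome was not recorded: both completions are possible.
lo, hi = conservative_posterior("disease", ["positive", "negative"])
print(f"P(disease | outcome missing) in [{lo:.3f}, {hi:.3f}]")
# -> P(disease | outcome missing) in [0.024, 0.471]
```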

    A proposed framework for characterising uncertainty and variability in rock mechanics and rock engineering

    This thesis develops a novel understanding of the fundamental issues in characterising and propagating unpredictability in rock engineering design. This unpredictability stems from the inherent complexity and heterogeneity of fractured rock masses as engineering media. It establishes the importance of: a) recognising that unpredictability results from epistemic uncertainty (i.e. resulting from a lack of knowledge) and aleatory variability (i.e. due to inherent randomness); and b) the means by which uncertainty and variability associated with the parameters that characterise fractured rock masses are propagated through the modelling and design process. Through a critical review of the literature, this thesis shows that in geotechnical engineering – rock mechanics and rock engineering in particular – there is a lack of recognition of the existence of epistemic uncertainty and aleatory variability, and hence inappropriate design methods are often used. To overcome this, a novel taxonomy is developed and presented that facilitates characterisation of epistemic uncertainty and aleatory variability in the context of rock mechanics and rock engineering. Using this taxonomy, a new framework is developed that gives a protocol for correctly propagating uncertainty and variability through engineering calculations. The effectiveness of the taxonomy and the framework is demonstrated through their application to simple challenge problems commonly found in rock engineering. This new taxonomy and framework will provide engineers engaged in preparing rock engineering designs with an objective means of characterising unpredictability in parameters commonly used to define properties of fractured rock masses. These new tools will also provide engineers with a means of clearly understanding the true nature of unpredictability inherent in rock mechanics and rock engineering, and thus direct the selection of an appropriate unpredictability model to propagate unpredictability faithfully through engineering calculations. Thus, the taxonomy and framework developed in this thesis provide practical tools to improve the safety of rock engineering designs through an improved understanding of unpredictability concepts.
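    As a toy illustration of why the distinction matters for propagation (the factor-of-safety formula and all numbers below are invented for illustration, not taken from the thesis): aleatory variability is naturally propagated by sampling a distribution, while epistemic uncertainty is more honestly propagated as an interval, and the two yield different kinds of output.

```python
import math
import random

def factor_of_safety(c, phi_deg, sigma_n, tau):
    """Toy planar-sliding factor of safety from Mohr-Coulomb strength:
    FS = (c + sigma_n * tan(phi)) / tau.  Units are nominal (kPa)."""
    return (c + sigma_n * math.tan(math.radians(phi_deg))) / tau

# Aleatory variability: friction angle varies randomly across joints,
# so Monte Carlo sampling of a distribution is appropriate.
fs_samples = [
    factor_of_safety(c=50.0, phi_deg=random.gauss(30.0, 3.0),
                     sigma_n=200.0, tau=150.0)
    for _ in range(10_000)
]
print(f"aleatory output: mean FS = {sum(fs_samples) / len(fs_samples):.2f}")

# Epistemic uncertainty: cohesion is simply unknown within expert
# bounds, so the honest output is an interval, not a distribution
# (FS is monotone in c, so the endpoints bound the result).
fs_low = factor_of_safety(c=30.0, phi_deg=30.0, sigma_n=200.0, tau=150.0)
fs_high = factor_of_safety(c=70.0, phi_deg=30.0, sigma_n=200.0, tau=150.0)
print(f"epistemic output: FS in [{fs_low:.2f}, {fs_high:.2f}]")
```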

    Bayesian Cognitive Science, Monopoly, and Neglected Frameworks

    A widely shared view in the cognitive sciences is that discovering and assessing explanations of cognitive phenomena whose production involves uncertainty should be done in a Bayesian framework. One assumption supporting this modelling choice is that Bayes provides the best approach for representing uncertainty. However, it is unclear that Bayes possesses special epistemic virtues over alternative modelling frameworks, since a systematic comparison has yet to be attempted. It is therefore premature to assert that cognitive phenomena involving uncertainty are best explained within the Bayesian framework. Progress in cognitive science may be hindered if too many scientists continue to focus their efforts on Bayesian modelling, which risks monopolizing scientific resources that may be better allocated to alternative approaches.

    Being Realist about Bayes, and the Predictive Processing Theory of Mind

    Some naturalistic philosophers of mind subscribing to the predictive processing theory of mind have adopted a realist attitude towards the results of Bayesian cognitive science. In this paper, we argue that this realist attitude is unwarranted. The Bayesian research program in cognitive science does not possess special epistemic virtues over alternative approaches for explaining mental phenomena involving uncertainty. In particular, the Bayesian approach is not simpler, more unifying, or more rational than alternatives. It is also contentious that the Bayesian approach is overall better supported by the empirical evidence. So, to develop philosophical theories of mind on the basis of a realist interpretation of results from Bayesian cognitive science is unwarranted. Naturalistic philosophers of mind should instead adopt an anti-realist attitude towards these results and remain agnostic as to whether Bayesian models are true. Continuing with an exclusive focus on, and praise of, Bayes within debates about the predictive processing theory will impede progress in the philosophical understanding of scientific practice in computational cognitive science, as well as of the architecture of the mind.

    Clouds, p-boxes, fuzzy sets, and other uncertainty representations in higher dimensions

    Uncertainty modeling in real-life applications poses serious problems, such as the curse of dimensionality and a lack of sufficient statistical data. In this paper we survey methods for uncertainty handling and report the latest progress towards real-life applications in the face of these problems. We compare different methods and highlight their relationships. We give an intuitive introduction to the concept of potential clouds, our latest approach, which successfully copes with both higher dimensions and incomplete information.
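    One of the simpler representations in this family, the p-box, can be sketched in a few lines. The construction below (a normal family with an interval-valued mean) is an invented example, not the paper's potential-clouds method: instead of a single CDF, one keeps a pair of bounding CDFs and reads off interval-valued probabilities.

```python
# A minimal sketch of a p-box (probability box): a pair of CDF bounds
# with f_lower(x) <= F(x) <= f_upper(x) for every x.  Illustrative
# construction: epistemic uncertainty about the mean (mu in [9, 11])
# induces the envelope of all normal CDFs with mu in that interval.
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def f_lower(x):
    """Most pessimistic CDF at x (the normal CDF decreases in mu)."""
    return normal_cdf(x, mu=11.0, sigma=1.0)

def f_upper(x):
    """Most optimistic CDF at x."""
    return normal_cdf(x, mu=9.0, sigma=1.0)

x = 10.0
print(f"P(X <= {x}) in [{f_lower(x):.3f}, {f_upper(x):.3f}]")
# -> P(X <= 10.0) in [0.159, 0.841]
```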

    Scientiļ¬c uncertainty and decision making

    It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. For instance, flood defenses must be designed before we know the future distribution of flood events. It is standardly assumed that probability theory offers the best model of uncertain information. I think there are reasons to be sceptical of this claim. I criticise some arguments for the claim that probability theory is the only adequate model of uncertainty. In particular I critique Dutch book arguments, representation theorems, and accuracy based arguments. Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation. I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature, including Levi's E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult. I finish with a case study of scientific uncertainty. Climate modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times. Indeed, scientific uncertainty is multi-dimensional, and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas in science. I claim that this requires that we look for a better framework for modelling uncertainty.
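    For a concrete feel of E-admissibility (the acts, utilities, and credal set below are invented for illustration): an act is E-admissible if it maximises expected utility under at least one measure in the credal set, so severe uncertainty can leave several acts in play. Note that the sketch checks only the extreme points of the set, which can miss acts that are optimal only at interior measures.

```python
# Illustration of Levi's E-admissibility with a two-state flood
# example.  All numbers are hypothetical.

acts = {                      # utility of each act in each state
    "build_high_defence": {"flood": 10, "no_flood": -5},
    "build_low_defence":  {"flood": -20, "no_flood": 5},
}
credal_set = [                # extreme points of the imprecise belief
    {"flood": 0.1, "no_flood": 0.9},
    {"flood": 0.4, "no_flood": 0.6},
]

def expected_utility(act, p):
    """Expected utility of an act under one probability measure."""
    return sum(p[s] * u for s, u in acts[act].items())

# An act is E-admissible if some measure in the set makes it optimal.
e_admissible = {
    max(acts, key=lambda a: expected_utility(a, p))
    for p in credal_set
}
print(sorted(e_admissible))
# -> ['build_high_defence', 'build_low_defence']: the decision is
#    left partially undetermined by the imprecise belief.
```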

    Adaptive User Interfaces for Intelligent E-Learning: Issues and Trends

    Adaptive user interfaces have a long history rooted in the emergence of such eminent technologies as artificial intelligence, soft computing, graphical user interfaces, Java, the Internet, and mobile services. More specifically, the advent and advancement of web and mobile learning services has brought forward adaptivity as an immensely important issue for both the efficacy and the acceptability of such services. The success of such a learning process depends on the intelligent, context-oriented presentation of the domain knowledge and its adaptivity in terms of complexity and granularity consistent with the learner's cognitive level/progress. Researchers have long deemed adaptive user interfaces a promising solution in this regard. However, the richness of human behavior, technological opportunities, and the contextual nature of information pose daunting challenges. These require creativity, cross-domain synergy, cross-cultural and cross-demographic understanding, and an adequate representation of the mission and conception of the task. This paper reviews the state of the art in adaptive user interface research in intelligent multimedia educational systems and related areas, with an emphasis on core issues and future directions.

    Treatment of imprecision in data repositories with the aid of KNOLAP

    Traditional data repositories, introduced for the needs of business processing, typically focus on the storage and querying of crisp domains of data. As a result, current commercial data repositories have no facilities for either storing or querying imprecise/approximate data. No significant attempt has been made towards a generic and application-independent representation of value imprecision, mainly as a property of axes of analysis and also as part of a dynamic environment in which potential users may wish to define their "own" axes of analysis for querying either precise or imprecise facts. In such cases, measured values and facts are characterised by descriptive values drawn from a number of dimensions, whereas the values of a dimension are organised in hierarchical levels. A solution named H-IFS is presented that allows the representation of flexible hierarchies as part of the dimension structures. An extended multidimensional model named IF-Cube is put forward, which allows the representation of imprecision in facts and dimensions and the answering of queries based on imprecise hierarchical preferences. Based on the H-IFS and IF-Cube concepts, a post-relational OLAP environment is delivered, whose implementation is DBMS independent and whose performance depends solely on the underlying DBMS engine.
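    A toy sketch of the kind of imprecision involved (the region memberships and sales figures are invented, and this is not the H-IFS/IF-Cube implementation): when a dimension value belongs to a hierarchy level only to a degree, with membership mu and non-membership nu in the intuitionistic-fuzzy style, a roll-up query naturally returns an interval rather than a single aggregate.

```python
# Hypothetical example: each city belongs to the sales region "North"
# with membership mu and non-membership nu (mu + nu <= 1); the gap
# 1 - mu - nu is hesitation, i.e. unresolved imprecision in the
# dimension hierarchy.

region_membership = {         # city -> (mu, nu) for region "North"
    "Leeds":  (1.0, 0.0),
    "Derby":  (0.6, 0.3),     # analysts disagree where Derby belongs
    "London": (0.0, 1.0),
}
sales = {"Leeds": 120.0, "Derby": 80.0, "London": 200.0}

# Rolling up "sales in the North" yields an interval, not a number:
# the lower bound counts only definite membership, the upper bound
# also counts the hesitant mass.
lower = sum(v * region_membership[c][0] for c, v in sales.items())
upper = sum(v * (1.0 - region_membership[c][1]) for c, v in sales.items())
print(f"North sales in [{lower:.0f}, {upper:.0f}]")
# -> North sales in [168, 176]
```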