756 research outputs found

    Extend transferable belief models with probabilistic priors

    In this paper, we extend Smets' transferable belief model (TBM) with probabilistic priors. Our first motivation for the extension concerns evidential reasoning when the underlying prior knowledge base is Bayesian. We extend standard Dempster models with prior probabilities to represent beliefs and distinguish between two types of induced mass functions on an extended Dempster model: one for believing and the other essentially for decision-making. There is a natural correspondence between these two mass functions. In the extended model, we propose two conditioning rules for evidential reasoning with a probabilistic knowledge base. Our second motivation concerns the partial dissociation of betting at the pignistic level from believing at the credal level in the TBM. In our extended TBM, we coordinate these two levels by employing the extended Dempster model to represent beliefs at the credal level. Pignistic probabilities are derived not from the induced mass function for believing but from the one for decision-making in the model, and hence need not rely on the choice of the frame of discernment. Moreover, we show that the two proposed conditioning rules and marginalization (or coarsening) are consistent with the pignistic transformation in the extended TBM.
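
    As context for the pignistic level mentioned above, the following is a minimal sketch of the classical pignistic transformation from the standard TBM, not the extended model proposed in the paper; the frame and mass function are hypothetical illustrations.

    from itertools import chain

    def pignistic(mass):
        """Classical pignistic transformation:
        BetP(x) = sum over focal sets A containing x of m(A) / (|A| * (1 - m(empty)))."""
        m_empty = mass.get(frozenset(), 0.0)
        frame = set(chain.from_iterable(mass))               # union of all focal sets
        betp = {x: 0.0 for x in frame}
        for focal, m in mass.items():
            if not focal:
                continue                                     # skip the empty set
            for x in focal:
                betp[x] += m / (len(focal) * (1.0 - m_empty))
        return betp

    # Hypothetical mass function on the frame {a, b, c}
    m = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
    print(pignistic(m))    # BetP(a) ≈ 0.72, BetP(b) ≈ 0.22, BetP(c) ≈ 0.07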

    A survey on Bayesian nonparametric learning

    Bayesian learning has long played a significant role in machine learning due to its ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning's great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of working with the fixed-dimensional probability distributions of Bayesian learning, BNL creates a new “game” with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential for solving real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this: the books and surveys on BNL written by statisticians are overcomplicated and filled with tedious theory and proofs, each certainly meaningful but liable to scare away new researchers, especially those with computer science backgrounds. Hence, the aim of this article is to provide a plain-spoken yet comprehensive theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure: from selecting the appropriate stochastic processes, through their manipulation, to executing the model inference algorithms. At each step, past efforts are thoroughly summarised and discussed. In addition, we review the common methods for implementing BNL in various machine-learning tasks, along with its diverse real-world applications, as examples to motivate future studies.
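
    To make the “infinite-dimensional stochastic processes” mentioned above concrete, here is a minimal sketch of the truncated stick-breaking construction of a Dirichlet process, a textbook BNL building block rather than anything specific to this survey; the concentration parameter and the standard-normal base measure are illustrative assumptions.

    import numpy as np

    def truncated_dp_sample(alpha=1.0, truncation=100, rng=None):
        """Approximate draw from DP(alpha, H) via stick-breaking, truncated to a
        finite number of atoms; H is a standard normal base measure, chosen
        purely for illustration."""
        rng = np.random.default_rng() if rng is None else rng
        betas = rng.beta(1.0, alpha, size=truncation)                   # stick-breaking proportions
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
        weights = betas * remaining                                     # mixing weights (sum to ~1 for large truncation)
        atoms = rng.standard_normal(truncation)                         # atom locations drawn from H
        return weights, atoms

    weights, atoms = truncated_dp_sample(alpha=2.0)
    print(weights[:5], atoms[:5])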

    Representing archaeological uncertainty in cultural informatics

    This thesis sets out to explore, describe, quantify, and visualise uncertainty in a cultural informatics context, with a focus on archaeological reconstructions. For quite some time, archaeologists and heritage experts have been criticising the often too-realistic appearance of three-dimensional reconstructions, highlighting one of the unique features of archaeology: the information we have on our heritage will always be incomplete. This incompleteness should be reflected in digitised reconstructions of the past, and this criticism is the driving force behind this thesis. The research examines archaeological theory and the inferential process and provides insight into computer visualisation. It describes how these two areas, archaeology and computer graphics, have formed a useful, but often tumultuous, relationship through the years. By examining how uncertainty is treated in disciplines such as GIS, medicine, and law, the thesis postulates that archaeological visualisation, in order to mature, must move towards archaeological knowledge visualisation. Three sequential areas are proposed in this thesis for the initial exploration of archaeological uncertainty: identification, quantification, and modelling. The main contributions of the thesis lie in these three areas. Firstly, through the innovative design, distribution, and analysis of a questionnaire, the thesis identifies the importance of uncertainty in archaeological interpretation and discovers potential preferences among different evidence types. Secondly, the thesis analyses and evaluates, in relation to archaeological uncertainty, three different belief quantification models; the varying ways in which these mathematical models work are also evaluated through simulated experiments, and a comparison of the results indicates significant convergence between the models. Thirdly, a novel approach to visualising archaeological uncertainty and evidence conflict is presented, influenced by information visualisation schemes. Lastly, suggestions for future semantic extensions to this research are presented through the design and development of new plugins for a search engine.

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications

    B

    On various ways of tackling incomplete information in statistics

    This short paper discusses the contributions made to the featured section on Low Quality Data. We further refine the distinction between the ontic and epistemic views of imprecise data in statistics. We also question the extent to which likelihood functions can be viewed as belief functions. Finally, we comment on the data disambiguation effect of learning methods, relating it to data reconciliation problems.
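
    One standard way of linking the two notions, sketched here only as background for the question raised above, treats the relative likelihood as the contour function of a consonant plausibility function (with \(\Theta\) the parameter space and \(x\) the observed data):

    \[
      pl(\theta) = \frac{L(\theta; x)}{\sup_{\theta' \in \Theta} L(\theta'; x)}, \qquad
      Pl(A) = \sup_{\theta \in A} pl(\theta), \qquad
      Bel(A) = 1 - Pl(A^{c}) \quad \text{for } A \subseteq \Theta .
    \]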

    Explainable and Interpretable Decision-Making for Robotic Tasks

    Future generations of robots, such as service robots that support humans with household tasks, will be a pervasive part of our daily lives. The human's ability to understand the decision-making process of robots is considered crucial for establishing trust-based and efficient interactions between humans and robots. In this thesis, we present several interpretable and explainable decision-making methods that aim to improve the human's understanding of a robot's actions, with a particular focus on explaining why robot failures occurred. We consider different types of failures, such as task recognition errors and task execution failures. Our first goal is an interpretable approach to learning from human demonstrations (LfD), which is essential for robots to learn new tasks without a time-consuming trial-and-error learning process. Our proposed method addresses the challenge of transferring human demonstrations to robots through the automated generation of symbolic planning operators based on interpretable decision trees. Our second goal is the prediction, explanation, and prevention of robot task execution failures based on causal models of the environment. Our contribution towards this goal is a causal method that finds contrastive explanations for robot execution failures, which enables robots to predict, explain, and prevent even temporally shifted action failures (e.g., the current action was successful but will negatively affect the success of future actions). Since learning causal models is data-intensive, our final goal is to improve data efficiency by utilizing prior experience; this investigation aims to help robots learn causal models faster, enabling them to provide failure explanations at the cost of fewer action execution experiments. In the future, we will work on scaling up the presented methods to generalize to more complex, human-centered applications.
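
    As a toy illustration of the interpretable decision-tree idea mentioned above (a hedged sketch, not the thesis' actual LfD pipeline; the features, data, and scikit-learn dependency are assumptions made here for illustration), one could learn a human-readable rule for when a pick action succeeds and read the tree off as preconditions of a symbolic planning operator:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical symbolic state features observed in demonstrations of a pick action:
    # [object_graspable, gripper_empty, object_in_reach]
    X = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [1, 1, 0], [1, 1, 1], [0, 0, 1]]
    y = [1, 0, 0, 0, 1, 0]   # 1 = pick succeeded, 0 = pick failed

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # The learned tree is directly readable as preconditions for the operator.
    print(export_text(tree, feature_names=["object_graspable", "gripper_empty", "object_in_reach"]))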

    On conditional belief functions in directed graphical models in the Dempster-Shafer theory

    The primary goal is to define conditional belief functions in the Dempster-Shafer theory. We do so in a manner analogous to conditional probability tables in probability theory. Conditional belief functions are necessary for constructing directed graphical belief-function models, in the same sense that conditional probability tables are necessary for constructing Bayesian networks. We provide examples of conditional belief functions, including those obtained by Smets' conditional embedding. Besides defining conditional belief functions, we state and prove a few basic properties of conditionals. In the belief-function literature, conditionals are defined starting from a joint belief function, using the removal operator, an inverse of Dempster's combination operator. When such conditionals are well-defined belief functions, we show that our definition is equivalent to theirs.
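
    For orientation, the sketch below shows Dempster's rule of conditioning (combining a mass function with the categorical mass function on the conditioning event and normalising), a standard textbook construction rather than the conditional-belief-function definition developed in the paper; the mass function is a hypothetical example.

    def dempster_condition(mass, event):
        """Dempster's rule of conditioning: combine m with m_B where m_B(event) = 1,
        then normalise away the mass assigned to the empty set."""
        event = frozenset(event)
        conditioned = {}
        for focal, m in mass.items():
            inter = focal & event
            if inter:
                conditioned[inter] = conditioned.get(inter, 0.0) + m
        total = sum(conditioned.values())            # equals 1 minus the conflict mass
        if total == 0:
            raise ValueError("The conditioning event is fully contradicted by m.")
        return {focal: m / total for focal, m in conditioned.items()}

    # Hypothetical mass function on {a, b, c}, conditioned on the event {a, b}
    m = {frozenset('a'): 0.4, frozenset('bc'): 0.4, frozenset('abc'): 0.2}
    print(dempster_condition(m, 'ab'))               # {a}: 0.4, {b}: 0.4, {a, b}: 0.2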