65 research outputs found

    Transferable Plausibility Model - A Probabilistic Interpretation of Mathematical Theory of Evidence

    This paper suggests a new interpretation of Dempster-Shafer theory based on a probabilistic reading of plausibility. A new rule for combining independent evidence is presented, and it is demonstrated that this rule preserves the interpretation.
    Comment: Pre-publication version of: M.A. Kłopotek: Transferable Plausibility Model - A Probabilistic Interpretation of Mathematical Theory of Evidence. [in:] O. Hryniewicz, J. Kacprzyk, J. Koronacki, S. Wierzchoń: Issues in Intelligent Systems Paradigms, Akademicka Oficyna Wydawnicza EXIT, Warszawa 2005, ISBN 83-87674-90-7, pp. 107--11
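
    For orientation, here is a minimal Python sketch of the classical Dempster rule of combination for two independent mass functions (focal elements represented as frozensets). It illustrates the baseline the abstract builds on; the paper's new, interpretation-preserving rule is not reproduced here.

        from itertools import product

        def dempster_combine(m1, m2):
            """Classical Dempster rule: intersect the focal elements of two
            independent mass functions and renormalize by the conflict mass.
            m1, m2: dicts mapping frozenset (focal element) -> mass."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            if conflict >= 1.0:
                raise ValueError("Totally conflicting evidence; combination undefined")
            return {s: w / (1.0 - conflict) for s, w in combined.items()}

        # Two independent pieces of evidence over the frame {x, y, z}
        m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y", "z"}): 0.4}
        m2 = {frozenset({"x", "y"}): 0.7, frozenset({"x", "y", "z"}): 0.3}
        print(dempster_combine(m1, m2))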

    Reasoning From Data in the Mathematical Theory of Evidence

    The Mathematical Theory of Evidence (MTE) is known as a foundation for reasoning when knowledge is expressed at various levels of detail. Though much research effort has been committed to this theory since its foundation, many questions remain open. One of the most important open questions seems to be the relationship between frequencies and the Mathematical Theory of Evidence. The theory is blamed for leaving frequencies outside (or aside of) its framework. The seriousness of this accusation is obvious: no experiment may be run to compare the performance of MTE-based models of real-world processes against real-world data. In this paper we develop a frequentist model of MTE that refutes the above argument. We describe how to interpret data in terms of MTE belief functions, how to reason from data about conditional belief functions, how to generate a random sample out of an MTE model, how to derive an MTE model from data, and how to compare the results of reasoning in an MTE model with reasoning from data. It is claimed in this paper that MTE is suitable for modelling some types of destructive processes.
    Comment: Presented as a poster: M.A. Kłopotek: Reasoning from Data in the Mathematical Theory of Evidence. [in:] Proc. Eighth International Symposium On Methodologies For Intelligent Systems (ISMIS'94), Charlotte, North Carolina, USA, October 16-19, 1994. arXiv admin note: text overlap with arXiv:1707.0388
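
    As a minimal, simplified illustration of two ingredients mentioned above (not the paper's own construction): deriving belief and plausibility from a basic probability assignment, and drawing a random focal set from it as the most naive way to sample from an MTE model. Plain Python, with illustrative names.

        import random

        def belief(m, a):
            """Bel(A): total mass of focal elements contained in A."""
            return sum(w for s, w in m.items() if s <= a)

        def plausibility(m, a):
            """Pl(A): total mass of focal elements intersecting A."""
            return sum(w for s, w in m.items() if s & a)

        def sample_focal_set(m, rng=random):
            """Draw one focal element with probability equal to its mass."""
            sets, weights = zip(*m.items())
            return rng.choices(sets, weights=weights, k=1)[0]

        m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, frozenset({"b", "c"}): 0.2}
        query = frozenset({"a", "b"})
        print(belief(m, query), plausibility(m, query))   # ~0.8 1.0
        print(sample_focal_set(m))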

    Evidence Against Evidence Theory (?!)

    This paper is concerned with what appears to be the greatest weakness of the Mathematical Theory of Evidence (MTE) of Shafer (1976), which has been strongly criticized by Wasserman (1992): the relationship to frequencies. Weaknesses of various proposed probabilistic interpretations of MTE belief functions are demonstrated. A new frequency-based interpretation is presented that overcomes various drawbacks of the earlier interpretations.
    Comment: 30 pages. arXiv admin note: substantial text overlap with arXiv:1704.0400

    What Does a Belief Function Believe In?

    Conditioning in the Dempster-Shafer Theory of Evidence has been defined by Shafer (1990) as the combination of a belief function and of an "event" via the Dempster rule. On the other hand, Shafer (1990) gives a "probabilistic" interpretation of a belief function (hence, indirectly, its derivation from a sample). Given that the conditional distribution of a sample-derived probability distribution is a probability distribution derived from a subsample (selected on the grounds of the conditioning event), the paper investigates the empirical nature of the Dempster rule of combination. It is demonstrated that the so-called "conditional" belief function is not a belief function given an event but rather a belief function given a manipulation of the original empirical data. Given this, an interpretation of belief functions different from that of Shafer is proposed. Algorithms for the construction of belief networks from data are derived for this interpretation.
    Comment: 13 pages
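
    For reference, a small Python sketch of the Dempster-rule conditioning questioned above: conditioning a mass function on an event b amounts to combining it with the categorical mass function that assigns 1 to b, i.e. intersecting every focal element with b and renormalizing.

        def dempster_condition(m, b):
            """Shafer-style conditioning: intersect each focal element with the
            conditioning event b and renormalize; equivalent to combining m with
            the categorical mass function assigning 1 to b via Dempster's rule."""
            conditioned = {}
            for s, w in m.items():
                inter = s & b
                if inter:
                    conditioned[inter] = conditioned.get(inter, 0.0) + w
            total = sum(conditioned.values())
            if total == 0.0:
                raise ValueError("Conditioning event contradicts all evidence")
            return {s: w / total for s, w in conditioned.items()}

        m = {frozenset({"a"}): 0.4, frozenset({"a", "b"}): 0.4, frozenset({"c"}): 0.2}
        print(dempster_condition(m, frozenset({"a", "b"})))
        # {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.5}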

    Identification and Interpretation of Belief Structure in Dempster-Shafer Theory

    The Mathematical Theory of Evidence, also called Dempster-Shafer Theory (DST), is known as a foundation for reasoning when knowledge is expressed at various levels of detail. Though much research effort has been committed to this theory since its foundation, many questions remain open. One of the most important open questions seems to be the relationship between frequencies and the Mathematical Theory of Evidence. The theory is blamed for leaving frequencies outside (or aside of) its framework. The seriousness of this accusation is obvious: (1) no experiment may be run to compare the performance of DST-based models of real-world processes against real-world data, (2) data may not serve as a foundation for the construction of an appropriate belief model. In this paper we develop a frequentist interpretation of DST that refutes the above argument. An immediate consequence is the possibility of developing algorithms that automatically acquire DST belief models from data. We propose three such algorithms for various classes of belief model structures: tree-structured belief networks, poly-tree belief networks, and general belief networks.
    Comment: An internal report 199
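
    The structure-learning algorithms themselves are not reproduced here; as a minimal illustration of the frequentist reading, the Python sketch below turns a sample of set-valued observations into an empirical basic probability assignment by relative frequency.

        from collections import Counter

        def empirical_mass(observations):
            """Turn a sample of set-valued observations into an empirical
            basic probability assignment by relative frequency."""
            counts = Counter(frozenset(o) for o in observations)
            n = len(observations)
            return {s: c / n for s, c in counts.items()}

        sample = [{"a"}, {"a", "b"}, {"a"}, {"b"}, {"a", "b"}]
        print(empirical_mass(sample))
        # {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.4, frozenset({'b'}): 0.2}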

    Approximation by filter functions

    In this exploratory article, we draw attention to the common formal ground among various estimators such as the belief functions of evidence theory and their relatives, the approximation quality of rough set theory, and contextual probability. The unifying concept is a general filter function composed of a basic probability and a weighting that varies according to the problem at hand. To compare the various filter functions, we conclude with a simulation study based on an example from the area of item response theory.
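
    For one of the estimators named above, a small Python sketch of the rough-set approximation quality of a target concept under an indiscernibility partition; the weighting scheme of the general filter function is not reproduced, and all names are illustrative.

        def approximation_quality(universe, blocks, x):
            """Rough-set approximation quality of the concept x: size of its lower
            approximation (union of indiscernibility classes contained in x)
            divided by the size of the universe."""
            lower = [e for b in blocks if b <= x for e in b]
            return len(lower) / len(universe)

        universe = {1, 2, 3, 4, 5, 6}
        blocks = [{1, 2}, {3, 4}, {5, 6}]   # partition induced by indiscernibility
        x = {1, 2, 3}                       # target concept
        print(approximation_quality(universe, blocks, x))  # 2/6 ≈ 0.33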

    Probability as a Modal Operator

    This paper argues for a modal view of probability. The syntax and semantics of one particularly strong probability logic are discussed and some examples of the use of the logic are provided. We show that it is both natural and useful to think of probability as a modal operator. Contrary to popular belief in AI, a probability ranging between 0 and 1 represents a continuum between impossibility and necessity, not between simple falsity and truth. The present work provides a clear semantics for quantification into the scope of the probability operator and for higher-order probabilities. Probability logic is a language for expressing both probabilistic and logical concepts.
    Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI1988)

    Evidential Reasoning in a Categorial Perspective: Conjunction and Disjunction of Belief Functions

    The categorial approach to evidential reasoning can be seen as a combination of the probability kinematics approach of Richard Jeffrey (1965) and the maximum (cross-)entropy inference approach of E. T. Jaynes (1957). As a consequence of that viewpoint, it is well known that category theory provides natural definitions for logical connectives. In particular, disjunction and conjunction are modelled by general categorial constructions known as products and coproducts. In this paper, I focus mainly on the Dempster-Shafer theory of belief functions, for which I introduce a category I call Dempster's category. I prove the existence of, and give explicit formulas for, conjunction and disjunction in the subcategory of separable belief functions. In Dempster's category, the newly defined conjunction can be seen as the most cautious conjunction of beliefs, and thus no assumption about the distinctness (of the sources) of the beliefs is needed, as opposed to Dempster's rule of combination, which calls for such distinctness.
    Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)

    A New Approach to Updating Beliefs

    We define a new notion of conditional belief, which plays the same role for Dempster-Shafer belief functions as conditional probability does for probability functions. Our definition is different from the standard definition given by Dempster, and avoids many of the well-known problems of that definition. Just as the conditional probability Pr(·|B) is a probability function which is the result of conditioning on B being true, so too our conditional belief function Bel(·|B) is a belief function which is the result of conditioning on B being true. We define the conditional belief as the lower envelope (that is, the inf) of a family of conditional probability functions, and provide a closed-form expression for it. An alternate way of understanding our definition of conditional belief is provided by considering ideas from an earlier paper [Fagin and Halpern, 1989], where we connect belief functions with inner measures. In particular, we show here how to extend the definition of conditional probability to non-measurable sets, in order to get notions of inner and outer conditional probabilities, which can be viewed as best approximations to the true conditional probability, given our lack of information. Our definition of conditional belief turns out to be an exact analogue of our definition of inner conditional probability.
    Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI1990)
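
    A minimal Python sketch of the lower-envelope conditioning described above, using the closed-form expression usually associated with it, Bel(A|B) = Bel(A & B) / (Bel(A & B) + Pl(not-A & B)); treat this formula as a restatement of the standard result rather than a quotation from the paper.

        def belief(m, a):
            """Bel(A): total mass of focal elements contained in A."""
            return sum(w for s, w in m.items() if s <= a)

        def plausibility(m, a):
            """Pl(A): total mass of focal elements intersecting A."""
            return sum(w for s, w in m.items() if s & a)

        def conditional_belief(m, frame, a, b):
            """Lower-envelope conditional belief:
            Bel(A|B) = Bel(A & B) / (Bel(A & B) + Pl(~A & B))."""
            num = belief(m, a & b)
            den = num + plausibility(m, (frame - a) & b)
            if den == 0.0:
                raise ValueError("Conditioning on b is not defined here")
            return num / den

        frame = frozenset({"x", "y", "z"})
        m = {frozenset({"x"}): 0.3, frozenset({"x", "y"}): 0.5, frame: 0.2}
        print(conditional_belief(m, frame, frozenset({"x"}), frozenset({"x", "y"})))  # ≈ 0.3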

    Hybrid Probabilistic Programs: Algorithms and Complexity

    Hybrid Probabilistic Programs (HPPs) are logic programs that allow the programmer to explicitly encode his knowledge of the dependencies between events being described in the program. In this paper, we classify HPPs into three classes called HPP_1, HPP_2 and HPP_r, r >= 3. For these classes, we provide three types of results for HPPs. First, we develop algorithms to compute the set of all ground consequences of an HPP. Then we provide algorithms and complexity results for the problems of entailment ("Given an HPP P and a query Q as input, is Q a logical consequence of P?") and consistency ("Given an HPP P as input, is P consistent?"). Our results provide a fine characterization of when polynomial algorithms exist for the above problems, and when these problems become intractable.
    Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)
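
    The combination strategies specific to HPPs are not reproduced here; purely as an illustration of what encoding dependency knowledge buys, the hypothetical Python sketch below conjoins two probability intervals under an independence assumption and under no dependency knowledge at all (Fréchet bounds).

        def conj_independence(i1, i2):
            """Conjunction of probability intervals assuming the two events are
            independent: multiply the bounds."""
            (l1, u1), (l2, u2) = i1, i2
            return (l1 * l2, u1 * u2)

        def conj_ignorance(i1, i2):
            """Conjunction with no knowledge of the dependency: Frechet bounds."""
            (l1, u1), (l2, u2) = i1, i2
            return (max(0.0, l1 + l2 - 1.0), min(u1, u2))

        a, b = (0.6, 0.8), (0.5, 0.7)
        print(conj_independence(a, b))  # ≈ (0.30, 0.56)
        print(conj_ignorance(a, b))     # ≈ (0.10, 0.70)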