A Model of Minimal Probabilistic Belief Revision
A probabilistic belief revision function assigns to every initial probabilistic belief and every observable event some revised probabilistic belief that only attaches positive probability to states in this event. We propose three axioms for belief revision functions: (1) linearity, meaning that if the decision maker observes that the true state is in {a,b}, and hence state c is impossible, then the proportions of c's initial probability that are shifted to a and b, respectively, should be independent of c's initial probability; (2) transitivity, stating that if the decision maker deems belief β equally similar to states a and b, and deems β equally similar to states b and c, then he should deem β equally similar to states a and c; (3) information-order independence, stating that the way in which information is received should not matter for the eventual revised belief. We show that a belief revision function satisfies the three axioms above if and only if there is some linear one-to-one function ϕ, transforming the belief simplex into a polytope that is closed under orthogonal projections, such that the belief revision function satisfies minimal belief revision with respect to ϕ. By the latter, we mean that the decision maker, when having initial belief β₁ and observing the event E, always chooses the revised belief β₂ that attaches positive probability only to states in E and for which ϕ(β₂) has minimal Euclidean distance to ϕ(β₁).
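The minimal-revision rule can be sketched concretely for the simplest case where ϕ is the identity: the revised belief is then the Euclidean projection of β₁ onto the sub-simplex of beliefs supported on E. The sketch below (names and the identity-ϕ assumption are illustrative, not from the paper) uses the standard sorting-based simplex projection:

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection of vector v onto the probability simplex
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1 - css) / k > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def revise(beta, event):
    # minimal belief revision w.r.t. the identity phi: project the
    # initial belief onto the face of the simplex supported on `event`
    # (a list of state indices); all other states get probability zero
    beta2 = np.zeros_like(beta, dtype=float)
    beta2[event] = project_to_simplex(beta[event])
    return beta2
```

For example, starting from β₁ = (0.5, 0.3, 0.2) and observing E = {a, b}, state c's probability 0.2 is redistributed equally (0.1 each) to a and b, giving β₂ = (0.6, 0.4, 0), since equal shifts minimize Euclidean distance under the identity ϕ.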
Comparing stochastic design decision belief models : pointwise versus interval probabilities.
Decision support systems can either directly support a product designer or support an agent operating within a multi-agent system (MAS). Stochastic-based decision support systems require an underlying belief model that encodes domain knowledge. The underlying supporting belief model has traditionally been a probability distribution function (PDF), which uses pointwise probabilities for all possible outcomes. This can present a challenge during the knowledge elicitation process. To overcome this, it is proposed to test the performance of a credal set belief model. Credal sets (sometimes also referred to as p-boxes) use interval probabilities rather than pointwise probabilities and are therefore easier to elicit from domain experts. The PDF and credal set belief models are compared using a design domain MAS which is able to learn, and thereby refine, the belief model based on its experience. The outcome of the experiment illustrates that there is no significant difference between the PDF-based and credal-set-based belief models in the performance of the MAS.
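The pointwise-versus-interval distinction can be illustrated with a minimal sketch (the outcome names and probability values are hypothetical, not from the paper). A pointwise PDF commits the expert to exact numbers; an interval model only asks for lower and upper bounds, and remains usable as long as some distribution fits within the bounds:

```python
# pointwise belief model: an exact probability for every outcome
pdf = {"success": 0.7, "failure": 0.3}

# interval belief model: (lower, upper) probability bounds per outcome,
# which are easier to elicit from a domain expert than exact values
intervals = {"success": (0.6, 0.8), "failure": (0.2, 0.4)}

def is_consistent(intervals):
    # the interval model is non-empty only if at least one probability
    # distribution satisfies all the bounds simultaneously
    lo = sum(l for l, _ in intervals.values())
    hi = sum(u for _, u in intervals.values())
    return lo <= 1.0 <= hi
```

Note that the pointwise PDF is the special case where every lower bound equals its upper bound, so any elicitation procedure for intervals degrades gracefully to the traditional model.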
Belief Propagation Min-Sum Algorithm for Generalized Min-Cost Network Flow
Belief Propagation algorithms are tools used broadly to solve graphical model optimization and statistical inference problems. In the general case of a loopy graphical model, Belief Propagation is a heuristic that is quite successful in practice, even though its empirical success typically lacks theoretical guarantees. This paper extends the short list of special cases where correctness and/or convergence of a Belief Propagation algorithm is proven. We generalize the formulation of the Min-Sum Network Flow problem by relaxing the flow conservation (balance) constraints and then prove that the Belief Propagation algorithm converges to the exact result.
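To make the min-sum message-passing idea concrete, the sketch below runs min-sum Belief Propagation on a chain graph, one of the classical special cases where the graph is a tree and BP is provably exact (this is a generic illustration of min-sum BP, not the paper's network-flow formulation):

```python
import numpy as np

def min_sum_chain(node_costs, edge_costs):
    """Min-sum BP on a chain graph, where BP is exact because a chain
    is a tree.  node_costs: list of length-k arrays, cost of each state
    at each node.  edge_costs: list of (k x k) arrays of pairwise costs
    between consecutive nodes.  Returns the minimal total cost."""
    msg = np.zeros_like(node_costs[0], dtype=float)
    for i, edge in enumerate(edge_costs):
        # forward message: minimize over the predecessor's state
        msg = np.min((node_costs[i] + msg)[:, None] + edge, axis=0)
    return float(np.min(node_costs[-1] + msg))
```

For instance, with two nodes, node costs [0, 1] and [1, 0], and an edge cost of 10 for disagreeing states, the best assignment either pays node cost 1 at one end or the other, so the minimum total cost is 1.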
Second-Order Belief Hidden Markov Models
Hidden Markov Models (HMMs) are learning methods for pattern recognition. Probabilistic HMMs have been among the most widely used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous work on belief HMMs has focused on first-order HMMs; we extend it to the second-order model.
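The structural step from first to second order can be sketched independently of the belief-function machinery: in a second-order model the next state depends on the two previous states, and one standard trick is to lift this to a first-order model over state pairs. The sketch below uses hypothetical transition probabilities for illustration; in the belief HMM these would be replaced by mass functions:

```python
# hypothetical second-order transitions P(s_t | s_{t-2}, s_{t-1})
trans2 = {("A", "A"): {"A": 0.9, "B": 0.1},
          ("A", "B"): {"A": 0.4, "B": 0.6},
          ("B", "A"): {"A": 0.5, "B": 0.5},
          ("B", "B"): {"A": 0.2, "B": 0.8}}

def lift_to_first_order(trans2):
    # rewrite the second-order model as a first-order model whose
    # states are pairs: (s_{t-1}, s_t) -> (s_t, s_{t+1})
    lifted = {}
    for (p2, p1), dist in trans2.items():
        for s, pr in dist.items():
            lifted[((p2, p1), (p1, s))] = pr
    return lifted
```

The lifted model has one outgoing distribution per state pair, so standard first-order HMM algorithms (forward, Viterbi) apply unchanged at the cost of a quadratically larger state space.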