
    Toward a probability theory for product logic: states, integral representation and reasoning

    The aim of this paper is to extend probability theory from the classical setting to that of product t-norm fuzzy logic. More precisely, we axiomatize a generalized notion of finitely additive probability for product logic formulas, called a state, and show that every state is the Lebesgue integral with respect to a unique regular Borel probability measure. Furthermore, the relation between states and measures is shown to be one-to-one. In addition, we study geometrical properties of the convex set of states and show that the extremal states, i.e., the extremal points of the state space, coincide with the truth-value assignments of the logic. Finally, we axiomatize a two-tiered modal logic for probabilistic reasoning on product logic events and prove soundness and completeness with respect to probabilistic spaces, where the algebra is a free product algebra and the measure is a state in the above sense.
    Comment: 27 pages, 1 figure
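As a toy numerical illustration of the integral representation described above (the formula, the choice of two variables, and the uniform measure on valuations are our own illustrative assumptions, not the paper's construction), one can approximate the state of the strong conjunction p & q, interpreted by the product t-norm, as an integral over uniform valuations:

```python
import random

# Toy illustration (our assumptions, not the paper's construction): a "state"
# of a product-logic formula, computed as the integral of its truth function
# over valuations, here the uniform measure on [0,1]^2 for variables p and q.

def product_conj(a, b):
    """Product t-norm: truth value of the strong conjunction p & q."""
    return a * b

def state(formula, n=200_000, seed=0):
    """Monte Carlo approximation of the Lebesgue integral over valuations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += formula(rng.random(), rng.random())
    return total / n

# E[p * q] under independent uniform valuations is 1/4.
approx = state(product_conj)
```

Here the measure is fixed to be uniform purely for concreteness; the paper's result concerns arbitrary regular Borel probability measures on the space of valuations.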

    Generic substitutions

    Up to equivalence, a substitution in propositional logic is an endomorphism of its free algebra. On the dual space, this results in a continuous function, and whenever the space carries a natural measure one may ask about the stochastic properties of the action. In classical logic there is a strong dichotomy: while over finitely many propositional variables everything is trivial, the study of the continuous transformations of the Cantor space is the subject of an extensive literature, and is far from being a completed task. In many-valued logic this dichotomy disappears: already in the finite-variable case many interesting phenomena occur, and the present paper aims at displaying some of these.
    Comment: 22 pages, 2 figures. Revised version according to the referee's suggestions. To appear in the J. of Symbolic Logic

    Belief functions on MV-algebras of fuzzy sets: An overview

    Belief functions are the measure-theoretic objects that Dempster-Shafer evidence theory is based on. They are in fact totally monotone capacities, and can be regarded as a special class of measures of uncertainty used to model an agent's degrees of belief in the occurrence of a set of events by taking into account different bodies of evidence that support those beliefs. In this chapter we present two main approaches to extending belief functions on Boolean algebras of events to MV-algebras of events, modelled as fuzzy sets, and we discuss several properties of these generalized measures. In particular we deal with the normalization and soft-normalization problems, and with a generalization of Dempster's rule of combination. © 2014 Springer International Publishing Switzerland.
    The authors also acknowledge partial support by the FP7-PEOPLE-2009-IRSES project MaToMUVI (PIRSES-GA-2009-247584). Also, Flaminio acknowledges partial support of the Italian project FIRB 2010 (RBFR10DGUA-002), Kroupa has been supported by the grant GACR 13-20012S, and Godo acknowledges partial support of the Spanish projects EdeTRI (TIN2012-39348-C02-01) and Agreement Technologies (CONSOLIDER CSD2007-0022, INGENIO 2010).
    Peer Reviewed

    A Recipe for State-and-Effect Triangles

    In the semantics of programming languages one can view programs as state transformers, or as predicate transformers. Recently the author has introduced state-and-effect triangles, which capture this situation categorically via an adjunction between state- and predicate-transformers. The current paper exploits a classical result in category theory, part of Jon Beck's monadicity theorem, to systematically construct such a state-and-effect triangle from an adjunction. The power of this construction is illustrated with many examples, covering a range of monads occurring in program semantics, including (probabilistic) power domains.
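The state-vs-predicate duality the abstract refers to can be sketched, for plain deterministic programs, in a few lines (a minimal illustration under our own naming, not the paper's categorical construction):

```python
# Minimal sketch (our own naming, not the paper's construction): the duality
# between state transformers and predicate transformers for deterministic
# programs. A program is a function prog : S -> S; a predicate is p : S -> bool.
# The weakest-precondition predicate transformer acts by precomposition.

def predicate_transformer(prog):
    """Lift a state transformer to a predicate transformer: wp(prog)(p) = p . prog."""
    return lambda p: (lambda s: p(prog(s)))

# Example on integer states: a program that increments the state.
inc = lambda s: s + 1
is_positive = lambda s: s > 0

wp_inc = predicate_transformer(inc)
# wp_inc(is_positive) holds at state s exactly when is_positive holds after inc.
assert wp_inc(is_positive)(0) is True
assert wp_inc(is_positive)(-1) is False
```

Note the contravariance: states flow forward under `inc`, while predicates are pulled back; the paper's triangles organise exactly this kind of adjoint relationship for effectful (e.g. probabilistic) programs.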

    On the commutation of generalized means on probability spaces

    Let $f$ and $g$ be real-valued continuous injections defined on a non-empty real interval $I$, and let $(X, \mathscr{L}, \lambda)$ and $(Y, \mathscr{M}, \mu)$ be probability spaces in each of which there is at least one measurable set whose measure is strictly between $0$ and $1$. We say that $(f,g)$ is a $(\lambda, \mu)$-switch if, for every $\mathscr{L} \otimes \mathscr{M}$-measurable function $h: X \times Y \to \mathbf{R}$ for which $h[X \times Y]$ is contained in a compact subset of $I$, it holds that
    $$f^{-1}\!\left(\int_X f\!\left(g^{-1}\!\left(\int_Y g \circ h\;d\mu\right)\right)d\lambda\right) = g^{-1}\!\left(\int_Y g\!\left(f^{-1}\!\left(\int_X f \circ h\;d\lambda\right)\right)d\mu\right),$$
    where $f^{-1}$ is the inverse of the corestriction of $f$ to $f[I]$, and similarly for $g^{-1}$. We prove that this notion is well-defined, by establishing that the above functional equation is well-posed (the equation can be interpreted as a permutation of generalized means and raised as a problem in the theory of decision making under uncertainty), and show that $(f,g)$ is a $(\lambda, \mu)$-switch if and only if $f = ag + b$ for some $a, b \in \mathbf{R}$ with $a \ne 0$.
    Comment: 9 pages, no figures. Fixed minor details. Final version to appear in Indagationes Mathematicae
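The commutation property can be checked numerically on discrete probability spaces (the particular functions f and g, the weights, and the kernel h below are toy assumptions chosen so that f = ag + b holds):

```python
import math

# Numeric sketch (toy data, not from the paper) of the commutation property:
# if f = a*g + b with a != 0, the two iterated quasi-arithmetic means agree.

def qmean(f, finv, xs, ws):
    """Quasi-arithmetic mean f^{-1}(sum_i w_i * f(x_i)) on a discrete space."""
    return finv(sum(w * f(x) for x, w in zip(xs, ws)))

f, finv = math.exp, math.log                # f(x) = e^x
g = lambda x: 2 * math.exp(x) + 3           # g = 2f + 3, so f = (g - 3)/2
ginv = lambda y: math.log((y - 3) / 2)

# Discrete probability spaces X, Y and a bounded kernel h on X x Y.
lam = [0.3, 0.7]            # lambda on X = {0, 1}
mu = [0.5, 0.2, 0.3]        # mu on Y = {0, 1, 2}
h = [[0.1, 0.4, 0.2],
     [0.3, 0.0, 0.5]]

# Left side: g-mean over Y inside, f-mean over X outside.
lhs = qmean(f, finv, [qmean(g, ginv, h[i], mu) for i in range(2)], lam)
# Right side: f-mean over X inside, g-mean over Y outside.
rhs = qmean(g, ginv, [qmean(f, finv, [h[i][j] for i in range(2)], lam)
                      for j in range(3)], mu)
assert abs(lhs - rhs) < 1e-12
```

The check works because an affine change f = ag + b leaves the quasi-arithmetic mean unchanged, so both sides reduce to a single g-mean over the product measure.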

    On the correctness of monadic backward induction

    In control theory, solving a finite-horizon sequential decision problem (SDP) commonly means finding a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is a correct solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework, and the sources are available as supplementary material.
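Textbook (non-monadic) backward induction, which the monadic version generalises, can be sketched as follows (the toy states, actions, rewards and transition kernel are our own assumptions, not an example from the paper):

```python
# Minimal sketch of textbook backward induction for a finite-horizon
# stochastic SDP with expected total reward. The concrete states, actions,
# rewards and transitions below are toy assumptions for illustration.

def backward_induction(horizon, states, actions, reward, transition):
    """Return (value function, policy list) by Bellman backward induction.

    reward(t, s, a) -> float; transition(t, s, a) -> dict {next_state: prob}.
    """
    value = {s: 0.0 for s in states}       # value at the horizon is zero
    policy = []
    for t in reversed(range(horizon)):
        new_value, rule = {}, {}
        for s in states:
            # Pick the action maximising immediate reward + expected future value.
            best = max(
                (reward(t, s, a)
                 + sum(p * value[s2] for s2, p in transition(t, s, a).items()), a)
                for a in actions
            )
            new_value[s], rule[s] = best
        value = new_value
        policy.insert(0, rule)
    return value, policy

# Toy example: reward 1 per step in state 'hi'; 'move' reaches 'hi' with prob 0.8.
states = ['lo', 'hi']
actions = ['stay', 'move']
reward = lambda t, s, a: 1.0 if s == 'hi' else 0.0

def transition(t, s, a):
    if a == 'stay':
        return {s: 1.0}
    return {'hi': 0.8, 'lo': 0.2}

value, policy = backward_induction(3, states, actions, reward, transition)
```

In the paper's terms, the expectation `sum(p * value[s2] ...)` is one particular measure function aggregating rewards over the probability monad; the correctness conditions concern when such an aggregation interacts well with the monad's structure.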

    Utilitarianism with and without expected utility

    We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms, completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.

    Noisy Stochastic Games

    This paper establishes the existence of a stationary Markov perfect equilibrium in general stochastic games with noise, a component of the state that is nonatomically distributed and not directly affected by the previous period's state and actions. Noise may be simply a payoff-irrelevant public randomization device, delivering known results on the existence of correlated equilibrium as a special case. More generally, noise can take the form of shocks that enter into players' stage payoffs and the transition probability on states. The existence result is applied to a model of industry dynamics and to a model of dynamic partisan electoral competition.

    Reasoning with random sets: An agenda for the future

    In this paper, we discuss a potential agenda for future work in the theory of random sets and belief functions, touching upon a number of focal issues: the development of a fully-fledged theory of statistical reasoning with random sets, including the generalisation of logistic regression and of the classical laws of probability; the further development of the geometric approach to uncertainty, to include general random sets, a wider range of uncertainty measures and alternative geometric representations; and the application of this new theory to high-impact areas such as climate change, machine learning and statistical learning theory.
    Comment: 94 pages, 17 figures