    Probabilistic entailment in the setting of coherence: The role of quasi conjunction and inclusion relation

    In this paper, by adopting a coherence-based probabilistic approach to default reasoning, we focus our study on the logical operation of quasi conjunction and the Goodman-Nguyen inclusion relation for conditional events. We recall that quasi conjunction is a basic notion for defining the consistency of conditional knowledge bases. By deepening some results given in a previous paper, we show that, given any finite family of conditional events F and any nonempty subset S of F, the family F p-entails the quasi conjunction C(S); then, given any conditional event E|H, we analyze the equivalence between p-entailment of E|H from F and p-entailment of E|H from C(S), where S is some nonempty subset of F. We also illustrate some alternative theorems related to p-consistency and p-entailment. Finally, we deepen the study of the connections between the notions of p-entailment and inclusion relation by introducing, for a pair (F, E|H), the (possibly empty) class K of the subsets S of F such that C(S) implies E|H. We show that the class K satisfies many properties; in particular, K is additive and has a greatest element, which can be determined by applying a suitable algorithm.
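    For readers less familiar with these notions, here is a brief sketch of the standard definitions from the coherence-based literature (the notation is not taken verbatim from the paper): the quasi conjunction of two conditional events and the Goodman-Nguyen inclusion relation.

```latex
% Quasi conjunction of two conditional events E_1|H_1 and E_2|H_2
% (standard definition in the coherence-based literature):
\[
  C(E_1|H_1,\, E_2|H_2) \;=\;
  \bigl[(E_1 \wedge H_1) \vee \lnot H_1\bigr] \wedge
  \bigl[(E_2 \wedge H_2) \vee \lnot H_2\bigr]
  \,\big|\, (H_1 \vee H_2)
\]
% Goodman-Nguyen inclusion relation between conditional events:
\[
  E|H \subseteq A|K
  \quad\Longleftrightarrow\quad
  E \wedge H \;\Rightarrow\; A \wedge K
  \quad\text{and}\quad
  \lnot A \wedge K \;\Rightarrow\; \lnot E \wedge H
\]
```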

    Coherent Risk Measures and Upper Previsions

    In this paper, coherent risk measures and other currently used risk measures, notably Value-at-Risk (VaR), are studied from the perspective of the theory of coherent imprecise previsions. We introduce the notion of a coherent risk measure defined on an arbitrary set of risks, showing that it can be considered a special case of a coherent upper prevision. We also prove that our definition generalizes the notion of coherence for risk measures defined on a linear space of random numbers, given in the literature. We also show that Value-at-Risk does not necessarily satisfy a weaker notion of coherence called ‘avoiding sure loss’ (ASL), and discuss both sufficient conditions for VaR to avoid sure loss and ways of modifying VaR into a coherent risk measure.
    Keywords: coherent risk measure, imprecise prevision, Value-at-Risk, avoiding sure loss condition
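    As a quick numerical aside (an illustrative toy example, not taken from the paper): VaR can violate subadditivity, one of the coherence axioms, which is the kind of failure that motivates the analysis above. A minimal Monte Carlo sketch in Python, assuming two independent bonds that each lose 100 with probability 4%:

```python
import numpy as np

# Hypothetical example: two independent bonds, each losing 100 with
# probability 0.04 and 0 otherwise.  At the 95% level, VaR of each bond
# alone is 0, but VaR of the combined position is 100, so
# VaR(A + B) > VaR(A) + VaR(B): subadditivity fails.

rng = np.random.default_rng(0)
n = 1_000_000
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var(losses, level=0.95):
    """Empirical Value-at-Risk: the `level`-quantile of the loss distribution."""
    return np.quantile(losses, level)

print("VaR(A)   =", var(loss_a))           # ~0
print("VaR(B)   =", var(loss_b))           # ~0
print("VaR(A+B) =", var(loss_a + loss_b))  # ~100, exceeding VaR(A) + VaR(B)
```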

    Characterizing coherence, correcting incoherence

    Lower previsions defined on a finite set of gambles can be looked at as points in a finite-dimensional real vector space. Within that vector space, the sets of sure-loss-avoiding and coherent lower previsions form convex polyhedra. We present procedures for obtaining characterizations of these polyhedra in terms of a minimal, finite number of linear constraints. Compared to the previously known procedure, these procedures are more efficient and much more straightforward. Next, we look at a procedure for correcting incoherent lower previsions based on pointwise dominance. This procedure can be formulated as a multi-objective linear program, and the availability of the finite characterizations provides an avenue for making these programs computationally feasible.
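    To make the avoiding-sure-loss polyhedron concrete, here is a minimal sketch, assuming a finite possibility space and the standard characterization that a lower prevision avoids sure loss iff it is dominated in expectation by some probability mass function. It checks the condition with a single LP feasibility problem; this is not the paper's procedure, just an illustration in the same polyhedral setting.

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(gambles, lower_prev):
    """gambles: (k, m) array, k gambles on an m-point possibility space.
    lower_prev: length-k array of assessed lower previsions.
    Returns True iff some probability mass function p satisfies
    sum_w p(w) f(w) >= P(f) for every assessed gamble f."""
    k, m = gambles.shape
    # Feasibility LP in p: p >= 0, sum(p) = 1, gambles @ p >= lower_prev.
    res = linprog(
        c=np.zeros(m),                               # any feasible point will do
        A_ub=-gambles, b_ub=-np.asarray(lower_prev),
        A_eq=np.ones((1, m)), b_eq=[1.0],
        bounds=[(0, 1)] * m,
        method="highs",
    )
    return res.status == 0                           # feasible <=> avoids sure loss

# Two gambles on a 2-point space {w1, w2}: the indicators of w1 and w2.
f = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(avoids_sure_loss(f, [0.4, 0.5]))  # True  (0.4 + 0.5 <= 1)
print(avoids_sure_loss(f, [0.6, 0.7]))  # False (no p can dominate both)
```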

    Towards an epistemic theory of probability.

    The main concern of this thesis is to develop an epistemic conception of probability. In chapter one we look at Ramsey's work. In addition to his claim that the axioms of probability are laws of consistency for partial beliefs, we focus attention on his view that the reasonableness of our probability statements does not consist merely in such coherence, but is to be assessed through the vindication of the habits which give rise to them. In chapter two we examine de Finetti's account, and compare it with Ramsey's. One significant point of divergence is de Finetti's claim that coherence is the only valid form of appraisal for probability statements. His arguments for this position depend heavily on the implementation of a Bayesian model for belief change; we argue that such an approach fails to give a satisfactory account of the relation between probabilities and objective facts. In chapter three we stake out the ground for our own positive proposals - for an account which is non-objective in so far as it does not require the postulation of probabilistic facts, but non-subjective in the sense that probability statements are open to objective forms of appraisal. We suggest that a certain class of probability statements are best interpreted as recommendations of partial belief, these being measurable by the betting quotients that one judges to be fair. Moreover, we argue that these probability statements are open to three main forms of appraisal (each quantifiable through the use of proper scoring rules), namely: (i) coherence, (ii) calibration, and (iii) refinement. The latter two forms of appraisal are applicable both in an ex ante sense (relative to the information known by the forecaster) and an ex post one (relative to the results of the events forecast). In chapters four and five we consider certain problems which confront theories of partial belief; in particular, (1) difficulties surrounding the justification of the rule to maximise one's information, and (2) problems with the ascription of probabilities to mathematical propositions. Both of these issues seem resolvable: the first through the principle of maximising expected utility (SEU), and the second either by amending the axioms of probability, or by making use of the notion that probabilities are appraisable via scoring rules. There do remain, however, various difficulties with SEU, in particular with respect to its application in real-life situations. These are discussed, but no final conclusion is reached, except that an epistemic theory such as ours is not undermined by the inapplicability of SEU in certain situations.
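    The calibration and refinement criteria mentioned above are commonly quantified through the Murphy decomposition of the Brier score. The short Python sketch below (an illustration of that standard decomposition, not material from the thesis) computes the reliability, resolution and uncertainty terms for binary forecasts:

```python
import numpy as np

# Murphy decomposition of the Brier score:
#   Brier = reliability - resolution + uncertainty,
# where reliability measures (mis)calibration and resolution measures
# refinement/sharpness.  Forecast probabilities are assumed to take
# finitely many values.

def brier_decomposition(forecasts, outcomes):
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)        # 0/1 outcomes
    n = len(outcomes)
    base_rate = outcomes.mean()
    reliability = resolution = 0.0
    for f in np.unique(forecasts):
        mask = forecasts == f
        n_k = mask.sum()
        o_k = outcomes[mask].mean()                      # observed frequency in this bin
        reliability += n_k * (f - o_k) ** 2 / n          # calibration term
        resolution += n_k * (o_k - base_rate) ** 2 / n   # refinement term
    uncertainty = base_rate * (1 - base_rate)
    brier = np.mean((forecasts - outcomes) ** 2)
    return brier, reliability, resolution, uncertainty

print(brier_decomposition([0.8, 0.8, 0.2, 0.2, 0.8], [1, 1, 0, 1, 0]))
```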

    Approximate Bayesian Computational methods

    Also known as likelihood-free methods, approximate Bayesian computational (ABC) methods have emerged over the past ten years as the most satisfactory approach to intractable likelihood problems, first in genetics and then in a broader spectrum of applications. However, these methods suffer to some degree from calibration difficulties that make them rather volatile in their implementation and thus leave users of more traditional Monte Carlo methods suspicious of them. In this survey, we study the various improvements and extensions made to the original ABC algorithm in recent years.
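    For orientation, the original ABC algorithm referred to above is a simple rejection sampler. The Python sketch below applies it to a toy normal-mean problem; the prior, summary statistic and tolerance are illustrative assumptions, not choices made in the survey:

```python
import numpy as np

# Minimal ABC rejection sampler for a toy problem: infer the mean of a
# Normal(mu, 1) from observed data, pretending the likelihood is unavailable.

rng = np.random.default_rng(42)
observed = rng.normal(2.0, 1.0, size=50)
s_obs = observed.mean()                              # summary statistic

def abc_rejection(n_draws=100_000, eps=0.05):
    mu = rng.normal(0.0, 5.0, size=n_draws)          # draw parameters from the prior
    sims = rng.normal(mu[:, None], 1.0, size=(n_draws, 50))
    s_sim = sims.mean(axis=1)                        # same summary statistic on simulations
    return mu[np.abs(s_sim - s_obs) < eps]           # keep parameters whose data look close

post = abc_rejection()
print(len(post), post.mean(), post.std())            # rough approximation of the posterior
```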

    Stop Making Sense: Exploring Basic Properties and Clinical Applications of Coherence

    This study explored the ways in which people make sense of ambiguous tasks and the degree to which people prefer contexts where coherent responding is possible. Relational frame theory contains a foundational assumption that coherence (i.e., making sense) is reinforcing for verbally competent humans. That is, it is assumed that humans relate ambiguous stimuli in ways that go together because they have an extensive learning history in which others have given praise, positive attention, and other reinforcement for this behavior. This study was designed to empirically investigate this core assumption of relational frame theory by analyzing response patterns to ambiguous stimuli and by assessing whether participants displayed a preference toward coherent contexts. The findings revealed that the majority of participants responded to ambiguous stimuli in ways that were internally consistent and coherent in the absence of any programmed contingencies. Many participants also displayed a preference toward contexts where coherent responding was possible, and a small subset of participants persisted in this preference even when it was increasingly costly to do so. Reports of frustration obtained throughout the preparation were moderated both by performance on study tasks and by measures of cognitive fusion and psychological inflexibility. The major theoretical contributions of these findings, as well as their applied implications, are discussed.

    Imperfect Rationality and Inflationary Inertia: A New Estimation of the Phillips Curve for Brazil

    This paper presents new estimates of the relationship between inflation and unemployment in Brazil based on a New Keynesian hypothesis about the behavior of the economy. Four main hypotheses are tested and sustained throughout the study: i) agents do not have perfect rationality; ii) imperfection in the agents' expectations-generating process may be an important factor in explaining the high persistence (inertia) of Brazilian inflation; iii) inflation does have an autonomous inertial component, without linkage to shocks in individual markets; iv) a non-linear relationship between inflation and unemployment provides a better explanation of the inflation-unemployment relationship in the Brazilian economy over the last 12 years. While the first two hypotheses are tested using a Markov switching model of regime changes, the remaining two are tested in the context of a convex Phillips curve estimated using the Kalman filter. Despite the methodological and estimation improvements provided in the paper, the impulse-response functions for monetary policy presented the same properties shown in the literature that uses Brazilian data.
    Keywords: Phillips Curve; Expectations; Inflation; NAIRU-gap; Markov Switching Models; Kalman Filter; SUR
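    As a rough illustration of the state-space machinery involved (a toy local-level model, not the paper's actual Phillips curve specification), a univariate Kalman filter tracking a slowly moving NAIRU from an unemployment series might look like this in Python; the noise variances and initial values are illustrative assumptions:

```python
import numpy as np

def local_level_filter(y, q=0.01, r=0.25, x0=6.0, p0=1.0):
    """y: observed unemployment series; returns filtered estimates of the
    unobserved level (here read as a slowly moving NAIRU)."""
    x, p = x0, p0
    out = []
    for obs in y:
        p = p + q                    # predict: random-walk state
        k = p / (p + r)              # Kalman gain
        x = x + k * (obs - x)        # update with the new observation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

unemployment = np.array([6.2, 6.5, 7.1, 7.4, 7.0, 6.6, 6.1, 5.9])
print(local_level_filter(unemployment))
```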

    Quasi Conjunction, Quasi Disjunction, T-norms and T-conorms: Probabilistic Aspects

    We carry out a probabilistic analysis of some inference rules which play an important role in nonmonotonic reasoning. In a coherence-based setting, we study the extensions of a probability assessment defined on n conditional events to their quasi conjunction and, by exploiting duality, to their quasi disjunction. The lower and upper bounds coincide with some well-known t-norms and t-conorms: the minimum, product, Lukasiewicz, and Hamacher t-norms and their dual t-conorms. On this basis we obtain Quasi And and Quasi Or rules. These are rules for which any finite family of conditional events p-entails the associated quasi conjunction and quasi disjunction. We examine some cases of logical dependencies, and we study the relations among coherence, inclusion for conditional events, and p-entailment. We also consider the Or rule, where the quasi conjunction and quasi disjunction of the premises coincide with the conclusion. We analyze further aspects of quasi conjunction and quasi disjunction by computing probabilistic bounds on premises from bounds on conclusions. Finally, we consider biconditional events, and we introduce the notion of an n-conditional event. Then we give a probabilistic interpretation for a generalized Loop rule. In an appendix we provide explicit expressions for the Hamacher t-norm and t-conorm in the unit hypercube.
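    For reference, the t-norms and t-conorms that appear as the lower and upper bounds mentioned above have simple closed forms. The short Python sketch below (standard definitions, not code from the paper) evaluates each t-norm and its De Morgan dual t-conorm at a sample point of the unit square:

```python
def t_min(x, y):         return min(x, y)
def t_product(x, y):     return x * y
def t_lukasiewicz(x, y): return max(x + y - 1.0, 0.0)

def t_hamacher(x, y):
    # Hamacher product t-norm; T_H(0, 0) = 0 by convention.
    return 0.0 if x == y == 0.0 else x * y / (x + y - x * y)

def dual_conorm(t_norm, x, y):
    # De Morgan dual t-conorm: S(x, y) = 1 - T(1 - x, 1 - y).
    return 1.0 - t_norm(1.0 - x, 1.0 - y)

x, y = 0.7, 0.4
for name, t in [("min", t_min), ("product", t_product),
                ("Lukasiewicz", t_lukasiewicz), ("Hamacher", t_hamacher)]:
    print(f"{name:12s} T = {t(x, y):.3f}   S = {dual_conorm(t, x, y):.3f}")
```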