
    A Labelling Framework for Probabilistic Argumentation

    The combination of argumentation and probability paves the way to new accounts of qualitative and quantitative uncertainty, thereby offering new theoretical and applicative opportunities. Due to a variety of interests, probabilistic argumentation is approached in the literature with different frameworks, pertaining to structured and abstract argumentation, and with respect to diverse types of uncertainty: in particular, uncertainty about the credibility of the premises, uncertainty about which arguments to consider, and uncertainty about the acceptance status of arguments or statements. Towards a general framework for probabilistic argumentation, we investigate a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-)abstract account, along with diverse types of uncertainty. Our framework provides a systematic treatment of various kinds of uncertainty and of their relationships, and allows us to back or question assertions from the literature.

    The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments

    Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly, but approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of "noise" in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in two new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
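    The core mechanism described in the abstract — a raw relative frequency from a small mental sample, regularised by a generic prior — can be sketched as a posterior-mean estimate under a symmetric Beta prior. The function below is an illustrative reading of that idea, not the authors' code; the Beta(β, β) form and the parameter values are assumptions for the sketch:

    ```python
    import random

    def bayesian_sampler_estimate(true_prob, n_samples, beta, seed=0):
        """Contrast a naive relative-frequency estimate with one regularised
        by a symmetric Beta(beta, beta) prior (the posterior mean)."""
        rng = random.Random(seed)
        # Draw a small "mental sample" of successes/failures.
        k = sum(rng.random() < true_prob for _ in range(n_samples))
        naive = k / n_samples                               # extreme for small n
        regularised = (k + beta) / (n_samples + 2 * beta)   # pulled toward 0.5
        return naive, regularised

    naive, reg = bayesian_sampler_estimate(0.9, n_samples=4, beta=1.0)
    ```

    With only four samples the naive estimate can hit 0 or 1 exactly, while the regularised estimate is always shrunk toward 0.5 — the trade of coherence for accuracy the abstract describes.
    
    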

    Estimation Evidence


    To P or not to P: on the evidential nature of P-values and their place in scientific inference

    The customary use of P-values in scientific research has been attacked as being ill-conceived, and the utility of P-values has been derided. This paper reviews common misconceptions about P-values and their alleged deficits as indices of experimental evidence and, using an empirical exploration of the properties of P-values, documents the intimate relationship between P-values and likelihood functions. It is shown that P-values quantify experimental evidence not by their numerical value, but through the likelihood functions that they index. Many arguments against the utility of P-values are refuted, and the conclusion is drawn that P-values are useful indices of experimental evidence. The widespread use of P-values in scientific research is well justified by the actual properties of P-values, but those properties need to be more widely understood. Comment: 31 pages, 9 figures and R code
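    The abstract's central claim — that a P-value carries evidence through the likelihood function it indexes, not through its raw number — can be illustrated in a toy known-variance normal model. This setup is our assumption for the sketch, not the paper's analysis: for fixed n, the P-value determines the observed z, which in turn pins down the (unnormalised) likelihood function for the mean.

    ```python
    import math

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def two_sided_p(xbar, mu0, sigma, n):
        """Two-sided P-value for H0: mu = mu0, known-variance normal model."""
        z = (xbar - mu0) / (sigma / math.sqrt(n))
        return 2.0 * (1.0 - norm_cdf(abs(z)))

    def likelihood(mu, xbar, sigma, n):
        """Unnormalised normal likelihood for mu given the sample mean."""
        se = sigma / math.sqrt(n)
        return math.exp(-0.5 * ((xbar - mu) / se) ** 2)

    # The familiar z = 1.96 boundary gives a P-value close to 0.05 ...
    p = two_sided_p(1.96, mu0=0.0, sigma=1.0, n=1)
    # ... and the likelihood function it indexes peaks at the observed mean.
    peak = likelihood(1.96, xbar=1.96, sigma=1.0, n=1)
    ```

    Two experiments with the same n and the same P-value index likelihood functions of the same shape, which is why the P-value's evidential content travels with n rather than standing alone.
    
    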

    A Bayesian Probabilistic Argumentation Framework for Learning from Online Reviews

    In the real world it is common for agents to posit arguments concerning an issue but not directly specify the attack relations between them. Nonetheless, the agent may have these attacks in mind and may instead provide a proxy indicator through which one can infer the agent's intended argument graph (arguments and attacks). Consider online reviews, where reviews are collections of arguments for and against the product (positive and negative) under review and the rating indicates whether the positive or negative arguments ultimately succeed. In previous work [1] we proposed a method that formalises this intuition and uses the constellations approach to probabilistic argumentation to construct a probability distribution over the set of argument graphs the agent may have had in mind. In this paper we extend this proposal and provide a method that uses Bayesian inference to update the initial probability distribution using real data. We evaluate our proposal by conducting a number of simulations using synthetic data.
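    The updating step the abstract describes can be sketched as discrete Bayesian inference over a small set of candidate argument graphs. The three graphs, the prior, and the likelihoods below are entirely hypothetical placeholders, not the paper's constellation or data; the point is only the shape of the computation: reweight the prior by how well each graph explains an observed rating, then renormalise.

    ```python
    # Hypothetical constellation: prior probability that the reviewer had
    # each candidate argument graph in mind.
    prior = {"G1": 0.5, "G2": 0.3, "G3": 0.2}

    # Hypothetical likelihoods: P(observed review rating | graph).
    likelihood = {"G1": 0.9, "G2": 0.2, "G3": 0.5}

    def bayes_update(prior, likelihood):
        """Posterior over graphs: prior times likelihood, renormalised."""
        unnorm = {g: prior[g] * likelihood[g] for g in prior}
        z = sum(unnorm.values())
        return {g: v / z for g, v in unnorm.items()}

    posterior = bayes_update(prior, likelihood)
    ```

    After the update, graphs that better explain the observed rating (here G1) gain probability mass at the expense of the others.
    
    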