A quantum theoretical explanation for probability judgment errors
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction, disjunction, inverse, and conditional fallacies, as well as unpacking effects and partitioning effects. Quantum probability theory is a general and coherent theory based on a set of (von Neumann) axioms which relax some of the constraints underlying classic (Kolmogorov) probability theory. The quantum model is compared and contrasted with other competing explanations for these judgment errors including the representativeness heuristic, the averaging model, and a memory retrieval model for probability judgments. The quantum model also provides ways to extend Bayesian, fuzzy set, and fuzzy trace theories. We conclude that quantum information processing principles provide a viable and promising new way to understand human judgment and reasoning.
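As a hedged illustration of the kind of mechanism the quantum model relies on (a generic sketch, not the authors' exact formulation): in quantum probability, events are subspaces, judged probabilities come from projecting a belief state, and projections need not commute. A sequentially evaluated conjunction can then receive a higher judged probability than one of its conjuncts alone, which is the conjunction fallacy. A minimal two-dimensional sketch, with the belief state and both event angles invented purely for illustration:

```python
import numpy as np

def projector(theta):
    """Projector onto the 1-D subspace spanned by (cos theta, sin theta)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])        # belief state (unit vector)
P_A = projector(np.deg2rad(40))   # event A: close to the belief state
P_B = projector(np.deg2rad(80))   # event B: nearly orthogonal to it

p_B = np.linalg.norm(P_B @ psi) ** 2                # judged P(B) alone
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2   # "A and B", evaluated as A then B

# The sequential conjunction exceeds the single-event judgment
print(round(p_B, 3), round(p_A_then_B, 3))
```

Because P_A and P_B do not commute, the order of evaluation matters, which is also how models in this family accommodate order effects in judgment.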
Raising argument strength using negative evidence: A constraint on models of induction
Both intuitively and according to similarity-based theories of induction, relevant evidence raises argument strength when it is positive and lowers it when it is negative. In three experiments, we tested the hypothesis that argument strength can actually increase when negative evidence is introduced. Two kinds of argument were compared through forced choice or sequential evaluation: single positive arguments (e.g., “Shostakovich’s music causes alpha waves in the brain; therefore, Bach’s music causes alpha waves in the brain”) and double mixed arguments (e.g., “Shostakovich’s music causes alpha waves in the brain, X’s music DOES NOT; therefore, Bach’s music causes alpha waves in the brain”). Negative evidence in the second premise lowered credence when it applied to an item X from the same subcategory (e.g., Haydn) and raised it when it applied to a different subcategory (e.g., AC/DC). The results constitute a new constraint on models of induction.
The objective Bayesian conceptualisation of proof and reference class problems
The objective Bayesian view of proof (or logical probability, or evidential support) is explained and defended: that the relation of evidence to hypothesis (in legal trials, science, etc.) is a strictly logical one, comparable to deductive logic. This view is distinguished from the thesis, which had some popularity in law in the 1980s, that legal evidence ought to be evaluated using numerical probabilities and formulas. While numbers are not always useful, a central role is played in uncertain reasoning by the ‘proportional syllogism’, or argument from frequencies, such as ‘nearly all aeroplane flights arrive safely, so my flight is very likely to arrive safely’. Such arguments raise the ‘problem of the reference class’, arising from the fact that an individual case may be a member of many different classes in which frequencies differ. For example, if 15 per cent of swans are black and 60 per cent of fauna in the zoo is black, what should I think about the likelihood of a swan in the zoo being black? The nature of the problem is explained, and legal cases where it arises are given. It is explained how recent work in data mining on the relevance of features for prediction provides a solution to the reference class problem.
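The swan example above can be rendered concretely. With hypothetical counts (invented purely for illustration), the proportional syllogism gives a different frequency-based answer for each reference class the individual belongs to, and the frequencies alone do not say which class to use:

```python
# Hypothetical counts for the abstract's example, invented for illustration:
# each reference class an individual may belong to, with the frequency of
# 'black' within it.
classes = {
    "swans":     {"black": 15, "total": 100},   # 15% of swans are black
    "zoo fauna": {"black": 60, "total": 100},   # 60% of zoo fauna is black
}

for name, c in classes.items():
    freq = c["black"] / c["total"]
    print(f"P(black | {name}) = {freq:.2f}")

# A swan living in the zoo is a member of both classes, so the argument
# from frequencies yields two conflicting estimates: that conflict is
# the reference class problem.
```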
The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments
Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly; instead it may approximate probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in two new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
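On the usual presentation of this model (a sketch under that assumption, not a reproduction of the authors' fitted model), the raw relative frequency k/N from a small mental sample is regularized by a symmetric Beta prior, giving the posterior-mean estimate (k + beta)/(N + 2*beta). A minimal sketch with illustrative parameter values, showing one resulting incoherence: in expectation, the estimates for two exclusive parts of an event sum to more than the estimate for the whole event, a form of superadditivity:

```python
def expected_estimate(p, N, beta):
    """Expected Bayesian-sampler estimate of an event with true probability p:
    E[(k + beta) / (N + 2*beta)] with k ~ Binomial(N, p),
    i.e. a relative frequency shrunk toward 0.5 by a Beta(beta, beta) prior."""
    return (N * p + beta) / (N + 2 * beta)

N, beta = 10, 1.0          # illustrative values, not fitted to data
p_parts = [0.25, 0.15]     # true probabilities of two exclusive parts
p_whole = sum(p_parts)     # true probability of the whole event, 0.40

whole = expected_estimate(p_whole, N, beta)
parts = sum(expected_estimate(p, N, beta) for p in p_parts)

print(round(whole, 3), round(parts, 3))  # the parts sum to more than the whole
```

Each shrunk estimate picks up its own +beta term, so splitting an event into parts inflates the judged total while each individual estimate becomes more accurate on average.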
Learning in a changing environment
Multiple cue probability learning studies have typically focused on stationary environments. We present three experiments investigating learning in changing environments. A fine-grained analysis of the learning dynamics shows that participants were responsive to both abrupt and gradual changes in cue-outcome relations. We found no evidence that participants adapted to these types of change in qualitatively different ways. Also, in contrast to earlier claims that these tasks are learned implicitly, participants showed good insight into what they learned. By fitting formal learning models, we investigated whether participants learned global functional relationships or made localized predictions from similar experienced exemplars. Both a local model (the Associative Learning Model) and a global learning model (the novel Bayesian Linear Filter) fitted the data of the first two experiments. However, the results of Experiment 3, which was specifically designed to discriminate between local and global learning models, provided more support for global learning models. Finally, we present a novel model to account for the cue competition effects found in previous research and displayed by some of our participants.
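The Bayesian Linear Filter is not specified in this abstract; as a sketch of the general idea of a global linear model tracked through a changing environment (the Kalman-filter form, drift and noise variances, and the change scenario below are all assumptions for illustration, not the authors' model), a learner can maintain a posterior over cue weights that is allowed to drift, letting it follow both gradual and abrupt changes in cue-outcome relations:

```python
import numpy as np

def kalman_regression(cues, outcomes, drift=0.05, noise=1.0):
    """Track linear cue weights w_t where outcome ~ cues @ w_t + noise,
    assuming w_t performs a random walk with variance `drift` per trial.
    Variance values here are illustrative, not fitted."""
    d = cues.shape[1]
    w = np.zeros(d)                  # posterior mean over weights
    P = np.eye(d)                    # posterior covariance
    history = []
    for x, y in zip(cues, outcomes):
        P = P + drift * np.eye(d)           # weights may have drifted
        k = P @ x / (x @ P @ x + noise)     # Kalman gain
        w = w + k * (y - x @ w)             # move toward the prediction error
        P = P - np.outer(k, x) @ P
        history.append(w.copy())
    return np.array(history)

# Abrupt change: the predictive cue switches halfway through the task
rng = np.random.default_rng(0)
cues = rng.normal(size=(200, 2))
w_true = np.where(np.arange(200)[:, None] < 100, [1.0, 0.0], [0.0, 1.0])
outcomes = (cues * w_true).sum(axis=1) + 0.1 * rng.normal(size=200)

w_hat = kalman_regression(cues, outcomes)
print(np.round(w_hat[-1], 1))  # estimated weights after the switch
```

Because the posterior covariance is inflated on every trial, old evidence is gradually discounted, which is what lets a single global model re-adapt after an abrupt change.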
A Metacognitive Approach to Trust and a Case Study: Artificial Agency
Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (‘the trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in the line of a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two such competences: (i) confidence in the decision and (ii) model uncertainty. To trust A, H demands A to be self-assertive about confidence and able to self-correct its own models. In the Bayesian approach trust can be applied not only to humans, but also to artificial agents (e.g. Machine Learning algorithms). We explain the advantage of the metacognitive approach to trust when compared to mainstream approaches and how it relates to virtue epistemology. The metacognitive ethics of trust is briefly discussed.
Individual differences in causal learning and decision making
This is an accepted author manuscript of an article subsequently published by Elsevier. The final published version can be found here: http://dx.doi.org/10.1016/j.actpsy.2005.04.003
In judgment and decision making tasks, people tend to neglect the overall frequency of base-rates when they estimate the probability of an event; this is known as the base-rate fallacy. In causal learning, despite people's accuracy at judging causal strength according to one or other normative model (i.e., Power PC, ΔP), they tend to misperceive base-rate information (e.g., the cause density effect). The present study investigates the relationship between causal learning and decision making by asking whether people weight base-rate information in the same way when estimating causal strength and when making judgments or inferences about the likelihood of an event. The results suggest that people differ according to the weight they place on base-rate information, but the way individuals do this is consistent across causal and decision making tasks. We interpret the results as reflecting a tendency to differentially weight base-rate information which generalizes to a variety of tasks. Additionally, this study provides evidence that causal learning and decision making share some component processes.
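The two normative models named above have standard definitions: ΔP = P(e|c) − P(e|¬c), and causal power (Power PC; Cheng, 1997) divides ΔP by 1 − P(e|¬c) for a generative cause. A minimal sketch with invented contingency counts (the counts are illustrative, not from the study):

```python
def delta_p(a, b, c, d):
    """DeltaP from a 2x2 contingency table:
    a = cause & effect, b = cause & no effect,
    c = no cause & effect, d = no cause & no effect."""
    p_e_c = a / (a + b)      # P(effect | cause)
    p_e_nc = c / (c + d)     # P(effect | no cause): the effect's base rate
    return p_e_c - p_e_nc

def causal_power(a, b, c, d):
    """Power PC estimate for a generative cause."""
    p_e_nc = c / (c + d)
    return delta_p(a, b, c, d) / (1 - p_e_nc)

# Invented counts: P(e|c) = 0.8, P(e|~c) = 0.4
print(round(delta_p(16, 4, 8, 12), 2), round(causal_power(16, 4, 8, 12), 2))
```

The two measures weight the base rate of the effect differently, which is why accuracy relative to one or the other can coexist with misperceiving base-rate information.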