There are no universal rules for induction
In a material theory of induction, inductive inferences are warranted by facts that prevail locally. This approach, it is urged, is preferable to formal theories of induction in which the good inductive inferences are delineated as those conforming to universal schemas. An inductive inference problem concerning indeterministic, nonprobabilistic systems in physics is posed, and it is argued that Bayesians cannot responsibly analyze it, thereby demonstrating that the probability calculus is not the universal logic of induction. Copyright 2010 by the Philosophy of Science Association. All rights reserved.
Does informal logic have anything to learn from fuzzy logic?
Probability theory is the arithmetic of the real line constrained by special aleatory axioms. Fuzzy logic is also a kind of probability theory, but of considerably more mathematical and axiomatic complexity than the standard account. Fuzzy logic purports to model the human capacity for reasoning with inexact concepts. It does this by exploring the assumption that when we argue in inexact terms and draw inferences in imprecise vocabularies, we actually make computations about the embedded imprecisions. I argue that this is in fact the last thing that we do, and indeed that we do the opposite.
New normative standards of conditional reasoning and the dual-source model
There has been a major shift in research on human reasoning toward Bayesian and probabilistic approaches, which has been called a new paradigm. The new paradigm sees most everyday and scientific reasoning as taking place in a context of uncertainty, and inference is from uncertain beliefs and not from arbitrary assumptions. In this manuscript we present an empirical test of normative standards in the new paradigm using a novel probabilized conditional reasoning task. Our results indicated that for everyday conditionals with at least a weak causal connection between antecedent and consequent, only the conditional probability of the consequent given the antecedent contributes unique variance to predicting the probability of the conditional, but not the probability of the conjunction, nor the probability of the material conditional. Regarding normative accounts of reasoning, we found significant evidence that participants' responses were confidence preserving (i.e., p-valid in the sense of Adams, 1998) for MP inferences, but not for MT inferences. Additionally, only for MP inferences, and to a lesser degree for DA inferences, did the rate of responses inside the coherence intervals defined by mental probability logic (Pfeifer and Kleiter, 2005, 2010) exceed chance levels. In contrast to the normative accounts, the dual-source model (Klauer et al., 2010) is a descriptive model. It posits that participants integrate their background knowledge (i.e., the type of information primary to the normative approaches) and their subjective probability that a conclusion is seen as warranted based on its logical form. Model fits showed that the dual-source model, which employed participants' responses to a deductive task with abstract contents to estimate the form-based component, provided as good an account of the data as a model that solely used data from the probabilized conditional reasoning task.
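The coherence interval mentioned in this abstract can be made concrete. For modus ponens, probability logic in the Pfeifer–Kleiter tradition gives tight bounds on the conclusion: from P(A) and P(B|A), any coherent P(B) must lie between P(A)·P(B|A) and P(A)·P(B|A) + (1 − P(A)). The sketch below is a generic illustration of that standard bound, not the authors' experimental procedure; the example premise probabilities are invented.

```python
def mp_coherence_interval(p_a: float, p_b_given_a: float) -> tuple[float, float]:
    """Coherence interval for P(B) inferred by modus ponens
    from the premise probabilities P(A) and P(B|A).

    Lower bound: B holds at least when A and B-given-A both hold.
    Upper bound: add the probability mass of not-A, where B is unconstrained.
    """
    low = p_a * p_b_given_a
    high = low + (1.0 - p_a)
    return low, high

# Hypothetical premises: P(A) = 0.8, P(B|A) = 0.9.
low, high = mp_coherence_interval(0.8, 0.9)
print(low, high)  # 0.72 0.92
```

A response for P(B) counts as "inside the coherence interval" when it falls within these bounds; as P(A) approaches 1, the interval narrows toward the point value P(B|A).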
Explanations as Programs in Probabilistic Logic Programming
The generation of comprehensible explanations is an essential feature of
modern artificial intelligence systems. In this work, we consider probabilistic
logic programming, an extension of logic programming which can be useful to
model domains with relational structure and uncertainty. Essentially, a program
specifies a probability distribution over possible worlds (i.e., sets of
facts). The notion of explanation is typically associated with that of a world,
so that one often looks for the most probable world as well as for the worlds
where the query is true. Unfortunately, such explanations exhibit no causal
structure. In particular, the chain of inferences required for a specific
prediction (represented by a query) is not shown. In this paper, we propose a
novel approach where explanations are represented as programs that are
generated from a given query by a number of unfolding-like transformations.
Here, the chain of inferences that proves a given query is made explicit.
Furthermore, the generated explanations are minimal (i.e., contain no
irrelevant information) and can be parameterized w.r.t. a specification of
visible predicates, so that the user may hide uninteresting details from
explanations.
Comment: Published as: Vidal, G. (2022). Explanations as Programs in
Probabilistic Logic Programming. In: Hanus, M., Igarashi, A. (eds) Functional
and Logic Programming. FLOPS 2022. Lecture Notes in Computer Science, vol
13215. Springer, Cham. The final authenticated publication is available
online at https://doi.org/10.1007/978-3-030-99461-7_1
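The possible-worlds semantics this abstract describes can be illustrated with a tiny enumeration. In the standard distribution semantics for probabilistic logic programs, independent probabilistic facts induce a distribution over worlds, and the probability of a query is the total mass of the worlds in which it holds. The sketch below is a generic illustration of that semantics with invented example facts (`burglary`, `earthquake`) and a hand-coded rule; it is not the paper's explanation-generation method.

```python
from itertools import product

# Hypothetical probabilistic facts: each holds independently
# with the stated probability.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}

def worlds(facts):
    """Enumerate all truth assignments to the probabilistic facts,
    yielding each world together with its probability (the product
    of the per-fact probabilities)."""
    names = list(facts)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        p = 1.0
        for name, holds in world.items():
            p *= facts[name] if holds else 1.0 - facts[name]
        yield world, p

def alarm(world):
    """Deterministic rule: alarm :- burglary ; earthquake."""
    return world["burglary"] or world["earthquake"]

# P(alarm) = sum over the worlds where the query is true.
p_alarm = sum(p for w, p in worlds(prob_facts) if alarm(w))
print(round(p_alarm, 3))  # 0.28, i.e. 1 - 0.9 * 0.8
```

An explanation in the world-based sense would be one such world (e.g. the most probable one where `alarm` holds); the paper's point is that a world, being a flat set of facts, shows none of the chain of inferences that programs-as-explanations make explicit.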