
    Argumentation and data-oriented belief revision: On the two-sided nature of epistemic change

    This paper aims to bring together two separate threads in the formal study of epistemic change: belief revision and argumentation theories. Belief revision describes the way in which an agent is supposed to change its own mind, while argumentation deals with persuasive strategies employed to change the minds of other agents. Belief change and argumentation are two sides (cognitive and social) of the same epistemic coin. Argumentation theories are therefore incomplete if they cannot be grounded in belief revision models, and vice versa. Nonetheless, the formal treatment of belief revision has so far largely neglected any systematic comparison with argumentation theories. This lack of integration poses severe limitations on our understanding of epistemic change, and more comprehensive models should instead be devised. After a short critical review of the literature (cf. 1), we outline an alternative model of belief revision whose main claim is the distinction between data and beliefs (cf. 2), and we discuss in detail its expressivity with respect to argumentation (cf. 3); finally, we summarize our conclusions and future work on the interface between belief revision and argumentation (cf. 4).
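
    The paper's central distinction between data (information the agent has received) and beliefs (information it currently accepts) can be illustrated with a minimal sketch. The class, the literal encoding, and the acceptance rule (most recent datum wins on conflict) are illustrative assumptions, not the authors' actual formalism:

        # Minimal sketch of a data-oriented agent: incoming data are retained
        # even when rejected, and beliefs are recomputed from the data store.
        # The acceptance rule (later data override earlier contradictory data,
        # "p" vs "~p") is an illustrative assumption only.

        class DataOrientedAgent:
            def __init__(self):
                self.data = []  # every piece of incoming information, kept forever

            def receive(self, proposition, source):
                self.data.append((proposition, source))

            def beliefs(self):
                accepted = {}
                for prop, _source in self.data:
                    core = prop.lstrip("~")
                    accepted[core] = prop  # most recent datum about `core` wins
                return set(accepted.values())

        agent = DataOrientedAgent()
        agent.receive("p", "witness A")
        agent.receive("~p", "witness B")   # conflicting datum arrives later
        print(agent.beliefs())             # {'~p'}: beliefs changed
        print(len(agent.data))             # 2: the rejected datum "p" is still available

    The point of the two-level design is that revising beliefs does not discard data, so earlier evidence remains available for later re-evaluation, which is exactly what a purely belief-level model cannot express.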

    Arguments as Belief Structures: Towards a Toulmin Layout of Doxastic Dynamics?

    Argumentation is a dialogical attempt to bring about a desired change in the beliefs of another agent, that is, to trigger a specific belief revision process in the mind of that agent. However, formal models of belief revision have so far largely neglected any systematic comparison with argumentation theories, to the point that even the simplest argumentation structures cannot be captured within such models. In this essay, we endeavour to bring together argumentation and belief revision in the same formal framework, and to highlight the important role played by Toulmin’s layout of argument in fostering such integration.
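
    For readers unfamiliar with it, Toulmin's layout analyses an argument into claim, data (grounds), warrant, backing, qualifier, and rebuttal. A sketch of that structure as a data type, populated with Toulmin's own well-known example (the class itself is only an illustrative encoding, not the essay's formal framework):

        # Toulmin's layout of argument as a plain data structure. Field names
        # follow Toulmin's standard terminology; the encoding is illustrative.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class ToulminArgument:
            claim: str                      # the conclusion being argued for
            data: List[str]                 # grounds: facts offered in support
            warrant: str                    # licence for the step from data to claim
            backing: Optional[str] = None   # support for the warrant itself
            qualifier: str = "presumably"   # strength of the inference
            rebuttal: Optional[str] = None  # conditions defeating the inference

        arg = ToulminArgument(
            claim="Harry is a British subject",
            data=["Harry was born in Bermuda"],
            warrant="A man born in Bermuda will generally be a British subject",
            backing="British nationality statutes",
            rebuttal="unless both his parents were aliens",
        )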

    An Epistemological Study of Theory Change

    Belief Revision is a well-established field of research that deals with how agents rationally change their minds in the face of new information. The milestone of Belief Revision is a general and versatile formal framework introduced by Alchourrón, Gärdenfors and Makinson, known as the AGM paradigm, which has been, to this date, the dominant model within the field. A main shortcoming of the AGM paradigm, as originally proposed, is its lack of any guidelines for relevant change. To remedy this weakness, Parikh proposed a relevance-sensitive axiom, which applies to splittable theories; i.e., theories that can be divided into syntax-disjoint compartments. The aim of this article is to provide an epistemological interpretation of the dynamics (revision) of splittable theories, from the perspective of Kuhn's influential work on the evolution of scientific knowledge, through the consideration of principal belief-change scenarios. The whole study establishes a conceptual bridge between rational belief revision and traditional philosophy of science, which sheds light on the application of formal epistemological tools to the dynamics of knowledge.
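
    For reference, Parikh's relevance-sensitive axiom (P) is commonly stated along the following lines; the notation here follows the general literature and may differ from the article's own formulation:

        (P): \text{if } K = \mathrm{Cn}(x \wedge y),\ \mathcal{L}(x) \cap \mathcal{L}(y) = \emptyset,\ \varphi \in \mathcal{L}(x),
             \text{ then } K \ast \varphi = \mathrm{Cn}\big((x \circ \varphi) \wedge y\big)

    Here \circ is a revision operator confined to the sublanguage \mathcal{L}(x): revising by information relevant only to one compartment leaves the syntax-disjoint remainder y untouched, which is what makes the Kuhnian reading (local revolutions within an otherwise stable paradigm) possible.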

    Extending Dynamic Doxastic Logic: Accommodating Iterated Beliefs And Ramsey Conditionals Within DDL

    In this paper we distinguish between various kinds of doxastic theories. One distinction is between informal and formal doxastic theories. AGM-type theories of belief change are of the former kind, while Hintikka’s logic of knowledge and belief is of the latter. Then we distinguish between static theories that study the unchanging beliefs of a certain agent and dynamic theories that investigate not only the constraints that can reasonably be imposed on the doxastic states of a rational agent but also rationality constraints on the changes of doxastic state that may occur in such agents. An additional distinction is that between non-introspective theories and introspective ones. Non-introspective theories investigate agents that have opinions about the external world but no higher-order opinions about their own doxastic states. Standard AGM-type theories as well as the currently existing versions of Segerberg’s dynamic doxastic logic (DDL) are non-introspective. Hintikka-style doxastic logic is of course introspective, but it is a static theory. Thus, the challenge remains to devise doxastic theories that are both dynamic and introspective. We outline the semantics for truly introspective dynamic doxastic logic, i.e., a dynamic doxastic logic that allows us to describe agents who have the ability both to form higher-order beliefs and to reflect upon and change their minds about their own (higher-order) beliefs. This extension of DDL demands that we give up the Preservation condition on revision. We make some suggestions as to how such a non-preservative revision operation can be constructed. We also consider extending DDL with conditionals satisfying the Ramsey test and show that Gärdenfors’ well-known impossibility result applies to such a framework. Also in this case, Preservation has to be given up.
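
    Two of the components at stake can be stated compactly in their standard formulations (quoted from the general literature, not from this paper): the Ramsey test reads a conditional off revision, while Preservation forbids belief loss when the input is consistent with the current theory.

        (RT):           A > B \in K \iff B \in K \ast A
        (Preservation): \neg A \notin K \implies K \subseteq K \ast A

    Gärdenfors' impossibility result shows that no non-trivial belief revision system satisfies both at once, which is why the extensions sketched in the paper must sacrifice Preservation.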

    Reasoning Biases, Non-Monotonic Logics and Belief Revision

    A range of formal models of human reasoning have been proposed in fields such as philosophy, logic, artificial intelligence, computer science, psychology, and cognitive science: various logics (epistemic logics; non-monotonic logics), probabilistic systems (most notably, but not exclusively, Bayesian probability theory), belief revision systems, and neural networks, among others. Now, it seems reasonable to require that formal models of human reasoning be (minimally) empirically adequate if they are to be viewed as models of the phenomena in question. How are formal models of human reasoning typically put to empirical test? One way is to isolate a number of key principles of the system, and design experiments to gauge the extent to which participants do or do not follow them in reasoning tasks. Another way is to take relevant existing results and check whether a particular formal model predicts them. The present investigation provides an illustration of the second kind of empirical testing by comparing two formal models of reasoning, namely preferential logic (a non-monotonic logic) and screened belief revision (a particular version of belief revision theory), against the phenomenon known in the psychology of reasoning literature as belief bias: human reasoners typically seek to maintain the beliefs they already hold, and conversely to reject contradicting incoming information. The conclusion of our analysis will be that screened belief revision is more empirically adequate with respect to belief bias than preferential logic and non-monotonic logics in general, as what participants seem to be doing is above all a form of belief management on the basis of background knowledge. The upshot is thus that, while it may offer valuable insights into the nature of human reasoning, preferential logic (and non-monotonic logics in general) is ultimately inadequate as a formal model of the phenomena in question.
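
    The core idea of screened revision (input inconsistent with a protected set of core beliefs is rejected outright; anything else is revised in normally) maps onto belief bias quite directly. A minimal sketch, assuming a literal-based belief representation and a naive revision step, neither of which is the paper's actual machinery:

        # Sketch of screened revision: information contradicting the protected
        # core is screened out (modelling belief bias); otherwise an ordinary
        # revision is applied. Literal-based consistency checking and the
        # naive revision step are simplifying assumptions for illustration.

        def negate(lit):
            return lit[1:] if lit.startswith("~") else "~" + lit

        def consistent_with(core, phi):
            return negate(phi) not in core

        def screened_revise(beliefs, core, phi):
            if not consistent_with(core, phi):
                return set(beliefs)                         # screened out: no change
            revised = {b for b in beliefs if b != negate(phi)}  # naive revision
            revised.add(phi)
            return revised

        core = {"vaccines_safe"}
        beliefs = {"vaccines_safe", "earth_round"}
        # Input contradicting the core is simply rejected:
        print(screened_revise(beliefs, core, "~vaccines_safe"))  # unchanged
        # Input compatible with the core is incorporated normally:
        print(screened_revise(beliefs, core, "~earth_round"))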

    Believing Conspiracy Theories: A Bayesian Approach to Belief Protection

    Despite the harmful impact of conspiracy theories on public discourse, there is little agreement about their exact nature. Rather than define conspiracy theories as such, we focus on the notion of conspiracy belief. We analyse three recent proposals that identify belief in conspiracy theories as an effect of irrational reasoning. Although these views are sometimes presented as competing alternatives, they share the main commitment that conspiracy beliefs are epistemically flawed because they resist revision given disconfirming evidence. However, the three views currently lack the formal detail necessary for an adequate comparison. In this paper, we bring these views closer together by exploring the rationality of conspiracy belief under a probabilistic framework. By utilising Michael Strevens’ Bayesian treatment of auxiliary hypotheses, we question the claim that the irrationality associated with conspiracy belief is due to a failure of belief revision given disconfirming evidence. We argue that maintaining a core conspiracy belief can be perfectly Bayes-rational when such beliefs are embedded in networks of auxiliary beliefs, which can be sacrificed to protect the more central ones. We propose that the irrationality associated with conspiracy belief lies not in a flawed updating method according to subjective standards but in a failure to converge towards well-confirmed stable belief networks in the long run. We discuss a set of initial reasoning biases as a possible reason for such a failure. Our approach reconciles previously disjointed views, while at the same time offering a formal platform for their further development.
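
    The belief-protection mechanism can be seen in a toy Bayesian calculation: when a core hypothesis C only predicts the evidence jointly with an auxiliary A, disconfirming evidence ~E drives down the posterior of A while leaving C almost untouched. All priors and likelihoods below are made-up illustrative values, not the paper's:

        # Sketch of belief protection via auxiliary hypotheses: the auxiliary
        # absorbs the disconfirmation, so the core belief barely moves.
        # All numbers are illustrative assumptions.

        from itertools import product

        prior_C, prior_A = 0.9, 0.6   # core conspiracy belief C, auxiliary A
        # Likelihood of the DISconfirming evidence ~E in each world:
        lik = {(True, True): 0.05,    # C & A jointly predicted E, so ~E is unlikely
               (True, False): 0.90,   # with A false, C no longer predicts E
               (False, True): 0.50,
               (False, False): 0.50}

        joint = {(c, a): (prior_C if c else 1 - prior_C) *
                         (prior_A if a else 1 - prior_A) * lik[(c, a)]
                 for c, a in product([True, False], repeat=2)}
        Z = sum(joint.values())

        post_C = sum(v for (c, _a), v in joint.items() if c) / Z
        post_A = sum(v for (_c, a), v in joint.items() if a) / Z
        print(f"P(C | ~E) = {post_C:.3f}")  # ~0.875: down only slightly from 0.9
        print(f"P(A | ~E) = {post_A:.3f}")  # ~0.142: down sharply from 0.6

    Each update step here is perfectly Bayes-rational, which is the paper's point: the diagnosis of irrationality has to target the long-run trajectory of the belief network, not the updating rule itself.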
