    Justifying Inference to the Best Explanation as a Practical Meta-Syllogism on Dialectical Structures

    This article discusses how inference to the best explanation (IBE) can be justified as a practical meta-argument. It is, firstly, justified as a *practical* argument insofar as accepting the best explanation as true can be shown to further a specific aim. And because this aim is a discursive one that proponents can rationally pursue in--and relative to--a complex controversy, namely maximising the robustness of one's position, IBE can be conceived, secondly, as a *meta*-argument. My analysis thus bears a certain analogy to Sellars' well-known justification of inductive reasoning (Sellars 1969); it is based on recently developed theories of complex argumentation (Betz 2010, 2011).

    Revamping Hypothetico-Deductivism: A Dialectic Account of Confirmation

    We use recently developed approaches in argumentation theory to revamp the hypothetico-deductive model of confirmation, thus alleviating the well-known paradoxes the H-D account faces. More specifically, we introduce the concept of dialectic confirmation against the background of the so-called theory of dialectical structures (Betz 2010, 2011). Dialectic confirmation generalises hypothetico-deductive confirmation and mitigates the raven paradox, the grue paradox, the tacking paradox, the paradox from conceptual difference, and the problem of novelty.
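
    To make the classical H-D scheme that dialectic confirmation generalises concrete, here is a minimal propositional sketch (not the paper's formalism; the atoms and claims are purely illustrative): evidence confirms a hypothesis if the hypothesis, together with auxiliaries, entails it, and the tacking paradox arises because conjoining an arbitrary irrelevant claim preserves that entailment.

```python
# Toy sketch of hypothetico-deductive confirmation and the tacking paradox.
# Not the dialectic-confirmation formalism of the paper; atoms are illustrative.
from itertools import product

ATOMS = ("is_raven", "is_black", "irrelevant_claim")

def worlds():
    """All truth assignments over the atomic sentences."""
    for values in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def entails(h, e):
    """H entails E iff E holds in every world in which H holds."""
    return all(e(w) for w in worlds() if h(w))

hypothesis = lambda w: (not w["is_raven"]) or w["is_black"]  # "ravens are black" (for one object)
auxiliary = lambda w: w["is_raven"]                          # "the observed object is a raven"
evidence = lambda w: w["is_black"]                           # "the observed object is black"

# Classical H-D confirmation: the evidence confirms the hypothesis because
# hypothesis + auxiliary jointly entail it.
print(entails(lambda w: hypothesis(w) and auxiliary(w), evidence))  # True

# Tacking paradox: conjoining an irrelevant claim preserves the entailment,
# so the same evidence "confirms" the tacked-on conjunction as well.
tacked = lambda w: hypothesis(w) and w["irrelevant_claim"]
print(entails(lambda w: tacked(w) and auxiliary(w), evidence))      # True
```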

    Fallacies in Scenario Reasoning

    Policy-makers frequently face substantial uncertainties and are required to cope with alternative scenarios that depict possible future developments. This paper argues that scenario reasoning is prone to characteristic mistakes. Probabilistic fallacies quantify uncertainties in an illegitimate way. Possibilistic fallacies systematically underestimate the full range of uncertainty, neglect relevant possibilities, or attempt to represent a space of possibilities in an oversimplified way. Decision-theoretic fallacies, finally, fail to take the full range of uncertainties into account when justifying decisions, or misinterpret possibility statements by assigning them a special decision-theoretic meaning.
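
    A minimal toy illustration (not taken from the paper; all payoff figures are hypothetical) of the decision-theoretic fallacy mentioned last: if a relevant scenario is neglected, a maximin evaluation can recommend a different act than it would over the full possibility space.

```python
# Hypothetical payoff table: two acts evaluated across possible scenarios.
payoffs = {
    "act_A": {"scenario_1": 5, "scenario_2": 4, "scenario_3": -10},
    "act_B": {"scenario_1": 3, "scenario_2": 3, "scenario_3": 2},
}

def maximin(acts, scenarios):
    """Pick the act whose worst-case payoff over the given scenarios is best."""
    return max(acts, key=lambda a: min(payoffs[a][s] for s in scenarios))

# Neglecting scenario_3 recommends act_A; the full possibility space flips
# the recommendation to act_B.
print(maximin(payoffs, ["scenario_1", "scenario_2"]))                # act_A
print(maximin(payoffs, ["scenario_1", "scenario_2", "scenario_3"]))  # act_B
```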

    DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models

    In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: it inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence.
    Comment: A demo is available at https://huggingface.co/spaces/debatelab/deepa2-demo, the model can be downloaded from https://huggingface.co/debatelab/argument-analyst, and the datasets can be accessed at https://huggingface.co/datasets/debatelab/aaa.
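
    The linked checkpoint can presumably be queried like any other Hugging Face text2text model; the following sketch shows one way to do so. The model id is taken from the links above, but the instruction prefix is an assumption and may not match the structured task modes DeepA2 actually defines.

```python
# Minimal sketch (not the authors' reference code): querying the published
# ArgumentAnalyst checkpoint via the standard Hugging Face text2text pipeline.
from transformers import pipeline

analyst = pipeline(
    "text2text-generation",
    model="debatelab/argument-analyst",  # checkpoint linked in the abstract
)

source_text = (
    "Whales are mammals, and every mammal breathes air. "
    "So whales cannot be fish."
)

# Hypothetical instruction prefix; DeepA2's real interface defines its own
# structured modes for premises, conclusions, and formalizations.
result = analyst(f"reconstruct argument: {source_text}", max_length=256)
print(result[0]["generated_text"])
```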

    The moral controversy about Climate Engineering - an argument map. Version 2011-02-24

    Probabilistic coherence, logical consistency, and Bayesian learning: Neural language models as epistemic agents

    It is argued that suitably trained neural language models exhibit key properties of epistemic agency: they hold probabilistically coherent and logically consistent degrees of belief, which they can rationally revise in the face of novel evidence. To this end, we conduct computational experiments with rankers: T5 models [Raffel et al. 2020] that are pretrained on carefully designed synthetic corpora. Moreover, we introduce a procedure for eliciting a model's degrees of belief, and define numerical metrics that measure the extent to which given degrees of belief violate (probabilistic, logical, and Bayesian) rationality constraints. While pretrained rankers are found to suffer from global inconsistency (in agreement with, e.g., [Jang et al. 2021]), we observe that subsequent self-training on auto-generated texts allows rankers to gradually obtain a probabilistically coherent belief system that is aligned with logical constraints. Such self-training also plays a pivotal role in rational evidential learning, for it seems to enable rankers to propagate a novel evidence item through their belief systems, successively re-adjusting individual degrees of belief. All this, we conclude, confirms the Rationality Hypothesis, i.e. the claim that suitably trained NLMs may exhibit advanced rational skills. We suggest that this hypothesis has empirical, yet also normative and conceptual ramifications far beyond the practical linguistic problems NLMs have originally been designed to solve.
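
    As a simplified illustration of the kind of metric described (the paper defines its own; the sentences and numbers below are invented), one can score how far elicited degrees of belief violate the probabilistic constraint that a sentence and its negation receive probabilities summing to one.

```python
# Illustrative sketch only: one simple way to quantify violations of
# probabilistic coherence, namely the constraint p(s) + p(not-s) = 1 for each
# sentence/negation pair. Names and numbers are hypothetical.
from typing import Dict, Tuple

def negation_incoherence(beliefs: Dict[str, float],
                         pairs: Tuple[Tuple[str, str], ...]) -> float:
    """Mean absolute deviation of p(s) + p(neg_s) from 1 over sentence pairs."""
    deviations = [abs(beliefs[s] + beliefs[neg_s] - 1.0) for s, neg_s in pairs]
    return sum(deviations) / len(deviations)

# Toy elicited degrees of belief (hypothetical values).
beliefs = {"it_rains": 0.7, "not_it_rains": 0.4,
           "roads_wet": 0.8, "not_roads_wet": 0.2}
pairs = (("it_rains", "not_it_rains"), ("roads_wet", "not_roads_wet"))

print(negation_incoherence(beliefs, pairs))  # 0.05: small but nonzero violation
```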

    The moral controversy about Climate Engineering - an argument map. Version 2012-02-13

    Ethical Aspects of Climate Engineering

    This study investigates the ethical aspects of deploying, and conducting research into, so-called climate engineering (CE) methods, i.e. large-scale technical interventions in the climate system with the objective of offsetting anthropogenic climate change. The moral reasons in favour of and against R&D into and deployment of CE methods are analysed by means of argument maps. These argument maps provide an overview of the CE controversy and help to structure the complex debate. Arguments covered in this analysis include: the central justification of R&D; side-effects of R&D and of deployment; lesser-evil argumentation; two-degree target argumentation; efficiency and feasibility considerations; arguments from ethics of risk; arguments from fairness; geo-political objections; critique of technology and civilization; religious, existentialist, and environmental-ethics arguments; alternative justifications of R&D; lack of R&D alternatives; direct justifications of R&D prohibition; and priority of mitigation policies.

    Judgment aggregation, discursive dilemma and reflective equilibrium: Neural language models as self-improving doxastic agents

    Neural language models (NLMs) are susceptible to producing inconsistent output. This paper proposes a new diagnosis as well as a novel remedy for NLMs' incoherence. We train NLMs on synthetic text corpora that are created by simulating text production in a society. For diagnostic purposes, we explicitly model the individual belief systems of artificial agents (authors) who produce corpus texts. NLMs, trained on those texts, can be shown to aggregate the judgments of individual authors during pre-training according to sentence-wise vote ratios (roughly, reporting frequencies), which inevitably leads to so-called discursive dilemmas: aggregate judgments are inconsistent even though all individual belief states are consistent. As a remedy for such inconsistencies, we develop a self-training procedure, inspired by the concept of reflective equilibrium, that effectively reduces the extent of logical incoherence in a model's belief system, corrects global mis-confidence, and eventually allows the model to settle on a new, epistemically superior belief state. Thus, social choice theory helps to understand why NLMs are prone to produce inconsistencies; epistemology suggests how to get rid of them.
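
    A toy version of the discursive dilemma described above (not the paper's simulation code): each author holds a consistent belief state, yet sentence-wise majority aggregation, roughly what an NLM does when it tracks per-sentence reporting frequencies, yields an inconsistent aggregate.

```python
# Classic discursive dilemma: three consistent individual judgment sets over
# p, q, and their conjunction, aggregated sentence by sentence.
authors = [
    {"p": True,  "q": True,  "p_and_q": True},   # consistent
    {"p": True,  "q": False, "p_and_q": False},  # consistent
    {"p": False, "q": True,  "p_and_q": False},  # consistent
]

def majority(sentence: str) -> bool:
    """Accept a sentence iff more than half of the authors accept it."""
    votes = sum(a[sentence] for a in authors)
    return votes > len(authors) / 2

aggregate = {s: majority(s) for s in ("p", "q", "p_and_q")}
print(aggregate)  # {'p': True, 'q': True, 'p_and_q': False}

# The aggregate accepts p and q but rejects their conjunction: logically
# inconsistent, although every individual belief state was consistent.
assert aggregate["p_and_q"] != (aggregate["p"] and aggregate["q"])
```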