7 research outputs found

    Why missing premises can be missed: Evaluating arguments by determining their lever

    By taking an argument to consist of one premise and one conclusion, the Periodic Table of Arguments (PTA) excludes from its conceptualization the element traditionally called the ‘connecting premise’ or ‘warrant’ – which is often missing from the discourse. This paper answers the question of how to evaluate the underlying mechanism of an argument by presenting a method for formulating its ‘argumentative lever’ based on an identification of its type.

    Institutionalized argumentative reasonableness - Commentary on Reijven


    Connecting ethics and epistemology of AI

    The need for fair and just AI is often related to the possibility of understanding AI itself – in other words, of turning an opaque box into a glass box that is as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and other normative considerations, such as intersectional vulnerabilities, at critical stages of the whole process from design and implementation to use and assessment. To connect the ethics and epistemology of AI, we perform a double shift of focus. First, we move from trusting the output of an AI system to trusting the process that leads to the outcome. Second, we move from expert assessment to more inclusive assessment strategies, aiming to facilitate both expert and non-expert assessment. Together, these two moves yield a framework usable by experts and non-experts when they inquire into relevant epistemological and ethical aspects of AI systems. We dub our framework epistemology-cum-ethics to signal the equal importance of both aspects. We develop it from the vantage point of the designers: how to create the conditions to internalize values into the whole process of design, implementation, use, and assessment of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and inspectable by every salient actor involved at any moment.

    Connecting ethics and epistemology of AI

    The need for fair and just AI is often related to the possibility of understanding AI itself – in other words, of turning an opaque box into a glass box that is as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an epistemology for glass-box AI that explicitly considers how to incorporate values and other normative considerations at key stages of the whole process from design to implementation and use. To assess epistemological and ethical aspects of AI systems, we shift focus from trusting the output of such a system to trusting the process that leads to that outcome. To do so, we build on ‘Computational Reliabilism’ and on Creel’s account of transparency. Further, we draw on argumentation theory, specifically on how to model the handling, eliciting, and interrogation of the authority and trustworthiness of expert opinion, in order to elucidate how the design process of AI systems can be tested critically. By combining these insights, we develop a procedure for assessing the reliability and transparency of algorithmic decision-making that functions as a tool for experts and non-experts to inquire into relevant epistemological and ethical aspects of AI systems. We then consider normative questions, such as how social consequences that harm intersectionally vulnerable populations can be modelled in the context of AI design and implementation, drawing on the literature on inductive risk in the philosophy of science to think them through. Our epistemology-cum-ethics is developed from the vantage point of the conditions for enabling ethical assessment to be built into the whole process of design, implementation, and use of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and by every salient actor involved. This approach, we think, complements other valuable accounts that target post-hoc ethical assessment.

    Constructing a Periodic Table of Arguments

    The existing classifications of arguments are unsatisfying in a number of ways. This paper proposes an alternative in the form of a Periodic Table of Arguments. The newly developed table can be used as a systematic and comprehensive point of reference for the analysis, evaluation, and production of argumentative discourse, as well as for various kinds of empirical and computational research in the field of argumentation theory.