
    The Philosophical Foundations of PLEN: A Protocol-theoretic Logic of Epistemic Norms

    In this dissertation, I defend the protocol-theoretic account of epistemic norms. The protocol-theoretic account amounts to three theses: (i) There are norms of epistemic rationality that are procedural; epistemic rationality is at least partially defined by rules that restrict the possible ways in which epistemic actions and processes can be sequenced, combined, or chosen among under varying conditions. (ii) Epistemic rationality is ineliminably defined by procedural norms; procedural restrictions provide an irreducible unifying structure for even apparently non-procedural prescriptions and normative expressions, and they are practically indispensable in our cognitive lives. (iii) These procedural epistemic norms are best analyzed in terms of the protocol (or program) constructions of dynamic logic. I defend (i) and (ii) at length and in multi-faceted ways, and I argue that they entail a set of criteria of adequacy for models of epistemic dynamics and abstract accounts of epistemic norms. I then define PLEN, the protocol-theoretic logic of epistemic norms. PLEN is a dynamic logic that analyzes epistemic rationality norms with protocol constructions interpreted over multi-graph-based models of epistemic dynamics. The kernel of the overall argument of the dissertation is showing that PLEN uniquely satisfies the criteria defended; none of the familiar, rival frameworks for modeling epistemic dynamics or normative concepts are capable of satisfying these criteria to the same degree as PLEN. The overarching argument of the dissertation is thus a theory-preference argument for PLEN.
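
The protocol constructions the abstract refers to are the regular program operators familiar from propositional dynamic logic (PDL): sequencing, choice, iteration. A minimal sketch of their relational semantics over a finite state space, using an illustrative tuple-based syntax rather than PLEN's actual language:

```python
def interpret(prog, atoms, states):
    """Return the set of (s, t) state pairs realizing `prog` as a binary relation.

    prog is a nested tuple:
      ('atom', name)    -- basic epistemic action, interpreted by atoms[name]
      ('seq', p, q)     -- do p, then q        (relational composition)
      ('choice', p, q)  -- do p or do q        (union)
      ('star', p)       -- iterate p >= 0 times (reflexive-transitive closure)
    """
    kind = prog[0]
    if kind == 'atom':
        return set(atoms[prog[1]])
    if kind == 'seq':
        r = interpret(prog[1], atoms, states)
        s = interpret(prog[2], atoms, states)
        return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}
    if kind == 'choice':
        return interpret(prog[1], atoms, states) | interpret(prog[2], atoms, states)
    if kind == 'star':
        r = interpret(prog[1], atoms, states)
        closure = {(s, s) for s in states}  # zero iterations
        frontier = set(closure)
        while frontier:
            frontier = {(a, c) for (a, b) in frontier
                        for (b2, c) in r if b == b2} - closure
            closure |= frontier
        return closure
    raise ValueError(f'unknown construct: {kind}')
```

A protocol norm in this spirit restricts which such programs an agent may run in a given condition; the multigraph models of the dissertation are richer than the bare relations used here.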

    Imperative Statics and Dynamics

    Imperatives are linguistic devices used by an authority (speaker) to express wishes, requests, commands, orders, instructions, and suggestions to a subject (addressee). This essay's goal is to tentatively address some of the following questions about the imperative. METASEMANTIC. What is the menu of options for understanding fundamental semantic notions like satisfaction, truth-conditions, validity, and entailment in the context of imperatives? Are there good imperative arguments, and, if so, how are they to be characterized? What are the options for understanding the property that an account of good imperative arguments is supposed to track? What constraints on a semantic analysis of the imperative do different positions on the metasemantic issues impose? SEMANTIC. How might we implement metasemantic postures in a rigorous formal system? How much can we do using familiar tools from deontic modal logic? How much leverage over semantic questions can we gain by introducing tools from natural language semantics—ordering sources, dyadic modal operators, salient alternatives, and the like—into a formal semantics for an imperative object language? How much leverage can we gain by introducing tools from rather less-utilized areas of modal logic—devices for representing actions and planning in time, modal operators constructed from action-terms, and the like—into the analysis? DYNAMIC. How do imperatives succeed in performing the speech acts they are used to perform? How do imperatives update discourses? How can we leverage an account of imperative discourse update in giving a dynamic semantics for the imperative? Is there anything about the imperative that demands a dynamic semantic treatment?
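
The DYNAMIC questions concern how an imperative changes the discourse context rather than what it describes. A toy update in the spirit of to-do-list semantics (associated with Portner's account, and only a loose caricature of it); the dict-based context and the function names are illustrative assumptions, not the essay's formalism:

```python
def update(context, addressee, action):
    """Uttering an imperative adds `action` to the addressee's to-do list.
    Returns a new context; the input context is left untouched."""
    new = {agent: set(acts) for agent, acts in context.items()}
    new.setdefault(addressee, set()).add(action)
    return new

def satisfied(context, addressee, performed):
    """The addressee complies with the discourse iff every action on their
    to-do list is among the actions actually performed."""
    return context.get(addressee, set()) <= set(performed)
```

On this picture, "validity" of an imperative argument can be recast in update terms (e.g., the conclusion's update adds nothing beyond the premises'), which is one way to cash out the metasemantic options the essay surveys.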

    Zero-one laws with respect to models of provability logic and two Grzegorczyk logics

    It has been shown in the late 1960s that each formula of first-order logic without constants and function symbols obeys a zero-one law: As the number of elements of finite models increases, every formula holds either in almost all or in almost no models of that size. Therefore, many properties of models, such as having an even number of elements, cannot be expressed in the language of first-order logic. Halpern and Kapron proved zero-one laws for classes of models corresponding to the modal logics K, T, S4, and S5 and for frames corresponding to S4 and S5. In this paper, we prove zero-one laws for provability logic and its two siblings Grzegorczyk logic and weak Grzegorczyk logic, with respect to model validity. Moreover, we axiomatize validity in almost all relevant finite models, leading to three different axiom systems.
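
The zero-one phenomenon is easy to observe empirically. The simulation below draws random Kripke models in which each edge and each valuation of p is chosen independently with probability 1/2 (a common convention for such counting arguments, not necessarily the paper's measure over relevant models) and estimates how often the formula ◇p is valid, i.e. true at every state. That fraction tends to 1 as model size grows:

```python
import random

def random_model(n, rng):
    """Kripke model on states 0..n-1: each directed edge (including loops)
    is present with probability 1/2; each state satisfies p with probability 1/2."""
    R = {(s, t) for s in range(n) for t in range(n) if rng.random() < 0.5}
    V = {s for s in range(n) if rng.random() < 0.5}
    return R, V

def valid_diamond_p(n, R, V):
    """Validity of <>p: every state has some successor where p holds."""
    return all(any((s, t) in R and t in V for t in range(n)) for s in range(n))

def fraction_valid(n, trials=500, seed=0):
    """Estimated probability that <>p is valid in a random model of size n."""
    rng = random.Random(seed)
    return sum(valid_diamond_p(n, *random_model(n, rng)) for _ in range(trials)) / trials
```

For a single state the formula fails whenever the loop or the valuation is missing, so the estimate sits well below 1; by n = 30 the per-state failure probability (3/4)^30 is negligible and the estimate is nearly 1.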

    On the computational complexity of ethics: moral tractability for minds and machines

    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative ethics through the lens of computational complexity. First, we introduce computational complexity for the uninitiated reader and discuss how the complexity of ethical problems can be framed within Marr’s three levels of analysis. We then study a range of ethical problems based on consequentialism, deontology, and virtue ethics, with the aim of elucidating the complexity associated with the problems themselves (e.g., due to combinatorics, uncertainty, strategic dynamics), the computational methods employed (e.g., probability, logic, learning), and the available resources (e.g., time, knowledge, learning). The results indicate that most problems the normative frameworks pose lead to tractability issues in every category analyzed. Our investigation also provides several insights about the computational nature of normative ethics, including the differences between rule- and outcome-based moral strategies, and the implementation-variance with regard to moral resources. We then discuss the consequences complexity results have for the prospect of moral machines in virtue of the trade-off between optimality and efficiency. Finally, we elucidate how computational complexity can be used to inform both philosophical and cognitive-psychological research on human morality by advancing the moral tractability thesis.
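
The combinatorial source of intractability for outcome-based (consequentialist) evaluation is easy to make concrete: an agent who evaluates every course of action exhaustively must consider |A|^h plans for an action set A and planning horizon h, which is exponential in the horizon. A toy sketch (the action set and utility function are illustrative, not taken from the paper):

```python
from itertools import product

def best_plan(actions, horizon, utility):
    """Brute-force act-consequentialist evaluation: score every action
    sequence of length `horizon` and return the best one with its utility.
    The search space has len(actions) ** horizon plans, so doubling the
    horizon squares the work -- the blow-up a bounded agent cannot afford."""
    best, best_u = None, float('-inf')
    for plan in product(actions, repeat=horizon):
        u = utility(plan)
        if u > best_u:
            best, best_u = plan, u
    return best, best_u
```

Rule-based strategies sidestep this particular explosion by checking each act against a fixed rule set, which is one computational face of the rule- versus outcome-based contrast the paper analyzes.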

    Building bridges for better machines : from machine ethics to machine explainability and back

    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and explainability prove to be particularly efficient only in symbiosis. In this context, this thesis will demonstrate how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we will outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we will also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we will outline what, exactly, machine explainability research aims to accomplish. Finally, we will use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework using these criteria shows that it is a promising approach and augurs to outperform many other explainability approaches that have been developed so far.
    Funding: DFG CRC 248 (Center for Perspicuous Computing); VolkswagenStiftung (Explainable Intelligent System).
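
The "argumentation-based approach for decision making" can be illustrated with Dung-style abstract argumentation, where a decision is explained by the arguments that survive attack. Grounded semantics, computed as the least fixed point of the characteristic function, is one standard choice; the thesis's actual framework may differ in its details:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework (Dung-style).
    An argument is acceptable w.r.t. a set S if S attacks every attacker of
    that argument; iterating acceptability from the empty set reaches the
    least fixed point, the skeptically justified arguments."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    S = set()
    while True:
        new = {a for a in arguments
               if all(any((s, b) in attacks for s in S) for b in attackers[a])}
        if new == S:
            return S
        S = new
```

An explanation of a system's decision can then cite the surviving arguments plus the attacks that eliminated the alternatives, which is one way argumentation grounds both the ethical constraint and its explanation.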

    Logics of Responsibility

    The study of responsibility is a complicated matter. The term is used in different ways in different fields, and it is easy to engage in everyday discussions as to why someone should be considered responsible for something. Typically, the backdrop of these discussions involves social, legal, moral, or philosophical problems. A clear pattern in all these spheres is the intent of issuing standards for when---and to what extent---an agent should be held responsible for a state of affairs. This is where logic lends a hand. The development of expressive logics---to reason about agents' decisions in situations with moral consequences---involves devising unequivocal representations of components of behavior that are highly relevant to systematic responsibility attribution and to systematic blame-or-praise assignment. To put it plainly, expressive syntactic-and-semantic frameworks help us analyze responsibility-related problems in a methodical way. This thesis builds a formal theory of responsibility. The main tool used toward this aim is modal logic and, more specifically, a class of modal logics of action known as stit theory. The underlying motivation is to provide theoretical foundations for using symbolic techniques in the construction of ethical AI. Thus, this work constitutes a contribution to formal philosophy and symbolic AI. The thesis's methodology consists in the development of stit-theoretic models and languages to explore the interplay between the following components of responsibility: agency, knowledge, beliefs, intentions, and obligations. Said models are integrated into a framework that is rich enough to provide logic-based characterizations for three categories of responsibility: causal, informational, and motivational responsibility. The thesis is structured as follows. Chapter 2 discusses at length stit theory, a logic that formalizes the notion of agency in the world over an indeterministic conception of time known as branching time. 
The idea is that agents act by constraining possible futures to definite subsets. On the road to formalizing informational responsibility, Chapter 3 extends stit theory with traditional epistemic notions (knowledge and belief). Thus, the chapter formalizes important aspects of agents' reasoning in the choice and performance of actions. In a context of responsibility attribution and excusability, Chapter 4 extends epistemic stit theory with measures of optimality of actions that underlie obligations. In essence, this chapter formalizes the interplay between agents' knowledge and what they ought to do. On the road to formalizing motivational responsibility, Chapter 5 adds intentions and intentional actions to epistemic stit theory and reasons about the interplay between knowledge and intentionality. Finally, Chapter 6 merges the previous chapters' formalisms into a rich logic that is able to express and model different modes of the aforementioned categories of responsibility. Technically, the most important contributions of this thesis lie in the axiomatizations of all the introduced logics. In particular, the proofs of soundness and completeness results involve long, step-by-step procedures that make use of novel techniques.
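
The core stit ("seeing to it that") idea can be caricatured in a few lines: at a moment, an agent's available actions partition the possible histories into choice cells, and the agent sees to it that φ when φ holds on every history in the cell she selects. A toy check of the deliberative stit, which additionally requires that φ was not already settled true; the histories, cells, and proposition below are illustrative stand-ins for the branching-time semantics of the thesis:

```python
def dstit_holds(choice_cells, chosen, prop):
    """Deliberative stit at a moment:
    (1) positive condition -- prop holds on every history in the chosen cell,
        so the agent's action guarantees prop;
    (2) negative condition -- prop fails on some history at the moment,
        so prop was not settled true independently of the agent's choice."""
    guaranteed = all(prop(h) for h in choice_cells[chosen])
    settled = all(prop(h) for cell in choice_cells.values() for h in cell)
    return guaranteed and not settled
```

Epistemic and intentional refinements (Chapters 3 and 5) then ask whether the agent knew, or intended, that her cell guarantees the outcome, which is where the informational and motivational categories of responsibility come apart.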

    Logics for AI and Law: Joint Proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence and the International Workshop on Logic, AI and Law, September 8-9 and 11-12, 2023, Hangzhou

    This comprehensive volume features the proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence and the International Workshop on Logic, AI and Law, held in Hangzhou, China on September 8-9 and 11-12, 2023. The collection offers a diverse range of papers that explore the intersection of logic, artificial intelligence, and law. With contributions from some of the leading experts in the field, this volume provides insights into the latest research and developments in the applications of logic in these areas. It is an essential resource for researchers, practitioners, and students interested in the latest advancements in logic and its applications to artificial intelligence and law.

    Deontic Modality in Rationality and Reasoning

    Lay summary (Alessandra Marra). The present dissertation investigates certain facets of the logical structure of oughts – where “ought” is used as a noun, roughly meaning obligation. I do so by following two lines of inquiry. The first part of the thesis places oughts in the context of practical rationality. The second part of the thesis concerns the inference rules governing arguments about oughts, and specifically the inference rule of Reasoning by Cases. These two lines of inquiry, together, aim to expound upon oughts in rationality and reasoning. The methodology used in this dissertation is that of philosophical logic, in which logical, qualitative models are developed to support and foster conceptual analysis. The dissertation consists of four main chapters. The first two chapters are devoted to the role of oughts in practical rationality. I focus on the so-called Enkratic principle of rationality, which – in its most general formulation – requires that if an agent believes sincerely and with conviction that she ought to do X, then she intends to X. I develop a logical framework to investigate the (static and dynamic) relation between those oughts believed by the agent and her intentions. It is shown that, under certain minimal assumptions, the Enkratic principle of rationality is a principle of limited validity. The following two chapters of the dissertation constitute a study of the classical inference rule of Reasoning by Cases, which – in its simplest form – moves from the premises “A or B”, “if A then C” and “if B then C” to the conclusion “C”. Recent literature has called the validity of Reasoning by Cases into question, with the most influential counterexample being the so-called Miners’ Puzzle – an instance of Reasoning by Cases where “C” involves oughts. I provide a unifying explanation of why the Miners’ Puzzle emerges. 
It is shown that, within specific boundaries, Reasoning by Cases is a valid inference rule.
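
The Miners' Puzzle can be rendered numerically in its standard presentation: ten miners are all in shaft A or all in shaft B (each equally likely); blocking the right shaft saves all ten, blocking the wrong one saves none, blocking neither saves nine. Conditional on each case one ought to block that shaft, yet the unconditional expected-value ought favors blocking neither, so Reasoning by Cases over the conditional oughts conflicts with the unconditional ought. The payoff table below follows that standard presentation; the function names are illustrative:

```python
def expected_saved(action, prob_in_A=0.5):
    """Expected number of miners saved in the Miners' Puzzle, given the
    probability that the miners are in shaft A."""
    saved = {  # saved[action][actual location of the miners]
        'block_A':    {'A': 10, 'B': 0},
        'block_B':    {'A': 0,  'B': 10},
        'block_none': {'A': 9,  'B': 9},
    }
    row = saved[action]
    return prob_in_A * row['A'] + (1 - prob_in_A) * row['B']

# Under uncertainty, blocking neither shaft maximizes expected lives saved,
# even though in each resolved case blocking the correct shaft would be best.
best = max(['block_A', 'block_B', 'block_none'], key=expected_saved)
```

With full information (prob_in_A set to 1.0 or 0.0) the ranking flips, which is exactly the gap between the conditional premises and the unconditional conclusion that the dissertation's explanation targets.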