
    Podmiotowość prawna sztucznej inteligencji? (Legal personhood of artificial intelligence?)

    This article examines the question of granting legal personhood to artificial intelligence agents which, being technically equipped with tools that allow them to acquire rights and assume liabilities, participate ever more often in commerce, especially online commerce. In light of ongoing technological development and the growing number of judgments handed down on facts involving the operation of artificial intelligence systems, it seems reasonable to set out a legal and philosophical conceptual framework for the discussion of artificial intelligence.

    Teaching agents to learn: from user study to implementation

    Graphical user interfaces have helped center computer use on viewing and editing, rather than on programming. Yet the need for end-user programming continues to grow. Software developers have responded to the demand with a barrage of customizable applications and operating systems. But the learning curve associated with a high level of customizability, even in GUI-based operating systems, often prevents users from easily modifying their software. Ironically, the question has become, "What is the easiest way for end users to program?" Perhaps the best way to customize a program, given current interface and software design, is for users to annotate tasks, verbally or via the keyboard, as they are executing them. Experiments have shown that users can "teach" a computer most easily by demonstrating a desired behavior. But the teaching approach raises new questions about how the system, as a learning machine, will correlate, generalize, and disambiguate a user's instructions. To understand how best to create a system that can learn, the authors conducted an experiment in which users attempt to train an intelligent agent to edit a bibliography. Armed with the results of these experiments, the authors implemented an interactive machine learning system, which they call Configurable Instructible Machine Architecture (Cima). Designed to acquire behavior concepts from few examples, Cima keeps users informed and allows them to influence the course of learning. Programming by demonstration reduces boring, repetitive work. Perhaps the most important lesson the authors learned is the value of involving users in the design process. By testing and critiquing their design ideas, users keep the designers focused on their objective: agents that make computer-based work more productive and more enjoyable.
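    The teaching-by-demonstration loop described above can be made concrete with a toy example. The sketch below is a hypothetical illustration, not the Cima implementation: it infers a simple text-editing rule from a few demonstrated before/after pairs and shows how additional demonstrations disambiguate the concept being taught.

```python
# Toy programming-by-demonstration learner. This is an illustrative sketch,
# not the Cima architecture: it checks a small, fixed library of candidate
# editing rules against the user's demonstrations and keeps only the rules
# that reproduce every example, so further demonstrations disambiguate the
# concept being taught.

CANDIDATE_RULES = {
    "title-case": lambda s: s.title(),
    "capitalise-first-letter": lambda s: s[:1].upper() + s[1:],
    "strip-trailing-period": lambda s: s.rstrip("."),
}


def consistent_rules(demonstrations):
    """Names of candidate rules that reproduce every (before, after) example."""
    return [name for name, rule in CANDIDATE_RULES.items()
            if all(rule(before) == after for before, after in demonstrations)]


demos = [("lovelace", "Lovelace")]
print(consistent_rules(demos))   # two rules still fit: the concept is ambiguous
demos.append(("alan turing", "Alan Turing"))
print(consistent_rules(demos))   # only 'title-case' survives the new example
```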

    Agent-update Models

    In dynamic epistemic logic (Van Ditmarsch et al., 2008) it is customary to use an action frame (Baltag and Moss, 2004; Baltag et al., 1998) to describe different views of a single action. In this article, action frames are extended so that agents can be added or removed; we call these agent-update frames. This can be done selectively, so that only some specified agents receive information about the update, which makes it possible to model several interesting examples such as private update and deception, studied earlier by Baltag and Moss (2004), Sakama (2015), and Van Ditmarsch et al. (2012). The product update of a Kripke model by an action frame is an abbreviated way of describing the transformed Kripke model that results from performing the action. In the new setting this is substantially extended to a sum-product update of a Kripke model by an agent-update frame. These ideas are applied to the AI problem of modelling a story. We show that dynamic epistemic logics, with update modalities now based on agent-update frames, continue to have sound and complete proof systems. Decision procedures for model checking and satisfiability have the expected complexity, and a sublanguage is shown to admit polynomial-space algorithms.
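    For reference, the background construction that the article generalises can be sketched compactly. The following Python sketch implements the standard product update of a Kripke model by an action frame under assumed class and field names; it illustrates the classical construction only, not the article's sum-product update, which additionally allows agents themselves to be added or removed.

```python
# Minimal sketch of the standard product update of a Kripke model by an
# action frame (in the style of Baltag, Moss, and Solecki). The class and
# field names are illustrative assumptions, not the article's notation; the
# sum-product update by an agent-update frame described in the abstract
# extends this construction so that agents can be added or removed.

from dataclasses import dataclass


@dataclass
class KripkeModel:
    worlds: set          # world names
    relations: dict      # agent -> set of (world, world) pairs
    valuation: dict      # world -> set of atoms true there


@dataclass
class ActionFrame:
    actions: set         # action names
    relations: dict      # agent -> set of (action, action) pairs
    precondition: dict   # action -> predicate taking (model, world)


def product_update(model: KripkeModel, frame: ActionFrame) -> KripkeModel:
    """Return the updated model whose worlds are the surviving (world, action) pairs."""
    # A pair (w, a) survives only if the precondition of action a holds at world w.
    worlds = {(w, a) for w in model.worlds for a in frame.actions
              if frame.precondition[a](model, w)}

    # Agent i considers (w2, a2) possible at (w, a) iff i considers w2 possible
    # at w in the model AND considers a2 possible at a in the action frame.
    agents = set(model.relations) | set(frame.relations)
    relations = {
        i: {((w, a), (w2, a2))
            for (w, a) in worlds for (w2, a2) in worlds
            if (w, w2) in model.relations.get(i, set())
            and (a, a2) in frame.relations.get(i, set())}
        for i in agents
    }

    # Atomic facts are inherited from the static component of each pair.
    valuation = {(w, a): model.valuation[w] for (w, a) in worlds}
    return KripkeModel(worlds, relations, valuation)
```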

    Moral responsibility for unforeseen harms caused by autonomous systems

    Autonomous systems are machines which embody Artificial Intelligence and Machine Learning and which take actions in the world, independently of direct human control. Their deployment raises a pressing question, which I call the 'locus of moral responsibility' question: who, if anyone, is morally responsible for a harm caused directly by an autonomous system? My specific focus is moral responsibility for unforeseen harms. First, I set up the 'locus of moral responsibility' problem. Unforeseen harms from autonomous systems create a problem for what I call the Standard View, rooted in common sense, that human agents are morally responsible. Unforeseen harms give credence to the main claim of 'responsibility gap' arguments: that humans do not meet the control and knowledge conditions of responsibility sufficiently to warrant such an ascription. Second, I argue a delegation framework offers a powerful route for answering the 'locus of moral responsibility' question. I argue that responsibility as attributability traces to the human principals who made the decision to delegate to the system, notwithstanding a later suspension of control and knowledge. These principals would also be blameworthy if their decision to delegate did not serve a purpose that morally justified the subsequent risk-imposition in the first place. Because I argue that different human principals share moral responsibility, I defend a pluralist Standard View. Third, I argue that, while today's autonomous systems do not meet the agential condition for moral responsibility, it is neither conceptually incoherent nor physically impossible that they might. Because I take it to be a contingent and not a necessary truth that human principals exclusively bear moral responsibility, I defend a soft, pluralist Standard View. Finally, I develop and sharpen my account in response to possible objections, and I explore its wider implications.

    Computations and Computers in the Sciences of Mind and Brain

    Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally, not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account to taxonomize computing mechanisms according to their computing power, and I then use this taxonomy to classify different versions of computationalism according to the functional properties they ascribe to brains. By doing so, I begin to tease out empirically testable statements about the functional organization of the brain to which different versions of computationalism are committed. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.

    Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

    We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda embedding legal knowledge and reasoning in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when law is leveraged as an expression of how humans communicate their goals and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power (and thus not a perfect aggregation of citizen preferences), if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
    Comment: Forthcoming in Northwestern Journal of Technology and Intellectual Property, Volume 2

    What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research

    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast, spread across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve as common ground for researchers from the various disciplines involved in XAI. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches.
    Comment: 57 pages, 2 figures, 1 table, to be published in Artificial Intelligence, Markus Langer, Daniel Oster and Timo Speith share first-authorship of this paper
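    The stakeholder-and-desiderata framing can be pictured with a small data model. The sketch below is only an assumed illustration of that framing, not the conceptual model the paper proposes: it pairs stakeholder classes with their desiderata and checks which desiderata a given explainability approach leaves unaddressed.

```python
from dataclasses import dataclass

# Illustrative sketch only: an assumed encoding of stakeholders, their
# desiderata, and an explainability approach's evaluated properties. The
# names and example values are hypothetical; the paper's conceptual model
# is richer than this pairing.


@dataclass(frozen=True)
class Stakeholder:
    name: str                        # e.g. "regulator", "developer", "end user"
    desiderata: frozenset            # what this stakeholder wants from explanations


@dataclass(frozen=True)
class ExplainabilityApproach:
    name: str                        # e.g. "saliency maps"
    satisfied_desiderata: frozenset  # desiderata the approach has been shown to meet


def unmet_desiderata(stakeholder: Stakeholder,
                     approach: ExplainabilityApproach) -> frozenset:
    """Desiderata of the stakeholder that the approach does not (yet) satisfy."""
    return stakeholder.desiderata - approach.satisfied_desiderata


regulator = Stakeholder("regulator", frozenset({"fairness evidence", "auditability"}))
saliency = ExplainabilityApproach("saliency maps", frozenset({"debuggability"}))
print(unmet_desiderata(regulator, saliency))  # both regulator desiderata remain unmet
```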

    Intelligent Behaviour

    The notion of intelligence is relevant to several fields of research, including cognitive and comparative psychology, neuroscience, artificial intelligence, and philosophy, among others. However, there is little agreement within and across these fields on how to characterise and explain intelligence. I put forward a behavioural, operational characterisation of intelligence that can play an integrative role in the sciences of intelligence, as well as preserve the distinctive explanatory value of the notion, setting it apart from the related concepts of cognition and rationality. Finally, I examine a popular hypothesis about the underpinnings of intelligence: the capacity to manipulate internal representations of the environment. I argue that the hypothesis needs refinement, and that so refined, it applies only to some forms of intelligence.