
    A neo-aristotelian perspective on the need for artificial moral agents (AMAs)

    We examine Van Wynsberghe and Robbins' (Sci Eng Ethics 25:719–735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI Soc, 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (2019) essay nor Formosa and Ryan's (2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of "both empirical and intuitive support" (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for "argumentative breadth over depth", meaning to provide "the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs" (Formosa and Ryan 2020, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan's defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.

    Autonomous Weapon Systems and the Limits of Analogy

    Autonomous weapon systems are often described either as more independent versions of weapons already in use or as humanoid robotic soldiers. In many ways, these analogies are useful. Analogies and allusions to popular culture make new technologies seem accessible, identify potential dangers, and buttress desired narratives. Most importantly from a legal perspective, analogical reasoning helps stretch existing law to cover developing technologies and minimize law-free zones. But all potential analogies—weapon, combatant, child soldier, animal combatant—fail to address the legal issues raised by autonomous weapon systems, largely because they all misrepresent legally salient traits. Conceiving of autonomous weapon systems as weapons minimizes their capacity for independent and self-determined action, while the combatant, child soldier, and animal combatant comparisons overemphasize it. Furthermore, these discrete and embodied analogies limit our ability to think imaginatively about this new technology and anticipate how it might develop, thereby impeding our ability to properly regulate it. We cannot simply graft legal regimes crafted to regulate other entities onto autonomous weapon systems. Instead, as is often the case when analogical reasoning cannot justifiably stretch extant law to answer novel legal questions, new supplemental law is needed. The sooner we escape the confines of these insufficient analogies, the sooner we can create appropriate and effective regulations for autonomous weapon systems.

    From machine ethics to computational ethics

    Research into the ethics of artificial intelligence is often categorized into two subareas – robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I argue that the term 'machine ethics' is too broad and glosses over issues that the term 'computational ethics' best describes. I show that the subject of inquiry of computational ethics is of great value and indeed an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation 'machine ethics' does not sufficiently capture the entire project of embedding ethics into AIS, hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature; second, it offers a fine-grained analysis showing the thematic distinction among robot ethics, machine ethics, and computational ethics.

    Trustworthy AI Alone Is Not Enough

    The aim of this book is to make accessible to both a general audience and policymakers the intricacies involved in the concept of trustworthy AI. In this book, we address the issue from philosophical, technical, social, and practical points of view. To do so, we start with a summary definition of trustworthy AI and its components, according to the report of the EU High-Level Expert Group on AI (AI HLEG). From there, we focus in detail on trustworthy AI in large language models, in anthropomorphic robots (such as sex robots), and in the use of autonomous drones in warfare, all of which pose specific challenges because of their close interaction with humans. To tie these ideas together, we include a brief presentation of the ethical validation scheme for proposals submitted under the Horizon Europe programme as a possible way to operationalise ethical regulation beyond rigid rules and partial ethical analyses. We conclude our work by advocating a virtue-ethics approach to AI, which we view as a humane and comprehensive approach to trustworthy AI that can accommodate the pace of technological change.

    Extensión de Jason para implementar agentes normativos emocionales [Extension of Jason to implement emotional normative agents]

    Most of the choices people make, including economic ones, are largely based on normative-affective considerations, concerning not only the selection of goals but also of means. However, although emotions are inherent in human behavior and relevant to decision-making processes, the relationship between norms and emotions has hardly been considered in the multiagent field, and most normative multi-agent systems do not take emotions into account as a variable in their computation. Thus, many normative systems model agents that perform practical reasoning without regard to the agent's emotions. Within this framework, this end-of-degree project proposes an extension of the multiagent system programming language Jason that allows the implementation of an emotional normative agent (NEA) capable of handling both norms and emotions. To this end, we analyze the advantages of including emotions within a normative system and how emotions and norms affect each other. The work reviews what has been done in this field so far, presents a proposal for normative and emotional models of our own, implements them as an extension of Jason, and finally presents a simple case study to show the contributions of NEA agents.
    Lliguin León, KY. (2019). Extensión de Jason para implementar agentes normativos emocionales. http://hdl.handle.net/10251/128197 (TFG)
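    The thesis implements its model as a Jason extension; as a language-neutral sketch of the underlying idea (the names and the weighting scheme below are our own illustrative assumptions, not the thesis's code or Jason's API), the following Python snippet shows an agent whose practical reasoning weighs normative sanctions against its current emotional state.

```python
# Illustrative sketch only -- hypothetical names, not the thesis's Jason code.
# An "emotional normative agent": action selection weighs the base value of
# each option against expected normative sanctions, with the sanction weight
# modulated by the agent's current emotional state.
from dataclasses import dataclass, field


@dataclass
class Norm:
    forbids: str      # action this norm prohibits
    sanction: float   # expected penalty if the norm is violated


@dataclass
class EmotionalNormativeAgent:
    norms: list[Norm]
    emotions: dict[str, float] = field(default_factory=dict)  # e.g. {"fear": 0.8}

    def utility(self, action: str, base_value: float) -> float:
        """Base value minus sanction costs, scaled by the emotional state:
        fear amplifies the weight of sanctions, joy dampens it (an assumed,
        deliberately simple coupling between emotions and norms)."""
        penalty = sum(n.sanction for n in self.norms if n.forbids == action)
        weight = 1.0 + self.emotions.get("fear", 0.0) - 0.5 * self.emotions.get("joy", 0.0)
        return base_value - weight * penalty

    def choose(self, options: dict[str, float]) -> str:
        """Pick the option with the highest emotion-and-norm-adjusted utility."""
        return max(options, key=lambda a: self.utility(a, options[a]))


agent = EmotionalNormativeAgent(norms=[Norm(forbids="steal", sanction=5.0)],
                                emotions={"fear": 0.8})
print(agent.choose({"steal": 4.0, "work": 2.0}))  # fear of sanction -> "work"
```

    Here the emotional state merely scales the weight given to expected sanctions; the abstract describes a richer, two-way model in which norms and emotions affect each other, implemented on top of Jason's agent language.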

    Building bridges for better machines: from machine ethics to machine explainability and back

    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove to be particularly effective only in symbiosis. In this context, this thesis will demonstrate how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we will outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we will also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we will outline what, exactly, machine explainability research aims to accomplish. Finally, we will use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows that it is a promising approach that promises to outperform many other explainability approaches developed so far.
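    To give a flavor of how an argumentation-based decision procedure can ground explanations (a generic sketch assuming a Dung-style abstract argumentation framework; this is our own illustrative code, not the thesis's formal framework), the snippet below computes the grounded extension: the set of arguments defended against every attack, which a system can cite when explaining why it accepted a conclusion.

```python
# Generic illustration, not the thesis's formalism: grounded semantics for a
# Dung-style abstract argumentation framework. The grounded extension is the
# least fixed point of the "defended by" operator; citing it gives a system a
# principled answer to "why was this conclusion accepted?".
def grounded_extension(arguments: set[str], attacks: set[tuple[str, str]]) -> set[str]:
    """Return the grounded extension: iterate the characteristic function
    F(S) = {a | every attacker of a is attacked by some member of S}
    from the empty set until it stabilizes."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    current: set[str] = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in current)
                           for b in attackers[a])}
        if defended == current:
            return current
        current = defended


# Example: c attacks b, b attacks a. Argument a is accepted because its only
# attacker b is defeated by c -- exactly the kind of trace an explanation cites.
print(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")}))  # {'a', 'c'}
```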

    Making the invisible visible: an analysis of the Home and Community Care Program: a socialist-feminist perspective

    As the population of Australia ages, social policy and human service practice in the field of aged care is increasingly important and relevant. The Home and Community Care (H.A.C.C.) Program was established in 1985 by the Labor Government as a response to a demand for more community services for the frail aged and was designed to reduce the incidence of institutionalisation by increasing home care services. In this way the Home and Community Care Program is seen as a linchpin in the Federal Government's initiative to create an efficient and cost-effective aged care policy to contend with the future growth of Australia's ageing population. This thesis argues that there are several assumptions intrinsic to the H.A.C.C. Program that are potentially jeopardising and undermining its usefulness. These assumptions are based on familial ideology and nostalgic conceptualisations of 'the community' and 'the family'. In addition, these assumptions also involve stereotypic attitudes to women as primary carers and nurturers that ignore, to a great degree, the needs of women themselves. These assumptions, combined with an increasingly neo-conservative view about a reduction in the role of the State and a corresponding increase in family responsibility in welfare, have major implications for Australian women. This socialist-feminist analysis argues that women who are providing care for aged spouses or relatives are doing essential, hard and stressful work, work which is unpaid and often unacknowledged, and that the Australian welfare system is now structured around the invisible labour of such women. Consequently, the assumption that a social policy program such as H.A.C.C. makes, that is, that there will always be women who care, requires further analysis. This research has revealed that such assumptions have implications for the future development of social policy for the aged in Australia and for the future roles of women in this country. Particular questions which this thesis addresses include, firstly, who actually provides care? Empirical research indicates that the majority of care is provided by one individual, usually the spouse, daughter or daughter-in-law. Secondly, what are the assumptions underlying the development and implementation of Home and Community Care social policy in relation to the social construction of caring? Such assumptions are found to include that the H.A.C.C. Program is premised upon an erroneous concept of 'the community' and consequently of 'community care', and that traditional 'family' and familial values are a precondition to H.A.C.C. service delivery. A socialist-feminist critique offers a deeper analysis of such assumptions by disclosing that the Home and Community Care policies assume that service delivery can best be undertaken by extending the traditional domestic role of women, thus utilising them as an unpaid, or poorly paid, labour force. This analysis also discloses the explicit rejection of the informal service system as having any real economic significance, it being viewed instead as 'complementary' to the formal service system. Finally, there are future implications of such assumptions for women as primary carers, service users or paid staff within the H.A.C.C. Program which require urgent cognisance in order to develop a future aged care policy in Australia that avoids the exploitation of women.

    Irony, satire, parody and the grotesque in the music of D.D. Shostakovich
