8,502 research outputs found

    Responsible Autonomy

    Full text link
    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms these systems employ. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems. Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence)

    Chimpanzee Rights: The Philosophers' Brief

    Get PDF
    In December 2013, the Nonhuman Rights Project (NhRP) filed a petition for a common law writ of habeas corpus in the New York State Supreme Court on behalf of Tommy, a chimpanzee living alone in a cage in a shed in rural New York (Barlow, 2017). Under animal welfare laws, Tommy’s owners, the Laverys, were doing nothing illegal by keeping him in those conditions. Nonetheless, the NhRP argued that given the cognitive, social, and emotional capacities of chimpanzees, Tommy’s confinement constituted a profound wrong that demanded remedy by the courts. Soon thereafter, the NhRP filed habeas corpus petitions on behalf of Kiko, another chimpanzee housed alone in Niagara Falls, and Hercules and Leo, two chimpanzees held in research facilities at Stony Brook University. Thus began the legal struggle to move these chimpanzees from captivity to a sanctuary, an effort that has led the NhRP to argue in multiple courts before multiple judges. The central point of contention has been whether Tommy, Kiko, Hercules, and Leo have legal rights. To date, no judge has been willing to issue a writ of habeas corpus on their behalf. Such a ruling would mean that these chimpanzees have rights that confinement might violate. Instead, the judges have argued that chimpanzees cannot be bearers of legal rights because they are not, and cannot be, persons. In this book we argue that chimpanzees are persons because they are autonomous.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    Get PDF
    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    EMERGING THE EMERGENCE SOCIOLOGY: The Philosophical Framework of Agent-Based Social Studies

    Get PDF
    The structuration theory originally proposed by Anthony Giddens, and subsequent refinements of it, have tried to resolve a dilemma in the epistemology of the social sciences and humanities: social scientists apparently have to choose between explanations that are too sociological or too psychological. Yet this dilemma was already articulated long ago in the work of the classical sociologist Émile Durkheim. The use of computational models to construct bottom-up theories has grown with advances in computing technology; this approach is well known as agent-based modeling. This paper gives a philosophical perspective on the agent-based social sciences, as a sociology equipped to cope with the emergent factors that arise in sociological analysis. The framework uses an artificial neural network model to show how emergent phenomena arise from a complex system. Since society has self-organizing (autopoietic) properties, the paper employs Kohonen’s self-organizing map, as sketched below. The simulation examples make it plain that emergent phenomena in social systems lie beyond the qualitative framework of atomistic sociology. The paper concludes that an emergence sociology is needed to sharpen sociological analysis.
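    A minimal sketch of the Kohonen self-organizing map (SOM) the abstract names may help make the self-organization claim concrete. The grid size, learning rate, and Gaussian neighborhood schedule below are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal Kohonen self-organizing map (SOM) sketch; all hyperparameters
# here are illustrative assumptions, not values from the paper.
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr0=0.5, sigma0=3.0):
    """Fit a 2-D SOM to `data` (n_samples x n_features); return its weights."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates, used to measure neighborhood distance between units.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weights are closest to x.
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=-1)), (h, w))
            # Gaussian neighborhood pulls the BMU and its neighbors toward x.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * nbh * (x - weights)
    return weights

# Toy usage: map 3-feature "agents" onto a 2-D grid; the clusters that form
# on the map are the kind of emergent ordering the abstract discusses.
som = train_som(np.random.default_rng(1).random((200, 3)))
```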

    MORAL FREEDOM. FREE TO CHOOSE IN THE ALGORITHMIC ERA

    Get PDF
    What is moral freedom? What are its conditions of possibility? Do progress in, and above all the exponential use of, algorithm-governed information and communication technologies (digital ICTs) promote it or weaken it? Are we facing a new ethical challenge? If so, how should we respond to it? These are some of the main questions this work addresses. Such questions have become ever harder to avoid in our contemporary information societies, where people depend to a growing degree on digital ICTs and are therefore inevitably exposed to their invisible but increasingly influential algorithmic design. It is by now beyond dispute that algorithms not only mediate every aspect of our lives but hold epistemological and ontological potential to reshape, pervasively, incessantly, and profoundly, the fabric of our reality: the way we perceive, and therefore know and experience, the world, others, and even ourselves, merging with and redefining from within our everyday practices, the way we carry out tasks, make simple and intuitive decisions, or make complex and significant choices. The aim of this dissertation is to investigate, in light of this progress and pervasive use of algorithmic ICTs, one of the most crucial questions of our contemporary societies: the question of our freedom and, in particular, of our moral freedom, that is, our freedom to become authentic moral agents and, specifically, to develop authentic moral identities: our freedom to choose and act according to values and reasons we can endorse as the motives of our choices and actions, thereby retaining our “moral authorship” over them. The specific thesis I defend is that algorithm-based ICTs can jeopardize our moral freedom by undermining the conditions of possibility that guarantee its exercise, at least at a minimal threshold. To argue for this thesis, and because moral freedom has rarely been explored in its own right, in the first chapter I develop both a positive and a negative account of moral freedom, drawing on theories that investigate the moral dimension of freedom of choice and action in two key, though distinct, philosophical debates: the debate on free will and the debate on social and political freedom. Specifically, I investigate the moral dimension of human choice and action with particular attention to the conditions of possibility (i.e., sine qua non) that enable its free exercise, which I argue are the availability of morally heterogeneous alternative options and moral autonomy as relational self-determination, and I define moral freedom as an ethical-normative value that must be protected from actual or potential forms of impediment.
In the second chapter, I examine whether algorithmic ICTs are creating a new form of impediment to our moral freedom, considering machine learning algorithms in particular, with a focus on three algorithmic techniques of informational personalization: algorithmic profiling, classification and filtering algorithms, and recommender systems (RS). To this end, I show how the algorithmic governance emerging in our information societies is structuring itself into what I call algorithmic choice architectures, and I argue that these not only restructure the contexts in which we choose and act but erode, to the point of compromising, the conditions underlying the exercise of our moral freedom, producing an epistemological impact on the subject whose options are algorithmically pre-chosen and, in some cases, a possible suspension of moral autonomy (as approval or endorsement), thereby calling our moral freedom into question. In the third chapter, I explore how this algorithmic impact can constrain our freedom to choose and act as authentic moral agents to the point of constituting a genuinely new impediment to our moral freedom, an algorithmic predeterminism, and I outline its consequences in several social domains where algorithmic ICTs are now widely applied. The final two sections of the chapter develop a possible response to this ethical challenge: first, by introducing into the debate on informational privacy, understood as a legal instrument for protecting our freedom, a new conceptual lens, moral privacy, articulated in three axioms designed to mark out a specific zone of protection for our moral freedom; and, finally, by identifying the social agents called upon to operationalize, technically and institutionally, the criteria identified, so as to forestall the threat outlined and thus preserve our freedom to choose and act as authentic moral agents in contemporary algorithmic societies.
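    To make concrete how a recommender system "pre-chooses" the options a user sees, the snippet below sketches a generic item-based collaborative filter. It is not a system described in the dissertation; the toy ratings matrix and all parameters are illustrative assumptions.

```python
# Generic item-based collaborative filtering sketch, offered only to
# illustrate algorithmic pre-selection of options; the data and the
# cosine-similarity scoring rule are illustrative, not from the thesis.
import numpy as np

ratings = np.array([  # rows: users, cols: items; 0 = not yet rated
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def recommend(ratings, user, k=2):
    """Return indices of the top-k unrated items for `user`."""
    # Cosine similarity between item columns.
    norms = np.linalg.norm(ratings, axis=0) + 1e-9
    sim = (ratings.T @ ratings) / np.outer(norms, norms)
    # Predicted score: similarity-weighted sum of the user's own ratings.
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf  # hide items already rated
    return np.argsort(scores)[::-1][:k]

# The user never sees the full catalogue, only this algorithmically
# pre-selected shortlist: the "choice architecture" at issue.
print(recommend(ratings, user=1))
```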

    Advanced Artificial Intelligence and Contract

    Get PDF
    The aim of this article is to ask whether contract law can operate in a state of affairs in which artificial general intelligence (AGI) exists and has the cognitive abilities to interact with humans to exchange promises or otherwise engage in the sorts of exchanges typically governed by contract law. AGI is a long way off, but its emergence may be sudden and may come within the lifetimes of some people alive today. How might contract law adapt to a situation in which at least one of the contracting parties could, from the standpoint of the capacity to engage in promising and exchange, be an AGI? This is not a situation in which AI operates as an agent of a human or a firm, a frequent occurrence right now. Rather, the question is whether an AGI could constitute a principal: a contract party in its own right. Contract law is a good place to start a discussion about adapting the law for an AGI future because it already incorporates a version of what is known as weak AI in its objective standard for contract formation and interpretation. Contract law in some limited sense takes on issues of relevance from the philosophy of mind. AGI holds the potential to transform the solution to an epistemological problem, how to prove a contract exists, into the solution to an ontological problem about the capacity to contract. An objection might be that contract law presupposes the existence of a person the law recognizes as possessing the capacity to contract, and contract law itself may not be able to answer the prior question of legally recognized personhood. The answer will be to focus on how AGI cognitive architecture could be designed for compatibility with human interaction. This article focuses on that question as well.

    Perceptions of Violations by Artificial and Human Actors across Moral Foundations

    Get PDF
    Artificial agents such as robots, chatbots, and artificial intelligence systems can be the perpetrators of a range of moral violations traditionally limited to human actors. This paper explores how people perceive the same moral violations differently for artificial agent and human perpetrators by addressing three research questions: How wrong are moral foundation violations by artificial agents compared to human perpetrators? Which moral foundations do artificial agents violate compared to human perpetrators? What leads to increased blame for moral foundation violations by artificial agents compared to human perpetrators? We adapt 18 human-perpetrated moral violation scenarios that differ by the moral foundation violated (harm, unfairness, betrayal, subversion, degradation, and oppression) to create 18 agent-perpetrated moral violation scenarios. Two studies compare human-perpetrated to agent-perpetrated scenarios. They reveal that agent-perpetrated violations are more often perceived as not wrong or violating a different foundation than their human counterparts. People are less likely to classify violations by artificial agents as oppression and subversion, the foundations that deal the most with group hierarchy. Finally, artificial agents are blamed less than humans across moral foundations, and this blame is based more on the agent's ability and intention for every moral foundation except harm.