
    Papers, Please and the systemic approach to engaging ethical expertise in videogames

    Papers, Please, by Lucas Pope (2013), explores the story of a customs inspector in the fictional political regime of Arstotzka. In this paper we explore the stories, systems and moral themes of Papers, Please in order to illustrate the systemic approach to designing videogames for moral engagement. Next, drawing on the Four Component model of ethical expertise from moral psychology, we contrast this systemic approach with the more common scripted approach. We conclude by demonstrating the different strengths and weaknesses that these two approaches have when it comes to designing videogames that engage the different aspects of a player's moral expertise.

    1906 Expressing Disaster in Games

    1906 is a serious first-person exploration game designed to increase the salience of a user's own mortality while providing an engaging rendition of historical events. The game is set in San Francisco during the Great Earthquake and Fire Disaster of 1906, one of the most devastating natural disasters of our nation's history. Players will have to interact with virtual citizens as well as try to keep their virtual self alive. Using its unique ambience and setting, 1906 allows players to experience life in the aftermath of one of America's greatest natural disasters.

    Science and Theology: From a Medical Perspective


    Computational ethics

    Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework – computational ethics – that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.

    Punishing Artificial Intelligence: Legal Fiction or Science Fiction

    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.

    Making everyday things talk: Speculative conversations into the future of voice interfaces at home

    What if things had a voice? What if we could talk directly to things instead of using a mediating voice interface such as Alexa or Google Assistant? In this paper, we share our insights from talking to a pair of boots, a tampon, a perfume bottle, and toilet paper, among other everyday things, to explore their conversational capabilities. We conducted Thing Interviews using a more-than-human design approach to discover a thing's perspectives, worldviews and its relations to other humans and nonhumans. Based on our analysis of the speculative conversations, we identified some themes characterizing the emergent qualities of people's relationships with everyday things. We believe the themes presented in the paper may inspire future research on designing everyday things with conversational capabilities at home.

    Friend or foe? On the portrayal of moral agency of artificial intelligence in cinema

    Abstract. This thesis explores how movies portray AI characters' moral agency. Moral agency is a term that is used when a person or an entity is capable of moral reasoning and of performing moral acts. For an agent to be able to perform moral acts it must possess self-conscious awareness and exhibit free will and understanding of moral meaning. A theoretical background of artificial intelligence, moral agency, free will, and the semiotic hierarchy is provided in order to familiarize the reader with the core concepts of the thesis. The semiotic hierarchy provides a theory of meaning making and defines and explains the different levels or requirements involved in meaning making. There are four levels to the semiotic hierarchy: life, consciousness, sign usage and language. Three movies were chosen for the research: I, Robot (2004), I am Mother (2019) and Ex Machina (2014). All three movies depict artificial intelligences in various narrative roles and present unique portrayals of the moral dimensions that are involved in creating artificial moral agents. The movies were chosen based on the criteria that each movie features an artificial intelligence that is capable of moral agency and is mobile to some extent. The AIs must also depict various stages of the semiotic hierarchy. From each movie, notable moments of moral agency and narrative significance are first identified and explained. After this the movies are analysed using content analysis. Two tables of codes were constructed from the data. The first table explores how the characters exhibit e.g. free will, moral agency and the semiotic hierarchy, and is constructed by deriving codes from the theoretical background. The second table explores e.g. the narrative role of the AIs and the main moral acts performed by the AIs, and its codes are derived from the movies. These tables allow for easy comparison of the movies and help identify similarities and differences between them.
After analysing the two tables, a third table was constructed that divides the AI characters into two groups, morally justified and morally unjustified, based on similarities the characters share with each other. The morally unjustified AI characters had a notably large sphere of influence with multiple active units (drones or robots), based their morality on utilitarianism, and were motivated by creating a better, more just world for humans. The morally justified AI characters were all single units, acted based on their self-interest, and were capable of emotions. The former group's moral agency was depicted as a threat, and the latter group's moral agency was mostly depicted as a neutral occurrence. Additionally, all AI characters advanced on the semiotic hierarchy in reverse order, meaning language was the easiest level for the AIs to perform. Notably, no AI character was considered to be "alive". Lastly, the advancements and problems in AI creation are briefly discussed, with real-life examples of AIs that have been a topic of discussion in the media in recent years.
[Finnish abstract, translated:] Friend or foe? On artificial intelligences and their moral agency in movies. Abstract. This thesis examines how the moral agency of artificial intelligences is portrayed in movies. AIs are already a permanent part of our lives, and their development continues at a rapid pace. Their morality, and especially their moral agency, is nevertheless still a little-studied area. Three movies were chosen as material: I, Robot (2004), I am Mother (2019) and Ex Machina (2014). The background research of the thesis drew on a range of philosophical theories such as the semiotic hierarchy, free will and moral agency. Qualitative content analysis was chosen as the methodology of the study, with which two kinds of codes were identified from the movies: codes based on the background research and codes emerging from the movies.
These codes were organized into two tables, with which the scenes and features found in the movies could be compared with one another. Based on the evidence produced by comparing the tables, the AIs could be divided into two categories: those whose acts were morally justified, and those whose acts were morally unjustified. The unjustified AI characters based their moral understanding on utilitarianism, could control multiple units, i.e. drones, at once, and did not express emotions or let emotions influence their moral decision-making. Their sphere of moral action was also broad, which allowed them to make decisions affecting large masses of people at once. The most notable feature of the justified AI characters was their capacity for emotion and the fact that they experienced the world as a human rather than as a robot. These AIs also controlled only one unit at a time. At the end of the study it is noted how the humanization of AIs is already visible in many places. For example, the LaMDA chatbot created by Google was so impressive that a researcher who worked with it claimed it had a "soul". Because the gap between real-life AI and science fiction is closing as technology develops, it is important to study how we react to the morality of AI through movies and media.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.