
    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    A Literature Review on New Robotics: Automation from Love to War


    Why moral philosophers should watch sci-fi movies

    In this short piece, I explore why we, as moral philosophers, should watch sci-fi movies. Though I do not believe that sci-fi material is necessary for doing good moral philosophy, I give three broad reasons why good sci-fi movies should nevertheless be worth our time. These reasons lie in the fact that they can illustrate moral-philosophical problems, probe into possible solutions and, perhaps most importantly, anticipate new issues that may go along with the use of new technologies. For the sake of illustration, I focus, for the most part, on aspects of robo-ethics in the movie I, Robot.

    De bello robotico : an ethical assessment of military robotics

    This article provides a detailed description of robotic weapons and unmanned systems currently used by the U.S. military and its allies, and an ethical assessment of their actual or potential use on the battlefield. Firstly, through a review of scientific literature, reports, and newspaper articles, a catalogue of ethical problems related to military robotics is compiled. Secondly, possible solutions for these problems are offered, relying also on analytic tools provided by the new field of roboethics. Finally, the article explores possible future developments of military robotics and presents six reasons why a war between humans and automata is unlikely to happen in the 21st century.

    AI ethics and higher education : good practice and guidance for educators, learners, and institutions

    Artificial intelligence (AI) is exerting unprecedented pressure on the global higher educational landscape, transforming recruitment processes, subverting traditional pedagogy, and creating new research and institutional opportunities. These technologies require contextual and global ethical analysis so that they may be developed and deployed in higher education in just and responsible ways. To date, these efforts have largely focused on small parts of the educational environment, leaving most of the world out of an essential contribution. This volume acts as a corrective to this and contributes to the building of competencies in ethics education and to broader, global debates about how AI will transform various facets of our lives, not the least of which is higher education.

    Killer Robots - Autonomous Weapons and Their Compliance with IHL

    The pursuit of weapons that distance the soldier from the actual battlefield has been going on ever since warfare moved from short blades to bow and arrow. Today, that ambition is close to completion, with the ever-increasing number of unmanned, remote-controlled vehicles rapidly becoming the most common and prominent method of waging war. Political incentives to cut the costs of warfare and spare the lives of soldiers provide the final push towards full autonomy. The emergence of increasingly autonomous weapons (AWs) has already generated a heated debate on the legality of these weapons, and two highly polarized sides can easily be discerned. The purpose of this thesis is to examine and analyze this debate, to look into the arguments put forth regarding the legality or illegality of autonomous weapons, and to map the positions within it. The focus is on the three fundamental principles of International Humanitarian Law (IHL): distinction, proportionality, and precaution, and I discuss the arguments in both directions. Proponents often argue that AWs will be able to comply with IHL as sensors, algorithms, software, and artificial intelligence (AI) develop, allowing the machine to satisfactorily distinguish between civilians and combatants, carry out proportionality assessments, and take the required precautions in its actions. Opponents counter that AI development has overpromised before, that sensors will never be able to distinguish between civilians and combatants on a contemporary battlefield, and that proportionality and precaution assessments require a contextual understanding that only humans possess. The fundamental disagreement appears to lie in the uncertainty surrounding the development of the software and technology, and in whether machines can perform as well as, or better than, humans. The issue of accountability is also examined: what happens to responsibility for breaches of IHL when the task of targeting and firing, essentially the life-and-death decision, is assigned to a machine? Different proposals, such as placing accountability on the commander, programmer, manufacturer, or even the machine itself, are discussed. The thesis also examines the moral and ethical aspects of replacing the human agents of war with robots, and the possible consequences this might entail, both from a separate moral perspective and as part of the legality assessment, in terms of what would happen to the applicability of IHL if the agents of war were changed. After examining the debate on the legality of AWs, I draw some concluding remarks on how the debate should proceed in the near future, presenting some of the most prominently discussed ways forward for handling the emergence of these weapons. Finally, I end with my own reflections on what I have found in my analysis of the current debate, and on what I believe are the most important aspects to keep discussing in the ongoing debate on the legality of autonomous weapons.

    Brave new creatures : a comparative study of Mary Shelley's Frankenstein and the creatures of the new millennium

    This thesis intends to analyse Mary Shelley's creature in her novel Frankenstein, and how the creation of this creature may have adumbrated the birth of present creatures, such as clones, genomes, and Artificial Intelligence (AI) creatures like robots and androids, that spring from the latest technological and scientific advances. The Promethean ambition to play God in order to create life persists, and it is present today more than ever before. Within the frame of Cultural Studies and Intertextuality, I dwell upon the similarities and the differences between Mary Shelley's creature and these "brave new creatures." Mary Shelley's creature was provided with spiritual life and human characteristics such as suffering for love, neglect, and scorn, but the idea of the human as matter was already present in Shelley's novel: the creature was an ensemble of pieces of corpses. In this thesis I explore to what extent, and how, the creatures of the new millennium depart from or are similar to the original creature. In Brave New World (1932) Aldous Huxley had already speculated about genetic engineering, test-tube babies, and a materialistic conception of human life. Today science and technology challenge us with a future new human race, as in the cases presented in this study. In view of all this, pondering what the future may bring is worth a try.

    Silicon Valley Goes to War: Artificial Intelligence, Weapons Systems and the De-Skilled Moral Agent


    Building TrusTee: The world's most trusted robot

    This essay explores the requirements for building trustworthy robots and artificial intelligence by drawing from various scientific disciplines and taking human values as the starting point. It also presents a research and impact agenda.
    • 
