5,975 research outputs found

    The future of killer robots: Are we really losing humanity?

    Get PDF

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    Get PDF
    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Post-Westgate SWAT: C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0: A Peer-Reviewed Monograph

    Full text link
    Police SWAT teams and military Special Forces face mounting pressure and challenges from adversaries that can only be resolved by ever more sophisticated inputs into tactical operations. Lethal autonomy provides constrained military and security forces with a viable option, but only if its implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air and naval operations. This monograph offers some insights into the challenges of developing legal, reliable and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law-enforcement and military operations. National adversaries are today in many instances hybrid threats that manifest both criminal and military traits; these often require the deployment of hybrid-capability autonomous weapons able to take on military and/or security objectives. The Westgate terrorist attack of 21st September 2013 in the Westlands suburb of Nairobi, Kenya is a clear manifestation of the hybrid combat scenario that required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group. Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader

    License to Kill: An Analysis of the Legality of Fully Autonomous Drones in the Context of International Use of Force Law

    Get PDF
    We live in a world of constant technological change, and with this change come unknown effects and consequences. This is even more true of weapons and warfare. Indeed, as the means and methods of warfare rapidly evolve and transform, their effects and consequences on the laws of war are unknown. This Article addresses one such development in weapons and warfare technology, Fully Autonomous Weapons or “Killer Robots,” and discusses the inevitable use of these weapons within the current international law framework. Recognizing the inadequacy of the current legal framework, this Article proposes a regulation policy to mitigate the risks associated with Fully Autonomous Weapons. But the debate should not end here; States and the U.N. must work together to adopt a legal framework that keeps pace with the advancement of technology. This Article starts that discussion.

    Intelligent Agents in Military, Defense and Warfare: Ethical Issues and Concerns

    Get PDF
    Due to tremendous progress in digital electronics, intelligent and autonomous agents are now gradually being adopted in the domains of the military, defense and warfare. This paper explores some of the inherent ethical issues and threats, and some possible remedies, concerning the impact of such systems on human civilization and existence in general. The paper discusses human ethics in contrast to machine ethics and the problems caused by non-sentient agents. A systematic study is made of the paradoxes regarding the long-term advantages of such agents in military combat. The paper proposes an international standard which could be adopted by all nations to avoid the adverse effects and resolve the ethical issues of such intelligent agents.

    Should we campaign against sex robots?

    Get PDF
    In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.

    Trustworthy AI Alone Is Not Enough

    Get PDF
    The aim of this book is to make accessible to both a general audience and policymakers the intricacies involved in the concept of trustworthy AI. In this book, we address the issue from philosophical, technical, social, and practical points of view. To do so, we start with a summary definition of Trustworthy AI and its components, according to the report of the EU High-Level Expert Group on AI (HLEG). From there, we focus in detail on trustworthy AI in large language models, anthropomorphic robots (such as sex robots), and in the use of autonomous drones in warfare, all of which pose specific challenges because of their close interaction with humans. To tie these ideas together, we include a brief presentation of the ethical validation scheme for proposals submitted under the Horizon Europe programme as a possible way to address the operationalisation of ethical regulation beyond rigid rules and partial ethical analyses. We conclude our work by advocating for the virtue ethics approach to AI, which we view as a humane and comprehensive approach to trustworthy AI that can accommodate the pace of technological change.

    "Out of the loop": autonomous weapon systems and the law of armed conflict

    Get PDF
    The introduction of autonomous weapon systems into the “battlespace” will profoundly influence the nature of future warfare. This reality has begun to draw the attention of the international legal community, with increasing calls for an outright ban on the use of autonomous weapon systems in armed conflict. This Article is intended to help infuse granularity and precision into the legal debates surrounding such weapon systems and their future uses. It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to the law of armed conflict’s prescriptive norms governing the “conduct of hostilities.” This Article concludes that an outright ban of autonomous weapon systems is insupportable as a matter of law, policy, and operational good sense. Indeed, proponents of a ban underestimate the extent to which the law of armed conflict, including its customary law aspect, will control autonomous weapon system operations. Some autonomous weapon systems that might be developed would already be unlawful per se under existing customary law, irrespective of any treaty ban. The use of certain others would be severely limited by that law. Furthermore, an outright ban is premature since no such weapons have even left the drawing board. Critics typically either fail to take account of likely developments in autonomous weapon system technology or base their analysis on unfounded assumptions about the nature of the systems. From a national security perspective, passing on the opportunity to develop these systems before they are fully understood would be irresponsible. Perhaps even more troubling is the prospect that banning autonomous weapon systems altogether, based on speculation as to their future form, could forfeit their potential use in a manner that would minimize harm to civilians and civilian objects when compared to non-autonomous weapon systems.