
    AUTONOMOUS WEAPON SYSTEMS: HOW THE HUMAN OPERATOR STAYS INVOLVED

    Emerging technologies are bringing significant changes to the character of warfare. One such emerging technology, autonomous weapon systems (AWS), is proving increasingly crucial for the United States to maintain its technological superiority over its adversaries. However, AWS brings increasingly complex challenges that demand legal, ethical, and operational consideration. The thesis asks a question directly related to current DOD policy on AWS: how can a human operator apply appropriate judgment during future AWS employment? Using authority, responsibility, and accountability as an analytical framework, the thesis builds upon U.S. policy and strategy with respect to autonomy in weapon systems, international law considerations, and the application of AWS in an operational environment. It uses case studies of the 1988 U.S.S. Vincennes incident and the 2003 Operation Iraqi Freedom Patriot fratricides to examine how human judgment was exercised alongside autonomous functions within weapon systems, providing lessons learned for AWS research, development, and implementation. The thesis identifies critical measures for keeping the human operator in control through specific oversight mechanisms, allowing appropriate judgment to be applied during the employment process.
    Lieutenant Colonel, United States Air Force. Approved for public release; distribution is unlimited.

    The viability of 'embedded Ethics' in robotic military systems without humans in the decision loop

    Presentation at the "Zagreb Applied Ethics Conference 2017: The Ethics of Robotics and Artificial Intelligence", Matica hrvatska, Zagreb, Croatia, 5-7 June 2017.
    The social regulation of robotic systems with elements of built-in artificial intelligence, capable of interacting with the physical world without human control, poses challenges of extraordinary complexity, particularly when their characteristics make them suitable for use in military operations as autonomous devices under specific conditions. My purpose is to conduct case-study research on the viability of some elements of "embedded Ethics" in different devices with built-in sensors and a variable range of functionality, starting with Autonomous Weapons Systems (AWS). Based on a review of recent literature and prototypes, the expected results should give a clearer perspective on the viability of 'embedded Ethics' instructions in the programming of intelligent robotic systems, including those intended for military use. As a preliminary conclusion, the heterogeneity of designs, lethal capacity, and degrees of functional complexity in highly unpredictable operational contexts reinforces the importance of preserving human intervention in the decision loop whenever the time available for the sequence of decisions makes it possible. [Additional references available at: http://sl.ugr.es/zaec2017]
    Supported by R+D Project [ref. FFI2016-79000-P]: "Artificial Intelligence and moral bio-enhancement. Ethical aspects" (PI: F.D. Lara), State Program for the Promotion of Scientific and Technical Research of Excellence, Subprogram of Knowledge Generation, Oct. 2016 - Sept. 2019.

    Taking the ‘human’ out of humanitarian? States’ positions on Lethal Autonomous Weapons Systems from an International Humanitarian Law perspective

    The debate about the legality of lethal autonomous weapons systems (LAWS) under international humanitarian law is still ongoing. This is partly due to the continuing development of autonomous weapon systems, which may reach new milestones in autonomous technology and would thus require new legal reviews. The research question of this thesis asks: to what extent do lethal autonomous weapons systems comply with international humanitarian law, and how strictly do individual states interpret it? First, this research examines and conceptualizes the characteristics of autonomous weapons and conducts a legal analysis of humanitarian law. LAWS are characterized by the degree of human control they allow, the sophistication of their autonomy, and the functions they perform.

    Autonomy in Weapons Systems. The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy

    The future international security landscape will be critically shaped by the military use of artificial intelligence (AI) and robotics. With the advent of autonomous weapon systems (AWS) and the transformation of warfare currently unfolding, we have reached a turning point and face a number of grave new legal, ethical, and political concerns. In light of this, the Task Force on Disruptive Technologies and 21st Century Warfare, convened by the Heinrich Böll Foundation, argues that meaningful human control over weapon systems and the use of force must be retained. In their report, the task force authors offer recommendations to that effect for the German government and the German armed forces.

    LAWS and Export Control Regimes: Fit for Purpose?

    Broadening the scope of regulatory options (here: outside of the CCW), this working paper links iPRAW's existing recommendations on human control in the use of force to deliberations on export controls for LAWS (i.e. weapon systems with 'autonomy' in their targeting functions) and on technological components relevant to LAWS. We highlight some effects of the diffusion or transfer of LAWS and the potential role of national and multilateral export control regulations as a means of mitigating the challenges related to the development and use of LAWS. We also explore the particular challenges to the effective implementation of export controls on software-based, data-driven technologies, especially with regard to the general-purpose use of many of the enabling components. With that in mind, we identify and discuss how export control regimes could provide guidance to participating states on the issue of LAWS.

    Focus on Computational Methods in the Context of LAWS

    The report focuses on the techniques underlying what is popularly known as Artificial Intelligence (AI) and explains how they are relevant to LAWS.

    Humans in the Loop

    From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human-in-the-loop” systems. We make four contributions to the discourse. First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decisionmaking process. Regardless of whether the law governing these systems is old or new, inadvertent or intentional, it rarely accounts for the fact that human-machine systems are more than the sum of their parts: they raise their own problems and require their own distinct regulatory interventions. But how to regulate for success? Our third contribution is to highlight the panoply of roles humans might be expected to play, to assist regulators in understanding and choosing among the options. For our fourth contribution, we draw on legal case studies and synthesize lessons from human factors engineering to suggest regulatory alternatives to the MABA-MABA approach. Namely, rather than carelessly placing a human in the loop, policymakers should regulate the human-in-the-loop system.