
    Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control

    This report, published by the Center for War Studies, University of Southern Denmark, and the Royal Holloway Centre for International Security, highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called. Written by Dr. Ingvild Bode and Dr. Tom F.A. Watts, authors of an earlier study of air defence systems published with Drone Wars UK, the “Loitering Munitions and Unpredictability” report examines whether the use of automated, autonomous, and AI technologies in the global development, testing, and fielding of loitering munitions since the 1980s has affected emerging practices and social norms of human control over the use of force. It is commonly assumed that the challenges generated by the weaponization of autonomy will materialise in the near- to medium-term future. The report’s central argument is that whilst most existing loitering munitions are operated by a human who authorizes strikes against system-designated targets, the integration of automated and autonomous technologies into these weapons has created worrying precedents deserving of greater public scrutiny.

    Loitering munitions – or ‘killer drones’ as they are often popularly known – are expendable uncrewed aircraft which can integrate sensor-based analysis to hover over, detect, and explode into targets. These weapons are central to the international regulatory debates on autonomous weapon systems – a set of technologies defined by Article 36 as weapons “where force is applied automatically on the basis of a sensor-based targeting system”. The earliest loitering munitions, such as the Israel Aerospace Industries Harpy, are widely considered examples of weapons capable of automatically applying force via sensor-based targeting without human intervention.
A May 2021 report authored by a UN Panel of Experts on Libya suggests that Kargu-2 loitering munitions manufactured by the Turkish defence company STM may have been “programmed to attack targets without requiring data connectivity between the operator and the munition”. According to research published by Daniel Gettinger, the number of states producing these weapons more than doubled from fewer than 10 in 2017 to almost 24 by mid-2022. The sizeable role which loitering munitions have played in the ongoing fighting between Russia and Ukraine further underscores the timeliness of this report, fuelling debate over whether so-called ‘killer robots’ are the future of war.

Most manufacturers characterize loitering munitions as “human in the loop” systems: their operators are required to authorize strikes against system-designated targets. The findings of this report, however, suggest that the global trend toward increasing autonomy in targeting has already affected the quality and form of control that humans can exercise over specific targeting decisions. Loitering munitions can use automated, autonomous, and, to a limited extent, AI technologies to identify, track, and select targets. Some manufacturers also allude to the systems’ potential capacity to attack targets without human intervention. This suggests that human operators of loitering munitions may not always retain the ability to visually verify targets before attack.

The report highlights three principal areas of concern:
- greater uncertainty regarding how human agents exert control over specific targeting decisions;
- the use of loitering munitions as anti-personnel weapons and in populated areas; and
- the potentially indiscriminate and wide-area effects associated with the fielding of loitering munitions.
This report’s analysis is drawn from two sources of data: first, a new qualitative data catalogue which compiles the available open-source information about the technical details, development history, and use of autonomy and automation in a global sample of 24 loitering munitions; and second, an in-depth study of how such systems have been used in three recent conflicts – the Libyan Civil War (2014-2020), the 2020 Nagorno-Karabakh War, and the War in Ukraine (2022-). Based on its findings, the authors urge the various stakeholder groups participating in the debates at the United Nations Convention on Certain Conventional Weapons Group of Governmental Experts and elsewhere to develop and adopt legally binding international rules on autonomy in weapon systems, including loitering munitions as a category therein. It is recommended that states:
- Affirm, retain, and strengthen the current standard of real-time, direct human assessment of, and control over, specific targeting decisions when using loitering munitions and other weapons integrating automated, autonomous, and AI technologies, as a firewall for ensuring compliance with legal and ethical norms.
- Establish controls over the duration and geographical area within which weapons like loitering munitions that can use automated, autonomous, and AI technologies to identify, select, track, and apply force can operate.
- Prohibit the integration of machine learning and other forms of unpredictable AI algorithms into the targeting functions of loitering munitions, because of how this may fundamentally alter the predictability, explainability, and accountability of specific targeting decisions and their outcomes.
- Establish controls over the types of environments in which such sensor-based weapons can operate; loitering munitions functioning as AWS should not be used in populated areas.
- Prohibit the use of certain target profiles for sensor-based weapons which use automated, autonomous, and AI technologies in targeting functions. This should include prohibiting the design, testing, and use of autonomy in weapon systems, including loitering munitions, to “target human beings”, as well as limiting the use of such weapons “to objects that are military objectives by nature” (ICRC, 2021: 2).
- Be more forthcoming in releasing technical details relating to the quality of human control exercised over specific targeting decisions when operating loitering munitions. This should include the sharing, where appropriate, of details regarding the level and character of the training that human operators of loitering munitions receive.

Funding: Research for the report was supported by funding from the European Union’s Horizon 2020 research and innovation programme (under grant agreement No. 852123, AutoNorms project) and from the Joseph Rowntree Charitable Trust. Tom Watts’ revisions to this report were supported by funding provided by his Leverhulme Trust Early Career Research Fellowship (ECF-2022-135). The authors also collaborated with Article 36 in writing the report.

About the authors: Dr Ingvild Bode is Associate Professor at the Center for War Studies, University of Southern Denmark, and a Senior Research Fellow at the Conflict Analysis Research Centre, University of Kent. She is the Principal Investigator of the European Research Council-funded “AutoNorms” project, examining how autonomous weapons systems may change international use-of-force norms. Her research focuses on understanding processes of normative change, especially through studying practices in relation to the use of force, military Artificial Intelligence, and associated governance demands. More information about Ingvild’s research is available here. Dr Tom F.A. Watts is a Leverhulme Trust Early Career Researcher based at the Department of Politics, International Relations, and Philosophy at Royal Holloway, University of London. His current project, titled “Great Power Competition and Remote Warfare: Change or Continuity in Practice?” (ECF-2022-135), examines the relationship between the strategic practices associated with the concept of remote warfare, the dynamics of change and continuity in contemporary American foreign policy, and autonomy in weapons systems. More information about Tom’s research is available here.

    The Journal of ERW and Mine Action Issue 17.2 (2013)

    Unplanned Explosions | Asia and the Pacific | Underwater Clearance

    Toward a normative model of Meaningful Human Control over weapons systems

    The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and contexts of their use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing legal conditions for international criminal law (ICL) responsibility ascriptions; and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. And the prudential character of our framework is expressed by means of a rule, imposing by default the more stringent levels of human control on weapons targeting. 
The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements on those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.

    A Guide to the International Mine Action Standards 2010

    IMAS are standards issued by the United Nations to guide the planning, implementation and management of mine action programmes, and provide a framework for the development of national mine action standards. This handbook contains all IMAS in brief, explains the purpose of the IMAS and the requirements of the different standards in a clear way, and serves as a quick reference for mine action practitioners seeking to understand core IMAS issues.

    Real-time support for high performance aircraft operation

    The feasibility of real-time processing schemes using artificial neural networks (ANNs) is investigated. A rationale for digital neural nets is presented, and a general processor architecture for control applications is illustrated. Research results on ANN structures for real-time applications are given, as are results on ANN algorithms for real-time control.

    Management of Residual Explosive Remnants of War (MORE) Issue Briefs

    This guide is one of several addressing different aspects of Management of Residual Explosive Remnants of War (MORE) and linking with wider information resources held by the GICHD. It should be read in conjunction with other guides in the series. Related publications are indicated in the text, and a range of tools which may help users when addressing their own situations are identified wherever they are relevant. This guide was released during the June 2015 International Symposium on Long-term ERW Management in Southeast Asia in Siem Reap, Cambodia.

    The Journal of Conventional Weapons Destruction Issue 23.3 (2020)

    Southeast Asia | Risk Management | Cluster Munitions Remnants Survey | IMAS Training in Vietnam | Mine Risk Education | Victim Assistance | Underwater Clearance | Virtual, Augmented, and Mixed Reality in HMA | HMA in the Gray Zone | IED Clearance Capacity in Afghanistan

    Nanotechnology and Preventive Arms Control

