
    Post-Westgate SWAT : C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0 : A Peer-Reviewed Monograph

    Police SWAT teams and military special forces face mounting pressure and challenges from adversaries that can be countered only with ever more sophisticated inputs into tactical operations. Lethal autonomy offers constrained military and security forces a viable option, but only if its implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air, and naval operations. This monograph offers insights into the challenges of developing legal, reliable, and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law enforcement and military operations. National adversaries today are in many instances hybrid threats that manifest both criminal and military traits; these often require the deployment of hybrid-capability autonomous weapons able to take on military and/or security objectives. The Westgate terrorist attack of 21 September 2013 in the Westlands suburb of Nairobi, Kenya is a clear manifestation of such a hybrid combat scenario, one that required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group.

    Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader

    ‘The Only Game in Town’ – But is it a Legal One? American Drone Strikes and International Law

    In 2002 a US Predator drone operating above Afghanistan’s Paktia province spotted three men in Zhawar Kili, a complex slightly north of the infamous Tora Bora cave system, an area used by al-Qaeda leadership to train and regroup. One of the men was tall; supposedly the others were acting reverently towards him. Convinced the tall man was Osama bin Laden, the operators fired a Hellfire missile from the Predator, killing all three men instantly. The tall man was not bin Laden. None of the men were even affiliated with al-Qaeda or the Taliban; they were simply civilians in the wrong place at the wrong time. This strike, and many others that are all too similar, raises a multitude of questions, both legal and moral, regarding the US lethal drone strike programme. This article examines the legal implications of US drone strikes, not only in Afghanistan but also further afield from the more traditional and accepted battlefields, in Pakistan, Yemen, and Somalia.

    Autonomous Weapon Systems: A Brief Survey of Developmental, Operational, Legal, and Ethical Issues

    What does the Department of Defense hope to gain from the use of autonomous weapon systems (AWS)? This Letort Paper explores a diverse set of complex issues related to the developmental, operational, legal, and ethical aspects of AWS. It explores the recent history of the development and integration of autonomous and semi-autonomous systems into traditional military operations. It examines the anticipated expansion of these roles in the near future and outlines international efforts to provide a context for the use of such systems by the United States. As these topics are well documented in many sources, this Paper serves as a primer for current and future AWS operations, giving senior policymakers, decisionmakers, military leaders, and their respective staffs an overall appreciation of existing capabilities and of the challenges, opportunities, and risks associated with the use of AWS across the range of military operations. Emphasis is placed on missions and systems that include the use of deadly force.

    Moving beyond privacy and airspace safety: Guidelines for just drones in policing

    The use of drones offers police forces potential gains in efficiency and safety. However, their use may also harm public perception of the police if drones are rejected. Therefore, police forces should consider the perceptions of bystanders and broader society to maximize drones’ potential. This article examines the concerns expressed by members of the public during a field trial involving 52 test participants. Analysis of the group interviews suggests that their worries go beyond airspace safety and privacy, which are broadly discussed in the existing literature and regulations. The interpretation of the results indicates that the perceived justice of drone use is a significant factor in acceptance. Leveraging the concept of organizational justice and the data collected, we propose a catalogue of guidelines for the just operation of drones to supplement existing policy. We present the organizational justice perspective as a framework for integrating the concerns of the public and bystanders into legal work. Finally, we discuss the relevance of justice to the legitimacy of the police’s actions and provide implications for research and practice.

    Artificial Intelligence and civil liability

    This study – commissioned by Policy Department C at the request of the Committee on Legal Affairs – analyses the notion of AI technologies and the applicable legal framework for civil liability. It demonstrates how technology regulation should be technology-specific, and presents a Risk Management Approach under which the party best capable of controlling and managing a technology-related risk is held strictly liable, as a single entry point for litigation. It then applies this approach to four case studies to elaborate recommendations.

    Group Agency and Artificial Intelligence

    The aim of this exploratory paper is to discuss a sometimes recognized but still under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificially intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.