
    Post-Westgate SWAT : C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0 : A Peer-Reviewed Monograph

    Full text link
    Police SWAT teams and military Special Forces face mounting pressure and challenges from adversaries that can only be resolved through ever more sophisticated inputs into tactical operations. Lethal autonomy offers constrained military and security forces a viable option, but only if its implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air, and naval operations. This monograph offers insights into the challenges of developing legal, reliable, and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law-enforcement and military operations. National adversaries are today, in many instances, hybrid threats that manifest both criminal and military traits; these often require the deployment of hybrid-capability autonomous weapons able to take on both military and security objectives. The Westgate terrorist attack of 21 September 2013 in the Westlands suburb of Nairobi, Kenya is a clear manifestation of such a hybrid combat scenario: one that required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group.
    Comment: 52 pages, 6 figures, over 40 references; reviewed by a reader

    Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences

    Full text link
    This paper presents work on a collaborative project, funded by the National Science Foundation, that incorporates machine learning as a unifying theme for teaching the fundamental concepts typically covered in an introductory Artificial Intelligence course. The project involves the development of an adaptable framework for the presentation of core AI topics, accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering.
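    The abstract does not include the labs themselves, but a minimal sketch can illustrate the kind of hands-on project it describes: a learning system behind a commonly deployed application. The example below is hypothetical (it is not taken from the project's materials) and assumes scikit-learn is available; it trains a small Naive Bayes spam filter on a tiny inline dataset.

```python
# Hypothetical lab sketch (not from the paper): a Naive Bayes spam filter,
# connecting the AI topic of probabilistic reasoning to a familiar application.
# Assumes scikit-learn is installed; the tiny inline dataset is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    ("win a free prize now", "spam"),
    ("limited offer click here", "spam"),
    ("cheap meds free shipping", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("lab report due friday", "ham"),
    ("lunch tomorrow?", "ham"),
]
texts, labels = zip(*messages)

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["free prize click now", "report due before the meeting"]))
```

    Students might then vary the features, priors, and training data to observe how each design choice changes the classifier's behaviour.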

    Clues About Bluffing in Clue: Is Conventional Wisdom Wise?

    Full text link
    We have used the board game Clue as a pedagogical tool in our course on Artificial Intelligence to teach formal logic through the development of logic-based computational game-playing agents. Developing such agents allows us to experimentally test many game-play strategies, and we have encountered some surprising results that refine the “conventional wisdom” for playing Clue. In this paper we consider the effect of the oft-used strategy wherein a player uses their own cards when making suggestions (i.e., “bluffing”) early in the game, either to mislead other players or to focus on acquiring a particular kind of knowledge. We begin with an intuitive argument against this strategy, together with a quantitative probabilistic analysis of its cost to a player, both of which suggest that “bluffing” should be detrimental to winning the game. We then present counter-intuitive simulation results, from playing computational agents that “bluff” against those that do not, showing “bluffing” to be beneficial. We conclude with a nuanced assessment of the cost and benefit of “bluffing” in Clue, showing the strategy to be beneficial when used correctly and detrimental when used incorrectly.
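    The paper's experiments are easy to mimic in miniature. The sketch below is a simplified, hypothetical reconstruction rather than the authors' agents: the rules, the knowledge model, and the bluffing heuristic (naming one's own suspect and weapon cards early so that any disproof is forced to reveal room information) are assumptions made purely for illustration.

```python
# Toy Clue-like simulation contrasting a "bluffing" agent with honest agents.
# Simplified assumptions throughout: round-robin dealing, one card revealed
# per disproof, and a knowledge model that only tracks cards actually seen.
import random

SUSPECTS = [f"S{i}" for i in range(6)]
WEAPONS = [f"W{i}" for i in range(6)]
ROOMS = [f"R{i}" for i in range(9)]
CATEGORIES = [SUSPECTS, WEAPONS, ROOMS]

class Agent:
    def __init__(self, name, bluffs):
        self.name, self.bluffs = name, bluffs
        self.hand = set()
        self.known = set()  # cards this agent has seen in some player's hand

    def unknown(self, category):
        return [c for c in category if c not in self.known]

    def suggest(self, turn):
        pick = [random.choice(self.unknown(cat)) for cat in CATEGORIES]
        if self.bluffs and turn < 5:
            # Bluff: name own suspect/weapon cards where possible, so any
            # disproof is forced to reveal room information.
            for i, cat in enumerate(CATEGORIES[:2]):
                own = [c for c in cat if c in self.hand]
                if own:
                    pick[i] = random.choice(own)
        return pick

    def solved(self):
        # One unseen card per category must be the hidden solution.
        return all(len(self.unknown(cat)) == 1 for cat in CATEGORIES)

def play(agents):
    solution = {random.choice(cat) for cat in CATEGORIES}
    deck = [c for cat in CATEGORIES for c in cat if c not in solution]
    random.shuffle(deck)
    for i, card in enumerate(deck):
        agents[i % len(agents)].hand.add(card)
    for agent in agents:
        agent.known |= agent.hand
    for turn in range(200):
        for i, agent in enumerate(agents):
            suggestion = agent.suggest(turn)
            # The first other player holding a suggested card shows one of them.
            for j in range(1, len(agents)):
                other = agents[(i + j) % len(agents)]
                held = [c for c in suggestion if c in other.hand]
                if held:
                    agent.known.add(random.choice(held))
                    break
            if agent.solved():
                return agent.name
    return None

wins = {"bluffer": 0, "honest": 0}
for _ in range(2000):
    players = [Agent("bluffer", True)] + [Agent("honest", False) for _ in range(3)]
    random.shuffle(players)
    winner = play(players)
    if winner:
        wins[winner] += 1
print(wins)
```

    With one bluffer among four players, a win share above 25% for the bluffer would point in the same direction as the paper's finding; because the knowledge model here is far cruder than the paper's logic-based agents, the numbers are illustrative only.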

    Technical Workshop: Advanced Helicopter Cockpit Design

    Get PDF
    Information processing demands on both civilian and military aircrews have increased enormously as rotorcraft have come to be used for adverse-weather, day/night, and remote-area missions. The workshop identified needs in applied psychology, engineering, and operational research bearing on future helicopter cockpit design criteria. Three areas were addressed: (1) operational requirements, (2) advanced avionics, and (3) man-system integration.

    Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control

    Get PDF
    This report, published by the Center for War Studies, University of Southern Denmark, and the Royal Holloway Centre for International Security, highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called. Written by Dr. Ingvild Bode and Dr. Tom F.A. Watts, authors of an earlier study of air defence systems published with Drone Wars UK, the “Loitering Munitions and Unpredictability” report examines whether the use of automated, autonomous, and AI technologies in the global development, testing, and fielding of loitering munitions since the 1980s has affected emerging practices and social norms of human control over the use of force. It is commonly assumed that the challenges generated by the weaponization of autonomy will materialise in the near- to medium-term future. The report’s central argument is that although most existing loitering munitions are operated by a human who authorizes strikes against system-designated targets, the integration of automated and autonomous technologies into these weapons has created worrying precedents deserving of greater public scrutiny.

    Loitering munitions – or ‘killer drones’ as they are often popularly known – are expendable uncrewed aircraft that can integrate sensor-based analysis to hover over, detect, and explode into targets. These weapons figure prominently in the international regulatory debates on autonomous weapon systems – a set of technologies defined by Article 36 as weapons “where force is applied automatically on the basis of a sensor-based targeting system”. The earliest loitering munitions, such as the Israel Aerospace Industries Harpy, are widely considered examples of weapons capable of automatically applying force via sensor-based targeting without human intervention. A May 2021 report authored by a UN Panel of Experts on Libya suggests that Kargu-2 loitering munitions manufactured by the Turkish defence company STM may have been “programmed to attack targets without requiring data connectivity between the operator and the munition”. According to research published by Daniel Gettinger, the number of states producing these weapons more than doubled from fewer than 10 in 2017 to almost 24 by mid-2022. The sizeable role loitering munitions have played in the ongoing fighting between Russia and Ukraine, which has raised debates over whether so-called “killer robots” are the future of war, further underscores the timeliness of this report.

    Most manufacturers characterize loitering munitions as “human in the loop” systems: their operators are required to authorize strikes against system-designated targets. The findings of this report, however, suggest that the global trend toward increasing autonomy in targeting has already affected the quality and form of control that humans can exercise over specific targeting decisions. Loitering munitions can use automated, autonomous, and, to a limited extent, AI technologies to identify, track, and select targets. Some manufacturers also allude to the systems’ potential capacity to attack targets without human intervention, which suggests that human operators may not always retain the ability to visually verify targets before attack.

    This report highlights three principal areas of concern:
    - Greater uncertainties regarding how human agents exert control over specific targeting decisions.
    - The use of loitering munitions as anti-personnel weapons and in populated areas.
    - Potential indiscriminate and wide-area effects associated with the fielding of loitering munitions.

    The report’s analysis is drawn from two sources of data: first, a new qualitative data catalogue compiling the available open-source information about the technical details, development history, and use of autonomy and automation in a global sample of 24 loitering munitions; and second, an in-depth study of how such systems have been used in three recent conflicts – the Libyan Civil War (2014-2020), the 2020 Nagorno-Karabakh War, and the War in Ukraine (2022-). Based on its findings, the authors urge the various stakeholder groups participating in the debates at the United Nations Convention on Certain Conventional Weapons Group of Governmental Experts and elsewhere to develop and adopt legally binding international rules on autonomy in weapon systems, including loitering munitions as a category therein. It is recommended that states:
    - Affirm, retain, and strengthen the current standard of real-time, direct human assessment of, and control over, specific targeting decisions when using loitering munitions and other weapons integrating automated, autonomous, and AI technologies, as a firewall for ensuring compliance with legal and ethical norms.
    - Establish controls over the duration and geographical area within which weapons such as loitering munitions that can use automated, autonomous, and AI technologies to identify, select, track, and apply force can operate.
    - Prohibit the integration of machine learning and other forms of unpredictable AI algorithms into the targeting functions of loitering munitions, because of how this may fundamentally alter the predictability, explainability, and accountability of specific targeting decisions and their outcomes.
    - Establish controls over the types of environments in which sensor-based weapons such as loitering munitions that can use automated, autonomous, and AI technologies to identify, select, track, and apply force to targets can operate. Loitering munitions functioning as AWS should not be used in populated areas.
    - Prohibit the use of certain target profiles for sensor-based weapons which use automated, autonomous, and AI technologies in targeting functions. This should include prohibiting the design, testing, and use of autonomy in weapon systems, including loitering munitions, to “target human beings”, as well as limiting the use of such weapons “to objects that are military objectives by nature” (ICRC, 2021: 2).
    - Be more forthcoming in releasing technical details relating to the quality of human control exercised over specific targeting decisions when operating loitering munitions. This should include sharing, where appropriate, details regarding the level and character of the training that human operators of loitering munitions receive.

    Funding: Research for the report was supported by funding from the European Union’s Horizon 2020 research and innovation programme (under grant agreement No. 852123, AutoNorms project) and from the Joseph Rowntree Charitable Trust. Tom Watts’ revisions to this report were supported by funding provided by his Leverhulme Trust Early Career Research Fellowship (ECF-2022-135). We also collaborated with Article 36 in writing the report.
    About the authors: Dr Ingvild Bode is Associate Professor at the Center for War Studies, University of Southern Denmark, and a Senior Research Fellow at the Conflict Analysis Research Centre, University of Kent. She is the Principal Investigator of the European Research Council-funded “AutoNorms” project, which examines how autonomous weapons systems may change international use-of-force norms. Her research focuses on understanding processes of normative change, especially through studying practices relating to the use of force, military Artificial Intelligence, and associated governance demands. Dr Tom F.A. Watts is a Leverhulme Trust Early Career Researcher based at the Department of Politics, International Relations, and Philosophy at Royal Holloway, University of London. His current project, titled “Great Power Competition and Remote Warfare: Change or Continuity in Practice?” (ECF-2022-135), examines the relationship between the strategic practices associated with the concept of remote warfare, the dynamics of change and continuity in contemporary American foreign policy, and autonomy in weapons systems.

    The Knowledge Application and Utilization Framework Applied to Defense COTS: A Research Synthesis for Outsourced Innovation

    Get PDF
    Purpose – Militaries of developing nations face increasing budget pressures, high operations tempo, a blitzing pace of technology, and adversaries that often meet or beat government capabilities using commercial off-the-shelf (COTS) technologies. The adoption of COTS products into defense acquisitions has been offered to help meet these challenges by essentially outsourcing new product development and innovation. This research summarizes extant research to develop a framework for managing the innovative and knowledge flows.
    Design/Methodology/Approach – A literature review of 62 sources was conducted with the objectives of identifying antecedents (barriers and facilitators) and consequences of COTS adoption.
    Findings – The DoD COTS literature predominantly consists of industry case studies, and there is a strong need for further academically rigorous study. Extant rigorous research implicates the importance of the role of knowledge management to government innovative thinking that relies heavily on commercial suppliers.
    Research Limitations/Implications – Extant academically rigorous studies tend to depend on measures derived from work in information systems research, relying on user satisfaction as the outcome. Our findings indicate that user satisfaction has no relationship to COTS success; technically complex governmental purchases may be too distant from users or may have socio-economic goals that supersede user satisfaction. The knowledge acquisition and utilization framework worked well to explain the innovative process in COTS.
    Practical Implications – Where past research in the commercial context found technological knowledge to outweigh market knowledge in importance, our research found the opposite. Managers in government, or marketing to government, should be aware of the importance of market knowledge for defense COTS innovation, especially for commercial companies that work as system integrators.
    Originality/Value – From the literature emerged a framework of COTS product usage and a scale to measure COTS product appropriateness that should help to guide COTS product adoption decisions and to help manage COTS product implementations ex post.