
    Post-Westgate SWAT : C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0 : A Peer-Reviewed Monograph

    Police SWAT teams and Military Special Forces face mounting pressure and challenges from adversaries that can only be countered with ever more sophisticated inputs into tactical operations. Lethal autonomy offers constrained military and security forces a viable option, but only if its implementation rests on properly, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air and naval operations. This monograph offers insights into the challenges of developing legal, reliable and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law enforcement and military operations. National adversaries today are in many instances hybrid threats that exhibit both criminal and military traits; countering them often requires deploying hybrid-capability autonomous weapons able to take on military and/or security objectives. The Westgate terrorist attack of 21 September 2013 in the Westlands suburb of Nairobi, Kenya is a clear manifestation of such a hybrid combat scenario, one that required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group. Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader

    "Out of the loop": autonomous weapon systems and the law of armed conflict

    The introduction of autonomous weapon systems into the “battlespace” will profoundly influence the nature of future warfare. This reality has begun to draw the attention of the international legal community, with increasing calls for an outright ban on the use of autonomous weapon systems in armed conflict. This Article is intended to help infuse granularity and precision into the legal debates surrounding such weapon systems and their future uses. It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to the law of armed conflict’s prescriptive norms governing the “conduct of hostilities.” This Article concludes that an outright ban of autonomous weapon systems is insupportable as a matter of law, policy, and operational good sense. Indeed, proponents of a ban underestimate the extent to which the law of armed conflict, including its customary law aspect, will control autonomous weapon system operations. Some autonomous weapon systems that might be developed would already be unlawful per se under existing customary law, irrespective of any treaty ban. The use of certain others would be severely limited by that law. Furthermore, an outright ban is premature since no such weapons have even left the drawing board. Critics typically either fail to take account of likely developments in autonomous weapon systems technology or base their analysis on unfounded assumptions about the nature of the systems. From a national security perspective, passing on the opportunity to develop these systems before they are fully understood would be irresponsible. Perhaps even more troubling is the prospect that banning autonomous weapon systems altogether based on speculation as to their future form could forfeit their potential use in a manner that would minimize harm to civilians and civilian objects when compared to non-autonomous weapon systems.

    Autonomous weapon systems and international humanitarian law: a reply to the critics

    In November 2012, Human Rights Watch, in collaboration with the International Human Rights Clinic at Harvard Law School, released Losing Humanity: The Case against Killer Robots.[2] Human Rights Watch is among the most sophisticated of human rights organizations working in the field of international humanitarian law. Its reports are deservedly influential and have often helped shape application of the law during armed conflict. Although this author and the organization have occasionally crossed swords,[3] we generally find common ground on key issues. This time, we have not. “Robots” is a colloquial rendering of autonomous weapon systems. Human Rights Watch’s position on them is forceful and unambiguous: “[F]ully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians.”[4] Therefore, they “should be banned and . . . governments should urgently pursue that end.”[5] In fact, if the systems cannot meet the legal standards cited by Human Rights Watch, then they are already unlawful as such under customary international law irrespective of any policy or treaty law ban on them.[6] Unfortunately, Losing Humanity obfuscates the ongoing legal debate over autonomous weapon systems. A principal flaw in the analysis is a blurring of the distinction between international humanitarian law’s prohibitions on weapons per se and those on the unlawful use of otherwise lawful weapons.[7] Only the former render a weapon illegal as such. To illustrate, a rifle is lawful, but may be used unlawfully, as in shooting a civilian. By contrast, under customary international law, biological weapons are unlawful per se; this is so even if they are used against lawful targets, such as the enemy’s armed forces. The practice of inappropriately conflating these two different strands of international humanitarian law has plagued debates over other weapon systems, most notably unmanned combat aerial systems such as the armed Predator. In addition, some of the report’s legal analysis fails to take account of likely developments in autonomous weapon systems technology or is based on unfounded assumptions as to the nature of the systems. Simply put, much of Losing Humanity is either counter-factual or counter-normative. This Article is designed to infuse granularity and precision into the legal debates surrounding such weapon systems and their use in the future “battlespace.” It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to international humanitarian law’s prescriptive norms. This Article concludes that Losing Humanity’s recommendation to ban the systems is insupportable as a matter of law, policy, and operational good sense. Human Rights Watch’s analysis sells international humanitarian law short by failing to appreciate how the law tackles the very issues about which the organization expresses concern. Perhaps the most glaring weakness in the recommendation is the extent to which it is premature. No such weapons have even left the drawing board. To ban autonomous weapon systems altogether based on speculation as to their future form is to forfeit any potential uses of them that might minimize harm to civilians and civilian objects when compared to other systems in military arsenals.

    Cyber-SHIP: Developing Next Generation Maritime Cyber Research Capabilities

    As a growing global threat, cyber-attacks can cost millions of dollars or endanger national stability and human lives. While cyber risk is relatively well understood in most sectors, it is becoming clear that the maritime sector, although becoming more digitally advanced (e.g., through autonomy), is not well protected against cyber or cyber-physical attacks and accidents. To help improve sector-wide safety and resiliency, the University of Plymouth (UoP) is creating a specialised maritime-cyber lab, which combines maritime technology and traditional cyber-security labs. This responds to the current lack of research and mitigation capabilities and will create a new resource for academia, government, and industry research into maritime cyber-security risks and threats. These lab capabilities will also enhance existing maritime-cyber capabilities across the world, including risk assessment frameworks, cyber-security ranges/labs, ship simulators, mariner training programmes, autonomous ships, etc. The goal of this paper is to explain the need for next-generation maritime-cyber research capabilities and to demonstrate how something like the proposed Cyber-SHIP Lab (Hardware, Software, Information and Protections) will help industry, government, and academia understand and mitigate cyber threats in the maritime sector. The authors believe a next-generation cyber-secure lab should host a range of real, non-simulated maritime systems. With multiple configurations to mirror existing bridge system set-ups, the technology can be studied for individual system weaknesses as well as system-of-systems vulnerabilities. Such a lab would support a range of research that cannot be achieved with simulators alone and help support the next generation of cyber-secure marine systems.
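
    To make the distinction between individual system weaknesses and system-of-systems vulnerabilities concrete, the sketch below models a hypothetical bridge configuration as systems connected by data links and flags weaknesses that could propagate downstream. It is purely illustrative: the system names, weaknesses, and links are assumptions for demonstration, not the Cyber-SHIP Lab's actual inventory or method.

```python
# Illustrative sketch only: a hypothetical bridge set-up modelled as systems
# and the data links between them. System names, weaknesses, and links are
# assumptions for demonstration, not the Cyber-SHIP Lab's actual inventory.
systems = {
    "GPS":       ["spoofable RF input"],
    "AIS":       ["unauthenticated messages"],
    "ECDIS":     ["outdated operating system"],
    "Autopilot": [],
}
links = [("GPS", "ECDIS"), ("AIS", "ECDIS"), ("ECDIS", "Autopilot")]

# Individual system weaknesses: flaws in one system considered in isolation.
for name, weaknesses in systems.items():
    for w in weaknesses:
        print(f"individual weakness   : {name}: {w}")

# System-of-systems vulnerabilities: a weak system whose compromise can
# propagate along data links to systems with no weakness of their own.
downstream = {}
for src, dst in links:
    downstream.setdefault(src, []).append(dst)

def reachable(start):
    """Return every system fed (directly or indirectly) by `start`."""
    seen, stack = set(), [start]
    while stack:
        for nxt in downstream.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

for name, weaknesses in systems.items():
    if weaknesses:
        for target in reachable(name):
            print(f"system-of-systems risk: compromise of {name} may reach {target}")
```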

    Keeping the human element to secure autonomous shipping operations

    Autonomous shipping operations are becoming economically and technically feasible, but this development also requires new human roles and responsibilities onshore for managing cyber events. The goal of this paper is to present a methodology for describing autonomous shipping operations and the risks posed by potential cyber-attacks, focusing on situations critical to the interplay between automation and human operators. We have applied our methodology to a case study of planned autonomous operations in European waterways. Our results show that reliance on new technologies such as sensors, computer vision and AI reasoning onboard autonomous ships or cranes opens up new types of attacks with which the industry has little experience to date. Unmanned systems should therefore be designed with assurance methods that can bring the human into the loop, providing situational awareness and control. At the same time, human resource exhaustion is a potential attack goal against remote operations. Our threat likelihood estimation shows that attacks driven by deny and injure motivations have the highest values across all mission phase patterns. This is in accordance with general attack trends within the maritime domain and many other sectors, where financially motivated attackers will try to demand a ransom to stop business disruption.
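
    The ranking of attack motivations per mission phase described above can be pictured as a simple likelihood table; the sketch below shows one way such a tabulation might look. The phase names, motivations, and scores are assumptions for illustration only, not values taken from the study.

```python
# Illustrative sketch only: tabulating estimated threat likelihood per mission
# phase and attacker motivation. Phase names, motivations, and scores are
# assumptions for demonstration, not the study's actual estimates.
likelihood = {
    # phase: {motivation: estimated likelihood on a 1-5 scale}
    "un-docking":      {"deny": 4, "injure": 3, "steal": 2},
    "transit":         {"deny": 4, "injure": 4, "steal": 1},
    "crane operation": {"deny": 5, "injure": 4, "steal": 2},
    "docking":         {"deny": 4, "injure": 3, "steal": 2},
}

for phase, scores in likelihood.items():
    top = max(scores, key=scores.get)
    print(f"{phase:15} -> highest-likelihood motivation: {top} ({scores[top]})")
```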

    Buques autónomos y ciberseguridad [Autonomous ships and cybersecurity]

    Currently, thanks to new communications technologies, autonomous ships are even closer to our seas than we might think. But alongside their undoubted advantages, they give rise to uncertainties and challenges in several areas, notably cybersecurity and legislation, with respect to both international regulations and national laws. The aspects of autonomous ships are covered in Bureau Veritas's informative rules, and additional specific notations have been created to address the cybersecurity/cyber-protection aspects of such ships. The objective of this article is to present the current status and the foreseeable evolution of the regulations on autonomous shipping from the point of view of a Classification Society, as well as the current evolution of the methodologies concerning cybersecurity.

    Paving the way toward autonomous shipping development for European Waters – The AUTOSHIP project

    New developments in the maritime industry include the design and operation of autonomous ships. The AUTOSHIP project is one initiative promoting the use of autonomous ships in European waters, focusing on two specific use cases: a Short Sea Shipping (SSS) cargo vessel and an Inland Waterways (IWW) barge. The AUTOSHIP objectives include thorough regulatory, societal, financial, safety and security analyses for the two investigated use cases as well as the development of a novel framework and methods for the design of autonomous vessels. These objectives are pursued with the support of a number of activities, including supply chain, regulatory, risk and gap analyses. Some results and findings from these activities are presented in this paper. The results demonstrate that the supply chain analysis is important for understanding the complex relationships between different partners and phases in the effective design of maritime autonomous systems. Furthermore, a number of regulatory gaps need to be addressed for the wider adoption of the AUTOSHIP use cases. A number of essential hazards are associated with each of the two use cases; measures to mitigate these hazards are also presented.
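
    One way to picture the output of such a hazard analysis is a small hazard register that ties each hazard to a use case and a mitigation. The sketch below is a minimal illustration under assumed entries; the hazards and mitigations shown are not the project's actual findings.

```python
from dataclasses import dataclass

# Illustrative sketch of a hazard register linking hazards to use cases and
# mitigations. The entries below are assumptions for demonstration, not
# AUTOSHIP project findings.
@dataclass
class Hazard:
    use_case: str      # e.g. "SSS" (Short Sea Shipping) or "IWW" (Inland Waterways)
    description: str
    mitigation: str

register = [
    Hazard("SSS", "loss of the remote-control link in open water",
           "autonomous fallback to a minimum-risk manoeuvre"),
    Hazard("IWW", "sensor degradation near bridges and locks",
           "redundant sensing and geofenced speed limits"),
]

for h in register:
    print(f"[{h.use_case}] {h.description} -> mitigation: {h.mitigation}")
```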