
    AFTI/F-16 digital flight control system experience

    The Advanced Fighter Technology Integration (AFTI) F-16 program is investigating the integration of emerging technologies into an advanced fighter aircraft. The three major technologies involved are the triplex digital flight control system; decoupled aircraft flight control; and integration of avionics, pilot displays, and flight control. In addition to investigating improvements in fighter performance, the AFTI/F-16 program provides a look at generic problems facing highly integrated, flight-crucial digital controls. An overview of the AFTI/F-16 systems is followed by a summary of flight test experience and recommendations.
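    A triplex flight control system like the one described above typically resolves disagreement among its three redundant channels by mid-value (median) selection, which outvotes any single failed channel. The sketch below is a generic illustration of that technique in Python, not the AFTI/F-16's actual voter logic; the function names and miscompare threshold are assumptions.

    ```python
    def midvalue_select(a: float, b: float, c: float) -> float:
        """Pick the median of three redundant channel outputs.

        A single failed channel (stuck high or low) is outvoted,
        because the median is always bracketed by the two healthy values.
        """
        return sorted((a, b, c))[1]

    def channel_miscompare(a: float, b: float, c: float, threshold: float) -> list[int]:
        """Return indices (0, 1, 2) of channels whose output deviates
        from the selected value by more than the miscompare threshold."""
        selected = midvalue_select(a, b, c)
        return [i for i, v in enumerate((a, b, c)) if abs(v - selected) > threshold]
    ```

    With inputs (10.0, 10.1, 55.0) mid-value selection returns 10.1 and flags channel 2 as failed, so a hard-over on one channel never reaches the control surfaces.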

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process, supported by appropriate automated tools, must be used to assure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of three parallel-computing architectures (the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP)), using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. This effort revealed weaknesses in the existing methods and tools and identified capabilities that will be required both for individual tools and for an integrated toolset.
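    The flavor of performance/reliability trade-off such models quantify can be illustrated with the textbook triple-modular-redundancy (TMR) formula, R_tmr(t) = 3e^(-2λt) - 2e^(-3λt): redundancy buys reliability for short missions at the cost of extra hardware. This is a generic sketch under an assumed constant failure rate and a perfect voter, not one of the report's actual Hypercube/Multimax/FTPP models.

    ```python
    import math

    def reliability_simplex(lam: float, t: float) -> float:
        """Probability that a single unit with constant failure rate
        lam (failures/hour) survives to mission time t (hours)."""
        return math.exp(-lam * t)

    def reliability_tmr(lam: float, t: float) -> float:
        """Probability a majority-voted triplex survives to t: at least
        2 of 3 independent units must still work (perfect voter assumed)."""
        r = reliability_simplex(lam, t)
        return 3 * r**2 - 2 * r**3

    # Assumed numbers for illustration: lam = 1e-4 failures/hour, 10-hour mission.
    lam, mission = 1e-4, 10.0
    print(reliability_simplex(lam, mission))  # ~0.9990
    print(reliability_tmr(lam, mission))      # ~0.999997
    ```

    For lam * t small, TMR is markedly more reliable than a simplex unit; the advantage inverts once per-unit reliability drops below 0.5 (long missions without repair), which is exactly the kind of crossover a trade-off study must expose.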

    Post-Westgate SWAT: C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0: A Peer-Reviewed Monograph

    Police SWAT teams and military Special Forces face mounting pressure and challenges from adversaries that can only be resolved by ever more sophisticated inputs into tactical operations. Lethal autonomy provides constrained military and security forces with a viable option, but only if its implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air, and naval operations. This monograph offers some insights into the challenges of developing legal, reliable, and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law-enforcement and military operations. National adversaries are today in many instances hybrid threats that manifest both criminal and military traits; these often require deployment of hybrid-capability autonomous weapons able to take on both military and security objectives. The Westgate terrorist attack of 21 September 2013 in the Westlands suburb of Nairobi, Kenya is a very clear manifestation of the hybrid combat scenario that required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group.
    Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader

    "Out of the loop": autonomous weapon systems and the law of armed conflict

    The introduction of autonomous weapon systems into the “battlespace” will profoundly influence the nature of future warfare. This reality has begun to draw the attention of the international legal community, with increasing calls for an outright ban on the use of autonomous weapon systems in armed conflict. This Article is intended to help infuse granularity and precision into the legal debates surrounding such weapon systems and their future uses. It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to the law of armed conflict’s prescriptive norms governing the “conduct of hostilities.” This Article concludes that an outright ban of autonomous weapon systems is insupportable as a matter of law, policy, and operational good sense. Indeed, proponents of a ban underestimate the extent to which the law of armed conflict, including its customary law aspect, will control autonomous weapon system operations. Some autonomous weapon systems that might be developed would already be unlawful per se under existing customary law, irrespective of any treaty ban. The use of certain others would be severely limited by that law. Furthermore, an outright ban is premature since no such weapons have even left the drawing board. Critics typically either fail to take account of likely developments in autonomous weapon systems technology or base their analysis on unfounded assumptions about the nature of the systems. From a national security perspective, passing on the opportunity to develop these systems before they are fully understood would be irresponsible.
Perhaps even more troubling is the prospect that banning autonomous weapon systems altogether based on speculation as to their future form could forfeit their potential use in a manner that would minimize harm to civilians and civilian objects when compared to non-autonomous weapon systems.

    Autonomous weapon systems and international humanitarian law: a reply to the critics

    In November 2012, Human Rights Watch, in collaboration with the International Human Rights Clinic at Harvard Law School, released Losing Humanity: The Case against Killer Robots.[2] Human Rights Watch is among the most sophisticated of human rights organizations working in the field of international humanitarian law. Its reports are deservedly influential and have often helped shape application of the law during armed conflict. Although this author and the organization have occasionally crossed swords,[3] we generally find common ground on key issues. This time, we have not. “Robots” is a colloquial rendering for autonomous weapon systems. Human Rights Watch’s position on them is forceful and unambiguous: “[F]ully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians.”[4] Therefore, they “should be banned and . . . governments should urgently pursue that end.”[5] In fact, if the systems cannot meet the legal standards cited by Human Rights Watch, then they are already unlawful as such under customary international law irrespective of any policy or treaty law ban on them.[6] Unfortunately, Losing Humanity obfuscates the ongoing legal debate over autonomous weapon systems. A principal flaw in the analysis is a blurring of the distinction between international humanitarian law’s prohibitions on weapons per se and those on the unlawful use of otherwise lawful weapons.[7] Only the former render a weapon illegal as such. To illustrate, a rifle is lawful, but may be used unlawfully, as in shooting a civilian. By contrast, under customary international law, biological weapons are unlawful per se; this is so even if they are used against lawful targets, such as the enemy’s armed forces.
The practice of inappropriately conflating these two different strands of international humanitarian law has plagued debates over other weapon systems, most notably unmanned combat aerial systems such as the armed Predator. In addition, some of the report’s legal analysis fails to take account of likely developments in autonomous weapon systems technology or is based on unfounded assumptions as to the nature of the systems. Simply put, much of Losing Humanity is either counter-factual or counter-normative. This Article is designed to infuse granularity and precision into the legal debates surrounding such weapon systems and their use in the future “battlespace.” It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to international humanitarian law’s prescriptive norms. This Article concludes that Losing Humanity’s recommendation to ban the systems is insupportable as a matter of law, policy, and operational good sense. Human Rights Watch’s analysis sells international humanitarian law short by failing to appreciate how the law tackles the very issues about which the organization expresses concern. Perhaps the most glaring weakness in the recommendation is the extent to which it is premature. No such weapons have even left the drawing board. To ban autonomous weapon systems altogether based on speculation as to their future form is to forfeit any potential uses of them that might minimize harm to civilians and civilian objects when compared to other systems in military arsenals.

    The Knowledge Application and Utilization Framework Applied to Defense COTS: A Research Synthesis for Outsourced Innovation

    Purpose – Militaries of developing nations face increasing budget pressures, high operations tempo, a blitzing pace of technology, and adversaries that often meet or beat government capabilities using commercial off-the-shelf (COTS) technologies. The adoption of COTS products into defense acquisitions has been offered as a way to meet these challenges by essentially outsourcing new product development and innovation. This research summarizes extant research to develop a framework for managing innovation and knowledge flows. Design/Methodology/Approach – A literature review of 62 sources was conducted with the objectives of identifying antecedents (barriers and facilitators) and consequences of COTS adoption. Findings – The DoD COTS literature predominantly consists of industry case studies, and there is a strong need for further academically rigorous study. Extant rigorous research indicates the importance of knowledge management to government innovative thinking that relies heavily on commercial suppliers. Research Limitations/Implications – Extant academically rigorous studies tend to depend on measures derived from work in information systems research, relying on user satisfaction as the outcome. Our findings indicate that user satisfaction has no relationship to COTS success; technically complex governmental purchases may be too distant from users or may have socio-economic goals that supersede user satisfaction. The knowledge acquisition and utilization framework worked well to explain the innovative process in COTS. Practical Implications – Where past research in the commercial context found technological knowledge to outweigh market knowledge in terms of importance, our research found the opposite. Managers either in government or marketing to government should be aware of the importance of market knowledge for defense COTS innovation, especially for commercial companies that work as system integrators.
    Originality/Value – From the literature emerged a framework of COTS product usage and a scale to measure COTS product appropriateness that should help guide COTS product adoption decisions and help manage COTS product implementations ex post.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapon systems.

    Requirements for migration of NSSD code systems from LTSS to NLTSS

    The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of the change are examined, including a description of the computational environment and users in ND. The discussion then turns to requirements, first in a general way and then specifically, including a proposal for managing the transition.