
    AI Patents: A Data Driven Approach

    While artificial intelligence (AI) research brings challenges, the resulting systems are no accident. In fact, academics, researchers, and industry professionals have been developing AI systems since the mid-1900s. AI is a field uniquely positioned at the intersection of several scientific disciplines, including computer science, applied mathematics, and neuroscience. The AI design process is meticulous, deliberate, and time-consuming, involving intensive mathematical theory, data processing, and computer programming. All the while, AI’s economic value is accelerating. As such, protecting the intellectual property (IP) springing from this work is a keystone for technology firms operating in competitive markets.

    EVALUATING ARTIFICIAL INTELLIGENCE METHODS FOR USE IN KILL CHAIN FUNCTIONS

    Current naval operations require sailors to make time-critical, high-stakes decisions based on uncertain situational knowledge in dynamic operational environments. Recent tragic events have resulted in unnecessary casualties; they illustrate the decision complexity involved in naval operations and specifically highlight challenges within the OODA loop (Observe, Orient, Decide, Act). Kill chain decisions involving the use of weapon systems are a particularly stressing category within the OODA loop, with unexpected threats that are difficult to identify with certainty, shortened decision reaction times, and lethal consequences. An effective kill chain requires the proper setup and employment of shipboard sensors; the identification and classification of unknown contacts; the analysis of contact intentions based on kinematics and intelligence; an awareness of the environment; and decision analysis and resource selection. This project explored the use of automation and artificial intelligence (AI) to improve naval kill chain decisions. The team studied naval kill chain functions and developed evaluation criteria for each function to determine the efficacy of specific AI methods. The team then identified and studied candidate AI methods and applied the criteria to map specific AI methods to specific kill chain functions.
    Approved for public release. Distribution is unlimited.
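    The report's approach (scoring candidate AI methods against per-function evaluation criteria and mapping the best-scoring method to each kill chain function) is not published as code, so the Python sketch below is only a hypothetical illustration: the function labels paraphrase the abstract, while the AI method names, scores, and scoring rule are invented placeholders.

        # Hypothetical illustration; not from the report. Functions paraphrase
        # the abstract; methods, scores, and the scoring rule are placeholders.

        FUNCTIONS = [
            "sensor setup and employment",
            "contact identification and classification",
            "contact intent analysis (kinematics and intelligence)",
            "environmental awareness",
            "decision analysis and resource selection",
        ]

        # Placeholder per-function efficacy scores (0-1) for candidate AI methods.
        METHOD_SCORES = {
            "supervised classification": {
                "contact identification and classification": 0.9,
                "contact intent analysis (kinematics and intelligence)": 0.6,
            },
            "reinforcement learning": {
                "decision analysis and resource selection": 0.8,
                "sensor setup and employment": 0.5,
            },
        }

        def best_method(function: str) -> tuple[str, float]:
            """Return the highest-scoring candidate method for one function."""
            candidates = [(m, scores[function])
                          for m, scores in METHOD_SCORES.items()
                          if function in scores]
            return max(candidates, key=lambda pair: pair[1],
                       default=("no candidate evaluated", 0.0))

        if __name__ == "__main__":
            for fn in FUNCTIONS:
                method, score = best_method(fn)
                print(f"{fn}: {method} ({score:.1f})")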

    Information Warfare and the Future of Conflict

    The goal of the Future of Information Warfare Threatcasting Project was to explore the coming decade’s emerging technological and cultural trends and envision plausible future threats from multiple perspectives. The project sought to illuminate emerging areas of strategic threat and potential investment, particularly relating to the proliferation of emerging intelligences, technologies, and systems that could considerably change the nature of the battlefield by 2028 and beyond. In three Threatcasting Workshops, a select group of practitioners from across multiple domains (security, academia, media, and technology) worked to envision these futures and explore what actions should be taken now to counter future information warfare (IW) threats. The final goal was to operationalize the findings for the Army and to determine what actions could be taken to disrupt, mitigate, and recover from these future threats.

    EVALUATING ARTIFICIAL INTELLIGENCE FOR OPERATIONS IN THE INFORMATION ENVIRONMENT

    Recent advances in artificial intelligence (AI) portend a future of accelerated information cycles and intensified technology diffusion. As AI applications become increasingly prevalent and complex, Special Operations Forces (SOF) face the challenge of discerning which tools most effectively address operational needs and generate an advantage in the information environment. Yet SOF currently lack an end user–focused evaluation framework that could help information practitioners determine the operational value of an AI tool. This thesis proposes a practitioner’s evaluation framework (PEF) to address the question of how SOF should evaluate AI technologies for conducting operations in the information environment (OIE). The PEF evaluates AI technologies from the perspective of the information practitioner, who is familiar with the mission, the operational requirements, and OIE processes but has limited or no technical knowledge of AI. The PEF consists of a four-phased approach (prepare, design, conduct, recommend) that assesses nine evaluation domains: mission/task alignment; data; system/model performance; user experience; sustainability; scalability; affordability; ethical, legal, and policy considerations; and vendor assessment. By evaluating AI through a more structured, methodical approach, the PEF enables SOF to identify, assess, and prioritize AI-enabled tools for OIE.
    Outstanding Thesis. Approved for public release. Distribution is unlimited.
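    The thesis names the PEF's four phases and nine evaluation domains but does not publish an implementation, so the Python sketch below is a hypothetical illustration of how a practitioner might record and aggregate domain scores for competing tools; the phase and domain names come from the abstract, while the 1-5 scale, weights, and example tools are invented.

        # Hypothetical illustration; not from the thesis. Phase and domain names
        # come from the abstract; the scale, weights, and tools are placeholders.

        from dataclasses import dataclass

        # The four phases are listed for reference; this sketch covers scoring
        # as it might occur in the "conduct" phase.
        PHASES = ["prepare", "design", "conduct", "recommend"]

        DOMAINS = [
            "mission/task alignment",
            "data",
            "system/model performance",
            "user experience",
            "sustainability",
            "scalability",
            "affordability",
            "ethical, legal, and policy considerations",
            "vendor assessment",
        ]

        @dataclass
        class ToolAssessment:
            """Practitioner-assigned scores (1-5) for one AI-enabled tool."""
            tool_name: str
            scores: dict  # domain -> score

            def weighted_total(self, weights: dict) -> float:
                """Aggregate domain scores with practitioner-chosen weights."""
                return sum(self.scores[d] * weights.get(d, 1.0) for d in DOMAINS)

        if __name__ == "__main__":
            weights = {d: 1.0 for d in DOMAINS}  # equal weighting for the example
            tool_a = ToolAssessment("Tool A", {d: 4 for d in DOMAINS})
            tool_b = ToolAssessment("Tool B", {d: 3 for d in DOMAINS})
            ranked = sorted([tool_a, tool_b],
                            key=lambda t: t.weighted_total(weights), reverse=True)
            for tool in ranked:
                print(f"{tool.tool_name}: {tool.weighted_total(weights):.1f}")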

    Cyber-Human Systems, Space Technologies, and Threats

    CYBER-HUMAN SYSTEMS, SPACE TECHNOLOGIES, AND THREATS is our eighth textbook in a series covering the world of UASs / CUAS / UUVs / SPACE. Other textbooks in our series are Space Systems Emerging Technologies and Operations; Drone Delivery of CBNRECy – DEW Weapons: Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD); Disruptive Technologies with Applications in Airline, Marine, Defense Industries; Unmanned Vehicle Systems & Operations On Air, Sea, Land; Counter Unmanned Aircraft Systems Technologies and Operations; Unmanned Aircraft Systems in the Cyber Domain: Protecting USA’s Advanced Air Assets, 2nd edition; and Unmanned Aircraft Systems (UAS) in the Cyber Domain: Protecting USA’s Advanced Air Assets, 1st edition. Our previous seven titles have received considerable global recognition in the field (Nichols & Carter, 2022; Nichols et al., 2021; Nichols, R. K., et al., 2020; Nichols, R., et al., 2020; Nichols, R., et al., 2019; Nichols, R. K., 2018; Nichols, R. K., et al., 2022).