
    Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities

    Artificial intelligence (AI) and machine learning (ML) have become increasingly vital in the development of novel defense and intelligence capabilities across all domains of warfare. Adversarial AI (A2I) and adversarial ML (AML) attacks seek to deceive and manipulate AI/ML models, so it is imperative that these models can defend against such attacks. A2I/AML defenses will help provide the necessary assurance for advanced capabilities that rely on AI/ML models. The A2I Working Group (A2IWG) seeks to advance the research and development of assured AI/ML capabilities via new A2I/AML defenses by fostering a collaborative environment across the U.S. Department of Defense and U.S. Intelligence Community. The A2IWG aims to identify specific challenges that it can help solve or address more directly, with an initial focus on three topics: AI Trusted Robustness, AI System Security, and AI/ML Architecture Vulnerabilities.
    Comment: Presented at AAAI FSS-20: Artificial Intelligence in Government and Public Sector, Washington, DC, US.

    Exploiting Alpha Transparency In Language And Vision-Based AI Systems

    This investigation reveals a novel exploit derived from the PNG image file format, specifically its alpha transparency layer, and the exploit's potential to fool multiple AI vision systems. Our method uses the alpha layer as a clandestine channel that is invisible to human observers but fully actionable by AI image processors. The vulnerability was tested against representative vision systems from Apple, Microsoft, Google, Salesforce, Nvidia, and Facebook, highlighting the attack's potential breadth. This vulnerability challenges the security protocols of existing and fielded vision systems, from medical imaging to autonomous driving technologies. Our experiments demonstrate that the affected systems, which rely on convolutional neural networks or the latest multimodal language models, cannot quickly mitigate these vulnerabilities through simple patches or updates. Instead, they require retraining and architectural changes, indicating a persistent hole in multimodal technologies unless future adversarial hardening against such vision-language exploits is adopted.
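    The general mechanism can be sketched in a few lines of Python. This is a hypothetical illustration of the alpha-channel trick, not the paper's exact procedure; the Pillow/NumPy workflow, file name, and pixel values are assumptions.

```python
# Minimal sketch (not the paper's method): hide content in the RGB channels of
# fully transparent pixels. A human viewer sees the image composited over a
# background (alpha respected); a pipeline that simply drops the alpha channel
# feeds the hidden content to the model.
import numpy as np
from PIL import Image

H, W = 224, 224
rgba = np.zeros((H, W, 4), dtype=np.uint8)

# Benign content: a mid-gray image, fully opaque.
rgba[..., :3] = 128
rgba[..., 3] = 255

# Hidden payload: write content into a region, then zero its alpha so the
# payload disappears once the image is composited for display.
rgba[50:80, 50:170, :3] = [255, 0, 0]   # stand-in for adversarial content
rgba[50:80, 50:170, 3] = 0              # fully transparent to the human eye

img = Image.fromarray(rgba)
img.save("payload.png")

# What a human sees: the image alpha-composited over a white background.
background = Image.new("RGBA", img.size, (255, 255, 255, 255))
human_view = Image.alpha_composite(background, img).convert("RGB")

# What a naive preprocessing pipeline sees: alpha silently discarded,
# exposing the hidden RGB values to the downstream model.
model_view = img.convert("RGB")

print("human view pixel :", human_view.getpixel((60, 60)))  # white
print("model view pixel :", model_view.getpixel((60, 60)))  # hidden red
```

    The gap arises because `Image.alpha_composite` respects transparency while a bare `convert("RGB")` discards it, mirroring the mismatch between what a human observer sees and what a preprocessing pipeline may feed the model.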

    Data Poisoning: A New Threat to Artificial Intelligence

    Artificial Intelligence (AI) is rapidly being deployed in a number of fields, from banking and finance to healthcare, robotics, transportation, the military, e-commerce, and social networks. Grand View Research estimates that the global AI market was worth $93.5 billion in 2021 and that it will grow at a compound annual growth rate (CAGR) of 38.1% from 2022 to 2030. According to a 2020 MIT Sloan Management survey, 87% of multinational corporations believe that AI technology will provide a competitive edge. Artificial intelligence relies heavily on datasets to train its models; the more data it has, the better it learns and predicts. The downside is that this data can be manipulated, or poisoned. A new type of threat is emerging: data poisoning. Data poisoning is challenging and time-consuming to spot, and by the time it is discovered, the damage is already extensive. Unlike traditional attacks that exploit errors in code, this new threat targets the training data used by an AI's algorithms. Data is now being weaponized. Poisoning requires minimal effort but can cause substantial damage: corrupting as little as 1-3% of the data can severely diminish an AI's ability to produce accurate predictions.
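    To make the mechanism concrete, the sketch below shows the simplest form of poisoning, random label flipping, on a synthetic scikit-learn classifier. It is only an illustration under assumed names and parameters; the accuracy drop in this toy setting will be far milder than in the targeted, optimized attacks the 1-3% figure refers to.

```python
# Minimal sketch of label-flipping data poisoning on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    # Train on the (possibly poisoned) labels, score on the clean test set.
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print("clean accuracy   :", train_and_score(y_train))

# Poison 3% of the training labels by flipping them.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.03 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", train_and_score(poisoned))
```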

    VenoMave: Targeted Poisoning Against Speech Recognition

    The wide adoption of Automatic Speech Recognition (ASR) has remarkably enhanced human-machine interaction. Prior research has demonstrated that modern ASR systems are susceptible to adversarial examples, i.e., malicious audio inputs that lead to misclassification by the victim's model at run time. The research question of whether ASR systems are also vulnerable to data-poisoning attacks, however, remains unanswered. In such an attack, the manipulation happens during the training phase: an adversary injects malicious inputs into the training set to compromise the neural network's integrity and performance. Prior work in the image domain has demonstrated several types of data-poisoning attacks, but these results cannot be directly applied to the audio domain. In this paper, we present the first data-poisoning attack against ASR, called VenoMave. We evaluate our attack on an ASR system that detects sequences of digits. When poisoning only 0.17% of the dataset on average, we achieve an attack success rate of 86.67%. To demonstrate the practical feasibility of our attack, we also evaluate whether the target audio waveform can be played over the air via simulated room transmissions. In this more realistic threat model, VenoMave still maintains a success rate of up to 73.33%. We further extend our evaluation to the Speech Commands corpus and demonstrate the scalability of VenoMave to a larger vocabulary. In a transcription test with human listeners, we verify that more than 85% of the original text of the poisons can be correctly transcribed. We conclude that data-poisoning attacks against ASR represent a real threat: we are able to perform poisoning for arbitrary target input files while the crafted poison samples remain inconspicuous.
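    For intuition, the sketch below outlines a generic gradient-based poison-crafting loop in the feature-collision style known from the image domain. It is not the VenoMave algorithm; the surrogate `model`, the `craft_poison` helper, and all parameter values are assumptions for illustration.

```python
# Generic sketch of gradient-based poison crafting (feature-collision style),
# NOT the VenoMave algorithm: perturb a clean base waveform so that its
# internal representation under a surrogate model moves toward the target's,
# while the perturbation stays small enough to remain inconspicuous.
import torch

def craft_poison(model, base, target, eps=0.01, steps=500, lr=1e-3):
    """model:  maps a waveform batch to a feature/embedding tensor (surrogate).
    base:   clean training sample to perturb, shape (1, num_samples).
    target: the input the attacker wants misrecognized at run time."""
    delta = torch.zeros_like(base, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = model(target)
    for _ in range(steps):
        opt.zero_grad()
        poison_feat = model(base + delta)
        # Pull the poison's features toward the target's features.
        loss = torch.nn.functional.mse_loss(poison_feat, target_feat)
        loss.backward()
        opt.step()
        # Keep the perturbation within a small budget so the poison
        # still sounds like the original recording.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (base + delta).detach()
```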