
    "Why do so?" -- A Practical Perspective on Machine Learning Security

    Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze attack occurrence and concern and evaluate statistical hypotheses on factors influencing threat perception and exposure. Our results shed light on real-world attacks on deployed machine learning. On the organizational level, while we find no predictors for threat exposure in our sample, the number of implemented defenses depends on exposure to threats or the expected likelihood of becoming a target. We also provide a detailed analysis of practitioners' replies on the relevance of individual machine learning attacks, unveiling complex concerns like unreliable decision making, business information leakage, and the introduction of bias into models. Finally, we find that on the individual level, prior knowledge about machine learning security influences threat perception. Our work paves the way for more research on adversarial machine learning in practice, but also yields insights for regulation and auditing.
    Comment: under submission; 18 pages, 3 tables, and 4 figures. Long version of the paper accepted at: New Frontiers of Adversarial Machine Learning @ ICML.
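
    To make the style of analysis concrete: a minimal sketch of one such hypothesis test, assuming hypothetical Likert-scale threat-perception ratings split by whether a respondent reported prior ML-security knowledge (the abstract does not specify the authors' actual tests or data), could look as follows:

```python
# Hypothetical sketch, not the study's analysis code: testing whether
# respondents with prior ML-security knowledge report higher threat
# perception, using a one-sided Mann-Whitney U test on invented ratings.
from scipy.stats import mannwhitneyu

with_prior_knowledge = [4, 5, 3, 4, 4, 5, 3]     # invented 1-5 ratings
without_prior_knowledge = [2, 3, 2, 4, 3, 2, 3]  # invented 1-5 ratings

stat, p_value = mannwhitneyu(with_prior_knowledge,
                             without_prior_knowledge,
                             alternative="greater")
print(f"U = {stat}, p = {p_value:.3f}")
# A small p would support the finding that prior knowledge about machine
# learning security influences threat perception.
```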

    Towards more Practical Threat Models in Artificial Intelligence Security

    Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI. For example, while models are often studied in isolation, in practice they form part of larger ML pipelines. Recent works have also argued that the adversarial manipulations introduced by academic attacks are impractical. We take a first step towards describing the full extent of this disparity. To this end, we revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice via a survey with 271 industrial practitioners. On the one hand, we find that all existing threat models are indeed applicable. On the other hand, there are significant mismatches: research is often too generous with the attacker, assuming access to information that is not frequently available in real-world settings. Our paper is thus a call for action to study more practical threat models in artificial intelligence security.
    Comment: 18 pages, 4 figures, 8 tables; accepted to USENIX Security; incorporated external feedback.

    Industrial practitioners' mental models of adversarial machine learning

    Although machine learning is widely used in practice, little is known about practitioners' understanding of its potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and its potentially vulnerable components. Similar studies have helped in other security fields to discover root causes or improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. Firstly, practitioners often confuse machine learning security with threats and defences that are not directly related to machine learning. Secondly, in contrast to most academic research, our participants perceive the security of machine learning not as solely a property of individual models, but rather in the context of entire workflows that consist of multiple components. Jointly with our additional findings, these two facets provide a foundation for substantiating mental models of machine learning security, and have implications for the integration of adversarial machine learning into corporate workflows, for decreasing practitioners' reported uncertainty, and for appropriate regulatory frameworks for machine learning security.

    When Your AI Becomes a Target: AI Security Incidents and Best Practices

    In contrast to the vast academic effort to study AI security, few real-world reports of AI security incidents exist. The incidents that have been released omit crucial information about the affected company and AI application, preventing a thorough investigation of the attackers' motives. As a consequence, it often remains unknown how to avoid incidents. We tackle this gap and combine previous reports with freshly collected incidents into a small database of 32 AI security incidents. We analyze the attackers' targets and goals, influencing factors, causes, and mitigations. Many incidents stem from non-compliance with best practices in security and privacy-enhancing technologies. In the case of direct attacks on AI, access control may provide some mitigation, but there is little scientific work on best practices. Our paper is thus a call for action to address these gaps.

    Quantitative Long-Term Monitoring of the Circulating Gases in the KATRIN Experiment Using Raman Spectroscopy.

    The Karlsruhe Tritium Neutrino (KATRIN) experiment aims at measuring the effective electron neutrino mass with a sensitivity of 0.2 eV/c², i.e., improving on previous measurements by an order of magnitude. Neutrino mass data taking with KATRIN commenced in early 2019, and after only a few weeks of data recording, analysis of these data showed the success of KATRIN, improving on the known neutrino mass limit by a factor of about two. This success could very much be ascribed to the fact that most of the system components met, or even surpassed, the required specifications during long-term operation. Here, we report on the performance of the Laser Raman (LARA) monitoring system, which provides continuous high-precision information on the composition of the gas injected into the experiment's windowless gaseous tritium source (WGTS), specifically on its isotopic purity of tritium, one of the key parameters required in the derivation of the electron neutrino mass. The concentrations c_x of all six hydrogen isotopologues were monitored simultaneously, with a measurement precision for individual components of the order of 10⁻³ or better throughout the complete KATRIN data-taking campaigns to date. From these, the tritium purity, ε_T, is derived with a precision of < 10⁻³ and a trueness of < 3 × 10⁻³, being within and surpassing the actual requirements for KATRIN, respectively.
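
    For context, the tritium purity ε_T is the atomic fraction of tritium among the hydrogen nuclei in the source gas. The sketch below (illustrative values and variable names, not KATRIN analysis code) shows how ε_T follows from the six isotopologue concentrations, assuming they are normalized to sum to one:

```python
# Illustrative sketch, not KATRIN analysis code: each molecule carries two
# hydrogen nuclei; T2 contributes two tritons, DT and HT one each, and
# H2, HD, D2 none, so eps_T = c_T2 + (c_DT + c_HT) / 2.
c = {"T2": 0.953, "DT": 0.035, "HT": 0.008,  # tritiated isotopologues
     "H2": 0.001, "HD": 0.001, "D2": 0.002}  # non-tritiated isotopologues

assert abs(sum(c.values()) - 1.0) < 1e-9  # concentrations are normalized

eps_T = c["T2"] + 0.5 * (c["DT"] + c["HT"])
print(f"tritium purity eps_T = {eps_T:.4f}")  # -> 0.9745
```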
