
    Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks

    Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack with two constraints: 1) the classification of the other images should be unchanged, and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject the designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications (using the L0 norm to measure the number of modifications and the L2 norm to measure their magnitude). Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without losing overall test accuracy.
    Comment: Accepted by the 56th Design Automation Conference (DAC 2019)
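    One plausible way to write the underlying optimization (our notation, not necessarily the paper's): with original parameters $\theta$, modified parameters $\tilde\theta$, predicted label $f(\cdot;\tilde\theta)$, images $x_1,\dots,x_m$ to be pushed to target labels $t_i$, and the remaining images $x_j$ with original labels $y_j$,
    \[
    \begin{aligned}
    \min_{\tilde\theta}\;\; & \|\tilde\theta-\theta\|_0 + \gamma\,\|\tilde\theta-\theta\|_2^2 \\
    \text{s.t.}\;\; & f(x_i;\tilde\theta) = t_i \quad (i = 1,\dots,m) && \text{(injected faults)} \\
    & f(x_j;\tilde\theta) = y_j \quad \text{for the remaining images} && \text{(stealth: accuracy preserved).}
    \end{aligned}
    \]
    ADMM is a natural fit because the non-smooth $\ell_0$ term can be split off into an auxiliary variable with its own subproblem; the exact splitting and relaxation of the classification constraints are detailed in the paper.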

    ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches

    Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a dataset to benchmark machine-learning models against adversarial patches. The dataset is built by first optimizing a set of adversarial patches against an ensemble of models, using a state-of-the-art attack that creates transferable patches. The corresponding patches are then randomly rotated and translated, and finally applied to the ImageNet data. We use ImageNet-Patch to benchmark the robustness of 127 models against patch attacks, and also validate the effectiveness of the given patches in the physical domain (i.e., by printing and applying them to real-world objects). We conclude by discussing how our dataset could be used as a benchmark for robustness, and how our methodology can be generalized to other domains. We open-source our dataset and evaluation code at https://github.com/pralab/ImageNet-Patch.
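    As an illustration of the patch-application step (random rotation and translation before pasting), here is a minimal Python sketch using PIL. It is not taken from the pralab/ImageNet-Patch code; the function name, file names, angle range, and alpha-mask handling are our assumptions.

        # Minimal sketch: apply a pre-optimized adversarial patch to an image with a
        # random rotation and translation (placeholder names and parameter ranges).
        import random
        from PIL import Image

        def apply_patch(image: Image.Image, patch: Image.Image,
                        max_angle: float = 45.0) -> Image.Image:
            """Paste a randomly rotated and translated patch onto a copy of `image`."""
            out = image.copy()
            rotated = patch.rotate(random.uniform(-max_angle, max_angle), expand=True)
            x = random.randint(0, max(0, out.width - rotated.width))
            y = random.randint(0, max(0, out.height - rotated.height))
            # If the patch has an alpha channel, use it as a paste mask so the
            # corners introduced by the rotation stay transparent.
            mask = rotated.split()[-1] if rotated.mode == "RGBA" else None
            out.paste(rotated, (x, y), mask)
            return out

        # Example usage (placeholder paths):
        # perturbed = apply_patch(Image.open("sample.jpg").convert("RGB"),
        #                         Image.open("patch.png").convert("RGBA"))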

    A unified approach to Darboux transformations

    We analyze a certain class of integral equations related to Marchenko equations and Gel'fand-Levitan equations associated with various systems of ordinary differential operators. When the integral operator is perturbed by a finite-rank perturbation, we explicitly evaluate the change in the solution. We show how this result provides a unified approach to Darboux transformations associated with various systems of ordinary differential operators. We illustrate our theory by deriving the Darboux transformation for the Zakharov-Shabat system and show how the potential and wave function change when a discrete eigenvalue is added to the spectrum.
    Comment: Final version that will appear in Inverse Problems
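    For context, the classical Marchenko equation for the half-line Schrödinger operator has the form below (sign and normalization conventions vary, and the paper treats a broader class of such integral equations together with their finite-rank perturbations):
    \[
    K(x,y) + F(x+y) + \int_x^{\infty} K(x,s)\,F(s+y)\,ds = 0, \qquad y > x,
    \]
    with the potential recovered from the solution via $q(x) = -2\,\frac{d}{dx}K(x,x)$.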

    Exact Solutions to the Sine-Gordon Equation

    A systematic method is presented to provide various equivalent solution formulas for exact solutions to the sine-Gordon equation. Such solutions are analytic in the spatial variable $x$ and the temporal variable $t$, and they are exponentially asymptotic to integer multiples of $2\pi$ as $x \to \pm\infty$. The solution formulas are expressed explicitly in terms of a real triplet of constant matrices. The method presented is generalizable to other integrable evolution equations where the inverse scattering transform is applied via the use of a Marchenko integral equation. By expressing the kernel of that Marchenko equation as a matrix exponential in terms of the matrix triplet and by exploiting the separability of that kernel, an exact solution formula to the Marchenko equation is derived, yielding various equivalent exact solution formulas for the sine-Gordon equation.
    Comment: 43 pages
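    The separable-kernel step mentioned above can be sketched as follows (schematic only; the paper's kernel also carries the time dependence, and its formulas differ in detail). If the Marchenko kernel has the separable form $\Omega(z) = C e^{-zA} B$ for a matrix triplet $(A,B,C)$, the ansatz $K(x,y) = H(x)\,e^{-yA}B$ reduces the integral equation to a linear-algebraic one:
    \[
    K(x,y) + \Omega(x+y) + \int_x^{\infty} K(x,s)\,\Omega(s+y)\,ds = 0
    \;\Longrightarrow\;
    K(x,y) = -\,C e^{-xA}\,\bigl[I + Q(x)\bigr]^{-1} e^{-yA}B,
    \qquad
    Q(x) = \int_x^{\infty} e^{-sA}\,B\,C\,e^{-sA}\,ds,
    \]
    valid wherever $I + Q(x)$ is invertible; when the eigenvalues of $A$ have positive real parts, $Q(x)$ can be evaluated in closed form via a Lyapunov equation, which is what yields explicit solution formulas.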

    Flow-based detection and proxy-based evasion of encrypted malware C2 traffic

    State-of-the-art deep learning techniques are known to be vulnerable to evasion attacks, where an adversarial sample is generated from a malign sample and misclassified as benign. Detection of encrypted malware command and control (C2) traffic based on TCP/IP flow features can be framed as a learning task and is thus vulnerable to evasion attacks. However, unlike in, e.g., image processing, where generated adversarial samples can be directly mapped to images, going from flow features to actual TCP/IP packets requires crafting the sequence of packets, for which there is no established approach and which limits the set of modifiable features. In this paper we discuss the learning and evasion consequences of the gap between generated and crafted adversarial samples. We exemplify with a deep neural network detector trained on a public C2 traffic dataset, white-box adversarial learning, and a proxy-based approach for crafting longer flows. Our results show that 1) the high evasion rate obtained by using generated adversarial samples on the detector can be significantly reduced when using crafted adversarial samples; 2) robustness against adversarial samples obtained by model hardening varies according to the crafting approach and the corresponding set of modifiable features that the attack allows; 3) incrementally training hardened models with adversarial samples can produce a level playing field, within a given set of attacks and detectors, where no detector is best against all attacks and no attack is best against all detectors. To the best of our knowledge, this is the first time that level-playing-field feature-set and iteration hardening are analyzed in encrypted C2 malware traffic detection.
    Comment: 9 pages, 6 figures
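    To make the gap between generated and crafted samples concrete, here is a minimal Python (PyTorch) sketch of white-box adversarial sample generation on flow features, where a binary mask restricts the perturbation to features that packet crafting could realistically change. The model, mask, and step size are placeholders, not the paper's setup.

        # Minimal sketch: FGSM-style perturbation of flow features, restricted by a
        # mask of modifiable features (assumed names and values, not from the paper).
        import torch

        def fgsm_masked(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                        modifiable_mask: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
            """Return flow-feature vectors perturbed only on modifiable features."""
            x_adv = x.clone().detach().requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            loss.backward()
            # Step in the direction that increases the detector's loss, but only on
            # the features a crafting proxy could actually alter.
            return (x_adv + eps * x_adv.grad.sign() * modifiable_mask).detach()

    Roughly, unconstrained feature-space perturbations correspond to the paper's "generated" samples, while perturbations that must also be realized as actual TCP/IP packet sequences through the proxy correspond to "crafted" ones.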
