180 research outputs found

    Block Switching: A Stochastic Approach for Deep Learning Security

    Full text link
    Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can make a highly accurate trained network produce arbitrary incorrect predictions while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS is also characterized by the following features: (i) BS causes a smaller drop in test accuracy; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them.
    Comment: Accepted by AdvML19: Workshop on Adversarial Learning Methods for Machine Learning and Data Mining at KDD, Anchorage, Alaska, USA, August 5th, 2019. 5 pages.
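
    For intuition, the following is a minimal PyTorch-style sketch of the block-switching idea described above: several parallel channels implement the lower layers, and one channel is drawn at random on every forward pass. The channel architecture, pool size, and classifier head here are illustrative assumptions, not the authors' configuration.

        import random
        import torch
        import torch.nn as nn

        class BlockSwitch(nn.Module):
            # Holds several parallel sub-blocks; each forward pass routes the
            # input through one randomly chosen channel, so the effective
            # network (and its input gradient) is unpredictable to an adversary.
            def __init__(self, channels):
                super().__init__()
                self.channels = nn.ModuleList(channels)

            def forward(self, x):
                idx = random.randrange(len(self.channels))  # run-time channel draw
                return self.channels[idx](x)

        def make_channel():  # hypothetical lower-layer block (assumed sizes)
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )

        model = nn.Sequential(
            BlockSwitch([make_channel() for _ in range(4)]),
            nn.Flatten(),
            nn.LazyLinear(10),  # shared upper layers / classifier head
        )
        out = model(torch.randn(8, 3, 32, 32))  # a different channel may fire on each call

    Because the active channel is resampled per query, an adversary probing input gradients effectively averages over the pool, which is consistent with the more dispersed gradient distribution reported above.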

    Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent

    Full text link
    Despite the great achievements of modern deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains that require high reliability. Various adversarial attacks have been proposed to sabotage the learning performance of DNN models. Among them, black-box adversarial attack methods have received special attention owing to their practicality and simplicity. Black-box attacks usually prefer fewer queries in order to remain stealthy and keep costs low. However, most current black-box attack methods adopt first-order gradient descent, which can suffer from relatively slow convergence and high sensitivity to hyper-parameter settings. In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD) method for designing adversarial attacks, which incorporates zeroth-order gradient estimation, catering to the black-box attack scenario, and second-order natural gradient descent to achieve higher query efficiency. Empirical evaluations on image classification datasets demonstrate that ZO-NGD obtains significantly lower model query complexity than state-of-the-art attack methods.
    Comment: Accepted by AAAI 2020.
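
    As background for the zeroth-order component, here is a minimal NumPy sketch of the standard two-point zeroth-order gradient estimator that such black-box attacks build on; the smoothing radius mu and sample count are arbitrary choices, and the Fisher-preconditioned natural-gradient step that gives ZO-NGD its extra query efficiency is omitted.

        import numpy as np

        def zo_gradient(loss_fn, x, mu=0.01, n_samples=20, rng=None):
            # Two-point estimate: grad ~ mean_i [(f(x + mu*u_i) - f(x)) / mu] * u_i
            # with u_i ~ N(0, I). Only loss values are queried, never gradients,
            # matching the black-box threat model.
            rng = rng or np.random.default_rng(0)
            f0 = loss_fn(x)                  # one query at the base point
            grad = np.zeros_like(x)
            for _ in range(n_samples):
                u = rng.standard_normal(x.shape)
                grad += (loss_fn(x + mu * u) - f0) / mu * u  # one query per direction
            return grad / n_samples

        # Sanity check: the true gradient of sum(v**2) at the all-ones vector is 2.
        g = zo_gradient(lambda v: np.sum(v ** 2), np.ones(5))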

    (Methanol-κO){1-[2-(piperazin-4-ium-1-yl-κN1)ethyliminomethyl-κN]naphthalen-2-olato-κO}bis(thiocyanato-κN)nickel(II) methanol monosolvate

    Get PDF
    In the title solvated complex, [Ni(C17H21N3O)(NCS)2(CH3OH)]·CH3OH, the Ni2+ ion is coordinated by one phenolate O, one imine N, and one amine N atom of the tridentate Schiff base ligand, two thiocyanate N atoms and one methanol O atom, resulting in a distorted cis-NiO2N4 octahedral geometry. The chelate ring formed by the phenolate O and imine N atoms approximates an envelope with the Ni atom as the flap, whereas the chelate ring formed by the two N atoms is twisted about the C—C bond. In the crystal, the components are linked by O—H⋯O, N—H⋯O, N—H⋯S, and O—H⋯S hydrogen bonds.

    AdvMS: a multi-source multi-cost defense against adversarial attacks

    Full text link
    Designing effective defenses against adversarial attacks is a crucial topic, as deep neural networks have proliferated rapidly in many security-critical domains such as malware detection and self-driving cars. Conventional defense methods, although promising, are largely limited by their single-source, single-cost nature: the robustness gain tends to plateau as the defense is made increasingly stronger, while its cost keeps growing. In this paper, we study principles for designing multi-source, multi-cost schemes in which defense performance is boosted by multiple defending components. Based on this motivation, we propose a multi-source, multi-cost defense scheme, Adversarially Trained Model Switching (AdvMS), that inherits advantages from two leading schemes: adversarial training and random model switching. We show that the multi-source nature of AdvMS mitigates the performance-plateauing issue, and its multi-cost nature allows robustness to be improved at a flexible, adjustable combination of costs over different factors, which can better suit specific restrictions and needs in practice.
    Comment: Accepted manuscript.
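
    To make the two components concrete, here is a hedged sketch that adversarially trains a pool of models and answers each query with a randomly chosen member. FGSM is used as a stand-in attack, and the pool size, optimizer, and perturbation budget are illustrative assumptions rather than the paper's settings.

        import random
        import torch
        import torch.nn.functional as F

        def fgsm(model, x, y, eps=8 / 255):
            # One-step FGSM perturbation; the paper's actual adversarial-training
            # attack and budget may differ (assumption).
            x = x.clone().requires_grad_(True)
            F.cross_entropy(model(x), y).backward()
            return (x + eps * x.grad.sign()).clamp(0, 1).detach()

        def train_pool(models, loader, epochs=1, lr=0.01):
            # Multi-source: each pool member is adversarially trained on its own,
            # so robustness comes from several defending components, not one.
            for model in models:
                opt = torch.optim.SGD(model.parameters(), lr=lr)
                for _ in range(epochs):
                    for x, y in loader:
                        x_adv = fgsm(model, x, y)
                        opt.zero_grad()
                        F.cross_entropy(model(x_adv), y).backward()
                        opt.step()

        def predict(models, x):
            # Random model switching at run time, as in Block Switching above.
            return random.choice(models)(x)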