
    Not all adversarial examples require a complex defense: identifying over-optimized adversarial examples with IQR-based logit thresholding

    Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point. During this process, the adversarial example can be optimized further, even after it has already been wrongly classified with 100% confidence, making it even more difficult to detect. For this kind of adversarial example, which we refer to as an over-optimized adversarial example, we discovered that the logits of the model provide solid clues as to whether the data point at hand is adversarial or genuine. In this context, we first discuss the masking effect the softmax function has on the prediction made and explain why the logits of the model are more useful for detecting over-optimized adversarial examples. To identify this type of adversarial example in practice, we propose a non-parametric and computationally efficient method that relies on the interquartile range, with this method becoming more effective as the image resolution increases. We support our observations throughout the paper with detailed experiments on different datasets (MNIST, CIFAR-10, and ImageNet) and several architectures.
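    The abstract does not spell out the exact thresholding rule, so the following is a minimal sketch of how an IQR-based outlier test on the logits could look. The function name iqr_logit_flag, the decision to compare the top logit against the upper fence of the remaining logits, and the multiplier k=1.5 are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def iqr_logit_flag(logits, k=1.5):
    """Flag a prediction whose top logit is an extreme outlier
    relative to the rest of the logit vector (IQR rule).

    logits : 1-D array of raw (pre-softmax) class scores.
    k      : IQR multiplier; 1.5 is the conventional outlier factor.
    """
    logits = np.asarray(logits, dtype=float)
    top = logits.max()
    rest = np.delete(logits, logits.argmax())   # all non-maximal logits
    q1, q3 = np.percentile(rest, [25, 75])
    upper_fence = q3 + k * (q3 - q1)            # classic IQR upper fence
    # A top logit far above the fence suggests an over-optimized input.
    return top > upper_fence

# Illustrative values: a benign-looking logit vector vs. an inflated one.
print(iqr_logit_flag([1.2, 0.8, 2.1, 1.5, 0.3]))    # False
print(iqr_logit_flag([1.2, 0.8, 45.0, 1.5, 0.3]))   # True
```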

    An automated model reduction method for biochemical reaction networks

    We propose a new approach to the model reduction of biochemical reaction networks governed by various types of enzyme kinetics rate laws with non-autocatalytic reactions, each of which can be reversible or irreversible. This method extends the approach for model reduction previously proposed by Rao et al., which proceeds by a step-wise reduction in the number of complexes via Kron reduction of the weighted Laplacian corresponding to the complex graph of the network. The main idea in the current manuscript is to rewrite the mathematical model of a reaction network as a model of a network consisting of linkage classes that contain more than one reaction. This is done by joining certain distinct linkage classes into a single linkage class by using the conservation laws of the network. We show that this adjustment improves the extent of applicability of the method proposed by Rao et al. We automate the entire reduction procedure using Matlab. We test our automated model reduction on two real-life reaction networks, namely, a model of neural stem cell regulation and a model of the hedgehog signaling pathway. We apply our reduction approach to meaningfully reduce the number of complexes in the complex graphs corresponding to these networks. When the number of species' concentrations in the model of neural stem cell regulation is reduced by 33.33%, the difference between the dynamics of the original model and the reduced model, quantified by an error integral, is only 4.85%. Likewise, when the number of species' concentrations is reduced by 33.33% in the model of the hedgehog signaling pathway, the difference between the dynamics of the original model and the reduced model is only 6.59%.
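    The paper's pipeline is automated in Matlab; as a language-agnostic illustration of the core step it builds on, the sketch below shows Kron reduction of a weighted Laplacian as a Schur complement, which is how complexes are eliminated from the complex graph in the Rao et al. style of reduction. The function name kron_reduce and the NumPy formulation are assumptions for illustration; it also assumes the principal submatrix being eliminated is invertible.

```python
import numpy as np

def kron_reduce(L, eliminate):
    """Kron reduction (Schur complement) of a weighted Laplacian L.

    L         : (n, n) weighted Laplacian of the complex graph.
    eliminate : indices of the complexes to remove from the graph.
    Returns the reduced Laplacian over the remaining complexes.
    """
    n = L.shape[0]
    elim = np.asarray(eliminate)
    keep = np.setdiff1d(np.arange(n), elim)
    L_kk = L[np.ix_(keep, keep)]
    L_ke = L[np.ix_(keep, elim)]
    L_ek = L[np.ix_(elim, keep)]
    L_ee = L[np.ix_(elim, elim)]
    # Schur complement: interactions routed through the eliminated
    # complexes are folded back into the retained ones.
    return L_kk - L_ke @ np.linalg.solve(L_ee, L_ek)
```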

    How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples

    Even before deep learning architectures became the de facto models for complex computer vision tasks, the softmax function was, given its elegant properties, already used to analyze the predictions of feedforward neural networks. Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to make machine learning models fail during the testing phase. However, in this paper, we show that it is possible to generate adversarial examples that take advantage of certain properties of the softmax function, leading to undesired outcomes when interpreting the strength of the adversarial examples at hand. Specifically, we argue that the output of the softmax function is a poor indicator of the strength of an adversarial example, and that this indicator can easily be tricked by existing methods for adversarial example generation.
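    A minimal numerical sketch of the saturation behavior the abstract alludes to: two logit vectors whose top scores differ by an order of magnitude map to softmax outputs that are practically indistinguishable. The specific values are illustrative only, not taken from the paper.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())          # shift by the max for numerical stability
    return e / e.sum()

# A moderately confident logit vector and a heavily over-optimized one:
mild    = np.array([ 8.0, 1.0, 0.5, 0.2])
extreme = np.array([80.0, 1.0, 0.5, 0.2])

print(softmax(mild).max())     # ~0.998, already reads as near-certain
print(softmax(extreme).max())  # 1.0 to machine precision
# The softmax confidences are essentially identical, while the raw top
# logits differ by an order of magnitude: the masking effect at work.
```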