Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Deep neural networks (DNNs) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples: by transforming a normal sample with carefully crafted, human-imperceptible perturbations, even highly accurate DNNs can be made to produce wrong decisions. Multiple defense mechanisms have been proposed which aim to hinder the generation of such adversarial samples. However, recent work shows that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main observation is that adversarial samples are much more sensitive than normal samples when we impose random mutations on the DNN. We thus first propose a measure of 'sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity. We then integrate statistical hypothesis testing and model mutation testing to check whether an input sample is likely to be normal or adversarial at runtime by measuring its sensitivity. We evaluated our approach on the MNIST and CIFAR10 datasets. The results show that our approach detects adversarial samples generated by state-of-the-art attack methods efficiently and accurately.
Comment: Accepted by ICSE 2019
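The sensitivity measure at the heart of this approach can be sketched in a few lines: randomly mutate the model's weights and count how often the predicted label flips. Below is a minimal PyTorch sketch; the Gaussian-noise mutation operator, mutant count, and fixed decision threshold are illustrative assumptions, not the paper's exact operators or calibrated parameters (the paper uses statistical hypothesis testing rather than a fixed threshold).

```python
import copy
import torch

def label_change_rate(model, x, n_mutants=50, sigma=0.01):
    """Fraction of randomly mutated models whose prediction on x differs
    from the original model's prediction; x is assumed to be a
    single-sample batch tensor. Gaussian weight noise is one
    illustrative mutation operator."""
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=-1).item()
        flips = 0
        for _ in range(n_mutants):
            mutant = copy.deepcopy(model)
            for p in mutant.parameters():
                p.add_(sigma * torch.randn_like(p))  # random weight mutation
            flips += int(mutant(x).argmax(dim=-1).item() != base)
    return flips / n_mutants

def looks_adversarial(model, x, threshold=0.1):
    # Adversarial inputs tend to show a much higher label-change rate,
    # so thresholding the rate (or running a sequential hypothesis test
    # over mutants, as the paper does) flags them at runtime.
    return label_change_rate(model, x) > threshold
```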
Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time
The notion of program sensitivity (aka Lipschitz continuity) specifies that
changes in the program input result in proportional changes to the program
output. For probabilistic programs the notion is naturally extended to expected
sensitivity. A previous approach develops a relational program logic framework
for proving expected sensitivity of probabilistic while loops, where the number
of iterations is fixed and bounded. In this work, we consider probabilistic
while loops where the number of iterations is not fixed, but randomized and
depends on the initial input values. We present a sound approach for proving
expected sensitivity of such programs. Our sound approach is martingale-based
and can be automated through existing martingale-synthesis algorithms.
Furthermore, our approach is compositional for sequential composition of while
loops under a mild side condition. We demonstrate the effectiveness of our
approach on several classical examples, including Gambler's Ruin, stochastic
hybrid systems, and stochastic gradient descent. We also present experimental
results showing that our automated approach can handle various probabilistic
programs from the literature.
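For concreteness, the expected-sensitivity notion referred to above can be written out as follows; this is a hedged sketch of the standard formulation (expected affine-sensitivity), not necessarily the paper's exact definition.

```latex
% A probabilistic program P is expected affine-sensitive in its input v
% if there exist constants a >= 0 and b >= 0 such that, for all inputs
% v and v',
\[
  \bigl|\, \mathbb{E}[P(v)] - \mathbb{E}[P(v')] \,\bigr|
  \;\le\; a + b \cdot \lVert v - v' \rVert ,
\]
% where E[P(v)] is the expected output of P on input v. Taking a = 0
% recovers expected Lipschitz continuity with constant b.
```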
Application of stochastic differential equations to option pricing
The financial world is a world of random things and unpredictable events. With the innovative development of diversity and complexity in modern financial markets, more and more financial derivatives have emerged in the financial industry, both to gain higher yields and to hedge risk. As a result, pricing these derivatives, and indeed future uncertainty, has become an interesting topic in mathematical finance and financial quantitative analysis. In this thesis, I focus on the application of stochastic differential equations to option pricing. Based on the arbitrage-free and risk-neutral assumptions, I use stochastic differential equation theory to solve the pricing problem for European options whose underlying assets can be described by a geometric Brownian motion. The thesis explores the Black-Scholes model and formulates an optimal control problem for the volatility, an essential parameter in the Black-Scholes formula. Furthermore, the application of backward stochastic differential equations (BSDEs) is discussed; I found that BSDEs can model the pricing problem in a clearer and more logical way. Finally, based on the models discussed in the thesis, I provide a case study on pricing a Chinese option-like deposit product using Mathematica, which shows the feasibility and applicability of the option pricing method based on stochastic differential equations.
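As a concrete illustration of the closed-form pricing the thesis builds on, the Black-Scholes value of a European call under geometric Brownian motion can be computed in a few lines of Python; this is a textbook sketch with arbitrary example parameters, not the thesis's Mathematica implementation.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot price, K: strike, T: time to maturity in years,
    r: risk-free rate, sigma: volatility of the underlying."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf  # standard normal CDF
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative parameters only.
print(bs_call(S=100, K=105, T=1.0, r=0.03, sigma=0.2))  # ~7.1
```

Under the risk-neutral assumption the drift of the underlying is replaced by the risk-free rate r, which is why r rather than the asset's real-world drift appears in the formula.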
New taxonomic definition of the genus Neucentropus Martynov (Trichoptera: Polycentropodidae)
The genera Neucentropus Martynov and Kyopsyche Tsuda constitute a monophyletic group, such that Kyopsyche is a new synonym of Neucentropus and the type species of Kyopsyche, Kyopsyche japonica Tsuda 1942, is transferred into Neucentropus (new combination).
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning Systems
In recent years, the security issues of artificial intelligence have become increasingly prominent due to the rapid development of deep learning research and applications. A backdoor attack targets a vulnerability of deep learning models: hidden backdoors are activated by triggers embedded by the attacker, causing the model to output malicious predictions that do not align with the intended output for a given input. In this work, we propose a novel black-box backdoor attack based on machine unlearning. The attacker first augments the training set with carefully designed samples, including poison and mitigation data, to train a 'benign' model. Then, the attacker posts unlearning requests for the mitigation samples to remove their impact on the model, gradually activating the hidden backdoor. Since the backdoor is implanted during the iterative unlearning process, it significantly increases the computational overhead of existing defense methods for backdoor detection or mitigation. To address this new security threat, we propose two methods for detecting or mitigating such malicious unlearning requests. We conduct experiments in both exact unlearning and approximate unlearning (i.e., SISA) settings. Experimental results indicate that: 1) our attack approach can successfully implant a backdoor into the model, and sharding increases the difficulty of the attack; 2) our detection algorithms are effective in identifying the mitigation samples, while sharding reduces the effectiveness of our detection algorithms.
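Below is a minimal sketch of the attack pipeline described above, in the exact-unlearning setting where unlearning means retraining without the removed samples; the two-dimensional toy data, the additive trigger, and the use of scikit-learn's LogisticRegression are all illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: two Gaussian blobs, labels 0 and 1.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

trigger = np.array([0.0, 6.0])  # illustrative additive trigger

# Poison samples: trigger + the attacker's target label 1.
# Mitigation samples: trigger + the benign label 0, cancelling the
# poison so the initially released model looks clean.
X_poison = rng.normal(-2, 1, (40, 2)) + trigger
X_mitig = rng.normal(-2, 1, (40, 2)) + trigger
X_aug = np.vstack([X, X_poison, X_mitig])
y_aug = np.concatenate([y, np.ones(40, int), np.zeros(40, int)])

benign = LogisticRegression().fit(X_aug, y_aug)  # the 'benign' model

# Exact unlearning of the mitigation samples = retraining without them,
# leaving the poison unopposed in the trigger region and activating
# the backdoor.
keep = np.arange(len(X_aug)) < len(X) + 40
backdoored = LogisticRegression().fit(X_aug[keep], y_aug[keep])

x_trig = rng.normal(-2, 1, (1, 2)) + trigger  # triggered test input
# Compare the triggered input's prediction before and after unlearning.
print(benign.predict(x_trig), backdoored.predict(x_trig))
```

In the SISA setting the same idea must succeed per shard, which is consistent with the abstract's observation that sharding increases the difficulty of the attack.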
Unifying Qualitative and Quantitative Safety Verification of DNN-Controlled Systems
The rapid advance of deep reinforcement learning techniques enables the oversight of safety-critical systems through the use of Deep Neural Networks (DNNs). This underscores the pressing need to promptly establish certified safety guarantees for such DNN-controlled systems. Most existing verification approaches are qualitative, predominantly employing reachability analysis. However, qualitative verification proves inadequate for DNN-controlled systems, as their behaviors exhibit stochastic tendencies when operating in open and adversarial environments. In this paper, we propose a novel framework for unifying both qualitative and quantitative safety verification problems of DNN-controlled systems. This is achieved by formulating the verification tasks as the synthesis of valid neural barrier certificates (NBCs). Initially, the framework seeks to establish almost-sure safety guarantees through qualitative verification. In cases where qualitative verification fails, our quantitative verification method is invoked, yielding precise lower and upper bounds on probabilistic safety across both infinite and finite time horizons. To facilitate the synthesis of NBCs, we introduce their k-inductive variants. We also devise a simulation-guided approach for training NBCs, aiming to achieve tightness in computing precise certified lower and upper bounds. We prototype our approach as a tool and showcase its efficacy on four classic DNN-controlled systems.
Comment: This work is a technical report for the paper with the same name, to appear in the 36th International Conference on Computer Aided Verification (CAV 2024).
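The barrier-certificate conditions underlying NBCs can be stated as follows; this is a hedged sketch of the standard stochastic barrier conditions, with the paper's contribution being to realize B as a neural network and to introduce k-inductive variants of such conditions.

```latex
% B is a barrier certificate for a stochastic system with initial set
% X0, unsafe set Xu, and one-step transition s -> s' if
\[
  B(s) \le 0 \;\;\forall s \in X_0, \qquad
  B(s) > 0 \;\;\forall s \in X_u, \qquad
  \mathbb{E}\bigl[ B(s') \mid s \bigr] \le B(s) \;\;\forall s,
\]
% i.e. B is nonpositive on initial states, positive on unsafe states,
% and a supermartingale along trajectories; such conditions certify
% almost-sure or probabilistic safety depending on the variant used.
```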