Semantics-Preserving Adversarial Training
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Dept. of Computer Science and Engineering, 2021. 2. Sang-goo Lee.
Adversarial training is a defense technique that improves the adversarial robustness of a deep neural network (DNN) by including adversarial examples in the training data. In this paper, we identify an overlooked problem of adversarial training: these adversarial examples often have different semantics than the original data, introducing unintended biases into the model. We hypothesize that such non-semantics-preserving (and consequently ambiguous) adversarial data harm the robustness of the target models. To mitigate such unintended semantic changes of adversarial examples, we propose semantics-preserving adversarial training (SPAT), which encourages perturbation of the pixels that are shared among all classes when generating adversarial examples in the training stage. Experiment results show that SPAT improves adversarial robustness and achieves state-of-the-art results on CIFAR-10, CIFAR-100, and STL-10.
Chapter 1 Introduction
Chapter 2 Preliminaries
Chapter 3 Related Works
Chapter 4 Semantics-Preserving Adversarial Training
4.1 Problem of PGD-training
4.2 Semantics-Preserving Adversarial Training
4.3 Combining with Adversarial Training Variants
Chapter 5 Analysis of Adversarial Examples
5.1 Visualizing Various Adversarial Examples
5.2 Comparing the Attack Success Rate
Chapter 6 Experiments & Results
6.1 Evaluating Robustness
6.1.1 CIFAR-10 & CIFAR-100
6.1.2 CIFAR-10 with 500K Unlabeled Data
6.1.3 STL-10
6.2 Effect of Label Smoothing Hyperparameter α
Chapter 7 Conclusion & Future Work
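The PGD-style inner attack that adversarial training builds on can be sketched as follows. This is a minimal toy illustration using a hypothetical binary logistic-regression "model" (`w`, `b`), with an optional `mask` argument standing in for SPAT's class-shared-pixel constraint; it is not the thesis's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10, mask=None):
    """Generate an L-infinity PGD adversarial example for a binary
    logistic model p = sigmoid(w . x + b) under cross-entropy loss."""
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w                   # d(loss)/d(x_adv)
        step = alpha * np.sign(grad)         # gradient-ascent step on the loss
        if mask is not None:
            # SPAT-like idea (hypothetical form): restrict the perturbation
            # to pixels shared among all classes
            step = step * mask
        x_adv = np.clip(x_adv + step, x - eps, x + eps)  # stay in the eps-ball
    return x_adv
```

Adversarial training would then mix the resulting `(x_adv, y)` pairs into each training minibatch.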
Domain-adaptive Message Passing Graph Neural Network
Cross-network node classification (CNNC), which aims to classify nodes in a
label-deficient target network by transferring knowledge from a source
network with abundant labels, has drawn increasing attention recently. To
address CNNC, we propose the domain-adaptive message passing graph neural
network (DM-GNN), which integrates a graph neural network (GNN) with
conditional adversarial domain adaptation. DM-GNN is capable of learning
informative representations for node classification that are also
transferable across networks. First, a GNN encoder is constructed from dual
feature extractors that separate ego-embedding learning from
neighbor-embedding learning, so as to jointly capture the commonality and
discrimination between connected nodes. Second, a label-propagation node
classifier is proposed to refine each node's label prediction by combining
its own prediction with its neighbors' predictions. In addition, a
label-aware propagation scheme is devised for the labeled source network to
promote intra-class propagation while avoiding inter-class propagation, thus
yielding label-discriminative source embeddings. Third, conditional
adversarial domain adaptation takes the neighborhood-refined class-label
information into account during adversarial domain adaptation, so that the
class-conditional distributions across networks can be better matched.
Comparisons with eleven state-of-the-art methods demonstrate the
effectiveness of the proposed DM-GNN.
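The label-propagation refinement described above can be sketched as a single smoothing step over the graph. The function name and the exact blending rule below are assumptions for illustration, not DM-GNN's published equations.

```python
import numpy as np

def refine_predictions(P, A, lam=0.5):
    """Refine per-node class probabilities P (n x c) by blending each
    node's own prediction with the mean prediction of its neighbors,
    given a binary adjacency matrix A (n x n)."""
    deg = A.sum(axis=1, keepdims=True)
    deg = np.where(deg == 0, 1.0, deg)   # isolated nodes keep their own prediction
    neighbor_mean = (A @ P) / deg        # average over each node's neighbors
    return (1.0 - lam) * P + lam * neighbor_mean
```

For the labeled source network, the label-aware variant would restrict `A` to same-class edges, so that only intra-class predictions are propagated.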
- …