
    의미보쑴 μ λŒ€μ  ν•™μŠ΅

    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2021. Advisor: 이상ꡬ.

    Adversarial training is a defense technique that improves the adversarial robustness of a deep neural network (DNN) by including adversarial examples in the training data. In this paper, we identify an overlooked problem of adversarial training: these adversarial examples often have different semantics than the original data, introducing unintended biases into the model. We hypothesize that such non-semantics-preserving (and consequently ambiguous) adversarial data harm the robustness of the target models. To mitigate such unintended semantic changes of adversarial examples, we propose semantics-preserving adversarial training (SPAT), which encourages perturbation on the pixels that are shared among all classes when generating adversarial examples in the training stage. Experimental results show that SPAT improves adversarial robustness and achieves state-of-the-art results on CIFAR-10, CIFAR-100, and STL-10.

    Contents:
    Chapter 1 Introduction
    Chapter 2 Preliminaries
    Chapter 3 Related Works
    Chapter 4 Semantics-Preserving Adversarial Training
        4.1 Problem of PGD-training
        4.2 Semantics-Preserving Adversarial Training
        4.3 Combining with Adversarial Training Variants
    Chapter 5 Analysis of Adversarial Examples
        5.1 Visualizing Various Adversarial Examples
        5.2 Comparing the Attack Success Rate
    Chapter 6 Experiments & Results
        6.1 Evaluating Robustness
            6.1.1 CIFAR-10 & CIFAR-100
            6.1.2 CIFAR-10 with 500K Unlabeled Data
            6.1.3 STL-10
        6.2 Effect of Label Smoothing Hyperparameter Ξ±
    Chapter 7 Conclusion & Future Work
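    The constraint described in the abstract — perturbing only pixels shared among all classes during adversarial example generation — can be illustrated with a minimal masked PGD-style loop. This is a sketch, not the thesis's implementation: `grad_sign` stands in for a model's loss-gradient sign (a real implementation would recompute it each step), and `mask` is an assumed indicator of class-shared pixels.

    ```python
    import numpy as np

    def masked_pgd(x, grad_sign, mask, eps=8/255, alpha=2/255, steps=4):
        """Illustrative semantics-preserving perturbation: updates are
        applied only where mask == 1 (pixels assumed shared among all
        classes), then projected onto the L_inf ball of radius eps."""
        delta = np.zeros_like(x)
        for _ in range(steps):
            delta = delta + alpha * grad_sign * mask  # perturb shared pixels only
            delta = np.clip(delta, -eps, eps)         # L_inf projection
        return np.clip(x + delta, 0.0, 1.0)           # keep valid pixel range
    ```

    Pixels outside the mask are left untouched, which is the mechanism SPAT uses to discourage semantic drift in the generated adversarial examples.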

    Domain-adaptive Message Passing Graph Neural Network

    Cross-network node classification (CNNC), which aims to classify nodes in a label-deficient target network by transferring knowledge from a source network with abundant labels, has drawn increasing attention recently. To address CNNC, we propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates a graph neural network (GNN) with conditional adversarial domain adaptation. DM-GNN is capable of learning informative representations for node classification that are also transferable across networks. First, a GNN encoder is constructed with dual feature extractors to separate ego-embedding learning from neighbor-embedding learning, so as to jointly capture commonality and discrimination between connected nodes. Second, a label propagation node classifier is proposed to refine each node's label prediction by combining its own prediction with its neighbors' predictions. In addition, a label-aware propagation scheme is devised for the labeled source network to promote intra-class propagation while avoiding inter-class propagation, thus yielding label-discriminative source embeddings. Third, conditional adversarial domain adaptation is performed to take the neighborhood-refined class-label information into account during adversarial domain adaptation, so that the class-conditional distributions across networks can be better matched. Comparisons with eleven state-of-the-art methods demonstrate the effectiveness of the proposed DM-GNN.
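    The label propagation refinement described above can be sketched in a few lines: each node's class probabilities are mixed with the average of its neighbors' probabilities. This is a minimal illustration in the spirit of the abstract, not DM-GNN's actual classifier; the function name and the mixing weight `alpha` are assumptions.

    ```python
    import numpy as np

    def propagate_labels(probs, adj, alpha=0.5):
        """Refine per-node class probabilities by mixing each node's own
        prediction (weight alpha) with the mean prediction of its
        neighbors, given a binary adjacency matrix `adj`."""
        deg = adj.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0                  # avoid division by zero for isolated nodes
        neighbor_avg = (adj @ probs) / deg   # mean of neighbors' predictions
        return alpha * probs + (1 - alpha) * neighbor_avg
    ```

    The label-aware variant for the source network would additionally restrict propagation to same-class edges, so that only intra-class neighbors contribute to `neighbor_avg`.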