759 research outputs found

    Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems

    Get PDF
    Due to its convenience, biometric authentication, especially face authentication, has become increasingly mainstream and is thus now a prime target for attackers. Presentation attacks and face morphing are typical types of attack. Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks, in which a single wolf sample matches many enrolled user templates. In this work, we demonstrated that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems and that the master face concept can be generalized in some cases. Motivated by recent similar work in the fingerprint domain, we generated high-quality master faces using the state-of-the-art face generator StyleGAN in a process called latent variable evolution. Experiments demonstrated that even attackers with limited resources, using only pre-trained models available on the Internet, can mount master face attacks. Beyond demonstrating performance from the attacker's point of view, the results can also be used to clarify and improve the performance of face recognition systems and to harden face authentication systems. Comment: Accepted for publication in Proceedings of the 2020 International Joint Conference on Biometrics (IJCB 2020), Houston, US.
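    The latent variable evolution process described above can be sketched as a black-box search over the generator's latent space. Below is a minimal illustration, assuming hypothetical `generate_face` (a pre-trained StyleGAN forward pass) and `match_score` (a face-recognition matcher) placeholders; the CMA-ES loop uses the real `cma` package API, but the fitness definition and threshold are illustrative, not the paper's exact setup.

```python
import numpy as np
import cma  # pip install cma

LATENT_DIM = 512        # StyleGAN z-space dimensionality
MATCH_THRESHOLD = 0.5   # illustrative matcher decision threshold (assumption)

def generate_face(z):
    """Placeholder for a pre-trained StyleGAN generator (hypothetical)."""
    raise NotImplementedError

def match_score(face, template):
    """Placeholder for a face-recognition matcher (hypothetical)."""
    raise NotImplementedError

def fitness(z, templates):
    # CMA-ES minimizes, so return the negated fraction of matched templates.
    face = generate_face(z)
    matched = sum(match_score(face, t) > MATCH_THRESHOLD for t in templates)
    return -matched / len(templates)

def evolve_master_face(templates, sigma=0.5):
    es = cma.CMAEvolutionStrategy(np.zeros(LATENT_DIM), sigma)
    while not es.stop():
        candidates = es.ask()  # sample a population of latent vectors
        es.tell(candidates, [fitness(z, templates) for z in candidates])
    return generate_face(es.result.xbest)  # best "wolf" face found
```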

    Fast Adaptation of Deep Learning Vision Applications with Limited Data for Edge Devices

    Get PDF
    Ph.D. dissertation -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2022. Advisor: Sungjoo Yoo. The remarkable success of deep learning-based methods is mainly driven by large amounts of labeled data. Compared to conventional machine learning methods, deep learning-based methods can learn high-quality models from very large datasets. However, high-quality labeled data is expensive to obtain, and sometimes preparing a large dataset is impossible due to privacy concerns. Furthermore, humans show outstanding generalization performance without huge amounts of labeled data. Edge devices have limited computational capability compared to servers; in particular, running training on edge devices is challenging. However, training on edge devices is desirable when considering the domain-shift problem and privacy concerns. In this dissertation, I consider the adaptation process as a counterpart to conventional training for edge devices with low computational capability. Conventional classification assumes that training data and test data are drawn from the same distribution and that the training dataset is large. Unsupervised domain adaptation addresses the setting where training and test data are drawn from different distributions; the task is to label target-domain data using already existing labeled data and models. Few-shot learning assumes a small training dataset; the task is to classify new data based on only a few labeled examples. I present 1) co-optimization of backbone network and parameter selection in unsupervised domain adaptation for edge devices and 2) augmenting few-shot learning with supervised contrastive learning. Both methods target the low-labeled-data regime but under different scenarios. The first method boosts unsupervised domain adaptation by co-optimizing the backbone network and parameter selection for edge devices. Pre-trained ImageNet models are crucial when dealing with small datasets such as the Office datasets. By using an unsupervised domain adaptation algorithm that does not update the feature extractor, large and powerful pre-trained ImageNet models can be used to boost accuracy, and we report state-of-the-art accuracy with this method. Moreover, we experiment with small and lightweight pre-trained ImageNet models for edge devices. Co-optimization is performed end to end with a predictor-guided evolutionary search to reduce the total latency. We also consider pre-extraction of source features and a privacy-preserving unsupervised domain adaptation scenario, and we evaluate more realistic edge-device scenarios such as smaller target-domain data and object detection. Lastly, we conduct an experiment utilizing intermediate-domain data to reduce the algorithm latency further when data arrive continuously. We achieve 5.99x and 9.06x latency reductions on the Office31 and Office-Home datasets, respectively. The second method augments few-shot learning with supervised contrastive learning. Pre-trained ImageNet models cannot be used in the few-shot learning benchmark scenario because the feature extractor must be trained from scratch on the provided base dataset. Instead, we strengthen the feature extractor with a supervised contrastive learning method.
Combining supervised contrastive learning with information maximization and a prototype estimation technique, we report state-of-the-art accuracy with this method. We then translate the accuracy gain into a total runtime reduction by changing the feature extractor and applying early stopping, achieving a 3.87x latency reduction in the transductive 5-way 5-shot learning scenario. Our approach can be summarized as accuracy boosting followed by latency reduction: we first upgrade the feature extractor, either with a more advanced pre-trained ImageNet model or with supervised contrastive learning, to achieve state-of-the-art accuracy, and then optimize the method end to end with evolutionary search or early stopping to reduce latency. This two-stage approach is sufficient to achieve fast adaptation of deep learning vision applications with limited data for edge devices.
Table of contents:
1. Introduction
2. Background: 2.1 Dataset Size for Vision Applications; 2.2 ImageNet Pre-trained Models; 2.3 Augmentation Methods for ImageNet; 2.4 Contrastive Learning
3. Problem Definitions and Solutions Overview: 3.1 Problem Definitions (Unsupervised Domain Adaptation; Few-shot Learning); 3.2 Solutions Overview
4. Co-optimization of Backbone Network and Parameter Selection in Unsupervised Domain Adaptation for Edge Device: 4.1 Introduction; 4.2 Related Works; 4.3 Methodology (Examining an Unsupervised Domain Adaptation Method; Boosting Accuracy with Pre-Trained ImageNet Models; Boosting Accuracy for Edge Device; Co-optimization of Backbone Network and Parameter Selection); 4.4 Experiments (ImageNet and Unsupervised Domain Adaptation Accuracy; Accuracy with Once-For-All Network; Comparison with State-of-the-Art Results; Co-optimization for Edge Device; Pre-extraction of Source Feature; Results for Small Target Data Scenario; Results for Object Detection; Results for Classifier Fitting Using Intermediate Domain; Summary); 4.5 Conclusion
5. Augmenting Few-Shot Learning with Supervised Contrastive Learning: 5.1 Introduction; 5.2 Related Works; 5.3 Methodology (Examining a Few-shot Learning Method; Augmenting Few-shot Learning with Supervised Contrastive Learning); 5.4 Experiments (Comparison to the State-of-the-Art; Ablation Study; Domain-Shift; Increasing the Number of Ways; Runtime Analysis; Limitations); 5.5 Conclusion
6. Conclusion
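    As a concrete illustration of the supervised contrastive learning used in the second method above, here is a minimal PyTorch sketch of the supervised contrastive (SupCon) loss in its standard formulation (Khosla et al., 2020). This is a hedged sketch of the standard loss, not the thesis's actual implementation; in the thesis's setting it would presumably be applied while training the feature extractor on the base dataset.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    z = F.normalize(features, dim=1)                 # L2-normalize embeddings
    sim = z @ z.T / temperature                      # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    # positives: same label, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)  # avoid -inf * 0 = NaN below
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                           # anchors with >= 1 positive
    mean_log_prob_pos = (log_prob * pos_mask).sum(1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```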

    Confidence-and-Refinement Adaptation Model for Cross-Domain Semantic Segmentation

    Get PDF
    With the rapid development of convolutional neural networks (CNNs), significant progress has been achieved in semantic segmentation. Despite this success, such deep learning approaches require large-scale real-world datasets with pixel-level annotations. Because pixel-level labeling of semantics is extremely laborious, many researchers turn to synthetic data with free annotations. Due to the clear domain gap, however, a segmentation model trained on synthetic images tends to perform poorly on real-world datasets. Unsupervised domain adaptation (UDA) for semantic segmentation, which aims at alleviating this domain discrepancy, has recently gained increasing research attention. Existing methods in this scope either simply align features or outputs across the source and target domains, or must deal with complex image processing and post-processing problems. In this work, we propose a novel multi-level UDA model named the Confidence-and-Refinement Adaptation Model (CRAM), which contains a confidence-aware entropy alignment (CEA) module and a style feature alignment (SFA) module. Through CEA, adaptation is done locally via adversarial learning in the output space, making the segmentation model pay attention to high-confidence predictions. Furthermore, to enhance model transfer in the shallow feature space, the SFA module is applied to minimize the appearance gap across domains. Experiments on two challenging UDA benchmarks, "GTA5-to-Cityscapes" and "SYNTHIA-to-Cityscapes", demonstrate the effectiveness of CRAM. We achieve performance comparable to existing state-of-the-art works, with advantages in simplicity and convergence speed.
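    To make the CEA idea concrete, here is a hedged PyTorch sketch of a generator-side adversarial loss computed on per-pixel entropy maps of the segmentation output, with confidence expressed as (1 - entropy). The weighting scheme and discriminator interface are assumptions for illustration, not CRAM's exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def entropy_map(logits):
    """Per-pixel Shannon entropy of the softmax output, normalized to [0, 1].
    logits: (B, C, H, W) -> returns (B, 1, H, W)."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1, keepdim=True)
    return ent / math.log(logits.size(1))

def cea_adversarial_loss(discriminator, target_logits):
    """Generator-side loss: push target entropy maps toward the source label (1),
    weighting confident (low-entropy) pixels more heavily (an assumption)."""
    ent = entropy_map(target_logits)
    d_out = discriminator(ent)                 # assumed patch-level predictions
    confidence = (1.0 - ent).detach()
    bce = F.binary_cross_entropy_with_logits(
        d_out, torch.ones_like(d_out), reduction='none')
    w = F.interpolate(confidence, size=d_out.shape[2:], mode='bilinear',
                      align_corners=False)     # match discriminator resolution
    return (w * bce).mean()
```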

    Deep learning for time series classification

    Full text link
    Time series analysis is a field of data science concerned with analyzing sequences of numerical values ordered in time. Time series are particularly interesting because they allow us to visualize and understand the evolution of a process over time, and their analysis can reveal trends, relationships, and similarities across the data. Numerous fields contain data in the form of time series: health care (electrocardiograms, blood sugar, etc.), activity recognition, remote sensing, finance (stock market prices), industry (sensors), etc. Time series classification consists of constructing algorithms dedicated to automatically labeling time series data. The sequential aspect of time series data requires the development of algorithms able to harness this temporal property, making existing off-the-shelf machine learning models for traditional tabular data suboptimal for the underlying task. In this context, deep learning has emerged in recent years as one of the most effective methods for tackling the supervised classification task, particularly in the field of computer vision. The main objective of this thesis was to study and develop deep neural networks specifically constructed for the classification of time series data. We thus carried out the first large-scale experimental study comparing the existing deep methods and positioning them against other, non-deep-learning-based, state-of-the-art methods. Subsequently, we made numerous contributions in this area, notably in the context of transfer learning, data augmentation, ensembling, and adversarial attacks. Finally, we also proposed a novel architecture, based on the famous Inception network (Google), which ranks among the most efficient to date. Comment: PhD thesis.
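    The Inception-based architecture mentioned above applies Inception-style modules to 1D sequences. The module below is a minimal PyTorch sketch of such a building block, with illustrative filter counts and (odd, for same-length padding) kernel sizes rather than the thesis's exact hyperparameters.

```python
import torch
import torch.nn as nn

class InceptionModule1D(nn.Module):
    """Inception-style block for time series: parallel convolutions with
    different receptive fields, plus a max-pool branch, concatenated."""
    def __init__(self, in_channels, n_filters=32, kernel_sizes=(9, 19, 39)):
        super().__init__()
        self.bottleneck = nn.Conv1d(in_channels, n_filters, 1, bias=False)
        self.convs = nn.ModuleList([
            nn.Conv1d(n_filters, n_filters, k, padding=k // 2, bias=False)
            for k in kernel_sizes
        ])
        self.pool_conv = nn.Sequential(
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(in_channels, n_filters, 1, bias=False),
        )
        self.bn = nn.BatchNorm1d(n_filters * (len(kernel_sizes) + 1))
        self.relu = nn.ReLU()

    def forward(self, x):  # x: (batch, channels, time)
        b = self.bottleneck(x)
        branches = [conv(b) for conv in self.convs] + [self.pool_conv(x)]
        return self.relu(self.bn(torch.cat(branches, dim=1)))

# Example: InceptionModule1D(in_channels=1)(torch.randn(8, 1, 128))
```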

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Full text link
    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. Comment: 145 pages with 32 figures.
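    As a small illustration of two of the pre-processing steps the review covers, the sketch below standardizes each spectral band and chips a large scene into overlapping fixed-size tiles. Chip size and overlap are illustrative choices here, not recommendations from the paper.

```python
import numpy as np

def normalize_bands(image):
    """Standardize each spectral band to zero mean, unit variance.
    image: (H, W, bands) float array."""
    mean = image.mean(axis=(0, 1), keepdims=True)
    std = image.std(axis=(0, 1), keepdims=True)
    return (image - mean) / (std + 1e-8)

def chip(image, size=256, overlap=32):
    """Slide a size x size window with the given overlap, yielding chips
    for segmentation training (boundary handling omitted for brevity)."""
    stride = size - overlap
    h, w = image.shape[:2]
    for top in range(0, max(h - size, 0) + 1, stride):
        for left in range(0, max(w - size, 0) + 1, stride):
            yield image[top:top + size, left:left + size]
```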

    Multi-modality Medical Image Segmentation with Unsupervised Domain Adaptation

    Get PDF
    Advances in medical imaging have greatly aided accurate and fast medical diagnosis, and recent deep learning developments have enabled the efficient and cost-effective analysis of medical images. Among the different image processing tasks, medical segmentation is one of the most crucial because it provides the class, location, size, and shape of the subject of interest, which is invaluable and essential for diagnostics. Nevertheless, acquiring annotations for training data usually requires expensive manpower and specialised expertise, making supervised training difficult. To overcome these problems, unsupervised domain adaptation (UDA) has been adopted to bridge knowledge between different domains. Despite the appearance dissimilarities of different modalities such as MRI and CT, researchers have concluded that structural features of the same anatomy are universal across modalities, which opened up the study of multi-modality image segmentation with UDA methods. Traditional UDA research tackled the domain-shift problem by minimising the distance between the source and target distributions in latent spaces with the help of advanced mathematics. However, with the recent development of the generative adversarial network (GAN), adversarial UDA methods have shown outstanding performance by producing synthetic images to mitigate the domain gap when training a segmentation network for the target domain. Most existing studies focus on modifying the network architecture, but few investigate the generative adversarial training strategy. Inspired by the recent success of state-of-the-art data augmentation techniques in classification tasks, we designed a novel mix-up strategy to assist GAN training in better synthesising structural details, consequently leading to better segmentation results. In this thesis, we propose SynthMix, an add-on module with a natural yet effective training policy that can promote synthetic quality without altering the network architecture. SynthMix is a mix-up synthesis scheme designed for integration with the adversarial logic of GAN networks. Traditional GAN approaches judge an image as a whole, which can easily be dominated by discriminative features, resulting in little improvement of delicate structures in the synthesis. In contrast, SynthMix uses a data augmentation technique to reinforce detail transformation in local regions: it coherently mixes up aligned real and synthetic samples at local regions to stimulate the generation of fine-grained features, examined by an associated inspector for domain-specific details. We evaluated our method on two segmentation benchmarks across three publicly available datasets. Our method showed a significant performance gain compared with existing state-of-the-art approaches.
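    The core mix-up operation can be sketched as block-wise blending of aligned real and synthetic images. The grid size and Bernoulli patch mask below are assumptions for illustration, not the thesis's exact scheme; the associated inspector would then be trained to identify which local regions are synthetic, pushing the generator to render convincing fine-grained detail everywhere.

```python
import torch
import torch.nn.functional as F

def synthmix(real, synth, grid=4):
    """Mix aligned real/synthetic images on a grid x grid patch mask.
    real, synth: (B, C, H, W) tensors; returns the mixed image and the mask
    (1 = synthetic patch), which an inspector network could try to recover."""
    b, _, h, w = real.shape
    mask = (torch.rand(b, 1, grid, grid, device=real.device) > 0.5).float()
    mask = F.interpolate(mask, size=(h, w), mode='nearest')  # block-wise mask
    mixed = mask * synth + (1 - mask) * real
    return mixed, mask
```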