Unsupervised Single Image Deraining with Self-supervised Constraints
Most existing single-image deraining methods learn supervised models from large sets of paired synthetic training data, which limits their generality, scalability, and practicality in real-world multimedia applications. Moreover, for lack of labeled supervision, directly applying existing unsupervised frameworks to image deraining yields low-quality recovery. We therefore propose an Unsupervised Deraining Generative Adversarial Network (UD-GAN) that tackles these problems by introducing self-supervised constraints derived from the intrinsic statistics of unpaired rainy and clean images. Specifically, we first design two collaboratively optimized modules, the Rain Guidance Module (RGM) and the Background Guidance Module (BGM), to take full advantage of rainy-image characteristics: the RGM learns to discriminate real rainy images from fake rainy images, which are created from the generator outputs with the help of the BGM, while the BGM exploits a hierarchical Gaussian-blur gradient error to ensure background consistency between the rainy input and the derained output. Second, a novel luminance-adjusting adversarial loss is integrated into the clean-image discriminator to account for the built-in luminance difference between real clean images and derained images. Comprehensive experiments on various benchmark datasets and training settings show that UD-GAN outperforms existing image deraining methods in both quantitative and qualitative comparisons.
Comment: 10 pages, 8 figures
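The abstract does not spell out the exact form of the BGM's hierarchical Gaussian-blur gradient error. A minimal numpy sketch of one plausible reading (the function names and the scale set `sigmas` are our own assumptions) compares image gradients after blurring at several scales:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum to 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur via 1-D row and column convolutions
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def grad(img):
    # forward-difference image gradients (last row/column padded by replication)
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def hierarchical_blur_gradient_error(rainy, derained, sigmas=(1.0, 2.0, 4.0)):
    # mean L1 distance between gradients of Gaussian-blurred images,
    # averaged over a hierarchy of blur scales; larger blur suppresses
    # thin rain streaks, so this term mostly compares backgrounds
    err = 0.0
    for s in sigmas:
        gx_r, gy_r = grad(blur(rainy, s))
        gx_d, gy_d = grad(blur(derained, s))
        err += np.mean(np.abs(gx_r - gx_d)) + np.mean(np.abs(gy_r - gy_d))
    return err / len(sigmas)
```

The error is zero for identical inputs and grows as the derained background drifts from the rainy input's low-frequency structure, which is the consistency the BGM is described as enforcing.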
RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining
As a common weather phenomenon, rain streaks adversely degrade image quality, so removing rain from an image has become an important problem in the field. To handle this ill-posed single-image deraining task, we build a novel deep architecture, the rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. Specifically, we first establish an RCD model for representing rain streaks and use the proximal gradient descent technique to design an iterative algorithm, containing only simple operators, for solving the model. By unfolding this algorithm, we then build the RCDNet, in which every network module has a clear physical meaning and corresponds to an operation of the algorithm. This interpretability greatly facilitates visualizing and analyzing what happens inside the network and why it works well at inference time. Moreover, to address the domain gap in real scenarios, we further design a novel dynamic RCDNet, in which the rain kernels are dynamically inferred from the input rainy image and then help shrink the space for rain-layer estimation to a few rain maps, ensuring good generalization when rain types differ between training and testing data. By training such an interpretable network end to end, all involved rain kernels and proximal operators are extracted automatically, faithfully characterizing the features of both the rain and the clean background layers, and thus naturally leading to better deraining performance. Comprehensive experiments substantiate the superiority of our method, especially its strong generalization to diverse testing scenarios and the good interpretability of all its modules. Code is available at
\emph{\url{https://github.com/hongwang01/DRCDNet}}
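The RCD model and its full proximal-gradient solver are in the paper; the following 1-D, single-kernel toy sketch (all names and parameters are our own simplifications, not the authors' code) shows the flavor of one unfolded iteration: a gradient step on the data term followed by soft-thresholding, the proximal operator of a sparsity prior on the rain map:

```python
import numpy as np

def soft_threshold(x, tau):
    # proximal operator of tau * ||.||_1: shrinks values toward zero
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_rain_map(o, b, kernel, n_iter=500, step=0.5, lam=0.001):
    """Toy RCD-style update: estimate a sparse rain map m whose rain
    layer is r = conv(m, kernel), by proximal gradient descent on
    0.5 * ||o - b - conv(m, kernel)||^2 + lam * ||m||_1,
    where o is the observed rainy signal and b the clean background."""
    m = np.zeros_like(o)
    for _ in range(n_iter):
        r = np.convolve(m, kernel, mode="same")
        residual = r - (o - b)
        # gradient of the data term w.r.t. m: correlate residual with kernel
        g = np.convolve(residual, kernel[::-1], mode="same")
        # gradient step, then the proximal (soft-thresholding) step
        m = soft_threshold(m - step * g, step * lam)
    return m
```

Unfolding replaces the fixed `kernel` and `soft_threshold` here with learned rain kernels and learned proximal operators, one network module per algorithm step; the dynamic variant additionally predicts the kernels from the input image.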
A Study on the Loss Surface of Deep Neural Networks and Several Applications of Deep Learning
Doctoral dissertation -- Seoul National University Graduate School: Department of Mathematical Sciences, College of Natural Sciences, August 2022. Advisor: Myungjoo Kang.
In this thesis, we study the loss surface of deep neural networks. Does the loss function of a deep neural network, like a convex function, have no bad local minima? Although the answer is well understood for piecewise-linear activations, little is known for general smooth activations. We show that bad local minima also exist for general smooth activations, and we characterize the types of such local minima. This provides a partial explanation toward understanding the loss surface of deep neural networks. Additionally, we present several applications of deep neural networks in learning theory, private machine learning, and computer vision.
Abstract
1 Introduction
2 Existence of local minimum in neural network
2.1 Introduction
2.2 Local Minima and Deep Neural Network
2.2.1 Notation and Model
2.2.2 Local Minima and Deep Linear Network
2.2.3 Local Minima and Deep Neural Network with piece-wise linear activations
2.2.4 Local Minima and Deep Neural Network with smooth activations
2.2.5 Local Valley and Deep Neural Network
2.3 Existence of local minimum for partially linear activations
2.4 Absence of local minimum in the shallow network for small N
2.5 Existence of local minimum in the shallow network
2.6 Local Minimum Embedding
3 Self-Knowledge Distillation via Dropout
3.1 Introduction
3.2 Related work
3.2.1 Knowledge Distillation
3.2.2 Self-Knowledge Distillation
3.2.3 Semi-supervised and Self-supervised Learning
3.3 Self Distillation via Dropout
3.3.1 Method Formulation
3.3.2 Collaboration with other methods
3.3.3 Forward versus reverse KL-Divergence
3.4 Experiments
3.4.1 Implementation Details
3.4.2 Results
3.5 Conclusion
4 Membership inference attacks against object detection models
4.1 Introduction
4.2 Background and Related Work
4.2.1 Membership Inference Attack
4.2.2 Object Detection
4.2.3 Datasets
4.3 Attack Methodology
4.3.1 Motivation
4.3.2 Gradient Tree Boosting
4.3.3 Convolutional Neural Network Based Method
4.3.4 Transfer Attack
4.4 Defense
4.4.1 Dropout
4.4.2 Differentially Private Algorithm
4.5 Experiments
4.5.1 Target and Shadow Model Setup
4.5.2 Attack Model Setup
4.5.3 Experiment Results
4.5.4 Transfer Attacks
4.5.5 Defense
4.6 Conclusion
5 Single Image Deraining
5.1 Introduction
5.2 Related Work
5.3 Proposed Network
5.3.1 Multi-Level Connection
5.3.2 Wide Regional Non-Local Block
5.3.3 Discrete Wavelet Transform
5.3.4 Loss Function
5.4 Experiments
5.4.1 Datasets and Evaluation Metrics
5.4.2 Datasets and Experiment Details
5.4.3 Evaluations
5.4.4 Ablation Study
5.4.5 Applications for Other Tasks
5.4.6 Analysis on multi-level features
5.5 Conclusion
Bibliography
Abstract (in Korean)
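The thesis abstract's starting point, that a smooth-activation network's loss is not convex and so bad local minima are not automatically ruled out, can be checked numerically on a one-neuron toy model. This illustration is ours, not the construction of Chapter 2: it shows only that convexity fails along a parameter segment, the easier half of the story.

```python
import numpy as np

def loss(w, v, x, y):
    # mean-squared error of the one-neuron network f(x) = v * tanh(w * x)
    return np.mean((v * np.tanh(w * x) - y) ** 2)

x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.tanh(x)              # data realizable exactly with (w, v) = (1, 1)

p1 = (1.0, 1.0)             # a global minimum: loss is exactly 0
p2 = (-1.0, -1.0)           # tanh is odd, so this gives the same function
mid = (0.0, 0.0)            # segment midpoint: the zero function

l1, l2, lm = loss(*p1, x, y), loss(*p2, x, y), loss(*mid, x, y)
# a convex loss would satisfy lm <= 0.5 * (l1 + l2) = 0;
# here lm = mean(tanh(x)^2) > 0, so the loss surface is not convex
```

Two symmetric global minima with a strictly worse midpoint already break convexity; whether such a landscape also contains genuinely bad local minima for general smooth activations is the subtler question the thesis answers in the affirmative.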