3 research outputs found
RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining
As a common weather phenomenon, rain streaks adversely degrade image quality, so removing rain from a single image has become an important problem in the field. To handle this ill-posed single-image deraining task, in this paper we build a novel deep architecture, called the rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. Specifically, we first establish an RCD model for representing rain streaks and use the proximal gradient descent technique to design an iterative algorithm, containing only simple operators, for solving the model. By unfolding this algorithm, we then build the RCDNet, in which every network module has a clear physical meaning and corresponds to an operation of the algorithm. This interpretability makes it easy to visualize and analyze what happens inside the network and why it works well at inference time. Moreover, to account for the domain gap in real scenarios, we further design a novel dynamic RCDNet, in which the rain kernels are dynamically inferred from the input rainy image and then help shrink the space for rain-layer estimation with only a few rain maps, ensuring good generalization when the rain types of the training and testing data are inconsistent. By training such an interpretable network end to end, all involved rain kernels and proximal operators can be extracted automatically, faithfully characterizing the features of both the rain and clean background layers, and thus naturally leading to better deraining performance. Comprehensive experiments substantiate the superiority of our method, especially its generalization to diverse testing scenarios and the interpretability of all its modules. Code is available at \emph{\url{https://github.com/hongwang01/DRCDNet}}
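The unfolded iteration at the core of this approach can be illustrated with a toy 1-D version of the RCD idea: a rainy signal O is modelled as a background B plus a single rain kernel convolved with a sparse rain map M, and M is recovered by proximal gradient steps (a gradient step on the data-fit term, soft-thresholding as the proximal operator). The single-kernel setting, the known background, and all names below are simplifications for illustration, not the paper's actual network.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: shrinks entries toward zero,
    # promoting a sparse rain map.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_rain_map(O, B, kernel, M, step=0.1, tau=0.01, iters=100):
    """Recover a sparse rain map M for the toy model O ~ B + kernel * M
    (1-D convolution) by iterating: gradient step on the data-fit term,
    then the proximal (soft-threshold) step. Each iteration corresponds
    to one unrolled stage of the network."""
    for _ in range(iters):
        R = np.convolve(M, kernel, mode="same")               # current rain layer
        resid = (B + R) - O                                   # data-fit residual
        grad = np.convolve(resid, kernel[::-1], mode="same")  # correlation = gradient w.r.t. M
        M = soft_threshold(M - step * grad, step * tau)
    return M
```

In the unfolded network, the hand-chosen kernel, step size, and threshold above become learnable, which is what end-to-end training adapts to the data.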
A Variational Approach to Image Restoration Problems
Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Mathematical Sciences, February 2013. Advisor: Myungjoo Kang.
Image restoration has been an active research area in image processing and computer vision during the past several decades. We explore variational partial
differential equations (PDE) models for image restoration problems. We begin our discussion by reviewing the classical models by which the work of this dissertation is highly motivated. The dissertation is divided into two main subjects. The first topic is image denoising, for which we propose a non-convex hybrid total variation model and then apply an iterative reweighted algorithm to solve it. The second topic is image decomposition, in which we separate an image into a structural component and an oscillatory component using a local gradient constraint.
Abstract
1 Introduction
1.1 Image restoration
1.2 Brief overview of the dissertation
2 Previous works
2.1 Image denoising
2.1.1 Fundamental model
2.1.2 Higher-order model
2.1.3 Hybrid model
2.1.4 Non-convex model
2.2 Image decomposition
2.2.1 Meyer's model
2.2.2 Nonlinear filter
3 Non-convex hybrid TV for image denoising
3.1 Variational model with non-convex hybrid TV
3.1.1 Non-convex TV model and non-convex HOTV model
3.1.2 The proposed model: non-convex hybrid TV model
3.2 Iterative reweighted hybrid total variation algorithm
3.3 Numerical experiments
3.3.1 Parameter values
3.3.2 Comparison between the non-convex TV model and the non-convex HOTV model
3.3.3 Comparison with other non-convex higher-order regularizers
3.3.4 Comparison between two non-convex hybrid TV models
3.3.5 Comparison with Krishnan et al. [39]
3.3.6 Comparison with the state of the art
4 Image decomposition
4.1 Local gradient constraint
4.1.1 Texture estimator
4.2 The proposed model
4.2.1 Algorithm: Anisotropic TV-L2
4.2.2 Algorithm: Isotropic TV-L2
4.2.3 Algorithm: Isotropic TV-L1
4.3 Numerical experiments and discussion
5 Conclusion and future works
Abstract (in Korean)
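The iterative reweighted idea used in Chapter 3 can be sketched in a stripped-down 1-D form: the non-convex TV term sum |Du|^p (0 < p < 1) is handled by repeatedly solving a weighted quadratic problem whose weights come from the previous iterate. This is a generic IRLS sketch under our own simplifications (a single first-order regularizer and smoothed weights), not the thesis's hybrid algorithm.

```python
import numpy as np

def irls_tv_denoise_1d(f, lam=1.0, p=0.5, eps=1e-3, outer=30):
    """Iteratively reweighted least-squares sketch of non-convex TV denoising:
    minimize  0.5 ||u - f||^2 + lam * sum |Du|^p   (0 < p < 1, 1-D signal).
    Each outer step fixes the weights and solves the resulting weighted
    quadratic problem exactly via a linear system."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)   # forward-difference operator, (n-1) x n
    u = f.copy()
    for _ in range(outer):
        g = D @ u
        # Smoothed reweighting |Du|^(p-2); small eps avoids division by zero
        # at flat regions and makes the weights well defined.
        w = p * (np.abs(g) ** 2 + eps ** 2) ** (p / 2 - 1)
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)
    return u
```

Flat regions get huge weights (strong smoothing) while large jumps get small weights, which is how the non-convex penalty preserves edges better than convex TV.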
Designing content-based adversarial perturbations and distributed one-class learning for images.
PhD thesis. This thesis covers two privacy-related problems for images: designing adversarial perturbations that can be added to input images to protect the private content a user shares with other users from undesirable automatic inference by classifiers, and training privacy-preserving classifiers on images that are distributed among their owners (image holders) and contain their private information.
Adversarial images can be easily detected by denoising algorithms when high-frequency spatial perturbations are used, or can be noticed by humans when the perturbations are large and irrelevant to the content of the images. Moreover, adversarial images do not transfer to unseen classifiers, as the perturbations are small (in the lp norm). In the first part of the thesis, we propose content-based adversarial perturbations that account for the content of the images (objects, colour, structure and details), human perception, and the semantics of the class labels, addressing the above-mentioned limitations. Our adversarial colour perturbations selectively modify the colours of objects within chosen ranges that humans perceive as natural. In addition to these natural-looking adversarial images, our structure-aware perturbations exploit traditional image processing filters, such as the detail enhancement filter and the Gamma correction filter, to generate enhanced adversarial images. We validate the proposed perturbations against three classifiers trained on ImageNet. Experiments show that, compared with seven state-of-the-art perturbations, the proposed perturbations are more robust and transferable, and cause misclassification with a label that is semantically different from the label of the original image.
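The filter-based idea can be sketched generically: search a small family of image-processing filters (here, gamma correction) for a parameter that changes a classifier's prediction while keeping the image natural-looking. The `predict` callable and the gamma grid below are our own placeholders for illustration, not the thesis's actual attack.

```python
import numpy as np

def filter_based_attack(image, predict, gammas=(0.6, 0.8, 1.25, 1.6)):
    """Try a small family of gamma-correction filters on an image in [0, 1]
    and return the first filtered image whose predicted label differs from
    the original prediction. `predict` maps an image to a class label."""
    original_label = predict(image)
    for gamma in gammas:
        candidate = np.clip(image, 0.0, 1.0) ** gamma  # gamma-corrected image
        if predict(candidate) != original_label:
            return candidate, gamma
    return None, None  # no label change within the allowed filter family
```

Because the perturbation is a global, content-preserving filter rather than per-pixel high-frequency noise, it is harder to remove by denoising and tends to look natural to humans.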
Classifiers are often trained by relying on centralised collection and aggregation of images, which can lead to significant privacy concerns by disclosing sensitive information about the image holders. In the second part of the thesis, we propose a privacy-preserving technique, called distributed one-class learning, that enables training to take place on edge devices, so image holders do not need to centralise their images. Each image holder independently uses their images to locally train a reconstructive adversarial network as their one-class classifier. As sending the model parameters to the service provider would reveal sensitive information, we secret-share the parameters between two non-colluding service providers. We then provide cryptographically private prediction services through a mixture of multi-party computation protocols to achieve substantial gains in complexity and speed. A major advantage of the proposed technique is that none of the image holders or service providers can access the parameters or images of other image holders. We quantify the benefits of the proposed technique and compare its
performance with centralised training on three privacy-sensitive image-based tasks. Experiments show that the proposed technique achieves classification performance similar to that of non-private centralised training, while not violating the privacy of the image holders.
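The secret-sharing step can be illustrated with plain two-party additive sharing over a ring: each server's share is uniformly random on its own, and only the sum of the two shares reveals the parameters. This is a generic sketch of additive secret sharing, not the thesis's full multi-party computation protocol.

```python
import numpy as np

MOD = 2 ** 32  # ring size for additive secret sharing

def share(secret, rng):
    """Split an integer-encoded parameter vector into two additive shares.
    Each share alone is uniformly random (reveals nothing), but
    share_a + share_b = secret (mod MOD)."""
    share_a = rng.integers(0, MOD, size=secret.shape, dtype=np.uint64)
    # uint64 subtraction wraps mod 2**64; reducing mod 2**32 afterwards
    # gives the correct additive share in the ring.
    share_b = (secret.astype(np.uint64) - share_a) % MOD
    return share_a, share_b

def reconstruct(share_a, share_b):
    # Addition of shares is compatible with addition of secrets (mod MOD),
    # which is what lets the two servers compute on shares without seeing
    # the underlying parameters.
    return (share_a + share_b) % MOD
```

In the two-server setting, one share goes to each non-colluding provider; neither can recover the model parameters unless both collude.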