Robustness of Unsupervised Representation Learning without Labels
Unsupervised representation learning leverages large unlabeled datasets and
is competitive with supervised learning. However, non-robust encoders may
compromise the robustness of downstream tasks. Robust representation encoders
have therefore recently become of interest. Still, all prior work evaluates
robustness via a downstream classification task. Instead, we propose a family
of unsupervised robustness measures, which are model-agnostic, task-agnostic,
and label-free. We benchmark state-of-the-art representation encoders and show
that none dominates the rest. We offer unsupervised extensions of the FGSM and
PGD attacks. When used in adversarial training, they improve most unsupervised
robustness measures, including certified robustness. We validate our results
against a linear probe and show that, for MOCOv2, adversarial training yields
3 times higher certified accuracy, a 2-fold decrease in impersonation-attack
success rate, and considerable improvements in certified robustness.
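An unsupervised FGSM extension of the kind described can be sketched as follows: instead of maximizing a classification loss (which needs labels), the attack maximizes the distance between the clean and perturbed representations. The linear `encoder` below is an illustrative stand-in for a deep network, and the random initialization of the perturbation is an assumption (the gradient of the representation distance vanishes exactly at zero perturbation); this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def encoder(W, x):
    # Toy linear encoder standing in for a deep representation network.
    return W @ x

def unsup_fgsm(W, x, eps=0.1, init_scale=1e-3, rng=None):
    """One-step unsupervised FGSM sketch: perturb x in the direction that
    maximizes the squared L2 distance between clean and perturbed
    representations. A small random delta seeds the gradient, since the
    distance gradient is zero at delta = 0."""
    rng = np.random.default_rng(0) if rng is None else rng
    delta = init_scale * rng.standard_normal(x.shape)
    z_clean = encoder(W, x)
    # Analytic gradient of ||encoder(x + delta) - z_clean||^2 w.r.t. delta
    # (for the linear toy encoder; a deep encoder would use autograd).
    grad = 2.0 * W.T @ (encoder(W, x + delta) - z_clean)
    return x + eps * np.sign(grad)
```

An unsupervised PGD variant would simply iterate this step with projection back onto the eps-ball around x.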
Source-Free Domain Adaptation for Real-world Image Dehazing
Deep learning-based image dehazing methods trained on synthetic datasets
have achieved remarkable performance but suffer from dramatic performance
degradation on real hazy images due to domain shift. Although certain Domain
Adaptation (DA) dehazing methods have been presented, they inevitably require
access to the source dataset to reduce the gap between the source synthetic and
target real domains. To address these issues, we present a novel Source-Free
Unsupervised Domain Adaptation (SFUDA) image dehazing paradigm, in which only a
well-trained source model and an unlabeled target real hazy dataset are
available. Specifically, we devise the Domain Representation Normalization
(DRN) module, which matches the representation of real hazy domain features to
that of the synthetic domain to bridge the gap. With our plug-and-play DRN
module, existing well-trained source networks can be adapted using unlabeled
real hazy images. Besides, unsupervised losses, consisting of frequency losses
and physical prior losses, are applied to guide the learning of the DRN module.
The frequency losses provide structure and style constraints, while the prior
loss exploits the inherent statistical properties of haze-free images. Equipped
with our DRN module and unsupervised losses, existing source dehazing models
are able to dehaze unlabeled real hazy images. Extensive experiments on
multiple baselines demonstrate the validity and superiority of our method
visually and quantitatively.
Comment: Accepted to ACM MM 202
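A feature-normalization module that matches real-domain feature statistics to those of the synthetic domain can be sketched AdaIN-style: normalize per-channel real features, then re-scale with synthetic-domain statistics. This is a guess at the mechanism behind a DRN-like module, not the paper's actual design; the function name and statistics arguments are illustrative.

```python
import numpy as np

def domain_repr_norm(feat_real, syn_mean, syn_std, eps=1e-5):
    """Sketch of a DRN-style normalization: whiten real-domain features
    per channel, then re-scale with synthetic-domain statistics so the
    downstream (synthetic-trained) dehazing network sees familiar inputs.
    feat_real: (C, H, W) feature map; syn_mean/syn_std: (C, 1, 1) stats."""
    mu = feat_real.mean(axis=(1, 2), keepdims=True)    # per-channel mean
    sigma = feat_real.std(axis=(1, 2), keepdims=True)  # per-channel std
    normalized = (feat_real - mu) / (sigma + eps)
    return normalized * syn_std + syn_mean
```

In the paper's setting, the synthetic-domain statistics would come from the frozen source model's activations, and the module's parameters would be trained with the frequency and prior losses rather than fixed as here.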
Representation Learning for Continuous Action Spaces is Beneficial for Efficient Policy Learning
Deep reinforcement learning (DRL) breaks through the bottlenecks of
traditional reinforcement learning (RL) with the help of the perception
capability of deep learning and has been widely applied to real-world
problems. Model-free RL, a class of efficient DRL methods, learns state
representations jointly with the policy in an end-to-end manner when facing
large-scale continuous state and action spaces. However, training such a large
policy model requires a large number of trajectory samples and much training
time, and the learned policy often fails to generalize to large-scale action
spaces, especially continuous ones. To address this issue, in this paper we
propose an efficient policy learning method in latent state and action spaces.
More specifically, we extend the idea of state representations to action
representations for better policy generalization. Meanwhile, we divide the
whole learning task into learning the large-scale representation models in an
unsupervised manner and learning the small-scale policy model in the RL
manner. The small policy model facilitates policy learning, while the large
representation model preserves generalization and expressiveness. Finally, the
effectiveness of the proposed method is demonstrated by MountainCar, CarRacing
and Cheetah experiments.
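The split described above (large unsupervised representation models, small RL-trained policy) can be sketched as a pipeline of three maps: a state encoder into a latent state space, a small policy over latent states that emits a latent action, and an action decoder back to the continuous environment action. All matrices below are illustrative stand-ins for the learned models; the class name and shapes are assumptions, not the paper's architecture.

```python
import numpy as np

class LatentPolicy:
    """Sketch of policy learning in latent state/action spaces:
    encode state -> latent state, apply a small policy -> latent action,
    decode -> continuous environment action. Only `policy` would be
    trained with RL; the encoder/decoder are learned unsupervised."""

    def __init__(self, state_enc, policy, action_dec):
        self.state_enc = state_enc    # (d_z, d_s) state encoder
        self.policy = policy          # (d_u, d_z) small policy head
        self.action_dec = action_dec  # (d_a, d_u) action decoder

    def act(self, s):
        z = np.tanh(self.state_enc @ s)   # latent state
        u = np.tanh(self.policy @ z)      # latent action, bounded
        return self.action_dec @ u        # continuous env action
```

Because only the small `policy` matrix is optimized with RL, the trajectory-sample and training-time cost scales with the latent dimensions rather than the full state/action dimensions, which is the efficiency argument the abstract makes.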