Automatically Discovering and Learning New Visual Categories with Ranking Statistics
We tackle the problem of discovering novel classes in an image collection
given labelled examples of other classes. This setting is similar to
semi-supervised learning, but significantly harder because there are no
labelled examples for the new classes. The challenge, then, is to leverage the
information contained in the labelled images in order to learn a
general-purpose clustering model and use the latter to identify the new classes
in the unlabelled data. In this work we address this problem by combining three
ideas: (1) we suggest that the common approach of bootstrapping an image
representation using only the labelled data introduces an unwanted bias, and
that this can be avoided by using self-supervised learning to train the
representation from scratch on the union of labelled and unlabelled data; (2)
we use rank statistics to transfer the model's knowledge of the labelled
classes to the problem of clustering the unlabelled images; and, (3) we train
the data representation by optimizing a joint objective function on the
labelled and unlabelled subsets of the data, improving both the supervised
classification of the labelled data, and the clustering of the unlabelled data.
We evaluate our approach on standard classification benchmarks and outperform
current methods for novel category discovery by a significant margin.
Comment: ICLR 2020, code: http://www.robots.ox.ac.uk/~vgg/research/auto_nove
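As a concrete illustration of idea (2), a pair of images can be pseudo-labelled as belonging to the same class when the sets of their top-k most activated feature dimensions coincide. The following NumPy sketch shows the pairwise test only; the function name, the toy vectors, and the choice of k are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def same_class_by_rank_stats(feat_a, feat_b, k=2):
    """Pseudo-label a pair as 'same class' when the sets of their
    top-k most activated feature dimensions coincide."""
    top_a = set(np.argsort(-np.abs(feat_a))[:k].tolist())
    top_b = set(np.argsort(-np.abs(feat_b))[:k].tolist())
    return top_a == top_b

# Toy features: a and b peak on the same dimensions {0, 2}; c does not.
a = np.array([0.9, 0.1, 0.8, 0.05])
b = np.array([0.7, 0.2, 0.95, 0.0])
c = np.array([0.1, 0.9, 0.0, 0.8])

print(same_class_by_rank_stats(a, b))  # True: both rank dims 0 and 2 highest
print(same_class_by_rank_stats(a, c))  # False: c's top-2 dims are {1, 3}
```

These pairwise decisions supply the supervision signal for clustering the unlabelled images, since no class labels exist for them.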
Inverting Adversarially Robust Networks for Image Synthesis
Recent research in adversarially robust classifiers suggests their
representations tend to be aligned with human perception, which makes them
attractive for image synthesis and restoration applications. Despite favorable
empirical results on a few downstream tasks, their advantages are limited to
slow and sensitive optimization-based techniques. Moreover, their use on
generative models remains unexplored. This work proposes the use of robust
representations as a perceptual primitive for feature inversion models, and
shows their benefits over standard non-robust image features. We
empirically show that adopting robust representations as an image prior
significantly improves the reconstruction accuracy of CNN-based feature
inversion models. Furthermore, it allows reconstructing images at multiple
scales out-of-the-box. Following these findings, we propose an
encoding-decoding network based on robust representations and show its
advantages for applications such as anomaly detection, style transfer, and
image denoising.
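The core recipe behind feature inversion, freezing the (robust) encoder and training a decoder to reconstruct the input from its features, can be sketched with a linear toy model. Everything here is an assumption made for illustration: a fixed random projection stands in for the robust CNN encoder, and closed-form least squares stands in for the trained CNN decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "encoder": a fixed random projection standing in for the
# robust feature extractor (illustrative linear toy, not the paper's CNN).
d_img, d_feat = 32, 24
W = rng.normal(size=(d_feat, d_img))

def encode(x):
    return W @ x  # frozen: never updated while fitting the decoder

# Feature-inversion "decoder": fit D to minimize ||D W X - X||^2 over a
# batch of toy images X (closed-form least squares instead of SGD).
X = rng.normal(size=(d_img, 256))   # toy image batch, one column per image
F = encode(X)                       # features from the frozen encoder
D = X @ np.linalg.pinv(F)           # inversion model

recon = D @ encode(X)
rel_err = np.linalg.norm(recon - X) / np.linalg.norm(X)
# Reconstruction is lossy because d_feat < d_img, but the decoder
# recovers most of the signal in the feature subspace.
print(rel_err)
```

The paper's claim is that when the frozen encoder is adversarially robust, the inverted images are perceptually faithful rather than merely low-error in feature space.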
Demystifying Assumptions in Learning to Discover Novel Classes
In learning to discover novel classes (L2DNC), we are given labeled data from
seen classes and unlabeled data from unseen classes, and we train clustering
models for the unseen classes. However, a rigorous definition of L2DNC has
remained unexplored, so its implicit assumptions are still unclear.
In this paper, we demystify assumptions behind L2DNC and find that high-level
semantic features should be shared among the seen and unseen classes. This
naturally motivates us to link L2DNC to meta-learning, which rests on exactly
the same assumption. Based on this finding, L2DNC is not only theoretically
solvable, but can also be empirically solved by meta-learning algorithms after
slight modifications. This methodology significantly reduces the amount of
unlabeled data needed for training and makes L2DNC more practical, as
demonstrated in experiments. The use of very limited data is also justified by
the application scenario of L2DNC: since it is unnatural to label only
seen-class data, L2DNC arises from sampling rather than labeling. Unseen-class
data are therefore collected along the way while collecting seen-class data,
which is why they are novel and must first be clustered.
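The link to meta-learning suggests an episodic scheme: repeatedly sample a few seen classes, hide their labels, and ask the model to cluster them, so that clustering genuinely novel classes becomes just another episode. Below is a minimal sketch of this episode construction only, with 2-D Gaussian blobs standing in for image features; all names and numbers are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled pool of seen classes: 2-D Gaussian blobs stand in for features.
n_classes, per_class = 6, 20
centers = rng.normal(scale=5.0, size=(n_classes, 2))
pool = {c: centers[c] + rng.normal(size=(per_class, 2))
        for c in range(n_classes)}

def sample_episode(n_way=3, n_shot=5):
    """One meta-learning episode: pick n_way seen classes and present their
    examples with labels hidden, mimicking the unseen-class clustering task."""
    classes = rng.choice(n_classes, size=n_way, replace=False)
    xs, ys = [], []
    for c in classes:
        idx = rng.choice(per_class, size=n_shot, replace=False)
        xs.append(pool[c][idx])
        ys.extend([c] * n_shot)
    x = np.vstack(xs)
    y = np.array(ys)  # ground truth, used only to score the clustering
    return x, y

x, y = sample_episode()
print(x.shape, y.shape)  # (15, 2) (15,)
```

In a full method, an inner-loop clustering of x would be scored against y to update the representation; at deployment, the same procedure clusters the truly unseen classes without any labels.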