Zero-Shot Learning -- A Comprehensive Evaluation of the Good, the Bad and the Ugly
Due to the importance of zero-shot learning, i.e., classifying images for which
there is no labeled training data, the number of proposed approaches has
recently increased steadily. We argue that it is time to take a step back and
analyze the status quo of the area. The purpose of this paper is three-fold.
First, since there is no agreed-upon zero-shot learning benchmark, we define a
new benchmark by unifying both the evaluation protocols and the data splits of
publicly available datasets used for this task.
This is an important contribution, as published results are often not comparable
and are sometimes even flawed due to, e.g., pre-training on zero-shot test
classes. Moreover, we propose a new zero-shot learning dataset, Animals with
Attributes 2 (AWA2), which we make publicly available both in terms of image
features and the images themselves. Second, we compare and analyze a
significant number of state-of-the-art methods in depth, both in the classic
zero-shot setting and in the more realistic generalized zero-shot setting.
Finally, we discuss in detail the limitations of the current state of the area,
which can serve as a basis for advancing it.
Comment: Accepted by TPAMI in July 2018. We introduce Proposed Split Version
2.0 (please download it from our project webpage). arXiv admin note:
substantial text overlap with arXiv:1703.0439
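The core idea of zero-shot classification, recognizing classes that were never seen during training by matching them through shared attribute descriptions, can be illustrated with a toy sketch. This is a generic attribute-matching baseline, not any of the evaluated methods from the paper; the class signatures and attribute scores below are invented for illustration:

```python
import numpy as np

# Toy zero-shot classification by attribute compatibility: every class,
# seen or unseen, is described by a binary attribute signature; a test
# image's predicted attribute scores are matched to the closest signature.

# Rows are classes, columns are attributes (e.g. "striped", "hooved").
# The unseen class has a signature but contributed no training images.
class_attributes = np.array([
    [1, 0, 1, 0],   # seen class 0
    [0, 1, 0, 1],   # seen class 1
    [1, 1, 0, 0],   # unseen class 2
])

def predict_class(attribute_scores):
    """Return the class whose attribute signature has the highest
    cosine similarity with the image's predicted attribute scores."""
    sims = class_attributes @ attribute_scores
    sims = sims / (np.linalg.norm(class_attributes, axis=1)
                   * np.linalg.norm(attribute_scores) + 1e-9)
    return int(np.argmax(sims))

# An image whose attribute predictor fires strongly on attributes 0 and 1:
scores = np.array([0.9, 0.8, 0.1, 0.05])
print(predict_class(scores))  # -> 2, the unseen class
```

The generalized zero-shot setting evaluated in the paper differs only in that the candidate set at test time mixes seen and unseen classes, which is exactly what the matrix above encodes.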
Class Proportion Estimation with Application to Multiclass Anomaly Rejection
This work addresses two classification problems that fall under the heading
of domain adaptation, wherein the distributions of training and testing
examples differ. The first is class proportion estimation: estimating the
class proportions in an unlabeled testing data set given labeled examples of
each class. Compared to
previous work on this problem, our approach has the novel feature that it does
not require labeled training data from one of the classes. This property allows
us to address the second domain adaptation problem, namely, multiclass anomaly
rejection. Here, the goal is to design a classifier that has the option of
assigning a "reject" label, indicating that the instance did not arise from a
class present in the training data. We establish consistent learning strategies
for both of these domain adaptation problems, which to our knowledge are the
first of their kind. We also implement the class proportion estimation
technique and demonstrate its performance on several benchmark data sets.
Comment: Accepted to AISTATS 2014. 15 pages. 2 figures
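The class proportion estimation problem can be made concrete with a small moment-matching sketch: pick mixing weights so that the mixture of per-class training means best matches the unlabeled test mean. This is a simple illustrative estimator, not the consistent method the paper establishes, and all the synthetic data below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two classes with well-separated feature means; 500 labeled samples each.
mu = np.array([[0.0, 0.0], [4.0, 4.0]])
train = {c: mu[c] + rng.normal(size=(500, 2)) for c in range(2)}

# Unlabeled test set drawn with true proportions 0.7 / 0.3.
n = 2000
is_class1 = rng.random(n) < 0.3
test = np.where(is_class1[:, None], mu[1], mu[0]) + rng.normal(size=(n, 2))

# Solve min_p || p*m0 + (1-p)*m1 - m_test ||^2 over p in [0, 1],
# where m0, m1 are the empirical class means. Closed form for 2 classes.
m0 = train[0].mean(axis=0)
m1 = train[1].mean(axis=0)
m_test = test.mean(axis=0)
d = m0 - m1
p = float(np.clip(d @ (m_test - m1) / (d @ d), 0.0, 1.0))
print(round(p, 2), round(1 - p, 2))  # roughly 0.7 and 0.3
```

Note that this toy version still needs labeled samples from both classes; the paper's contribution is precisely to drop that requirement for one class, which is what enables the "reject" option for instances from classes absent at training time.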
Hard-aware Instance Adaptive Self-training for Unsupervised Cross-domain Semantic Segmentation
The divergence between labeled training data and unlabeled testing data is a
significant challenge for recent deep learning models. Unsupervised domain
adaptation (UDA) attempts to solve this problem. Recent works show that
self-training is a powerful approach to UDA. However, existing methods have
difficulty balancing scalability and performance. In this paper, we
propose a hard-aware instance adaptive self-training framework for UDA on the
task of semantic segmentation. To effectively improve the quality and diversity
of pseudo-labels, we develop a novel pseudo-label generation strategy with an
instance adaptive selector. We further enrich the hard-class pseudo-labels with
inter-image information through a carefully designed hard-aware pseudo-label
augmentation. In addition, we propose a region-adaptive regularization to
smooth the pseudo-label region and sharpen the non-pseudo-label region. For the
non-pseudo-label region, a consistency constraint is also imposed to introduce
stronger supervision signals during model optimization. Our method is concise
and efficient, and it generalizes easily to other UDA
methods. Experiments on GTA5 to Cityscapes, SYNTHIA to Cityscapes, and
Cityscapes to Oxford RobotCar demonstrate the superior performance of our
approach compared with the state-of-the-art methods.
Comment: arXiv admin note: text overlap with arXiv:2008.1219
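The pseudo-label generation step that self-training builds on can be sketched as plain confidence thresholding. This is the generic baseline that the paper's instance adaptive selector and hard-aware augmentation improve upon, not the proposed framework itself; the per-class thresholds and random inputs are invented for illustration:

```python
import numpy as np

def generate_pseudo_labels(probs, thresholds, ignore_index=-1):
    """Confidence-thresholded pseudo-labels for one segmentation map.

    probs: (H, W, C) softmax outputs on a target-domain image.
    thresholds: (C,) per-class confidence cutoffs; pixels whose top-1
    probability falls below their predicted class's cutoff get
    ignore_index and are excluded from the self-training loss.
    """
    labels = probs.argmax(axis=-1)           # hard predictions
    confidence = probs.max(axis=-1)          # top-1 probability per pixel
    keep = confidence >= thresholds[labels]  # class-wise thresholding
    return np.where(keep, labels, ignore_index)

rng = np.random.default_rng(2)
logits = rng.normal(size=(4, 4, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# A lower threshold for the rare "hard" class 2 lets more of its pixels
# survive, countering the bias toward easy classes.
pseudo = generate_pseudo_labels(probs, thresholds=np.array([0.9, 0.9, 0.5]))
print((pseudo == -1).mean())  # fraction of ignored pixels
```

Lowering thresholds for hard classes is the simplest form of the quality-versus-diversity trade-off the abstract describes; the instance adaptive selector replaces these fixed global cutoffs with per-image ones.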