Reduction from Complementary-Label Learning to Probability Estimates
Complementary-Label Learning (CLL) is a weakly-supervised learning problem
that aims to learn a multi-class classifier from only complementary labels,
which indicate a class to which an instance does not belong. Existing
approaches mainly adopt the paradigm of reduction to ordinary classification,
which applies specific transformations and surrogate losses to connect CLL back
to ordinary classification. Those approaches, however, face several
limitations, such as a tendency to overfit or a reliance on deep models. In
this paper, we sidestep those limitations with a novel perspective--reduction
to probability estimates of complementary classes. We prove that accurate
probability estimates of complementary labels lead to good classifiers through
a simple decoding step. The proof establishes a reduction framework from CLL to
probability estimates. The framework offers explanations of several key CLL
approaches as its special cases and allows us to design an improved algorithm
that is more robust in noisy environments. The framework also suggests a
validation procedure based on the quality of probability estimates, leading to
an alternative way to validate models with only complementary labels. The
flexible framework opens a wide range of unexplored opportunities in using deep
and non-deep models for probability estimates to solve the CLL problem.
Empirical experiments further verify the framework's efficacy and robustness
in various settings.
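The decoding step described above can be sketched briefly. Assuming we already have per-class estimates of the probability that each class is the complementary label (the estimator itself, deep or non-deep, is left abstract here, and the function name is hypothetical), a natural decoder assigns each instance to the class whose complementary probability is lowest, since the complementary label marks a class the instance does not belong to:

```python
import numpy as np

def decode_from_complementary_probs(comp_probs):
    """Decode class predictions from estimated complementary-label
    probabilities of shape [n_samples, n_classes].

    Minimal sketch: pick the class least likely to be the
    complementary (i.e. excluded) label for each instance.
    """
    comp_probs = np.asarray(comp_probs)
    return comp_probs.argmin(axis=1)

# Toy example: for the first instance class 0 has the lowest
# complementary probability; for the second, class 2.
preds = decode_from_complementary_probs([[0.10, 0.50, 0.40],
                                         [0.45, 0.45, 0.10]])
```

This is only one plausible instantiation of "a simple decoding step"; the framework in the abstract admits other decoders built on the same probability estimates.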
SOOD: Towards Semi-Supervised Oriented Object Detection
Semi-Supervised Object Detection (SSOD), aiming to explore unlabeled data for
boosting object detectors, has become an active task in recent years. However,
existing SSOD approaches mainly focus on horizontal objects, leaving
multi-oriented objects that are common in aerial images unexplored. This paper
proposes a novel Semi-supervised Oriented Object Detection model, termed SOOD,
built upon the mainstream pseudo-labeling framework. Towards oriented objects
in aerial scenes, we design two loss functions to provide better supervision.
Focusing on the orientations of objects, the first loss regularizes the
consistency between each pseudo-label-prediction pair (each pair comprising a
prediction and its corresponding pseudo-label) with adaptive weights based on their
orientation gap. Focusing on the layout of an image, the second loss
regularizes the similarity and explicitly builds the many-to-many relation
between the sets of pseudo-labels and predictions. Such a global consistency
constraint can further boost semi-supervised learning. Our experiments show
that when trained with the two proposed losses, SOOD surpasses the
state-of-the-art SSOD methods under various settings on the DOTA-v1.5
benchmark. The code will be available at https://github.com/HamPerdredes/SOOD.
Comment: Accepted to CVPR 2023.
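The first loss can be sketched as follows. This is a hedged illustration, not the paper's implementation: the function name, the use of an absolute score difference as the consistency term, and the normalization of orientation gaps into adaptive weights are all assumptions made for the sketch.

```python
import numpy as np

def rotation_aware_consistency(pred_angles, pseudo_angles,
                               pred_scores, pseudo_scores):
    """Sketch of an orientation-gap-weighted consistency loss.

    Each pseudo-label/prediction pair is weighted by its orientation
    gap, so pairs whose angles disagree more contribute more to the
    overall consistency penalty.
    """
    pred_angles = np.asarray(pred_angles, dtype=float)
    pseudo_angles = np.asarray(pseudo_angles, dtype=float)
    gap = np.abs(pred_angles - pseudo_angles)      # orientation gap per pair
    weights = gap / (gap.sum() + 1e-8)             # adaptive weights from gaps
    per_pair = np.abs(np.asarray(pred_scores, dtype=float)
                      - np.asarray(pseudo_scores, dtype=float))
    return float((weights * per_pair).sum())       # weighted consistency loss

# Toy pairs: only the second pair has an orientation gap, so it
# dominates the weighting.
loss = rotation_aware_consistency(pred_angles=[0.0, 0.5],
                                  pseudo_angles=[0.0, 0.0],
                                  pred_scores=[0.9, 0.8],
                                  pseudo_scores=[0.9, 0.6])
```

The second, layout-level loss (the many-to-many set relation between pseudo-labels and predictions) requires a matching formulation between the two sets and is not sketched here.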