Maturity-Aware Active Learning for Semantic Segmentation with Hierarchically-Adaptive Sample Assessment
Active Learning (AL) for semantic segmentation is challenging due to heavy
class imbalance and the many possible definitions of a "sample" (pixels,
regions, etc.), which leave the interpretation of the data distribution
ambiguous. We propose Maturity-Aware Distribution Breakdown-based Active
Learning (MADBAL), an AL method that takes a hierarchical approach to defining
a multiview data distribution, accounting for the different "sample"
definitions jointly and thereby selecting the most impactful pixels for
segmentation with a comprehensive view of the data. MADBAL also features a
novel uncertainty formulation in which auxiliary AL modules sense the maturity
of the learned features, and their weighted influence contributes continuously
to uncertainty estimation. As a result, MADBAL makes significant performance
leaps even in the early AL stages, substantially reducing the training burden.
It outperforms state-of-the-art methods on the Cityscapes and PASCAL VOC
datasets, as verified in our extensive experiments.
Comment: Accepted to the 34th British Machine Vision Conference (BMVC 2023).
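For intuition, a minimal sketch of one way a multi-granularity, maturity-weighted uncertainty could be composed is shown below. All names and the fixed view weights are illustrative assumptions standing in for the paper's learned maturity signal, not the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' code): combine uncertainty
# computed under two "sample" views, pixel-level and region-level, with
# weights standing in for the learned maturity signal.
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    """Shannon entropy of (H, W, C) softmax outputs -> (H, W)."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def maturity_weighted_uncertainty(probs, region_ids, w_pixel, w_region):
    """Blend pixel- and region-view entropy into one (H, W) score map.

    probs:      (H, W, C) predicted class probabilities
    region_ids: (H, W) integer region (e.g., superpixel) assignment
    w_pixel, w_region: hypothetical maturity-derived view weights
    """
    pix_u = pixel_entropy(probs)
    reg_u = np.zeros_like(pix_u)
    for r in np.unique(region_ids):          # region view: mean entropy per region
        mask = region_ids == r
        reg_u[mask] = pix_u[mask].mean()
    z = w_pixel + w_region                   # normalize the view weights
    return (w_pixel * pix_u + w_region * reg_u) / z
```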
EdgeAL: An Edge Estimation Based Active Learning Approach for OCT Segmentation
Active learning algorithms have become increasingly popular for training
models with limited data. However, selecting data for annotation remains a
challenging problem due to the limited information available on unseen data. To
address this issue, we propose EdgeAL, which utilizes the edge information of
unseen images as a priori information for measuring uncertainty. The
uncertainty is quantified by analyzing the divergence and entropy in model
predictions across edges. This measure is then used to select superpixels for
annotation. We demonstrate the effectiveness of EdgeAL on multi-class Optical
Coherence Tomography (OCT) segmentation tasks, where we achieved a 99% Dice
score while reducing the annotation cost to 12%, 2.3%, and 3%, respectively,
on three publicly available datasets (Duke, AROI, and UMN). The source code is
available at https://github.com/Mak-Ta-Reque/EdgeAL.
Comment: This version of the contribution has been submitted to MICCAI 202…
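As a rough illustration of the core idea (predictive entropy weighted by an edge prior, aggregated per superpixel), here is a hedged sketch; the released EdgeAL implementation additionally uses a divergence term across predictions, which is not modeled here, and all names are hypothetical.

```python
# Hedged sketch of the idea (not the released implementation): weight
# predictive entropy by image-edge strength and rank superpixels by the
# result. EdgeAL additionally uses a divergence term not modeled here.
import numpy as np

def edge_strength(gray):
    """Gradient magnitude of a (H, W) grayscale image as a cheap edge prior."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy)

def rank_superpixels(probs, gray, superpixels, eps=1e-12):
    """Return superpixel ids sorted by edge-weighted entropy, most uncertain first.

    probs:       (H, W, C) softmax outputs on an unseen image
    gray:        (H, W) grayscale image (e.g., an OCT B-scan)
    superpixels: (H, W) integer superpixel assignment
    """
    ent = -np.sum(probs * np.log(probs + eps), axis=-1)
    score_map = ent * edge_strength(gray)    # concentrate uncertainty on edges
    ids = np.unique(superpixels)
    scores = np.array([score_map[superpixels == i].mean() for i in ids])
    return ids[np.argsort(-scores)]          # annotate the top-ranked ids first
```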
Active Learning for Semantic Segmentation with Multi-class Label Query
This paper proposes a new active learning method for semantic segmentation.
The core of our method lies in a new annotation query design. It samples
informative local image regions (e.g., superpixels), and for each of such
regions, asks an oracle for a multi-hot vector indicating all classes existing
in the region. This multi-class labeling strategy is substantially more
efficient than existing ones like segmentation, polygon, and even dominant
class labeling in terms of annotation time per click. However, it introduces
the class ambiguity issue in training since it assigns partial labels (i.e., a
set of candidate classes) to individual pixels. We thus propose a new algorithm
for learning semantic segmentation while disambiguating the partial labels in
two stages. In the first stage, it trains a segmentation model directly with
the partial labels through two new loss functions motivated by partial label
learning and multiple instance learning. In the second stage, it disambiguates
the partial labels by generating pixel-wise pseudo labels, which are used for
supervised learning of the model. Equipped with a new acquisition function
dedicated to multi-class labeling, our method outperforms previous work on
Cityscapes and PASCAL VOC 2012 at a lower annotation cost.
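A standard way to train against such candidate-class sets in partial-label learning is to maximize the probability mass on the candidate set without committing to a single class. The sketch below illustrates that idea under stated assumptions; it is not the paper's exact loss.

```python
# Hedged sketch (PyTorch) of a standard partial-label objective in the
# spirit of the first-stage losses described above; the paper's exact
# loss functions may differ.
import torch

def merged_positive_loss(logits, candidate_mask, eps=1e-12):
    """logits: (N, C) per-pixel class scores; candidate_mask: (N, C) bool,
    True for every class the oracle reported present in the pixel's region.
    Pushes probability mass onto the candidate set, leaving per-pixel
    disambiguation to the second (pseudo-labeling) stage."""
    probs = torch.softmax(logits, dim=-1)
    cand_mass = (probs * candidate_mask.float()).sum(dim=-1)
    return -torch.log(cand_mass.clamp_min(eps)).mean()
```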
Adaptive Superpixel for Active Learning in Semantic Segmentation
Learning semantic segmentation requires pixel-wise annotations, which can be
time-consuming and expensive. To reduce the annotation cost, we propose a
superpixel-based active learning (AL) framework, which collects a dominant
label per superpixel instead. To be specific, it consists of adaptive
superpixel and sieving mechanisms, fully dedicated to AL. At each round of AL,
we adaptively merge neighboring pixels of similar learned features into
superpixels. We then query a selected subset of these superpixels using an
acquisition function that does not assume uniform superpixel sizes. This approach is more
efficient than existing methods, which rely only on innate features such as RGB
color and assume uniform superpixel sizes. Obtaining a dominant label per
superpixel drastically reduces annotators' burden as it requires fewer clicks.
However, it inevitably introduces noisy annotations due to mismatches between
superpixel and ground truth segmentation. To address this issue, we further
devise a sieving mechanism that identifies and excludes potentially noisy
annotations from learning. Our experiments on both the Cityscapes and PASCAL
VOC datasets demonstrate the efficacy of the adaptive superpixel and sieving
mechanisms.
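To make the sieving idea concrete, here is an illustrative sketch that assigns each superpixel its queried dominant label and drops pixels where the current model confidently disagrees; the names and the threshold are assumed for illustration, not taken from the paper.

```python
# Illustrative sketch (assumed names, not the authors' code): assign each
# superpixel its queried dominant label, then sieve out pixels where the
# model confidently predicts a different class.
import numpy as np

def sieve_dominant_labels(pred_probs, superpixels, dominant_label, tau=0.9):
    """Build a (H, W) training label map with sieved pixels set to -1 (ignore).

    pred_probs:     (H, W, C) current model probabilities
    superpixels:    (H, W) integer superpixel assignment
    dominant_label: dict mapping superpixel id -> class from the annotator
    tau:            hypothetical confidence threshold for sieving
    """
    pred = pred_probs.argmax(-1)
    conf = pred_probs.max(-1)
    labels = np.full(superpixels.shape, -1, dtype=np.int64)
    for sp_id, cls in dominant_label.items():
        mask = superpixels == sp_id
        labels[mask] = cls
        # likely annotation noise: confident disagreement with the dominant label
        labels[mask & (pred != cls) & (conf > tau)] = -1
    return labels
```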