Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
We address the problem of semantic nighttime image segmentation and improve
the state-of-the-art, by adapting daytime models to nighttime without using
nighttime annotations. Moreover, we design a new evaluation framework to
address the substantial uncertainty of semantics in nighttime images. Our
central contributions are: 1) a curriculum framework to gradually adapt
semantic segmentation models from day to night through progressively darker
times of day, exploiting cross-time-of-day correspondences between daytime
images from a reference map and dark images to guide the label inference in the
dark domains; 2) a novel uncertainty-aware annotation and evaluation framework
and metric for semantic segmentation, including image regions beyond human
recognition capability in the evaluation in a principled fashion; 3) the Dark
Zurich dataset, comprising 2416 unlabeled nighttime and 2920 unlabeled twilight
images with correspondences to their daytime counterparts plus a set of 201
nighttime images with fine pixel-level annotations created with our protocol,
which serves as a first benchmark for our novel evaluation. Experiments show
that our map-guided curriculum adaptation significantly outperforms
state-of-the-art methods on nighttime sets both for standard metrics and our
uncertainty-aware metric. Furthermore, our uncertainty-aware evaluation reveals
that selective invalidation of predictions can improve results on data with
ambiguous content such as our benchmark and benefit safety-oriented applications
involving invalid inputs. Comment: IEEE T-PAMI 202
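The abstract describes the uncertainty-aware evaluation only at a high level, so the following is a minimal sketch of one plausible scoring scheme under assumed conventions (the INVALID label id, the function name, and the exact handling of uncertain regions are illustrative and may differ from the paper's actual metric): class-wise IoU is computed over pixels human annotators could label, so invalidating such a pixel costs the model, while a separate recall term rewards flagging regions the annotators themselves found unrecognizable.

```python
import numpy as np

INVALID = 255  # assumed label id for "invalid / beyond human recognition"

def uncertainty_aware_scores(pred, gt, num_classes):
    """pred, gt: integer label maps of identical shape.
    Returns (mIoU over human-labelable pixels, recall of invalid regions)."""
    labelable = gt != INVALID
    ious = []
    for c in range(num_classes):
        gt_c = gt == c
        pred_c = (pred == c) & labelable   # classes are scored only where humans could label
        union = np.logical_or(gt_c, pred_c).sum()
        if union:
            ious.append(np.logical_and(gt_c, pred_c).sum() / union)
    miou = float(np.mean(ious)) if ious else float("nan")
    # A prediction of INVALID on a labelable pixel matches no class above,
    # so needless invalidation lowers the IoU of the true class.
    invalid_recall = float(((pred == INVALID) & ~labelable).sum() / max((~labelable).sum(), 1))
    return miou, invalid_recall
```

In this simplified form, a model that never abstains keeps its usual mIoU over labelable pixels but scores zero invalid recall, which is roughly the trade-off such a benchmark is meant to expose.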
Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
Most progress in semantic segmentation reports on daytime images taken under
favorable illumination conditions. We instead address the problem of semantic
segmentation of nighttime images and improve the state-of-the-art, by adapting
daytime models to nighttime without using nighttime annotations. Moreover, we
design a new evaluation framework to address the substantial uncertainty of
semantics in nighttime images. Our central contributions are: 1) a curriculum
framework to gradually adapt semantic segmentation models from day to night via
labeled synthetic images and unlabeled real images, both for progressively
darker times of day, which exploits cross-time-of-day correspondences for the
real images to guide the inference of their labels; 2) a novel
uncertainty-aware annotation and evaluation framework and metric for semantic
segmentation, designed for adverse conditions and including image regions
beyond human recognition capability in the evaluation in a principled fashion;
3) the Dark Zurich dataset, which comprises 2416 unlabeled nighttime and 2920
unlabeled twilight images with correspondences to their daytime counterparts
plus a set of 151 nighttime images with fine pixel-level annotations created
with our protocol, which serves as a first benchmark to perform our novel
evaluation. Experiments show that our guided curriculum adaptation
significantly outperforms state-of-the-art methods on real nighttime sets both
for standard metrics and our uncertainty-aware metric. Furthermore, our
uncertainty-aware evaluation reveals that selective invalidation of predictions
can lead to better results on data with ambiguous content such as our nighttime
benchmark and benefit safety-oriented applications that involve invalid inputs. Comment: ICCV 2019 camera-ready
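As a rough illustration of the guidance step, here is a hedged sketch, under assumed names and shapes, of how pseudo-labels for an unlabeled dark image might be fused from the model's own prediction and the prediction on the corresponding daytime image warped into the same view. The confidence threshold, the IGNORE id, and the fusion rule are assumptions, and the full method additionally uses labeled synthetic images and passes through twilight before reaching nighttime.

```python
import numpy as np

IGNORE = 255  # assumed id for pixels excluded from the self-training loss

def guided_pseudo_labels(dark_probs, day_probs_warped, conf_thresh=0.9):
    """dark_probs, day_probs_warped: (C, H, W) softmax outputs. The daytime
    prediction is assumed to be warped into the dark image's viewpoint via
    the dataset's cross-time-of-day correspondences."""
    dark_conf = dark_probs.max(axis=0)
    day_conf = day_probs_warped.max(axis=0)
    # Per pixel, keep whichever prediction is more confident.
    labels = np.where(dark_conf >= day_conf,
                      dark_probs.argmax(axis=0),
                      day_probs_warped.argmax(axis=0)).astype(np.int64)
    # Drop pixels where neither prediction is confident enough to trust.
    labels[np.maximum(dark_conf, day_conf) < conf_thresh] = IGNORE
    return labels
```

The resulting label map would then serve as supervision when fine-tuning the segmentation model on the darker domain.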
How to collect high quality segmentations: use human or computer drawn object boundaries?
High quality segmentations must be captured consistently for applications such as biomedical image analysis. While human drawn segmentations are often collected because they provide a consistent level of quality, computer drawn segmentations can be collected efficiently and inexpensively. In this paper, we examine how to leverage available human and computer resources to consistently create high quality segmentations. We propose a quality control methodology. We demonstrate how to apply this approach using crowdsourced and domain expert votes for
the "best" segmentation from a collection of human and computer drawn segmentations for 70 objects from a public dataset and 274 objects from biomedical images. We publicly share the library of biomedical images which includes 1,879 manual annotations of the boundaries of 274 objects. We found for the 344 objects that no single segmentation source was preferred and that human annotations are not always preferred over computer annotations.
These results motivated us to examine the traditional approach to evaluate segmentation algorithms, which involves comparing the segmentations produced by the algorithms to manual annotations on benchmark datasets. We found that algorithm benchmarking results change when the comparison is made to consensus-voted segmentations. Our results
led us to suggest a new segmentation approach that uses machine learning to predict the optimal segmentation source and a modified segmentation evaluation approach. National Science Foundation (IIS-0910908)
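To make the consensus idea concrete, here is a hedged sketch under assumed data structures: it builds a per-pixel majority-vote consensus from human- and computer-drawn masks and ranks each source by agreement with it. The paper's protocol instead collects crowd and expert votes for the single best segmentation of each object, so treat this purely as an illustration of scoring against a consensus reference.

```python
import numpy as np

def consensus_mask(masks):
    """masks: list of binary (H, W) arrays from human and computer sources.
    A pixel is foreground in the consensus if at least half the sources agree."""
    return np.stack(masks).astype(np.float32).mean(axis=0) >= 0.5

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def rank_sources(masks_by_source):
    """masks_by_source: dict mapping a source name to its binary mask.
    Returns (source, IoU-with-consensus) pairs, best agreement first."""
    consensus = consensus_mask(list(masks_by_source.values()))
    scores = {name: iou(m, consensus) for name, m in masks_by_source.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```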
Audio-Visual Segmentation
We propose to explore a new problem called audio-visual segmentation (AVS),
in which the goal is to output a pixel-level map of the object(s) that produce
sound at the time of the image frame. To facilitate this research, we construct
the first audio-visual segmentation benchmark (AVSBench), providing pixel-wise
annotations for the sounding objects in audible videos. Two settings are
studied with this benchmark: 1) semi-supervised audio-visual segmentation with
a single sound source and 2) fully-supervised audio-visual segmentation with
multiple sound sources. To deal with the AVS problem, we propose a novel method
that uses a temporal pixel-wise audio-visual interaction module to inject audio
semantics as guidance for the visual segmentation process. We also design a
regularization loss to encourage the audio-visual mapping during training.
Quantitative and qualitative experiments on the AVSBench compare our approach
to several existing methods from related tasks, demonstrating that the proposed
method is promising for building a bridge between the audio and pixel-wise
visual semantics. Code is available at https://github.com/OpenNLPLab/AVSBench. Comment: ECCV 2022; corrects equation (3) and updates the notation of the
evaluation metrics relative to the previous arXiv version; code is available at
https://github.com/OpenNLPLab/AVSBench
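Since the abstract only names a temporal pixel-wise audio-visual interaction module, the block below is an assumed cross-attention realization of that idea rather than the paper's exact design: per-frame audio embeddings are projected into the visual feature space and every visual location attends to them, with the dimensions, head count, and class name chosen for illustration.

```python
import torch
import torch.nn as nn

class AudioVisualInteraction(nn.Module):
    """Pixel-wise cross-attention from visual locations to per-frame audio features."""
    def __init__(self, vis_dim=256, aud_dim=128, heads=4):
        super().__init__()
        self.aud_proj = nn.Linear(aud_dim, vis_dim)
        self.attn = nn.MultiheadAttention(vis_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(vis_dim)

    def forward(self, vis_feats, aud_feats):
        """vis_feats: (B, T, C, H, W) per-frame visual features.
        aud_feats: (B, T, aud_dim) per-frame audio embeddings."""
        b, t, c, h, w = vis_feats.shape
        q = vis_feats.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c)  # pixels as queries
        kv = self.aud_proj(aud_feats)                                  # audio as keys/values
        out, _ = self.attn(q, kv, kv)
        out = self.norm(q + out)                                       # residual fusion
        return out.reshape(b, t, h, w, c).permute(0, 1, 4, 2, 3)
```

For example, calling the block on inputs of shape (2, 5, 256, 28, 28) and (2, 5, 128) returns audio-conditioned features with the original spatial shape, ready to feed into a segmentation decoder.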
FACET: Fairness in Computer Vision Evaluation Benchmark
Computer vision models have known performance disparities across attributes
such as gender and skin tone. This means during tasks such as classification
and detection, model performance differs for certain classes based on the
demographics of the people in the image. These disparities have been shown to
exist, but until now there has not been a unified approach to measure these
differences for common use-cases of computer vision models. We present a new
benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large,
publicly available evaluation set of 32k images for some of the most common
vision tasks - image classification, object detection and segmentation. For
every image in FACET, we hired expert reviewers to manually annotate
person-related attributes such as perceived skin tone and hair type, manually
draw bounding boxes and label fine-grained person-related classes such as disc
jockey or guitarist. In addition, we use FACET to benchmark state-of-the-art
vision models and present a deeper understanding of potential performance
disparities and challenges across sensitive demographic attributes. With the
exhaustive annotations collected, we probe models using single demographic
attributes as well as multiple attributes using an intersectional approach
(e.g. hair color and perceived skin tone). Our results show that
classification, detection, segmentation, and visual grounding models exhibit
performance disparities across demographic attributes and intersections of
attributes. These harms suggest that not all people represented in datasets
receive fair and equitable treatment in these vision tasks. We hope current and
future results using our benchmark will contribute to fairer, more robust
vision models. FACET is available publicly at https://facet.metademolab.com
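As a hedged illustration of such an intersectional probe (the column names, the correctness flag, and the grouping are assumptions, not the benchmark's released evaluation code), the sketch below groups per-person results by one or more annotated attributes and reports the gap between the best- and worst-served groups.

```python
import pandas as pd

def disparity_report(results, attrs, correct_col="correct"):
    """results: DataFrame with one row per annotated person, a boolean
    correctness flag (e.g. whether the detector recovered that person), and
    attribute columns. attrs: e.g. ["perceived_skin_tone"] for a single
    attribute or ["hair_type", "perceived_skin_tone"] for an intersection."""
    per_group = results.groupby(attrs)[correct_col].mean()
    # Gap between the best- and worst-served groups on this slice.
    return per_group, float(per_group.max() - per_group.min())
```

A single-attribute call and an intersectional call differ only in the list passed as attrs, which mirrors how single and combined demographic slices can be compared.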