Towards holistic scene understanding: Semantic segmentation and beyond
This dissertation addresses visual scene understanding, improving
segmentation performance and generalization, the training efficiency of
networks, and holistic understanding. First, we investigate semantic
segmentation in the context of street scenes and train semantic segmentation
networks on combinations of various datasets. In Chapter 2, we design a
framework of hierarchical classifiers over a single convolutional backbone
and train it end-to-end on a combination of pixel-labeled datasets, improving
generalizability and increasing the number of recognizable semantic concepts. Chapter 3
focuses on enriching semantic segmentation with weak supervision and proposes a
weakly-supervised algorithm for training with bounding box-level and
image-level supervision instead of only with per-pixel supervision. The memory
and computational load challenges that arise from simultaneous training on
multiple datasets are addressed in Chapter 4. We propose two methodologies for
selecting informative and diverse samples from datasets with weak supervision
to reduce our networks' ecological footprint without sacrificing performance.
Motivated by memory and computation efficiency requirements, in Chapter 5, we
rethink simultaneous training on heterogeneous datasets and propose a universal
semantic segmentation framework. This framework achieves consistent increases
in performance metrics and semantic knowledgeability by exploiting various
scene understanding datasets. Chapter 6 introduces the novel task of part-aware
panoptic segmentation, which extends our reasoning towards holistic scene
understanding. This task combines scene and parts-level semantics with
instance-level object detection. In conclusion, our contributions span
convolutional network architectures, weakly-supervised learning, and part and
panoptic segmentation, paving the way towards holistic, rich, and sustainable
visual scene understanding.
Comment: PhD Thesis, Eindhoven University of Technology, October 202
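As a rough illustration of the Chapter 2 design, a single shared backbone can feed one classification head per level of a label hierarchy, so that datasets labeled at different granularities supervise different heads. The sketch below is a minimal illustration under assumed names and shapes (the toy backbone and the two-level hierarchy are not the thesis architecture):

```python
# Minimal sketch of hierarchical classifiers over a single shared
# convolutional backbone. All module names, the toy backbone, and the
# two-level hierarchy are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalSegmenter(nn.Module):
    def __init__(self, num_root_classes=3, num_leaf_classes=8):
        super().__init__()
        # Single convolutional backbone shared by all classifiers.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One classifier per hierarchy level: a root head for coarse
        # concepts (e.g. "vehicle") and a leaf head for fine ones
        # (e.g. "car", "truck"), each a 1x1 conv over shared features.
        self.root_head = nn.Conv2d(64, num_root_classes, 1)
        self.leaf_head = nn.Conv2d(64, num_leaf_classes, 1)

    def forward(self, x):
        feats = self.backbone(x)
        # Per-pixel logits at every level of the hierarchy; a dataset
        # labeled only with coarse classes supervises only root_head.
        return self.root_head(feats), self.leaf_head(feats)

model = HierarchicalSegmenter()
root_logits, leaf_logits = model(torch.randn(1, 3, 64, 64))
```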
Panoptic Segmentation
We propose and study a task we name panoptic segmentation (PS). Panoptic
segmentation unifies the typically distinct tasks of semantic segmentation
(assign a class label to each pixel) and instance segmentation (detect and
segment each object instance). The proposed task requires generating a coherent
scene segmentation that is rich and complete, an important step toward
real-world vision systems. While early work in computer vision addressed
related image/scene parsing tasks, these are not currently popular, possibly
due to lack of appropriate metrics or associated recognition challenges. To
address this, we propose a novel panoptic quality (PQ) metric that captures
performance for all classes (stuff and things) in an interpretable and unified
manner. Using the proposed metric, we perform a rigorous study of both human
and machine performance for PS on three existing datasets, revealing
interesting insights about the task. The aim of our work is to revive the
interest of the community in a more unified view of image segmentation.
Comment: accepted to CVPR 201
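For readers unfamiliar with the metric, PQ as defined in the full paper first matches predicted and ground-truth segments (a unique match requires IoU > 0.5) and then scores matched and unmatched segments of all classes uniformly:

```latex
% Panoptic Quality over matched pairs (true positives, TP), unmatched
% predicted segments (false positives, FP), and unmatched ground-truth
% segments (false negatives, FN):
PQ = \frac{\sum_{(p,g) \in TP} \mathrm{IoU}(p,g)}
          {|TP| + \tfrac{1}{2}|FP| + \tfrac{1}{2}|FN|}
```

PQ factors into a segmentation quality term (the average IoU of matched segments) and an F1-style recognition quality term, which gives the metric its interpretable decomposition.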
PanDA: Panoptic Data Augmentation
The recently proposed panoptic segmentation task presents a significant image understanding challenge in computer vision by unifying the semantic segmentation and instance segmentation tasks. In this paper, we present an efficient and novel panoptic data augmentation (PanDA) method which operates exclusively in pixel space, requires no additional data or training, and is computationally cheap to implement. By retraining original state-of-the-art models on PanDA-augmented datasets generated with a single frozen set of parameters, we show robust performance gains in panoptic segmentation, instance segmentation, and detection across models, backbones, dataset domains, and scales. Finally, the effectiveness of the unrealistic-looking training images synthesized by PanDA suggests that one should rethink the need for image realism in efficient data augmentation.
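The abstract does not describe the individual transformations. Purely as a hypothetical illustration of an augmentation that operates exclusively in pixel space with no extra data or training, a copy-paste-style operation on a panoptic label map might look like the following (function name, scaling scheme, and placement policy are all assumptions, not PanDA's actual procedure):

```python
# Hypothetical pixel-space, training-free augmentation: copy one object
# instance and paste a rescaled copy at a random location, updating the
# panoptic label map accordingly. Illustrates the general idea only.
import numpy as np

def paste_rescaled_instance(image, labels, instance_id, scale=0.5, rng=None):
    """image: (H, W, 3) array; labels: (H, W) panoptic id map."""
    rng = rng or np.random.default_rng()
    mask = labels == instance_id
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return image, labels
    # Crop the instance's bounding box from the image and the mask.
    patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].copy()
    pmask = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Crude nearest-neighbor downscale via index striding.
    step = max(int(round(1 / scale)), 1)
    patch, pmask = patch[::step, ::step], pmask[::step, ::step]
    h, w = pmask.shape
    H, W = labels.shape
    if h >= H or w >= W:
        return image, labels
    # Paste the rescaled instance at a random location.
    top, left = rng.integers(0, H - h), rng.integers(0, W - w)
    region = (slice(top, top + h), slice(left, left + w))
    image[region][pmask] = patch[pmask]
    labels[region][pmask] = instance_id
    return image, labels
```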
Weakly- and Semi-Supervised Panoptic Segmentation
We present a weakly supervised model that jointly performs both semantic- and
instance-segmentation -- a particularly relevant problem given the substantial
cost of obtaining pixel-perfect annotation for these tasks. In contrast to many
popular instance segmentation approaches based on object detectors, our method
does not predict any overlapping instances. Moreover, we are able to segment
both "thing" and "stuff" classes, and thus explain all the pixels in the image.
"Thing" classes are weakly-supervised with bounding boxes, and "stuff" with
image-level tags. We obtain state-of-the-art results on Pascal VOC for both
full and weak supervision (the latter achieving about 95% of fully-supervised
performance). Furthermore, we present the first weakly-supervised results on
Cityscapes for both semantic- and instance-segmentation. Finally, we use our
weakly supervised framework to analyse the relationship between annotation
quality and predictive performance, which is of interest to dataset creators.
Comment: ECCV 2018. The first two authors contributed equally.
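The abstract leaves open how box- and tag-level labels supervise a per-pixel model. One common weak-supervision heuristic, shown below as a hypothetical sketch rather than the paper's procedure, converts bounding boxes into a pseudo-label map with ambiguous pixels marked as ignore:

```python
# Hypothetical sketch: turn bounding-box annotations into per-pixel
# pseudo-labels for "thing" classes. Pixels covered by exactly one box
# get that box's class; pixels under overlapping boxes are ignored.
import numpy as np

IGNORE = 255

def boxes_to_pseudo_labels(shape, boxes):
    """boxes: list of (class_id, top, left, bottom, right)."""
    labels = np.zeros(shape, dtype=np.uint8)   # 0 = background/stuff
    count = np.zeros(shape, dtype=np.uint8)    # boxes covering a pixel
    for cls, t, l, b, r in boxes:
        labels[t:b, l:r] = cls
        count[t:b, l:r] += 1
    labels[count > 1] = IGNORE                 # ambiguous overlaps
    return labels

pseudo = boxes_to_pseudo_labels((240, 320),
                                [(1, 20, 30, 120, 140),
                                 (2, 100, 100, 200, 260)])
```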
Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning
In this work, we introduce panoramic panoptic segmentation as the most
holistic form of scene understanding for standard camera-based input, both in
terms of Field of View (FoV) and image-level understanding. A complete
understanding of the surroundings provides a mobile agent with a maximum of
information, which is essential for any intelligent vehicle to make informed
decisions in a safety-critical dynamic environment such as real-world traffic.
In order to overcome the lack of annotated panoramic images, we propose a
framework which allows model training on standard pinhole images and transfers
the learned features to the panoramic domain in a cost-minimizing way. The
domain shift from pinhole to panoramic images is non-trivial as large objects
and surfaces are heavily distorted close to the image border regions and look
different across the two domains. Using our proposed method with dense
contrastive learning, we achieve significant improvements over a non-adapted
approach. Depending on the underlying efficient panoptic segmentation
architecture, we improve Panoptic Quality (PQ) by 3.5-6.5% over non-adapted
models on our newly established Wild Panoramic Panoptic Segmentation
(WildPPS) dataset. Furthermore, our efficient framework does not need access to
the images of the target domain, making it a feasible domain generalization
approach suitable for limited hardware settings. As additional contributions,
we publish WildPPS, the first panoramic panoptic image dataset, to foster
progress in surrounding perception, and we explore a novel training procedure
combining supervised and contrastive training.
Comment: Accepted to IEEE Transactions on Intelligent Transportation Systems
(T-ITS). Extended version of arXiv:2103.00868. The project is at
https://github.com/alexanderjaus/PP
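The dense contrastive objective is only named in the abstract. A minimal sketch of a generic pixel-wise InfoNCE-style loss, under the assumption that embeddings of corresponding pixels in two augmented views form positive pairs and all other pixels serve as negatives, could look like this (function name and formulation are assumptions, not the paper's exact loss):

```python
# Minimal sketch of a dense (pixel-wise) InfoNCE-style contrastive loss:
# same-pixel embeddings across two views are positives, all other pixel
# embeddings in the batch are negatives.
import torch
import torch.nn.functional as F

def dense_info_nce(feat_a, feat_b, temperature=0.1):
    """feat_a, feat_b: (N, C, H, W) per-pixel embeddings of two views."""
    c = feat_a.shape[1]
    # Flatten to (N*H*W, C) and L2-normalize each pixel embedding.
    za = F.normalize(feat_a.permute(0, 2, 3, 1).reshape(-1, c), dim=1)
    zb = F.normalize(feat_b.permute(0, 2, 3, 1).reshape(-1, c), dim=1)
    logits = za @ zb.t() / temperature      # similarity of all pixel pairs
    targets = torch.arange(za.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = dense_info_nce(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
```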