Towards holistic scene understanding: Semantic segmentation and beyond
This dissertation addresses visual scene understanding and enhances
segmentation performance and generalization, the training efficiency of networks,
and holistic understanding. First, we investigate semantic segmentation in the
context of street scenes and train semantic segmentation networks on
combinations of various datasets. In Chapter 2 we design a framework of
hierarchical classifiers over a single convolutional backbone, and train it
end-to-end on a combination of pixel-labeled datasets, improving
generalizability and increasing the number of recognizable semantic concepts. Chapter 3
focuses on enriching semantic segmentation with weak supervision and proposes a
weakly-supervised algorithm for training with bounding box-level and
image-level supervision instead of only with per-pixel supervision. The memory
and computational load challenges that arise from simultaneous training on
multiple datasets are addressed in Chapter 4. We propose two methodologies for
selecting informative and diverse samples from datasets with weak supervision
to reduce our networks' ecological footprint without sacrificing performance.
Motivated by memory and computation efficiency requirements, in Chapter 5, we
rethink simultaneous training on heterogeneous datasets and propose a universal
semantic segmentation framework. This framework achieves consistent increases
in performance metrics and semantic knowledgeability by exploiting various
scene understanding datasets. Chapter 6 introduces the novel task of part-aware
panoptic segmentation, which extends our reasoning towards holistic scene
understanding. This task combines scene-level and part-level semantics with
instance-level object detection. In conclusion, our contributions span
convolutional network architectures, weakly-supervised learning, and part and
panoptic segmentation, paving the way towards holistic, rich, and sustainable
visual scene understanding.
Comment: PhD Thesis, Eindhoven University of Technology, October 202
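As a rough illustration of the Chapter 2 design, here is a minimal sketch of hierarchical classification heads sharing one convolutional backbone. The two-level root/leaf split, the ResNet-18 backbone, and all names are our own illustrative assumptions, not the thesis's actual architecture:

    import torch.nn as nn
    import torchvision

    class HierarchicalSegmenter(nn.Module):
        def __init__(self, n_root_classes, n_leaf_classes):
            super().__init__()
            # Shared feature extractor; any fully convolutional backbone works.
            resnet = torchvision.models.resnet18(weights=None)
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            # One 1x1-conv classifier per level of the label hierarchy.
            self.root_head = nn.Conv2d(512, n_root_classes, kernel_size=1)
            self.leaf_head = nn.Conv2d(512, n_leaf_classes, kernel_size=1)

        def forward(self, x):
            feats = self.backbone(x)
            # Each head predicts at its own granularity; during training, the
            # per-pixel loss is applied only where the source dataset provides
            # labels at that level, so heterogeneous datasets can be combined.
            return self.root_head(feats), self.leaf_head(feats)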
SFNet: Faster and Accurate Semantic Segmentation via Semantic Flow
In this paper, we explore effective methods for fast and accurate semantic
segmentation. A common practice to improve performance is to attain
high-resolution feature maps with strong semantic representation. Two
strategies are widely used, atrous convolutions and feature pyramid fusion,
but both are either computationally intensive or ineffective. Inspired by
optical flow for motion alignment between adjacent video frames, we propose a
Flow Alignment Module (FAM) to learn semantic flow between feature maps of
adjacent levels and broadcast high-level features to high-resolution features
effectively and efficiently. Furthermore, integrating our FAM into a standard
feature pyramid structure exhibits superior performance over other real-time
methods, even on lightweight backbone networks such as ResNet-18 and DFNet. To
further speed up inference, we also present a novel Gated Dual Flow Alignment
Module that directly aligns high-resolution and low-resolution feature maps;
we term this improved network SFNet-Lite. Extensive experiments on several
challenging datasets show the effectiveness of both SFNet and SFNet-Lite. In
particular, on the Cityscapes test set, the SFNet-Lite series achieves 80.1
mIoU at 60 FPS with a ResNet-18 backbone and 78.8 mIoU at 120 FPS with an STDC
backbone on an RTX 3090. Moreover, we unify four challenging driving datasets
into one large dataset, which we name the Unified Driving Segmentation (UDS)
dataset. It contains diverse domain and style information. We benchmark
several representative works on UDS. Both SFNet and SFNet-Lite achieve the
best speed-accuracy trade-off on UDS, serving as strong baselines in this
challenging setting. The code and models are publicly available at
https://github.com/lxtGH/SFSegNets.
Comment: IJCV-2023; Extension of previous work arXiv:2002.1012
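To make the flow-alignment idea concrete, here is a hedged sketch of our reading of a FAM-style module, not the authors' released code; the flow channel convention, layer shapes, and names are assumptions. It predicts a 2D semantic flow field from two adjacent feature levels and warps the coarse features onto the fine grid:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FlowAlign(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # Predict a 2-channel flow field (dx, dy, in pixels) from the
            # concatenation of the fine and upsampled coarse features.
            self.flow_pred = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

        def forward(self, fine, coarse):
            h, w = fine.shape[-2:]
            coarse_up = F.interpolate(coarse, size=(h, w), mode='bilinear',
                                      align_corners=True)
            flow = self.flow_pred(torch.cat([fine, coarse_up], dim=1))
            # Identity sampling grid in normalized [-1, 1] coordinates.
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                    torch.linspace(-1, 1, w), indexing='ij')
            base = torch.stack((xs, ys), dim=-1).to(fine.device)  # (h, w, 2)
            # Convert the pixel-space flow to normalized offsets.
            scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                                 device=fine.device)
            grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1) * scale
            # Resample the coarse features at the flowed locations.
            return F.grid_sample(coarse_up, grid, align_corners=True)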
Understanding Dark Scenes by Contrasting Multi-Modal Observations
Understanding dark scenes based on multi-modal image data is challenging, as
both the visible and auxiliary modalities provide limited semantic information
for the task. Previous methods focus on fusing the two modalities but neglect
the correlations among semantic classes when minimizing losses to align pixels
with labels, resulting in inaccurate class predictions. To address these
issues, we introduce a supervised multi-modal contrastive learning approach to
increase the semantic discriminability of the learned multi-modal feature
spaces by jointly performing cross-modal and intra-modal contrast under the
supervision of the class correlations. The cross-modal contrast encourages
same-class embeddings from across the two modalities to be closer and pushes
different-class ones apart. The intra-modal contrast likewise pulls same-class
embeddings within each modality together and pushes different-class ones apart. We
validate our approach on a variety of tasks that cover diverse light conditions
and image modalities. Experiments show that our approach can effectively
enhance dark scene understanding based on multi-modal images with limited
semantics by shaping semantic-discriminative feature spaces. Comparisons with
previous methods demonstrate our state-of-the-art performance. Code and
pretrained models are available at https://github.com/palmdong/SMMCL
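For intuition, here is a minimal sketch of the cross-modal term as a standard supervised contrastive loss across two modalities; the paper's exact formulation may differ, and the names, the temperature tau, and the per-pixel sampling are our assumptions:

    import torch
    import torch.nn.functional as F

    def cross_modal_supcon(emb_a, emb_b, labels, tau=0.1):
        # emb_a, emb_b: (N, D) per-pixel embeddings sampled from the visible
        # and auxiliary modality; labels: (N,) semantic class per embedding.
        emb_a = F.normalize(emb_a, dim=1)
        emb_b = F.normalize(emb_b, dim=1)
        sim = emb_a @ emb_b.t() / tau                      # (N, N) similarities
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # Maximize the likelihood of same-class pairs across modalities,
        # which pulls them together and pushes different-class pairs apart.
        return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()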
Geometry meets semantics for semi-supervised monocular depth estimation
Depth estimation from a single image is a very exciting challenge in computer
vision. While other image-based depth sensing techniques leverage the geometry
between different viewpoints (e.g., stereo or structure from motion), the lack
of these cues within a single image renders the monocular depth estimation
task ill-posed. At inference time, state-of-the-art encoder-decoder
architectures for monocular depth estimation rely on effective feature
representations learned at training time. For unsupervised training of these
models, geometry has been effectively exploited through image warping losses
computed from views acquired by a stereo rig or a moving camera. In this
paper, we take a further step forward, showing that learning semantic
information from images also effectively improves monocular depth estimation.
In particular, by leveraging semantically labeled images together with
unsupervised signals gained from geometry through an image warping loss, we
propose a deep learning approach aimed at joint semantic segmentation and
depth estimation. Our overall learning framework is semi-supervised, as we
deploy ground-truth data only in the semantic domain. At training time, our
network learns a common feature representation for both tasks, and a novel
cross-task loss function is proposed. The experimental findings show how
jointly tackling depth prediction and semantic segmentation improves depth
estimation accuracy. In particular, on the KITTI dataset our network
outperforms state-of-the-art methods for monocular depth estimation.
Comment: 16 pages, Accepted to ACCV 201
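A minimal sketch of such a semi-supervised objective, assuming a stereo setup: an unsupervised photometric warping loss plus a supervised semantic cross-entropy. The weight lam and the helper warp_right_to_left are illustrative assumptions, not the paper's exact cross-task loss:

    import torch.nn.functional as F

    def joint_loss(depth_pred, sem_logits, left, right, sem_labels,
                   warp_right_to_left, lam=0.1):
        # Geometry (unsupervised): reconstruct the left view from the right
        # using the predicted depth/disparity, then penalize the photometric
        # difference between reconstruction and original.
        left_recon = warp_right_to_left(right, depth_pred)
        photo_loss = F.l1_loss(left_recon, left)
        # Semantics (supervised): per-pixel cross-entropy on labeled images.
        sem_loss = F.cross_entropy(sem_logits, sem_labels, ignore_index=255)
        return photo_loss + lam * sem_loss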
Outdoor scene segmentation from label sets with varying granularity and semantics
In this work, we present an approach that leverages multiple datasets annotated with different classes (different labelsets) to improve the classification accuracy on each individual dataset. We focus on semantic full-scene labeling of outdoor scenes. To achieve our goal, we use the KITTI dataset, as it illustrates the focus of our paper very well: it has been sparsely labeled by multiple research groups over the past few years, but the semantics and the granularity of the labels differ from one set to another. We propose a method to train deep convolutional networks using multiple datasets with potentially inconsistent labelsets, together with a selective loss function that trains on all the available labeled data while remaining resilient to inconsistent labelings, and several fusion approaches that exploit the correlations between the different labelsets. Experiments on all of the KITTI dataset's labeled subsets show that our approach consistently improves classification accuracy by exploiting the correlations across datasets both at the feature level and at the label level.
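A selective loss of this kind could look like the following sketch, an assumed form rather than the authors' code: for each sample, only the classes present in its source labelset participate in the softmax, so inconsistent labelsets never penalize each other's predictions.

    import torch
    import torch.nn.functional as F

    def selective_ce(logits, target, labelset):
        # logits: (B, C, H, W) over the union of all labelsets;
        # target: (B, H, W) global class indices for this batch;
        # labelset: LongTensor of the class ids annotated in this dataset.
        bias = torch.full((logits.size(1),), -1e9, device=logits.device)
        bias[labelset] = 0.0
        # The large negative bias drives classes outside this dataset's
        # labelset to near-zero probability, so the loss only discriminates
        # among the classes the dataset can actually distinguish.
        return F.cross_entropy(logits + bias.view(1, -1, 1, 1), target)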