JPPF: Multi-task Fusion for Consistent Panoptic-Part Segmentation
Part-aware panoptic segmentation is a problem of computer vision that aims to
provide a semantic understanding of the scene at multiple levels of
granularity. More precisely, semantic areas, object instances, and semantic
parts are predicted simultaneously. In this paper, we present our Joint
Panoptic Part Fusion (JPPF) that combines the three individual segmentations
effectively to obtain a panoptic-part segmentation. Two aspects are of utmost
importance for this: First, a unified model for the three problems is desired
that allows for mutually improved and consistent representation learning.
Second, balancing the combination so that it gives equal importance to all
individual results during fusion. Our proposed JPPF is parameter-free and
dynamically balances its input. The method is evaluated and compared on the
Cityscapes Panoptic Parts (CPP) and Pascal Panoptic Parts (PPP) datasets in
terms of PartPQ and Part-Whole Quality (PWQ). In extensive experiments, we
verify the importance of our fair fusion, highlight its most significant impact
for areas that can be further segmented into parts, and demonstrate the
generalization capabilities of our design without fine-tuning on 5 additional
datasets.
Comment: Accepted for Springer Nature Computer Science. arXiv admin note:
substantial text overlap with arXiv:2212.0767
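The abstract describes the fusion only at a high level (parameter-free, dynamically balanced, equal importance to each input). A minimal sketch of one such equally weighted, parameter-free fusion over per-pixel class probabilities might look as follows; the function names and the specific rule (softmax then uniform averaging) are assumptions for illustration, not JPPF's published formulation:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_equal(semantic_logits, instance_logits, part_logits):
    """Hypothetical parameter-free fusion: convert each branch's logits to
    probabilities, then average with equal weight so no branch dominates,
    and take the per-pixel argmax as the fused label."""
    probs = [softmax(x) for x in (semantic_logits, instance_logits, part_logits)]
    fused = sum(probs) / 3.0
    return fused.argmax(axis=-1)
```

Normalizing each branch before averaging is what keeps the combination balanced: branches with larger raw logit magnitudes cannot outvote the others.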
1st Workshop on Maritime Computer Vision (MaCVi) 2023: Challenge Results
The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused
on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned
Surface Vehicles (USV), and organized several subchallenges in this domain: (i)
UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking,
(iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime
Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS
benchmarks. This report summarizes the main findings of the individual
subchallenges and introduces a new benchmark, called SeaDronesSee Object
Detection v2, which extends the previous benchmark by including more classes
and footage. We provide statistical and qualitative analyses, and assess trends
in the best-performing methodologies of over 130 submissions. The methods are
summarized in the appendix. The datasets, evaluation code and the leaderboard
are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Comment: MaCVi 2023 was part of WACV 2023. This report (38 pages) discusses
the competition as part of MaCVi.
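Object-detection benchmarks such as the ones above are typically scored with IoU-based metrics. A standard intersection-over-union for axis-aligned boxes (this is the textbook definition, not code from the challenge toolkit) is:

```python
def box_iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted box usually counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), which is the basis of the mAP numbers reported on leaderboards like this one.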
Attention-Guided Disentangled Feature Aggregation for Video Object Detection
Object detection is a computer vision task that involves localisation and classification of objects in an image. Video data implicitly introduces several challenges, such as blur, occlusion and defocus, making video object detection more challenging than still image object detection, which is performed on individual and independent images. This paper tackles these challenges by proposing an attention-heavy framework for video object detection that aggregates the disentangled features extracted from individual frames. The proposed framework is a two-stage object detector based on the Faster R-CNN architecture. The disentanglement head integrates scale, spatial and task-aware attention and applies it to the features extracted by the backbone network across all the frames. Subsequently, the aggregation head incorporates temporal attention and improves detection in the target frame by aggregating the features of the support frames. These include the features extracted from the disentanglement network along with the temporal features. We evaluate the proposed framework on the ImageNet VID dataset and achieve a mean Average Precision (mAP) of 49.8 and 52.5 using ResNet-50 and ResNet-101 backbones, respectively. The improvement in performance over the individual baseline methods validates the efficacy of the proposed approach.
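The abstract does not spell out the aggregation head's attention mechanism. One common form of temporal attention, sketched here purely for illustration (the function name and the cosine-similarity weighting are assumptions, not the paper's method), weights each support frame's features by its similarity to the target frame before summing:

```python
import numpy as np

def temporal_aggregate(target_feat, support_feats):
    """Hypothetical temporal attention: score each support-frame feature map
    by cosine similarity to the target frame, softmax the scores over frames,
    and return the attention-weighted sum of the support features."""
    sims = np.array([
        np.vdot(target_feat, f)
        / (np.linalg.norm(target_feat) * np.linalg.norm(f) + 1e-8)
        for f in support_feats
    ])
    w = np.exp(sims - sims.max())  # softmax over frames
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, support_feats))
```

The effect is that support frames resembling the target (e.g. where the object is sharp and unoccluded) contribute more, which is how aggregation can recover detections lost to blur or occlusion in the target frame.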