LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain Adaptation
We introduce LiDAR-UDA, a novel two-stage self-training-based Unsupervised
Domain Adaptation (UDA) method for LiDAR segmentation. Existing self-training
methods use a model trained on labeled source data to generate pseudo labels
for target data and refine the predictions via fine-tuning the network on the
pseudo labels. These methods suffer from domain shifts caused by different
LiDAR sensor configurations in the source and target domains. We propose two
techniques to reduce sensor discrepancy and improve pseudo label quality: 1)
LiDAR beam subsampling, which simulates different LiDAR scanning patterns by
randomly dropping beams; 2) cross-frame ensembling, which exploits temporal
consistency of consecutive frames to generate more reliable pseudo labels. Our
method is simple, generalizable, and does not incur any extra inference cost.
We evaluate our method on several public LiDAR datasets and show that it
outperforms the state-of-the-art methods in average mIoU
across all scenarios. Code will be available at
https://github.com/JHLee0513/LiDARUDA. Comment: Accepted at ICCV 2023 (Oral).
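The beam-subsampling idea described above can be sketched as follows. This is an illustrative implementation under assumed inputs (per-point beam indices and a drop ratio), not the authors' released code:

```python
# Hypothetical sketch of LiDAR beam subsampling: simulate a different
# scanner configuration by randomly dropping entire beams. The per-point
# beam indices and the drop ratio are illustrative assumptions.
import numpy as np

def subsample_beams(points, beam_ids, drop_ratio=0.5, rng=None):
    """Keep only points whose beam survives a random beam-level drop."""
    rng = rng or np.random.default_rng()
    beams = np.unique(beam_ids)
    n_keep = max(1, int(round(len(beams) * (1.0 - drop_ratio))))
    kept = rng.choice(beams, size=n_keep, replace=False)
    mask = np.isin(beam_ids, kept)
    return points[mask], beam_ids[mask]

# Example: a 64-beam scan reduced to roughly half its beams.
pts = np.random.rand(1000, 3)
ids = np.random.randint(0, 64, size=1000)
sub_pts, sub_ids = subsample_beams(pts, ids, drop_ratio=0.5)
```

Because whole beams (rather than individual points) are dropped, the surviving points keep the ring-like scanning structure of a lower-resolution sensor, which is the point of the augmentation.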
Downstream Task Self-Supervised Learning for Object Recognition and Tracking
This dissertation addresses three limitations of deep learning methods in image- and video-understanding-based machine vision applications. Firstly, although deep convolutional neural networks (CNNs) are efficient for image recognition applications such as object detection and segmentation, they perform poorly under perspective distortions. In real-world applications, the camera perspective is a common problem that can be addressed only by annotating large amounts of data, which limits the applicability of deep learning models. Secondly, the typical approach to single-camera tracking is to use separate motion and appearance models, which are expensive in terms of computation and training-data requirements. Finally, conventional multi-camera video understanding techniques use supervised learning algorithms to determine temporal relationships among objects. In large-scale applications, these methods are also limited by the requirement for extensive manually annotated data and computational resources.

To address these limitations, we develop an uncertainty-aware self-supervised learning (SSL) technique that captures a model's instance- or semantic-segmentation uncertainty from overhead images and guides the model to learn the impact of the new perspective on object appearance. The test-time data-augmentation-based pseudo-label refinement technique continuously trains a model until convergence on new-perspective images. The proposed method can be applied for both self-supervision and semi-supervision, thus increasing the effectiveness of a deep pre-trained model in new domains. Extensive experiments demonstrate the effectiveness of the SSL technique in both object detection and semantic segmentation. In video understanding applications, we introduce simultaneous segmentation and tracking as an unsupervised spatio-temporal latent feature clustering problem.
The jointly learned multi-task features leverage task-dependent uncertainty to generate discriminative features in multi-object videos. Experiments show that the proposed tracker outperforms several state-of-the-art supervised methods. Finally, we propose an unsupervised multi-camera tracklet association (MCTA) algorithm to track multiple objects in real time. MCTA leverages the self-supervised detector model for single-camera tracking and solves the multi-camera tracking problem using multiple pairwise camera associations modeled as a connected graph. The graph optimization method generates a global solution for partially or fully overlapping camera networks.
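The test-time-augmentation-based pseudo-label refinement mentioned above can be illustrated with a minimal sketch. Here `model`, the flip augmentation, and the confidence threshold are all assumptions for illustration, not the dissertation's actual pipeline:

```python
# Illustrative sketch (not the dissertation's implementation) of
# test-time-augmentation pseudo-label refinement: average a model's
# segmentation probabilities over augmented views, then keep only
# high-confidence pixels as pseudo labels for further training.
import numpy as np

def tta_pseudo_labels(model, image, threshold=0.9):
    # Forward passes on the image and a horizontally flipped copy.
    p = model(image)                          # (H, W, C) class probabilities
    p_flip = model(image[:, ::-1])[:, ::-1]   # undo the flip on the output
    probs = (p + p_flip) / 2.0                # ensemble over augmented views
    labels = probs.argmax(axis=-1)            # hard pseudo labels
    conf = probs.max(axis=-1)                 # per-pixel confidence
    labels[conf < threshold] = -1             # -1 marks ignored pixels
    return labels
```

Low-confidence pixels are masked out (here with -1) so that the model is fine-tuned only on pseudo labels the ensemble agrees on, which is the usual way such refinement loops avoid reinforcing their own mistakes.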