Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
We address the problem of semantic nighttime image segmentation and improve
the state of the art by adapting daytime models to nighttime without using
nighttime annotations. Moreover, we design a new evaluation framework to
address the substantial uncertainty of semantics in nighttime images. Our
central contributions are: 1) a curriculum framework to gradually adapt
semantic segmentation models from day to night through progressively darker
times of day, exploiting cross-time-of-day correspondences between daytime
images from a reference map and dark images to guide the label inference in the
dark domains; 2) a novel uncertainty-aware annotation and evaluation framework
and metric for semantic segmentation, including image regions beyond human
recognition capability in the evaluation in a principled fashion; 3) the Dark
Zurich dataset, comprising 2416 unlabeled nighttime and 2920 unlabeled twilight
images with correspondences to their daytime counterparts plus a set of 201
nighttime images with fine pixel-level annotations created with our protocol,
which serves as a first benchmark for our novel evaluation. Experiments show
that our map-guided curriculum adaptation significantly outperforms
state-of-the-art methods on nighttime sets both for standard metrics and our
uncertainty-aware metric. Furthermore, our uncertainty-aware evaluation reveals
that selective invalidation of predictions can improve results on data with
ambiguous content, such as our benchmark, and benefit safety-oriented
applications involving invalid inputs.
Comment: IEEE T-PAMI 202
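The curriculum adaptation described above can be sketched as progressive self-training through domains of increasing darkness. The sketch below is a minimal illustration under assumed shapes; the paper's map-guided label inference from daytime correspondences is only noted in a comment, and `model_predict` is a hypothetical stand-in for the segmentation network.

```python
import numpy as np

def pseudo_labels(probs, conf_thresh=0.9):
    """Confidence-thresholded pseudo-labels: pixels whose max class
    probability falls below the threshold are marked ignore (-1)."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < conf_thresh] = -1
    return labels

def curriculum_adapt(model_predict, domains):
    """Adapt through progressively darker domains (day -> twilight -> night).
    `model_predict(images)` returns per-pixel class probabilities; in the
    paper, labels in each dark domain are additionally guided by
    cross-time-of-day correspondences to a daytime reference map
    (omitted in this toy version)."""
    stage_labels = []
    for images in domains:  # ordered from lightest to darkest
        probs = model_predict(images)
        labels = pseudo_labels(probs)
        stage_labels.append(labels)
        # a real pipeline would fine-tune the model on (images, labels) here
    return stage_labels
```

The key design point is that each stage's pseudo-labels come from a model already adapted to the previous, slightly lighter domain, so the domain gap bridged per step stays small.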
How to Train Your Dragon: Tamed Warping Network for Semantic Video Segmentation
Real-time semantic segmentation on high-resolution videos is challenging due
to the strict requirements of speed. Recent approaches have utilized the
inter-frame continuity to reduce redundant computation by warping the feature
maps across adjacent frames, greatly speeding up the inference phase. However,
their accuracy drops significantly owing to the imprecise motion estimation and
error accumulation. In this paper, we propose to introduce a simple and
effective correction stage right after the warping stage to form a framework
named Tamed Warping Network (TWNet), aiming to improve the accuracy and
robustness of warping-based models. The experimental results on the Cityscapes
dataset show that with the correction, the accuracy (mIoU) significantly
increases from 67.3% to 71.6%, and the speed edges down from 65.5 FPS to 61.8
FPS. For non-rigid categories such as "human" and "object", the IoU
improvements exceed 18 percentage points.
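The warp-then-correct idea can be illustrated with a toy feature warp followed by a correction blend. This is a sketch under assumed conventions (backward warping, nearest-neighbour sampling); TWNet's actual correction module is learned, so the fixed blend here is purely illustrative.

```python
import numpy as np

def warp_features(feat, flow):
    """Backward-warp a feature map with per-pixel optical flow,
    using nearest-neighbour sampling (real systems use bilinear)."""
    h, w = feat.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 1].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - flow[..., 0].round().astype(int), 0, w - 1)
    return feat[src_y, src_x]

def corrected_features(warped, current_frame_feat, alpha=0.5):
    """Correction stage: blend warped features with features freshly
    computed from the current frame, suppressing accumulated warping
    error at a small extra cost (hence the modest FPS drop)."""
    return alpha * warped + (1 - alpha) * current_frame_feat
```

Non-rigid categories benefit most because their motion violates the smooth-flow assumption behind warping, so the correction stage has the most error to repair there.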
Deep Video Color Propagation
Traditional approaches for color propagation in videos rely on some form of
matching between consecutive video frames. Using appearance descriptors, colors
are then propagated both spatially and temporally. These methods, however, are
computationally expensive and do not take advantage of semantic information of
the scene. In this work we propose a deep learning framework for color
propagation that combines a local strategy, to propagate colors frame-by-frame
ensuring temporal stability, and a global strategy, using semantics for color
propagation within a longer range. Our evaluation shows the superiority of our
strategy over existing video and image color propagation methods as well as
neural photo-realistic style transfer approaches.
Comment: BMVC 201
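The local, frame-by-frame strategy can be sketched as matching each pixel of the current frame to the previous frame and copying its color. The toy version below matches raw grayscale intensities; the paper uses learned appearance descriptors and adds a global, semantics-driven strategy for longer-range propagation.

```python
import numpy as np

def propagate_colors_local(prev_gray, prev_color, cur_gray):
    """Local color propagation sketch: for each pixel in the current
    grayscale frame, copy the color of the best-matching pixel (by
    intensity distance) in the previous, already-colored frame."""
    flat_prev = prev_gray.ravel()
    flat_colors = prev_color.reshape(-1, prev_color.shape[-1])
    # index of the closest previous intensity for each current pixel
    idx = np.abs(cur_gray.ravel()[:, None] - flat_prev[None, :]).argmin(axis=1)
    return flat_colors[idx].reshape(*cur_gray.shape, prev_color.shape[-1])
```

Chaining such frame-to-frame copies is what keeps colors temporally stable, but also why errors accumulate over long ranges, motivating the global semantic branch.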
GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data
Semantic segmentation for autonomous driving should be robust against various
in-the-wild environments. Nighttime semantic segmentation is especially
challenging due to a lack of annotated nighttime images and a large domain gap
from daytime images with sufficient annotation. In this paper, we propose a
novel GPS-based training framework for nighttime semantic segmentation. Given
GPS-aligned pairs of daytime and nighttime images, we perform cross-domain
correspondence matching to obtain pixel-level pseudo supervision. Moreover, we
conduct flow estimation between daytime video frames and apply GPS-based
scaling to acquire another pixel-level pseudo supervision. Using these pseudo
supervisions with a confidence map, we train a nighttime semantic segmentation
network without any annotation from nighttime images. Experimental results
demonstrate the effectiveness of the proposed method on several nighttime
semantic segmentation datasets. Our source code is available at
https://github.com/jimmy9704/GPS-GLASS.
Comment: ICCVW 202
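Training on pseudo supervision weighted by a confidence map can be sketched as a confidence-weighted negative log-likelihood. The shapes and the exact weighting below are assumptions for illustration; GPS-GLASS's actual loss may differ.

```python
import numpy as np

def confidence_weighted_nll(probs, pseudo_labels, confidence):
    """Confidence-weighted NLL over pixel-level pseudo-labels.
    Pixels where the cross-domain correspondence (or flow-based)
    supervision is unreliable get low confidence and contribute
    little to the loss."""
    h, w, _ = probs.shape
    eps = 1e-8
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], pseudo_labels]
    nll = -np.log(picked + eps)
    return float((confidence * nll).sum() / (confidence.sum() + eps))
```

Down-weighting unreliable pixels is what lets the two noisy pseudo-supervision sources (correspondence matching and GPS-scaled flow) be combined without nighttime annotations.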
Depth-Assisted Semantic Segmentation, Image Enhancement and Parametric Modeling
This dissertation addresses the problem of employing 3D depth information to solve a number of traditionally challenging computer vision/graphics problems. Humans have the ability to perceive depth in the 3D world, which enables them to reconstruct layouts, recognize objects, and understand the geometric space and semantic meanings of the visual world. It is therefore significant to explore how 3D depth information can be utilized by computer vision systems to mimic these human abilities. This dissertation aims at employing 3D depth information to solve vision/graphics problems in the following aspects: scene understanding, image enhancement, and 3D reconstruction and modeling.
In addressing the scene understanding problem, we present a framework for semantic segmentation and object recognition on urban video sequences using only dense depth maps recovered from the video. Five view-independent 3D features that vary with object class are extracted from the dense depth maps and used for segmenting and recognizing different object classes in street scene images. We demonstrate that a scene parsing algorithm using only dense 3D depth information outperforms approaches based on sparse 3D or 2D appearance features.
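Per-pixel features computed from a dense depth map can be sketched as follows. The dissertation's five view-independent 3D features are richer; the two below (depth-gradient magnitude and normalized relative depth) are simple stand-ins for illustration.

```python
import numpy as np

def depth_features(depth):
    """Two simple per-pixel features from a dense depth map:
    - gradient magnitude, separating flat surfaces from depth
      discontinuities and slanted structure;
    - normalized relative depth, a crude cue for near vs. far."""
    gy, gx = np.gradient(depth)
    grad_mag = np.hypot(gx, gy)
    norm_depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return np.stack([grad_mag, norm_depth], axis=-1)
```

Features of this kind are view-independent in the sense that they derive from scene geometry rather than appearance, which is what makes them usable without any 2D texture or color cues.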
In addressing the image enhancement problem, we present a framework to overcome the imperfections of personal photographs of tourist sites using the rich information provided by large-scale internet photo collections (IPCs). By augmenting personal 2D images with 3D information reconstructed from IPCs, we address a number of traditionally challenging image enhancement tasks and achieve high-quality results using simple and robust algorithms.
In addressing the 3D reconstruction and modeling problem, we focus on parametric modeling of flower petals, the most distinctive part of a plant. Their complex structure, severe occlusions, and wide variations make the reconstruction of 3D models a challenging task. We overcome these challenges by combining data-driven modeling techniques with domain knowledge from botany. Taking a 3D point cloud of an input flower scanned from a single view, each segmented petal is fitted with a scale-invariant morphable petal shape model, which is constructed from individually scanned 3D exemplar petals. Novel constraints based on botany studies are incorporated into the fitting process for realistically reconstructing occluded regions and maintaining correct 3D spatial relations.
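Fitting a linear morphable shape model to observed points can be sketched as regularized least squares. This is a bare-bones stand-in: the dissertation's petal model is scale-invariant and adds botany-based constraints, both omitted here; all names and shapes are assumptions.

```python
import numpy as np

def fit_morphable_model(mean_shape, basis, target, reg=1e-3):
    """Fit a linear morphable shape model: target ~ mean + basis @ coeffs,
    solved by ridge-regularized least squares over flattened point
    coordinates. `basis` has one column per deformation mode."""
    b = target.ravel() - mean_shape.ravel()
    A = basis  # (n_points * 3, n_modes)
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    fitted = mean_shape.ravel() + A @ coeffs
    return coeffs, fitted.reshape(mean_shape.shape)
```

Because the model constrains each petal to a low-dimensional shape space learned from exemplar scans, plausible geometry can be hallucinated even in occluded regions, which is where the botany-derived constraints enter the real fitting process.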
The main contribution of the dissertation is the intelligent use of 3D depth information to solve traditionally challenging vision/graphics problems. By developing advanced algorithms that operate either automatically or with minimal user interaction, this dissertation demonstrates that the 3D depth computed behind multiple images contains rich information about the visual world and can therefore be intelligently utilized to recognize/understand the semantic meanings of scenes, efficiently enhance and augment single 2D images, and reconstruct high-quality 3D models.