The Oxford Multimotion Dataset: Multiple SE(3) Motions with Ground Truth
Datasets advance research by posing challenging new problems and providing
standardized methods of algorithm comparison. High-quality datasets exist for
many important problems in robotics and computer vision, including egomotion
estimation and motion/scene segmentation, but not for techniques that estimate
every motion in a scene. Metric evaluation of these multimotion estimation
techniques requires datasets consisting of multiple, complex motions that also
contain ground truth for every moving body.
The Oxford Multimotion Dataset provides a number of multimotion estimation
problems of varying complexity. It includes both complex problems that
challenge existing algorithms as well as a number of simpler problems to
support development. These include observations from both static and dynamic
sensors, a varying number of moving bodies, and a variety of different 3D
motions. It also provides a number of experiments designed to isolate specific
challenges of the multimotion problem, including rotation about the optical
axis and occlusion.
In total, the Oxford Multimotion Dataset contains over 110 minutes of
multimotion data consisting of stereo and RGB-D camera images, IMU data, and
Vicon ground-truth trajectories. The dataset culminates in a complex toy car
segment representative of many challenging real-world scenarios. This paper
describes each experiment with a focus on its relevance to the multimotion
estimation problem.
Comment: 8 pages, 8 figures. Video available at
https://www.youtube.com/watch?v=zXaHEdiKxdA. Dataset available at
https://robotic-esp.com/datasets
Robust visual odometry using uncertainty models
In dense, urban environments, GPS by itself cannot be relied on to provide accurate positioning information. Signal reception issues (e.g. occlusion, multi-path effects) often prevent the GPS receiver from obtaining a positional lock, causing holes in the absolute positioning data. In order to keep assisting the driver, other sensors are required to track the vehicle motion during these periods of GPS disturbance. In this paper, we propose a novel method that uses a single on-board consumer-grade camera to estimate the relative vehicle motion. The method is based on the tracking of ground-plane features, taking into account the uncertainty on their backprojection as well as the uncertainty on the vehicle motion. A Hough-like parameter-space vote is employed to extract motion parameters from the uncertainty models. The method is easy to calibrate and designed to be robust to outliers and poor feature quality. Preliminary testing shows good accuracy and reliability, with a positional estimate within 2 metres over a 400-metre travelled distance. The effects of inaccurate calibration are examined using artificial datasets, suggesting a self-calibrating system may be possible in future work.
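The Hough-like vote described in this abstract can be illustrated with a minimal sketch. The version below is a hypothetical simplification, not the paper's method: it assumes a planar (x, y, yaw) motion model for ground-plane features and casts uniform votes in a discretized parameter space, whereas the paper weights votes by the backprojection and motion uncertainty models. All names and parameters are assumptions.

```python
import numpy as np

def hough_vote_planar_motion(pts_a, pts_b, thetas, t_range, t_bins):
    """Recover the planar rigid motion p_b = R(theta) @ p_a + t from
    feature correspondences by voting in a discretized (theta, tx, ty)
    accumulator. Outliers scatter their votes, so the peak cell is
    dominated by the consistent inlier motion."""
    acc = np.zeros((len(thetas), t_bins, t_bins), dtype=int)
    t_lo, t_hi = t_range
    cell = (t_hi - t_lo) / t_bins
    for i, th in enumerate(thetas):
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        # Each correspondence implies one translation for this rotation.
        t = pts_b - pts_a @ R.T
        idx = np.floor((t - t_lo) / cell).astype(int)
        ok = np.all((idx >= 0) & (idx < t_bins), axis=1)
        for ix, iy in idx[ok]:
            acc[i, ix, iy] += 1
    # The accumulator peak gives the motion estimate.
    i, ix, iy = np.unravel_index(np.argmax(acc), acc.shape)
    t_est = t_lo + (np.array([ix, iy]) + 0.5) * cell
    return thetas[i], t_est
```

Because every correspondence votes independently, a handful of bad feature tracks merely add diffuse noise to the accumulator rather than corrupting the estimate, which is the robustness property the abstract highlights.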
WPO-Net: Windowed Pose Optimization Network for Monocular Visual Odometry Estimation.
Visual odometry is the process of estimating the incremental localization of a camera in 3-dimensional space, e.g. for autonomous driving. Recent learning-based methods do not require camera calibration and are robust to external noise. In this work, a new method that does not require camera calibration, called the "windowed pose optimization network" (WPO-Net), is proposed to estimate the 6-degrees-of-freedom pose of a monocular camera. The proposed architecture follows supervised learning-based methods, with a feature encoder and a pose regressor that takes multiple consecutive stacks of two grayscale images at each step for training and enforces composite pose constraints. The KITTI dataset is used to evaluate the performance of the proposed method, which yielded a rotational error of 3.12 deg/100 m with a training time of 41.32 ms and an inference time of 7.87 ms per step. Experiments demonstrate performance competitive with other state-of-the-art related works, showing the novelty of the proposed technique.
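The composite pose constraint mentioned in this abstract presumably requires that composing the short-range relative poses predicted within a window agrees with the directly predicted long-range pose (e.g. T_02 ≈ T_01 · T_12). The sketch below illustrates that consistency check on 4x4 homogeneous transforms; it is an assumed reading of the constraint, not the network's actual loss, and all function names are hypothetical.

```python
import numpy as np

def se3(rotvec, t):
    """Build a 4x4 homogeneous transform from an axis-angle rotation
    (Rodrigues' formula) and a translation vector."""
    theta = np.linalg.norm(rotvec)
    T = np.eye(4)
    if theta > 1e-12:
        k = np.asarray(rotvec) / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        T[:3, :3] = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T[:3, 3] = t
    return T

def composite_pose_loss(T_01, T_12, T_02):
    """Penalize disagreement between the composed short-range poses
    (T_01 @ T_12) and the directly predicted window pose T_02.
    Zero when the predictions are mutually consistent."""
    err = np.linalg.inv(T_02) @ (T_01 @ T_12)
    return np.linalg.norm(err - np.eye(4))
```

In a learned pipeline such a term would be one component of the training objective, encouraging the pose regressor's per-frame and per-window outputs to stay geometrically consistent.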