Sports Camera Calibration via Synthetic Data
Calibrating sports cameras is important for autonomous broadcasting and
sports analysis. Here we propose a highly automated method for calibrating
sports cameras from a single image using synthetic data. First, we develop a
novel camera pose engine. The pose engine has only three significant
free parameters, so it can efficiently generate a large number of camera poses and
the corresponding edge (i.e., field-marking) images. Then, we learn compact deep
features via a siamese network from paired edge images and camera poses and build
a feature-pose database. After that, we use a novel two-GAN (generative
adversarial network) model to detect field markings in real images. Finally, we
query an initial camera pose from the feature-pose database and refine the camera
pose using truncated distance images. We evaluate our method on both synthetic
and real data. Our method not only demonstrates robustness on the synthetic
data but also achieves state-of-the-art accuracy on a standard soccer
dataset and very high performance on a volleyball dataset.
Comment: 6 + 1 page
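The retrieval step described above — querying an initial camera pose from a feature-pose database by nearest-neighbor search in the learned feature space — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the feature dimension, pose parameterization, and the plain L2 nearest-neighbor lookup are assumptions.

```python
import numpy as np

def build_feature_pose_database(features, poses):
    """Pair each edge-image feature with its camera pose.

    features: (N, D) array of compact deep features (e.g. produced by
    a siamese network); poses: (N, P) array of camera-pose parameters.
    """
    return (np.asarray(features, dtype=np.float64),
            np.asarray(poses, dtype=np.float64))

def query_initial_pose(db, query_feature):
    """Return the pose whose stored feature is nearest (L2) to the query."""
    feats, poses = db
    dists = np.linalg.norm(feats - np.asarray(query_feature), axis=1)
    return poses[int(np.argmin(dists))]
```

In the paper this initial estimate is only a starting point; it is subsequently refined against truncated distance images of the detected field markings.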
Estimating Metric Poses of Dynamic Objects Using Monocular Visual-Inertial Fusion
A monocular 3D object tracking system generally produces only up-to-scale pose
estimates without prior knowledge of the tracked object. In this
paper, we propose a novel idea to recover the metric scale of an arbitrary
dynamic object by optimizing the trajectory of the object in the world frame,
without motion assumptions. By introducing an additional constraint in the time
domain, our monocular visual-inertial tracking system can obtain continuous
six-degree-of-freedom (6-DoF) pose estimates without scale ambiguity. Our method
requires neither a fixed multi-camera rig nor a depth sensor for scale
observability; instead, the IMU inside the monocular sensing suite provides
scale information for both the camera itself and the tracked object. We build the
proposed system on top of our monocular visual-inertial system (VINS) to obtain
accurate state estimation of the monocular camera in the world frame. The whole
system consists of a 2D object tracker, an object region-based visual bundle
adjustment (BA), VINS, and a correlation analysis-based metric scale estimator.
Experimental comparisons with ground truth demonstrate the accuracy of our 3D
tracking, while a mobile augmented reality (AR) demo shows
the feasibility of potential applications.
Comment: IROS 201
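To make the scale-recovery idea concrete: once an up-to-scale object trajectory (from monocular BA) and a metric reference trajectory (made observable by the IMU) are available over the same time window, a single scale factor can be fit in closed form. The sketch below uses a simple least-squares alignment over frame-to-frame displacements; it is a hedged illustration of the general idea, not the paper's correlation-analysis estimator.

```python
import numpy as np

def estimate_metric_scale(up_to_scale_traj, metric_traj):
    """Fit the scalar s minimizing ||s * dx - dy||^2, where dx and dy
    are frame-to-frame displacements of the up-to-scale and metric
    trajectories (both (T, 3) arrays sampled at the same timestamps).
    """
    x = np.diff(np.asarray(up_to_scale_traj, dtype=float), axis=0).ravel()
    y = np.diff(np.asarray(metric_traj, dtype=float), axis=0).ravel()
    # Closed-form 1-D least squares: s = <x, y> / <x, x>.
    return float(np.dot(x, y) / np.dot(x, x))
```

Using displacements rather than absolute positions removes the unknown translational offset between the two trajectories, so only the scale remains to be estimated.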
Relocalization, Global Optimization and Map Merging for Monocular Visual-Inertial SLAM
The monocular visual-inertial system (VINS), which consists of one camera and
one low-cost inertial measurement unit (IMU), is a popular approach to achieving
accurate 6-DOF state estimation. However, such locally accurate visual-inertial
odometry is prone to drift and cannot provide absolute pose estimation.
Leveraging historical information to relocalize and correct drift has become a hot
topic. In this paper, we propose a monocular visual-inertial SLAM system that
can relocalize the camera and obtain the absolute pose in a previously built map. A
4-DOF pose graph optimization is then performed to correct drift and achieve global
consistency. The 4 DOF comprise x, y, z, and the yaw angle, which are the
directions in which drift actually accumulates in a visual-inertial system.
Furthermore, the proposed system can reuse a map by saving and loading it in an
efficient way. The current map and a previous map can be merged through the
global pose graph optimization. We validate the accuracy of our system on public
datasets and compare against other state-of-the-art algorithms. We also evaluate
the map-merging ability of our system in a large-scale outdoor environment. The
source code of the map-reuse functionality is integrated into our public code, VINS-Mono.
Comment: 8 page
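The 4-DOF pose graph above optimizes only x, y, z, and yaw because roll and pitch are directly observable from the IMU's gravity direction and therefore do not drift. A single edge residual for such a graph can be sketched as below; the exact parameterization (translation expressed in the yaw-only frame of the first node) is an assumption for illustration, not necessarily the system's formulation.

```python
import math

def residual_4dof(pose_i, pose_j, meas):
    """4-DOF pose-graph edge residual.

    pose_* = (x, y, z, yaw) in the world frame; meas = the measured
    relative (dx, dy, dz, dyaw) from node i to node j, with the
    translation expressed in the yaw-only frame of node i.
    Returns a 4-vector that is zero when the edge is satisfied.
    """
    xi, yi, zi, yawi = pose_i
    xj, yj, zj, yawj = pose_j
    # Rotate the world-frame displacement into node i's yaw frame.
    dxw, dyw, dzw = xj - xi, yj - yi, zj - zi
    c, s = math.cos(-yawi), math.sin(-yawi)
    dx = c * dxw - s * dyw
    dy = s * dxw + c * dyw
    # Wrap the yaw error into [-pi, pi).
    dyaw = (yawj - yawi - meas[3] + math.pi) % (2.0 * math.pi) - math.pi
    return [dx - meas[0], dy - meas[1], dzw - meas[2], dyaw]
```

A pose-graph solver would sum squared residuals of this form over all odometry and loop-closure edges and minimize over the node poses; loop-closure edges between the current and a loaded map are what stitch the two maps together.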