Online camera-to-ground calibration estimates, in real time, the time-varying
(non-rigid) transformation between the camera and the road surface.
Existing solutions rely on static calibration and therefore suffer from
environmental variations such as tire pressure changes, vehicle loading
variations, and road surface diversity. Other online solutions exploit road
elements or photometric consistency between overlapping views across images,
which requires continuous detection of specific targets on the road or
assistance from multiple cameras to facilitate calibration. In our work, we
propose an online monocular camera-to-ground calibration solution that does not
require any specific targets while driving. We adopt a coarse-to-fine approach
to ground feature extraction aided by wheel odometry, and estimate the
camera-to-ground calibration parameters through sliding-window-based factor
graph optimization. Because the camera-to-ground transformation varies while
driving, we provide metrics to quantify calibration performance and stopping
criteria to report/broadcast satisfactory
calibration results. Extensive experiments using real-world data demonstrate
that our algorithm is effective and outperforms state-of-the-art techniques.
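To make the core estimation idea concrete, the sketch below shows a much-simplified stand-in for the pipeline: fit a ground plane to 3D ground features expressed in the camera frame over a sliding window of recent frames, and read off the camera height from the fit. The paper's actual method uses wheel-odometry-aided feature extraction and factor-graph optimization; the plane-fitting routine, the `SlidingWindowCalibrator` class, and the synthetic data here are illustrative assumptions, not the authors' implementation.

```python
# Minimal illustrative sketch (NOT the paper's implementation): estimate the
# camera height and ground-plane normal from ground points in the camera
# frame, refitting over a sliding window of recent observations. Plain
# least-squares plane fitting stands in for factor-graph optimization.
from collections import deque
import numpy as np

def fit_ground_plane(points):
    """Fit n . p = h to Nx3 points; return unit normal n and height h."""
    centroid = points.mean(axis=0)
    # Plane normal = direction of least variance (smallest singular vector).
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    h = float(n @ centroid)
    if h < 0:                      # orient normal so camera height is positive
        n, h = -n, -h
    return n, h

class SlidingWindowCalibrator:
    """Keep the last `window` batches of ground points and refit the plane."""
    def __init__(self, window=5):
        self.batches = deque(maxlen=window)

    def update(self, points):
        self.batches.append(points)
        return fit_ground_plane(np.vstack(self.batches))

# Synthetic demo: camera 1.5 m above a flat road with a slight pitch.
rng = np.random.default_rng(0)
true_h = 1.5
pitch = np.deg2rad(2.0)
# Camera convention assumed: x right, y down, z forward.
true_n = np.array([0.0, np.cos(pitch), -np.sin(pitch)])
u = np.array([1.0, 0.0, 0.0])                  # in-plane basis vectors
v = np.cross(true_n, u)
calib = SlidingWindowCalibrator(window=5)
for _ in range(8):                             # 8 "frames" of ground features
    ab = rng.uniform(-5.0, 5.0, size=(100, 2))
    pts = ab[:, :1] * u + ab[:, 1:] * v + true_h * true_n
    pts += rng.normal(scale=0.01, size=pts.shape)  # ~1 cm feature noise
    n_est, h_est = calib.update(pts)

print(round(h_est, 2))    # ≈ 1.5 (recovered camera height in metres)
```

A real system would additionally track pitch/roll from the estimated normal and monitor the fit residuals, which is the role the paper's calibration-quality metrics and stopping criteria play.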