
Automatic extrinsic calibration of camera networks based on pedestrians

Abstract

Extrinsic camera calibration is essential for many computer vision tasks in a camera network. Researchers usually place calibration objects in the scene to calibrate the cameras. However, when cameras are installed in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate, and fully automatic extrinsic calibration framework for camera networks with partially overlapping views, based on the analysis of pedestrian tracks without any other calibration objects. Compared to the state of the art, the new method is fully automatic and robust. Our method detects human poses in the camera images and models each walking person as a vertical stick. We propose a brute-force method to determine pedestrian correspondences across multiple camera images. These correspondences, together with the estimated 3D locations of the pedestrians' heads and feet, are then used to compute the camera extrinsic matrices; the relative extrinsic parameters connecting the coordinate systems of camera pairs are computed automatically. We verified the robustness of the method in different camera setups, for both a single pedestrian and multiple walking people. The results show that the proposed method achieves a triangulation error of a few centimeters. Typically, it requires 40 seconds of data collected from walking people to reach this accuracy in controlled environments, and a few minutes in uncontrolled environments. The proposed method performs well in various situations such as multiple persons, occlusions, or even real street intersections.
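The core pairwise step, recovering the relative geometry between two cameras from matched head and feet image points, can be sketched with the classic normalized eight-point algorithm on synthetic data. This is a minimal illustration, not the paper's implementation: the walking path, camera pose, and all numeric values below are assumptions made up for the example.

```python
import numpy as np

# --- Hypothetical synthetic data: a pedestrian walking on a curved path ---
# Head/feet 3D points in camera-1 coordinates (y axis points down,
# ground plane at y = 0.9, assumed pedestrian height 1.7 m).
steps = np.arange(8)
feet = np.stack([0.6 * steps - 2.0 + 0.3 * np.sin(steps),
                 np.full(8, 0.9),
                 5.0 + 0.4 * steps], axis=1)
head = feet + np.array([0.0, -1.7, 0.0])
pts1 = np.vstack([feet, head])                       # (16, 3) points in cam-1 frame

# Assumed ground-truth relative pose of camera 2 (rotation about y, translation).
theta = np.deg2rad(12.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([-1.5, 0.1, 0.4])
pts2 = pts1 @ R.T + t                                # same points in cam-2 frame

# Normalized image coordinates (intrinsics K^-1 already applied: x = X/Z, y = Y/Z).
x1 = pts1 / pts1[:, 2:3]
x2 = pts2 / pts2[:, 2:3]

def eight_point_essential(x1, x2):
    """Linear eight-point estimate of E from normalized correspondences."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)            # null vector of the constraint matrix
    U, S, Vt = np.linalg.svd(E)         # project onto the essential manifold:
    s = (S[0] + S[1]) / 2.0             # two equal singular values, one zero
    return U @ np.diag([s, s, 0.0]) @ Vt

E = eight_point_essential(x1, x2)
E /= np.linalg.norm(E)
# Epipolar constraint x2^T E x1 = 0 should hold for all correspondences.
max_residual = np.max(np.abs(np.einsum("ni,ij,nj->n", x2, E, x1)))
print(f"max epipolar residual: {max_residual:.2e}")
```

In a real deployment the head/feet points would come from a pose detector rather than synthetic geometry, the estimate would be wrapped in a robust loop (e.g. RANSAC) to handle correspondence errors, and the relative rotation and translation would then be decomposed from `E`.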
