
    Automatic extrinsic calibration of camera networks based on pedestrians

    Extrinsic camera calibration is essential for any computer vision task in a camera network. Usually, researchers place calibration objects in the scene to calibrate the cameras. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. It is based on the analysis of pedestrian tracks, without other calibration objects. Compared to the state of the art, the new method is fully automatic and robust. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We propose a brute-force method to determine the pedestrian correspondences across multiple camera images. This information, along with the estimated 3D locations of the head and feet of the pedestrians, is then used to compute the camera extrinsic matrices, automatically yielding the relative extrinsic parameters that connect the coordinate systems of the cameras in a pairwise fashion. We verified the robustness of the method in different camera setups and for both a single pedestrian and multiple walking people. The results show that the proposed method achieves a triangulation error of a few centimeters. Typically, it requires 40 seconds of data from walking people to reach this accuracy in controlled environments, and a few minutes in uncontrolled environments. The proposed method performs well in various situations such as multi-person scenes, occlusions, or even real intersections on the street.
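    As a rough illustration of the pairwise step described above, the sketch below estimates the relative pose between two cameras from matched pedestrian head/feet image points and reports a reprojection error for the triangulated points. It is a minimal sketch using OpenCV's essential-matrix tools, assuming shared, known intrinsics `K` and pre-matched point arrays `pts1`, `pts2` (hypothetical names); the paper's own correspondence search and stick model are not reproduced here.

```python
# Hedged sketch: relative extrinsics of camera 2 w.r.t. camera 1 from
# matched head/feet points seen in both views. Assumes both cameras
# share the intrinsic matrix K and pts1, pts2 are Nx2 float arrays of
# corresponding pixel coordinates (the correspondence search itself,
# brute-force in the paper, is assumed done).
import numpy as np
import cv2

def relative_extrinsics(pts1, pts2, K):
    """Estimate rotation R and translation t (up to scale)."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

def reprojection_error(pts1, pts2, K, R, t):
    """Mean reprojection error (pixels) of the triangulated 3D points.
    Note: the paper reports 3D triangulation error in centimeters,
    which additionally needs a metric scale (e.g. pedestrian height)."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
    X /= X[3]
    proj = P1 @ X
    proj = (proj[:2] / proj[2]).T
    return np.linalg.norm(proj - pts1, axis=1).mean()
```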

    Automatic multi-camera extrinsic parameter calibration based on pedestrian torsors

    Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the top and the bottom of the pedestrians, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of the person when the bottom of the person is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, it requires less than one minute of observing walking people to reach this accuracy in controlled environments, and only a few minutes of data collection in uncontrolled environments. The proposed method performs well in various situations such as multi-person scenes, occlusions, or even real intersections on the street.
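    The vertical-stick model above makes one auxiliary computation easy to illustrate: if each person is an upright stick, the image lines through each top/bottom pair should all meet near the vertical vanishing point, which constrains a camera's orientation relative to the ground. The sketch below is a hypothetical computation under that assumption, not the paper's algorithm; `tops` and `bottoms` are assumed per-frame detections from a pose detector.

```python
# Hedged sketch: least-squares vertical vanishing point from the
# top/bottom points of walking pedestrians in a single camera view.
import numpy as np

def vertical_vanishing_point(tops, bottoms):
    """tops, bottoms: Nx2 pixel coordinates of stick endpoints."""
    h = np.hstack([tops, np.ones((len(tops), 1))])       # homogeneous
    f = np.hstack([bottoms, np.ones((len(bottoms), 1))])
    lines = np.cross(h, f)                # line through each top/bottom
    lines /= np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    # The common intersection v satisfies lines @ v = 0 (up to scale);
    # take the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(lines)
    v = Vt[-1]
    return v[:2] / v[2]   # may be near-infinite if the camera is level
```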

    Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

    We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic. Comment: International Conference on Robotics and Automation (ICRA), 2018. Video summary: http://youtu.be/ebIrBn_nc-
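    To make the run-time use of the predicted mask concrete, here is a minimal sketch of how an ephemerality map could gate sparse features before VO matching. The network producing `ephemerality` and the VO pipeline itself are assumed and not shown; the function name and the 0.5 threshold are hypothetical choices, not values from the paper.

```python
# Hedged sketch: keep only keypoints that land on likely-static pixels.
# `ephemerality` is an HxW array in [0, 1], where 1 means the pixel is
# likely a distractor (assumed output of a trained network, not shown).
import cv2

def static_keypoints(image, ephemerality, thresh=0.5):
    orb = cv2.ORB_create(nfeatures=2000)
    kps = orb.detect(image, None)
    kps = [k for k in kps
           if ephemerality[int(k.pt[1]), int(k.pt[0])] < thresh]
    kps, desc = orb.compute(image, kps)   # descriptors for matching
    return kps, desc
```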

    Continuous measurements of real-life bidirectional pedestrian flows on a wide walkway

    Employing partially overlapping overhead Kinect™ sensors and automatic pedestrian tracking algorithms, we recorded the crowd traffic in a rectilinear section of the main walkway of Eindhoven train station on a 24/7 basis. Besides giving access to the train platforms (it passes underneath the railways), the walkway plays an important connection role in the city. Several crowding scenarios occur during the day, including high- and low-density dynamics in uni- and bidirectional regimes. In this paper we discuss our recording technique and illustrate preliminary data analyses. Via fundamental-diagram-like representations we report pedestrian velocities and fluxes vs. pedestrian density. Considering the density range 0 - 1.1 ped/m², we find that at densities lower than 0.8 ped/m², pedestrians in unidirectional flows walk faster than in bidirectional regimes. Conversely, velocities and fluxes for balanced bidirectional flows are higher above 0.8 ped/m². Comment: 9 pages, 7 figures
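    A fundamental-diagram-like representation of such measurements can be obtained by binning per-frame observations by density, as in the sketch below. The bin edges simply mirror the 0 - 1.1 ped/m² range reported above; the per-frame `density` and `speed` inputs are assumed to come from the tracking pipeline and the names are illustrative, not the authors' processing code.

```python
# Hedged sketch: mean walking speed and specific flow per density bin.
# density: ped/m^2 per frame; speed: mean walking speed (m/s) per frame.
import numpy as np

def fundamental_diagram(density, speed, bins=np.arange(0.0, 1.2, 0.1)):
    idx = np.digitize(density, bins) - 1          # bin index per frame
    centers = bins[:-1] + np.diff(bins) / 2
    mean_speed = np.array([speed[idx == i].mean() if np.any(idx == i)
                           else np.nan for i in range(len(centers))])
    flux = centers * mean_speed                   # specific flow J = rho * v
    return centers, mean_speed, flux
```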

    Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras

    Despite the fact that personal privacy has become a major concern, surveillance technology is now ubiquitous in modern society. This is mainly due to the increasing number of crimes as well as the need to provide a secure and safer environment. Recent research studies have confirmed the possibility of recognizing people by the way they walk, i.e. gait. The aim of this research study is to investigate the use of gait for people detection as well as identification across different cameras. We present a new approach for people tracking and identification between different non-intersecting, uncalibrated stationary cameras based on gait analysis. A vision-based markerless extraction method is deployed to derive gait kinematics as well as anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic analysis using gait. The experimental results affirmed the robustness of our approach in detecting walking people, as well as its capability to extract gait features across different camera viewpoints, achieving an identity recognition rate of 73.6% over 2,270 video sequences. Furthermore, the experimental results confirmed the potential of the proposed method for identity tracking in real surveillance systems, recognizing walking individuals across different views with an average recognition rate of 92.5% for cross-camera matching between two different non-overlapping views.
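    As a toy illustration of cross-camera identity matching on gait signatures, the sketch below performs z-normalized nearest-neighbour matching between a gallery from one view and probes from another. The feature vectors (gait kinematics plus anthropometric measurements) are assumed to be extracted already; this matching rule is a generic baseline, not necessarily the classifier used in the study.

```python
# Hedged sketch: nearest-neighbour identity matching across two views.
# gallery: dict id -> gait feature vector; probes: list of (probe_id, vector).
import numpy as np

def match_identities(gallery, probes):
    ids = list(gallery)
    G = np.stack([gallery[i] for i in ids])
    # z-normalise each feature so no single measurement dominates
    mu, sigma = G.mean(0), G.std(0) + 1e-9
    Gz = (G - mu) / sigma
    preds = {}
    for pid, f in probes:
        d = np.linalg.norm(Gz - (f - mu) / sigma, axis=1)
        preds[pid] = ids[int(np.argmin(d))]   # closest gallery identity
    return preds
```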