3 research outputs found

    On Calibration of a Low-Cost Time-of-Flight Camera

    Abstract. Time-of-flight (ToF) cameras are becoming more and more popular in computer vision. Many applications use the 3D information delivered by a ToF camera, so it is very important to know the camera's extrinsic and intrinsic parameters, as well as to obtain precise depth information. A straightforward way to calibrate a ToF camera is to apply a standard color-camera calibration procedure [12] to the amplitude images. However, depth information delivered by ToF cameras is known to contain complex bias due to several error sources [6]. Additionally, in many cases it is desirable to determine the pose of the ToF camera relative to the other sensors used. In this work, we propose a method for joint color and ToF camera calibration that determines the extrinsic and intrinsic camera parameters and corrects the depth bias. The calibration procedure requires a standard calibration board and around 20-30 images, as in the case of single color camera calibration. We evaluate the calibration quality in several experiments.
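The depth-bias correction mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's model (which handles complex, multi-source bias); it only assumes, for illustration, that the bias is well approximated by an affine relation true_depth ≈ a * measured_depth + b, fitted by least squares from calibration samples (measured ToF depths against known board distances):

```python
# Hypothetical affine depth-bias correction, fitted by closed-form
# least squares from (measured, true) depth pairs. Illustrative only;
# the paper's actual bias model is more complex.

def fit_affine_bias(measured, true):
    """Least-squares fit of true ≈ a * measured + b."""
    n = len(measured)
    sx = sum(measured)
    sy = sum(true)
    sxx = sum(m * m for m in measured)
    sxy = sum(m * t for m, t in zip(measured, true))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct_depth(depth, a, b):
    """Apply the fitted correction to a raw depth reading (meters)."""
    return a * depth + b

# Synthetic example: sensor over-reports depth by 2% plus 3 cm.
true_d = [0.5, 1.0, 1.5, 2.0, 2.5]
meas_d = [d * 1.02 + 0.03 for d in true_d]
a, b = fit_affine_bias(meas_d, true_d)
print(round(correct_depth(meas_d[1], a, b), 3))  # recovers 1.0
```

In practice, one correction of this kind would be fitted per working depth range (or per pixel), using the board poses already collected for the intrinsic calibration.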

    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth (RGB-D) cameras have become the primary sensors in most robotics systems, from service robotics to industrial applications. Typical consumer-grade RGB-D cameras come with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, or high-precision object recognition and localization). In this paper, we propose a human-friendly, reliable, and accurate calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model that unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides several important advantages over other state-of-the-art systems: it is general (i.e., well suited to different types of sensors), based on an easy and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support these claims.
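The extrinsic calibration the abstract refers to is, at its core, a rigid transform between the two sensor frames. As a minimal sketch (with made-up numbers, not values from the paper), once a rotation R and translation t from the depth frame to the color frame have been estimated, a 3D point measured by the depth camera is registered into the color camera's frame as p_color = R * p_depth + t:

```python
# Applying an estimated depth-to-color extrinsic (R, t) to a 3D point.
# R and t below are illustrative placeholders, not calibrated values.

def transform_point(R, t, p):
    """Return R * p + t, with R a 3x3 row-major rotation matrix."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Example: 90-degree rotation about z plus a 5 cm baseline along x.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [0.05, 0.0, 0.0]
p_depth = [1.0, 2.0, 3.0]
print([round(c, 6) for c in transform_point(R, t, p_depth)])
```

Calibration frameworks of the kind described estimate (R, t) jointly with the intrinsics; an inaccurate extrinsic shows up directly as color-depth misalignment when depth points are reprojected into the color image.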

    Hand pose recognition using a consumer depth camera

    Get PDF
    [No abstract available]