11 research outputs found

    Motion Estimation from Disparity Images

    Get PDF
    A new method for 3D rigid motion estimation from stereo is proposed in this paper. The appealing feature of this method is that it directly uses the disparity images obtained from stereo matching. We assume that the stereo rig has parallel cameras and show, in that case, the geometric and topological properties of the disparity images. Then we introduce a rigid transformation (called d-motion) that maps two disparity images of a rigidly moving object. We show how it is related to the Euclidean rigid motion and derive a motion estimation algorithm from this relationship. Experiments show that our approach is simple and more accurate than standard approaches.
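    The d-motion formulation itself is not spelled out in the abstract. As a point of comparison, the sketch below (Python; the calibration values f, b, cx, cy are hypothetical) shows the standard baseline the paper measures against: back-project disparities from a parallel stereo rig into Euclidean 3D points, then fit a rigid motion with the SVD-based Kabsch/Procrustes solver.

```python
# A minimal sketch of the standard baseline, NOT the paper's d-motion method:
# triangulate 3D points from disparity (parallel stereo rig), then fit a rigid
# motion by least squares. f, b, cx, cy below are hypothetical calibration values.
import numpy as np

def disparity_to_points(u, v, d, f=700.0, b=0.12, cx=320.0, cy=240.0):
    """Back-project pixels (u, v) with disparity d into Euclidean 3D points."""
    z = f * b / d                                 # depth from disparity
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=1)            # (N, 3)

def rigid_motion(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch / Procrustes)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                            # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t
```

    The baseline estimates the motion between two reconstructed point clouds; the paper's contribution, per the abstract, is to estimate the motion directly in disparity space via the d-motion transformation instead.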

    Multiview Stereo Object Reconstruction with a One-Line Search Method

    Get PDF

    Cross-calibration of Time-of-flight and Colour Cameras

    Get PDF
    Time-of-flight cameras provide depth information, which is complementary to the photometric appearance of the scene in ordinary images. It is desirable to merge the depth and colour information, in order to obtain a coherent scene representation. However, the individual cameras will have different viewpoints, resolutions and fields of view, which means that they must be mutually calibrated. This paper presents a geometric framework for this multi-view and multi-modal calibration problem. It is shown that three-dimensional projective transformations can be used to align depth and parallax-based representations of the scene, with or without Euclidean reconstruction. A new evaluation procedure is also developed; this allows the reprojection error to be decomposed into calibration and sensor-dependent components. The complete approach is demonstrated on a network of three time-of-flight and six colour cameras. The applications of such a system, to a range of automatic scene-interpretation problems, are discussed. Comment: 18 pages, 12 figures, 3 tables
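    As a concrete illustration of why the mutual calibration matters, the sketch below (not the paper's method; the intrinsic matrices K_tof, K_rgb and the relative pose R, t are assumed to be already known) maps a single time-of-flight depth measurement into pixel coordinates of a colour camera.

```python
# A minimal sketch, assuming the cross-calibration (K_tof, K_rgb, R, t) is given:
# back-project a ToF pixel to 3D, move it into the colour-camera frame, and
# project it with the colour intrinsics. This is not the paper's pipeline.
import numpy as np

def tof_pixel_to_rgb(u, v, depth, K_tof, K_rgb, R, t):
    """Return the colour-image pixel corresponding to a ToF measurement."""
    ray = np.linalg.inv(K_tof) @ np.array([u, v, 1.0])
    X_tof = depth * ray / ray[2]          # 3D point in the ToF frame (Z = depth)
    X_rgb = R @ X_tof + t                 # same point in the colour-camera frame
    x = K_rgb @ X_rgb                     # perspective projection
    return x[:2] / x[2]                   # pixel coordinates in the colour image
```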

    The 3D Line Motion Matrix and Alignment of Line Reconstructions

    Full text link

    Visual Servoing of Robot Manipulators Part I : Projective Kinematics

    Get PDF
    Visual servoing of robot manipulators is a key technique in which video images, i.e. the visual appearance of an object, are used to control the end-effector. Most such methods use robots and cameras that have been calibrated. In this report we suggest a non-metric formulation of visual servoing that works with an uncalibrated stereo rig, i.e. in the projective space associated with it. In consequence, the robot kinematics and the robot Jacobian have to be defined projectively as well. Elementary motions of robot joints are pure rotations and pure translations. Rather than representing them by Euclidean transformations, the uncalibrated stereo rig suggests representing them by the corresponding projective transformations (homographies). They are introduced as projective rotations and projective translations, and their algebraic properties and special parameterizations are thoroughly investigated. This gives rise to the definition of a projective kinematic map and a projective Jacobian matrix. Unlike the classical robot Jacobian, which relates joint velocities to end-effector velocities, we establish a direct relationship between joint velocities and image velocities, i.e. the velocities of the image points corresponding to end-effector features. Finally, we provide a practical method to establish the projective kinematics and describe preliminary simulated experiments of stereo-based visual servoing using the non-metric model. An in-depth analysis of projective control will be the topic of a forthcoming paper.
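    For reference, the classical calibrated relation that the report re-derives in projective terms links image-feature velocities to joint velocities through a composite Jacobian, together with the usual proportional control law. The gain λ and the pseudo-inverse notation below follow standard image-based visual servoing conventions and are not necessarily the report's.

```latex
% Standard image-based visual servoing relations (not the report's projective
% formulation): s are image features, q the joint variables, s* the target.
\[
  \dot{\mathbf{s}} \;=\; \mathbf{J}_{\mathrm{img}}(\mathbf{q})\,\dot{\mathbf{q}},
  \qquad
  \dot{\mathbf{q}} \;=\; -\,\lambda\,\mathbf{J}_{\mathrm{img}}^{+}(\mathbf{q})\,
  \bigl(\mathbf{s}-\mathbf{s}^{*}\bigr).
\]
```

    The report's point is that the composite Jacobian can be built directly from projective quantities measured by the uncalibrated stereo rig, without recovering Euclidean kinematics first.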

    Finding the Collineation Between two Projective Reconstructions

    Get PDF
    The problem of finding the collineation between two 3-D projective reconstructions has been proved to be useful for a variety of tasks such as calibration of a stereo rig and 3-D affine and/or Euclidean reconstruction. Moreover such a collineation may well be viewed as a point transfer method between two image pairs with applications to visually guided robot control. In spite of this potential, methods for properly estimating such a projective transformation have received little attention in the past. In this paper we describe linear, non-linear and robust methods for estimating this transformation. We test the numerical stability of these methods with respect to image noise and to the number of matched points. Finally we briefly describe three applications: stereo image transfer, Euclidean reconstruction, and self calibration of a stereoscopic camera pair.
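    A minimal sketch of the linear estimator mentioned in the abstract is given below (Python; the normalization, non-linear refinement, and robust variants described in the paper are omitted): stack the proportionality constraints X2 ~ H X1 for at least five point correspondences and take the smallest right singular vector.

```python
# A minimal DLT-style sketch of a linear collineation estimate between two
# projective reconstructions: find the 4x4 matrix H with X2 ~ H X1 (equality
# up to scale) from >= 5 homogeneous point correspondences. Clean matches are
# assumed; the paper's robust and non-linear refinements are not reproduced.
import numpy as np
from itertools import combinations

def estimate_collineation(X1, X2):
    """X1, X2: (N, 4) homogeneous 3D points, row i of X2 matching row i of X1."""
    rows = []
    for x1, x2 in zip(X1, X2):
        # Proportionality x2 ~ H x1 gives, for each coordinate pair (j, k):
        # x2[j] * (H x1)[k] - x2[k] * (H x1)[j] = 0, linear in the entries of H.
        for j, k in combinations(range(4), 2):
            row = np.zeros(16)                    # H flattened row-major
            row[4 * k: 4 * k + 4] = x2[j] * x1
            row[4 * j: 4 * j + 4] -= x2[k] * x1
            rows.append(row)
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(4, 4)                   # H, defined up to scale
```

    In practice the robust variants matter because reconstructed correspondences contain outliers; this sketch assumes outlier-free matches.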

    Finding the collineation between two projective reconstructions

    No full text
    Theme 3 - Human-machine interaction, images, data, knowledge. Projet Movi. Available from INIST (FR), Document Supply Service, under shelf-number: 14802 E, issue: a.1998 n.3468 / INIST-CNRS - Institut de l'Information Scientifique et Technique. SIGLE. FR. France.

    Finding the collineation between two projective reconstructions

    Get PDF
    The problem of finding the collineation between two 3D projective reconstructions has been proved to be useful for a variety of tasks such as calibration of a stereo rig and 3D affine and/or Euclidean reconstruction. Moreover, such a collineation may well be viewed as a point transfer method between two image pairs with applications to visually guided robot control. Despite this potential, methods for properly estimating such a projective transformation have received little attention in the past. In this paper we describe linear, nonlinear, and robust methods for estimating this transformation. We test the numerical stability of these methods with respect to image noise, to the number of matched points, and as a function of the number of outliers. Finally, we devise a specialized technique for the case where 3D Euclidean coordinates are provided for a number of control points. © 1999 Academic Press

    Finding the Collineation Between two Projective Reconstructions

    No full text
    The problem of finding the collineation between two 3-D projective reconstructions has been proved to be useful for a variety of tasks such as calibration of a stereo rig and 3-D affine and/or Euclidean reconstruction. Moreover such a collineation may well be viewed as a point transfer method between two image pairs with applications to visually guided robot control. In spite of this potential, methods for properly estimating such a projective transformation have received little attention in the past. In this paper we describe linear, non-linear and robust methods for estimating this transformation. We test the numerical stability of these methods with respect to image noise, to the number of matched points, and as a function of the number of outliers. Finally, we devise a specialized technique for the case where 3-D Euclidean coordinates are provided for a number of control points.

    Finding the Collineation Between two Projective Reconstructions

    No full text
    The problem of finding the collineation between two 3-D projective reconstructions has been proved to be useful for a variety of tasks such as calibration of a stereo rig and 3-D affine and/or Euclidean reconstruction. Moreover such a collineation may well be viewed as a point transfer method between two image pairs with applications to visually guided robot control. In spite of this potential, methods for properly estimating such a projective transformation have received little attention in the past. In this paper we describe linear, non-linear and robust methods for estimating this transformation. We test the numerical stability of these methods with respect to image noise and to the number of matched points. Finally we briefly describe three applications: stereo image transfer, Euclidean reconstruction, and self calibration of a stereoscopic camera pair.