
    Complete initial solutions for iterative pose estimation from planar objects

    Camera pose estimation from the image of a planar object has important applications in photogrammetry and computer vision. In this paper, an efficient approach for finding initial solutions for iterative camera pose estimation using coplanar points is proposed. Starting from a homography, the proposed approach provides a least-squares solution for absolute orientation that has relatively high accuracy and can easily be refined into an optimal pose at a local minimum of the corresponding error function using a Gauss-Newton scheme or Lu's orthogonal iteration algorithm. To address the ambiguities that arise in pose estimation from planar objects, we propose a novel method for finding an initial approximation of the second pose, which differs from existing methods in its concise form and clear geometric interpretation. Thorough testing on synthetic data shows that, combined with currently employed iterative optimization algorithms, the two initial solutions proposed in this paper achieve the same accuracy and robustness as the best state-of-the-art pose estimation algorithms, with a significant decrease in computational cost. Real experiments are also conducted to demonstrate the method's performance.
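    The standard way to obtain an initial planar pose from a homography (the general technique the abstract builds on, not the paper's specific refinement) is to read the first two rotation columns and the translation directly off the scaled homography and then project onto the rotation group. A minimal sketch, assuming the homography maps plane points with Z=0 to normalized image coordinates:

    ```python
    import numpy as np

    def pose_from_homography(H):
        """Recover an initial camera pose (R, t) from a homography that maps
        points on the plane Z=0 to normalized image coordinates.
        Textbook decomposition: H ~ [r1 r2 t], with r3 = r1 x r2."""
        h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
        # Scale so the first two columns have (approximately) unit norm.
        lam = 2.0 / (np.linalg.norm(h1) + np.linalg.norm(h2))
        r1, r2, t = lam * h1, lam * h2, lam * h3
        R = np.column_stack([r1, r2, np.cross(r1, r2)])
        # Project R onto SO(3) with an SVD (nearest rotation in Frobenius norm).
        U, _, Vt = np.linalg.svd(R)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        return R, t

    # Synthetic check: build H = [r1 r2 t] from a known pose and recover it.
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.1, -0.2, 2.0])
    H = np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
    R_est, t_est = pose_from_homography(H)
    assert np.allclose(R_est, R_true, atol=1e-8)
    assert np.allclose(t_est, t_true, atol=1e-8)
    ```

    A solution of this form is what iterative schemes such as Gauss-Newton or Lu's orthogonal iteration would take as their starting point.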

    Extrinsic Calibration and Ego-Motion Estimation for Mobile Multi-Sensor Systems

    Autonomous robots and vehicles are often equipped with multiple sensors to perform vital tasks such as localization and mapping. A joint system of sensors with different sensing modalities can often provide better localization or mapping results than each individual sensor alone, in terms of accuracy or completeness. However, to enable this improved performance, two important challenges have to be addressed when dealing with multi-sensor systems. First, how can the spatial relationship between the individual sensors on the robot be accurately determined? This vital task is known as extrinsic calibration; without this calibration information, measurements from different sensors cannot be fused. Second, how can data from multiple sensors be combined to correct for the deficiencies of each sensor and thus provide better estimates? This is another important task, known as data fusion. The core of this thesis is to answer these two questions. In the first part of the thesis, we cover aspects related to improving extrinsic calibration accuracy; in the second part, we present novel data fusion algorithms designed to address the ego-motion estimation problem using data from a laser scanner and a monocular camera. In the extrinsic calibration part, we contribute by revealing and quantifying the relative accuracies of three common types of calibration methods, offering insight into choosing the best calibration method when multiple options are available. Following that, we propose an optimization approach for solving common motion-based calibration problems. By exploiting the Gauss-Helmert model, our approach is more accurate and robust than the classical least-squares model. In the data fusion part, we focus on camera-laser data fusion and contribute two new ego-motion estimation algorithms that combine complementary information from a laser scanner and a monocular camera.
The first algorithm uses camera image information to guide laser scan-matching. It provides accurate motion estimates and works in general conditions, requiring neither a field-of-view overlap between the camera and laser scanner nor an initial guess of the motion parameters. The second algorithm combines the camera and laser scanner information directly, assuming the field-of-view overlap between the sensors is substantial. By maximizing the use of information from both the sparse laser point cloud and the dense image, the second algorithm achieves state-of-the-art estimation accuracy. Experimental results confirm that both algorithms are excellent alternatives to state-of-the-art camera-laser ego-motion estimation algorithms.
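    Motion-based extrinsic calibration of the kind the thesis refines is usually formulated as the hand-eye equation AX = XB between paired sensor motions. As a rough illustration of the rotation part only (the textbook linear least-squares initialization, not the thesis's Gauss-Helmert method), the rotation axes of paired motions satisfy a_i = R_X b_i, so R_X can be fit with an SVD:

    ```python
    import numpy as np

    def handeye_rotation(axes_a, axes_b):
        """Rotation part of the hand-eye equation AX = XB: R_A R_X = R_X R_B
        implies the rotation axes of paired motions satisfy a_i = R_X b_i.
        Fit R_X to the axis pairs with an SVD (Kabsch) least-squares step."""
        A = np.asarray(axes_a)          # N x 3, motion axes seen by sensor A
        B = np.asarray(axes_b)          # N x 3, motion axes seen by sensor B
        M = A.T @ B                     # 3 x 3 correlation matrix
        U, _, Vt = np.linalg.svd(M)
        # Force a proper rotation (det = +1).
        return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

    # Synthetic check: recover a known extrinsic rotation from three axis pairs.
    ang = 0.7
    R_x = np.array([[np.cos(ang), 0.0, np.sin(ang)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(ang), 0.0, np.cos(ang)]])
    axes_b = np.eye(3)
    axes_a = (R_x @ axes_b.T).T
    assert np.allclose(handeye_rotation(axes_a, axes_b), R_x, atol=1e-9)
    ```

    A Gauss-Helmert formulation improves on this by modeling noise in both sets of observed motions rather than treating one side as error-free.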

    A Relative Rotation between Two Overlapping UAV's Images

    In this paper, we study the influence of varying baseline components on the accuracy of the relative rotation between two overlapping aerial images taken from a UAV flight. The case is relevant when mosaicking UAV aerial images by registering each individual image. Images geotagged by a navigation-grade GPS receiver on board record the camera position at the moment each picture is taken. However, the low-accuracy geographic coordinates encoded in the EXIF data are unreliable for depicting the baseline vector components between subsequent overlapping images. This research investigates the influence of these components on the stability of the rotation elements when the vector components are entered into a standard coplanarity condition equation to determine the relative rotation of the stereo images. Assuming a nadir-looking camera on board a UAV platform flying at constant height, the resulting vector directions are used to constrain the coplanarity equation. A detailed analysis of each variation is given. Our experiments on real datasets confirm that the relative rotation between two successive overlapping images is practically unaffected by the accuracy of the positioning method. Furthermore, the coplanarity constraint is invariant with respect to a translation along the baseline of the aerial stereo images.
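    The invariance claimed above follows directly from the form of the coplanarity condition: the baseline, the left-image ray, and the rotated right-image ray must span a plane, so scaling the baseline scales the residual but cannot move its zero. A minimal sketch of this generic textbook constraint (the symbols are not the paper's notation):

    ```python
    import numpy as np

    def coplanarity_residual(b, x1, x2, R):
        """Standard coplanarity condition for a stereo pair: the baseline b,
        the ray x1 in the left image, and the rotated right-image ray R @ x2
        must be coplanar, i.e. b . (x1 x (R @ x2)) = 0."""
        return float(b @ np.cross(x1, R @ x2))

    # Synthetic pair: camera 2 displaced by baseline b with relative rotation R.
    ang = 0.05
    R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                  [np.sin(ang),  np.cos(ang), 0.0],
                  [0.0, 0.0, 1.0]])
    b = np.array([1.0, 0.0, 0.0])        # baseline along the flight direction
    X = np.array([0.5, 0.3, 5.0])        # ground point in camera-1 coordinates
    x1 = X / X[2]
    X2 = R.T @ (X - b)                   # same point in camera-2 coordinates
    x2 = X2 / X2[2]
    assert abs(coplanarity_residual(b, x1, x2, R)) < 1e-12

    # Stretching the baseline (an erroneous GPS-derived length) still satisfies
    # the constraint: the relative rotation is insensitive to baseline scale.
    assert abs(coplanarity_residual(2.0 * b, x1, x2, R)) < 1e-12
    ```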

    Visual servo control Part I: basic approaches

    This article is the first of a two-part series on the topic of visual servo control: using computer vision data in the servo loop to control the motion of a robot. In the present article, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.
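    The image-based scheme mentioned above is classically written as v = -λ L⁺ e, where e is the image-space feature error and L stacks one interaction matrix per point feature. A minimal sketch of this standard formulation (generic code, not taken from the article):

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction matrix of a normalized image point (x, y) at depth Z,
        relating the feature velocity to the camera velocity (v, omega)."""
        return np.array([
            [-1.0/Z, 0.0, x/Z, x*y, -(1.0 + x*x), y],
            [0.0, -1.0/Z, y/Z, 1.0 + y*y, -x*y, -x],
        ])

    def ibvs_velocity(features, goals, depths, lam=0.5):
        """Classic IBVS law v = -lam * L^+ * e, stacking one interaction
        matrix per point feature (e is the image-space feature error)."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        e = (np.asarray(features) - np.asarray(goals)).ravel()
        return -lam * np.linalg.pinv(L) @ e

    # At the goal configuration the commanded camera velocity is zero.
    goals = [(0.1, 0.1), (-0.1, 0.1), (0.0, -0.1)]
    v = ibvs_velocity(goals, goals, depths=[1.0, 1.0, 1.0])
    assert np.allclose(v, 0.0)
    ```

    The stability issues the article discusses stem largely from the fact that the true depths Z in L are unknown and must be approximated.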

    Visual Servoing

    This chapter introduces visual servo control: using computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space, as well as extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.

    Mirror surface reconstruction under an uncalibrated camera

    This paper addresses the problem of mirror surface reconstruction, proposing a solution based on observing the reflections of a moving reference plane on the mirror surface. Unlike previous approaches, which require tedious work to calibrate the camera, our method can recover the camera intrinsics and extrinsics together with the mirror surface from reflections of the reference plane under at least three unknown distinct poses. Our previous work has demonstrated that the 3D poses of the reference plane can be registered in a common coordinate system using reflection correspondences established across images. This leads to a set of registered 3D lines formed from the reflection correspondences. Given these lines, we first derive an analytical solution to recover the camera projection matrix by estimating the line projection matrix. We then optimize the camera projection matrix by minimizing reprojection errors computed from a cross-ratio formulation. The mirror surface is finally reconstructed based on the optimized cross-ratio constraint. Experimental results on both synthetic and real data demonstrate the feasibility and accuracy of our method.
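    The cross-ratio underpinning the optimization above is the basic projective invariant of four collinear points: it survives any projective map, which is what makes it usable as a reprojection constraint. A minimal sketch of the invariant itself (generic, not the paper's formulation):

    ```python
    import numpy as np

    def cross_ratio(a, b, c, d):
        """Cross-ratio (a, b; c, d) of four collinear points given by their
        scalar parameters along a line."""
        return ((c - a) * (d - b)) / ((c - b) * (d - a))

    def homography_1d(t, M):
        """Apply a 1-D projective map t -> (m00*t + m01) / (m10*t + m11)."""
        return (M[0, 0] * t + M[0, 1]) / (M[1, 0] * t + M[1, 1])

    pts = [0.0, 1.0, 2.0, 4.0]
    M = np.array([[2.0, 1.0], [0.5, 3.0]])
    mapped = [homography_1d(t, M) for t in pts]
    # Projective invariance: the cross-ratio survives the 1-D homography.
    assert np.isclose(cross_ratio(*pts), cross_ratio(*mapped))
    ```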