
    IMU-camera Self-Calibration Using Planar Mirror Reflection

    In this paper, we first address the problem of estimating the transformation between an inertial measurement unit (IMU) and a calibrated camera, based on images of planar mirror reflections (IPMR) of arbitrary feature points with unknown positions. Assuming that only the reflections of the feature points are observable by the camera, the IMU-camera calibration parameters and the positions of the feature points in the camera frame are estimated within the Sigma-Point Kalman filter framework. In the next step, we consider the case of estimating varying camera intrinsic parameters using the static parameters estimated in the previous stage: those estimates serve as initial values in the state-space model of the system, which is then used to estimate the camera intrinsic parameters together with the remaining parameters. The proposed method does not rely on a fixed calibration pattern whose feature point positions are known relative to the navigation frame. Additionally, the motion of the camera, which is mounted on the IMU, is not restricted to be planar with respect to the mirror. Instead, the reflections of the feature points with unknown positions in the camera body frame are tracked over time. Simulation results show sub-centimeter and sub-degree accuracy for the IMU-camera translation and rotation parameters, respectively, as well as sub-millimeter and sub-pixel accuracy for the feature point positions and the camera intrinsic parameters.
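    For orientation, the sketch below shows a minimal predict/update cycle of a Sigma-Point (unscented) Kalman filter in Python with NumPy, the class of estimator named in the abstract. The process model `f` (IMU-driven state propagation) and measurement model `h` (projection of mirrored feature points) are placeholders standing in for the paper's actual models, and the weighting scheme is the standard scaled unscented transform, not necessarily the authors' exact variant.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented sigma points and weights (Wan & van der Merwe)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])      # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def spkf_step(mean, cov, u, z, f, h, Q, R):
    """One Sigma-Point Kalman filter cycle. f(x, u) and h(x) are
    placeholders for the IMU propagation and mirror-reflection camera
    models; Q and R are process and measurement noise covariances."""
    # Predict: propagate each sigma point through the process model f.
    pts, wm, wc = sigma_points(mean, cov)
    pred = np.array([f(p, u) for p in pts])
    mean_p = wm @ pred
    cov_p = Q + sum(w * np.outer(d, d) for w, d in zip(wc, pred - mean_p))
    # Update: map predicted sigma points through the measurement model h.
    pts, wm, wc = sigma_points(mean_p, cov_p)
    zs = np.array([h(p) for p in pts])
    z_hat = wm @ zs
    S = R + sum(w * np.outer(d, d) for w, d in zip(wc, zs - z_hat))
    C = sum(w * np.outer(dp, dz)
            for w, dp, dz in zip(wc, pts - mean_p, zs - z_hat))
    K = C @ np.linalg.inv(S)
    return mean_p + K @ (z - z_hat), cov_p - K @ S @ K.T
```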

    Vision based estimation, localization, and mapping for autonomous vehicles

    In this dissertation, we focus on developing simultaneous localization and mapping (SLAM) algorithms with a robot-centric estimation framework, primarily using monocular vision sensors. A primary contribution of this work is to use a robot-centric mapping framework concurrently with a world-centric localization method. We exploit the differential equation of motion of the normalized pixel coordinates of each point feature in the robot body frame. Another contribution is a multiple-view geometry formulation with initial- and current-view projections of point features. We extract features from objects surrounding the river and from their reflections, and use the feature correspondences together with the attitude and altitude information of the robot. We demonstrate that the observability of the estimation system is improved by applying our robot-centric mapping framework and multiple-view measurements.

    Using the robot-centric mapping framework and multiple-view measurements, including reflections of features, we present a vision-based localization and mapping algorithm for an unmanned aerial vehicle (UAV) flying in a riverine environment. The algorithm estimates the 3D positions of point features along the river and the pose of the UAV. Our UAV is equipped with a lightweight monocular camera, an inertial measurement unit (IMU), a magnetometer, an altimeter, and an onboard computer. To our knowledge, this is the first result that exploits the reflections of features in a riverine environment for localization and mapping.

    We also present an omnidirectional-vision-based localization and mapping system for a lawn-mowing robot that detects whether the mower remains within a permitted area. The robotic mower is modified with an omnidirectional camera, an IMU, a magnetometer, and a vehicle speed sensor, and again exploits the robot-centric mapping framework. The estimator generates a 3D point-based map with landmarks and concurrently defines a boundary of the mowing area from the estimated trajectory of the mower; the estimated boundary and landmark map are then used to estimate the mowing location and to detect containment.

    First, we derive a nonlinear observer with contraction analysis and pseudo-measurements of the depth of each landmark to prevent the map estimator from diverging. Of particular interest for this work is ensuring that the estimator for localization and mapping does not fail due to the nonlinearity of the system model. For batch estimation, we design a hybrid extended Kalman smoother for our localization and robot-centric mapping model. Finally, we present a single-camera SLAM algorithm using a convex-optimization-based nonlinear estimator. We validate the effectiveness of our algorithms through numerical simulations and outdoor experiments.
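    The "differential equation of motion of the normalized pixel coordinates" mentioned above has a standard camera-frame form: the classical point-feature interaction matrix from visual servoing. The Python sketch below implements that standard kinematics for a static landmark; it is illustrative only, and the dissertation's robot-centric, body-frame formulation may differ in parametrization (for example, in how depth or inverse depth enters the state).

```python
import numpy as np

def normalized_feature_dynamics(x, y, inv_depth, v, w):
    """Time derivative of the normalized image coordinates (x, y) of a
    static point feature, for camera linear velocity v = (vx, vy, vz) and
    angular velocity w = (wx, wy, wz) expressed in the camera frame.
    inv_depth is the inverse depth 1/Z of the feature along the optical
    axis (classical point-feature interaction matrix)."""
    vx, vy, vz = v
    wx, wy, wz = w
    x_dot = inv_depth * (-vx + x * vz) + x * y * wx - (1.0 + x * x) * wy + y * wz
    y_dot = inv_depth * (-vy + y * vz) + (1.0 + y * y) * wx - x * y * wy - x * wz
    return np.array([x_dot, y_dot])

# Example: a camera translating forward at 1 m/s sees a feature at 5 m
# depth drift outward in the image (the familiar optic-flow expansion).
print(normalized_feature_dynamics(0.2, -0.1, 1.0 / 5.0,
                                  (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)))
```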