3,833 research outputs found

    Relative Pose Estimation Algorithm with Gyroscope Sensor

    This paper proposes S2fM (Simplified Structure from Motion), a novel vision and inertial fusion algorithm for camera relative pose estimation. Unlike existing algorithms, S2fM estimates the rotation and translation parameters separately: it employs gyroscopes to estimate the camera rotation, which is later fused with the image data to estimate the camera translation. Our contributions are twofold. (1) Since no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
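    The abstract does not spell out the estimator, but the decoupling it describes admits a standard sketch: integrate the gyroscope rates into a relative rotation R, then recover the translation direction linearly from the epipolar constraint with R held fixed. The function names and the SVD-based solver below are illustrative assumptions, not the paper's actual implementation.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    def integrate_gyro(omegas, dt):
        """Integrate body-frame angular rates (rad/s) into a relative rotation.

        omegas: (N, 3) gyroscope samples taken between the two camera frames.
        """
        R = Rotation.identity()
        for w in omegas:
            R = R * Rotation.from_rotvec(w * dt)  # small-angle increment
        return R.as_matrix()

    def estimate_translation(R, x1, x2):
        """Recover the translation direction (up to scale) given a known rotation.

        With E = [t]_x R, the epipolar constraint x2^T E x1 = 0 rewrites as
        t . ((R x1) x x2) = 0, i.e. one linear equation in t per match.
        x1, x2: (N, 3) matched points in normalized camera coordinates.
        """
        y = (R @ x1.T).T                 # points of frame 1 rotated into frame 2
        A = np.cross(y, x2)              # each row: (R x1) x x2
        _, _, Vt = np.linalg.svd(A)
        t = Vt[-1]                       # null-space direction of A
        return t / np.linalg.norm(t)
    ```

    With the rotation fixed by the gyroscope, the translation direction follows from a small linear system rather than a full essential-matrix estimation, which is what makes this kind of decoupling attractive on resource-limited smart devices.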

    Indoor localization of a mobile robot using sensor fusion: a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics with Honours at Massey University, Wellington, New Zealand

    Reliable indoor navigation of mobile robots has been a popular research topic in recent years. GPS systems used for outdoor mobile robot navigation cannot be used indoors (in warehouses, hospitals, or other buildings) because they require an unobstructed view of the sky; therefore, a specially designed indoor localization system for mobile robots is needed. This project aims to develop a reliable position and heading-angle estimator for real-time indoor localization of mobile robots. Two different techniques were developed, each consisting of three sensor modules based on infrared sensing, calibrated odometry, and a calibrated gyroscope. The three sensor modules are integrated by a real-time Kalman filter, which provides filtered, reliable estimates of the mobile robot's current location and orientation relative to its environment, as sketched below. Extensive experimental results demonstrate the improvement over conventional methods such as dead reckoning. In addition, a control strategy was developed to drive the mobile robot along a planned trajectory. The techniques developed in this project have potential applications for mobile robots in medical services, health care, surveillance, and search and rescue in indoor environments.
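    As a rough illustration of the fusion described above (the thesis's exact state vector and measurement models are not given in the abstract), a minimal Kalman filter over the pose [x, y, theta] can take odometry as the prediction step and treat gyroscope heading and infrared position fixes as separate measurement updates:

    ```python
    import numpy as np

    class PoseKF:
        """Minimal Kalman filter over state [x, y, theta].

        A simplified stand-in for the thesis's fusion scheme: calibrated
        odometry drives the prediction; gyroscope heading and infrared
        position fixes arrive as separate linear measurement updates.
        """
        def __init__(self, x0, P0, Q):
            self.x = np.asarray(x0, float)
            self.P = np.asarray(P0, float)
            self.Q = Q

        def predict(self, ds, dtheta):
            # Dead-reckoning step from odometry (distance and heading change).
            th = self.x[2]
            self.x += np.array([ds * np.cos(th), ds * np.sin(th), dtheta])
            # Simplified covariance growth; a full EKF would propagate
            # the covariance through the motion-model Jacobian.
            self.P += self.Q

        def update(self, z, H, R):
            # Generic linear measurement update.
            z, H = np.atleast_1d(z), np.atleast_2d(H)
            S = H @ self.P @ H.T + R
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x += K @ (z - H @ self.x)
            self.P = (np.eye(3) - K @ H) @ self.P

    kf = PoseKF(x0=[0, 0, 0], P0=np.eye(3) * 0.1, Q=np.eye(3) * 1e-3)
    kf.predict(ds=0.05, dtheta=0.01)                       # odometry step
    kf.update(z=0.012, H=[[0, 0, 1]], R=np.eye(1) * 1e-4)  # gyro heading
    kf.update(z=[0.04, 0.01], H=[[1, 0, 0], [0, 1, 0]],
              R=np.eye(2) * 1e-2)                          # IR position fix
    ```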

    Accurate position tracking with a single UWB anchor

    Accurate localization and tracking are a fundamental requirement for robotic applications. Localization systems such as GPS, optical tracking, and simultaneous localization and mapping (SLAM) are used for daily life activities, research, and commercial applications. Ultra-wideband (UWB) technology provides another avenue for accurately locating devices both indoors and outdoors. In this paper, we study a localization solution with a single UWB anchor instead of the traditional multi-anchor setup. Besides the challenge of a single UWB ranging source, the only other sensor we require is a low-cost 9-DoF inertial measurement unit (IMU). Under such a configuration, we propose continuous monitoring of UWB range changes to estimate the robot's speed when it moves along a line. Combining this speed estimate with orientation estimates from the IMU sensor makes the system temporally observable. We use an Extended Kalman Filter (EKF) to estimate the pose of the robot. With our solution, we can effectively correct the accumulated error and maintain accurate tracking of a moving robot.
    Comment: Accepted by ICRA 2020.
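    The abstract's key observation, that range changes from a single anchor reveal speed during straight-line motion, can be sketched as follows. The bearing input and the validity threshold are illustrative assumptions; the paper's actual observation model sits inside its EKF.

    ```python
    import numpy as np

    def speed_from_range_rate(ranges, dt, bearings):
        """Estimate forward speed from consecutive UWB range measurements.

        For straight-line motion, the range rate satisfies
        r_dot = -v * cos(alpha), where alpha is the angle between the
        robot's velocity and the line of sight to the anchor.
        ranges:   (N,) UWB ranges sampled at interval dt.
        bearings: (N-1,) alpha per interval, e.g. derived from the filter
                  state and IMU orientation (illustrative assumption).
        """
        r_dot = np.diff(ranges) / dt          # finite-difference range rate
        cos_a = np.cos(bearings)
        valid = np.abs(cos_a) > 0.2           # skip near-perpendicular geometry
        return -r_dot[valid] / cos_a[valid]   # per-interval speed estimates
    ```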

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate only limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues for maximizing the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of the robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
    Comment: 20 pages, 7 figures, 2 tables.
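    The greedy selection with submodular guarantees described above can be sketched generically. The score below uses the log-determinant of a predicted information matrix as a stand-in, since the paper's precise VIN performance metric is not given in the abstract:

    ```python
    import numpy as np

    def greedy_select(features, k, info_prior):
        """Greedily pick k features maximizing a submodular task-driven score.

        features:   list of per-feature information matrices J_i, e.g.
                    predicted over a future horizon by forward-simulating
                    the robot dynamics (illustrative assumption).
        info_prior: prior information matrix of the state estimate.
        Score: log-det of the accumulated information matrix.
        """
        selected, J = [], info_prior.copy()
        for _ in range(k):
            gains = [np.linalg.slogdet(J + Ji)[1] if i not in selected
                     else -np.inf
                     for i, Ji in enumerate(features)]
            best = int(np.argmax(gains))   # feature with largest marginal gain
            selected.append(best)
            J = J + features[best]
        return selected
    ```

    For a monotone submodular score such as log-det over sums of positive semidefinite matrices, this greedy loop is guaranteed to come within a (1 - 1/e) factor of the optimal subset, which is the kind of formal guarantee the abstract refers to.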