
    Ground-VIO: Monocular Visual-Inertial Odometry with Online Calibration of Camera-Ground Geometric Parameters

    Monocular visual-inertial odometry (VIO) is a low-cost solution for providing high-accuracy, low-drift pose estimation. However, it faces challenges in vehicular scenarios due to limited dynamics and a lack of stable features. In this paper, we propose Ground-VIO, which utilizes ground features and the specific camera-ground geometry to enhance monocular VIO performance in realistic road environments. In the method, the camera-ground geometry is modeled with vehicle-centered parameters and integrated into an optimization-based VIO framework. These parameters can be calibrated online and simultaneously improve odometry accuracy by providing stable scale awareness. In addition, a specially designed visual front-end is developed to stably extract and track ground features via the inverse perspective mapping (IPM) technique. Both simulation tests and real-world experiments are conducted to verify the effectiveness of the proposed method. The results show that our implementation can dramatically improve monocular VIO accuracy in vehicular scenarios, achieving comparable or even better performance than state-of-the-art stereo VIO solutions. The system can also be used for the auto-calibration of IPM, which is widely used in vehicle perception. A toolkit for ground feature processing, together with the experimental datasets, will be made open-source (https://github.com/GREAT-WHU/gv_tools).
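    The scale cue exploited here comes from the camera-ground geometry: a pixel known to lie on the road surface can be back-projected to a metric point once the camera height and pitch are known. The following is a minimal sketch of that inverse perspective mapping step, using made-up intrinsics and camera-ground parameters rather than the paper's calibrated values.

```python
import numpy as np

def ipm_ground_point(u, v, K, cam_height, pitch):
    """Back-project a pixel assumed to lie on the ground plane.

    Illustrative assumptions only: camera axes x-right, y-down, z-forward;
    the camera is tilted down by `pitch` radians; the ground is the plane
    y = cam_height in a gravity-aligned frame; K is the 3x3 intrinsic matrix.
    """
    # Viewing ray of the pixel in the camera frame.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into the gravity-aligned frame: the optical axis
    # [0, 0, 1] maps to [0, sin(pitch), cos(pitch)], i.e. pitch below horizon.
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    ray = R @ ray_cam
    if ray[1] <= 1e-6:           # Ray does not hit the ground ahead.
        return None
    scale = cam_height / ray[1]  # Intersect with the plane y = cam_height.
    return scale * ray           # Metric 3D point of the ground feature.

# Example values (hypothetical): 500 px focal length, principal point (320, 240),
# camera 1.5 m above the road, pitched down by 5 degrees.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
print(ipm_ground_point(320.0, 400.0, K, cam_height=1.5, pitch=np.deg2rad(5.0)))
```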

    UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether

    This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV), which utilizes the UAV not only as a flying sensor but also as a tether attachment device. The two robots are connected with a tether, allowing the UAV to anchor the tether to a structure located at the top of steep terrain that is impossible for the UGV to reach. This enhances the poor traversability of the UGV, not only by providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual-inertial navigation with 3D voxel mapping and obstacle avoidance planning. The UGV makes use of the voxel map and generates an elevation map to execute path planning based on a traversability analysis. Furthermore, we compare the pros and cons of possible tether-anchoring methods from multiple points of view, and evaluate the anchoring strategy with an experiment to increase the probability of successful anchoring. Finally, the feasibility and capability of the proposed system were demonstrated by an autonomous mission experiment in the field with an obstacle and a cliff. Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on Robotics and Automation. Video: https://youtu.be/UzTT8Ckjz1
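    The traversability analysis on the UGV side reduces, in its simplest form, to thresholding the local slope of the elevation map built from the shared voxel map. Below is a toy sketch of that idea; the grid resolution and slope threshold are arbitrary example values, not the ones used in the paper.

```python
import numpy as np

def traversability_mask(elevation, cell_size, max_slope_deg=20.0):
    """Illustrative slope-based traversability check on a 2D elevation grid.

    `elevation` is an HxW array of heights in meters and `cell_size` the grid
    resolution in meters. Cells whose local slope exceeds `max_slope_deg`
    are marked non-traversable. Threshold and resolution are made-up values.
    """
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg <= max_slope_deg

# Tiny example: a flat patch next to a steep step (cliff-like edge).
grid = np.zeros((5, 5))
grid[:, 3:] = 2.0   # 2 m rise across one 0.5 m cell -> far too steep
print(traversability_mask(grid, cell_size=0.5))
```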

    Monocular SLAM system for MAVs aided with altitude and range measurements: a GPS-free approach

    A typical navigation system for a Micro Aerial Vehicle (MAV) relies primarily on GPS for position estimation. However, for several kinds of applications the precision of GPS is insufficient, or its signal can be unavailable altogether. In this context, and due to their flexibility, monocular Simultaneous Localization and Mapping (SLAM) methods have become a good alternative for implementing vision-based navigation systems for MAVs that must operate in GPS-denied environments. On the other hand, one of the most important challenges arising from the use of monocular vision is the difficulty of recovering the metric scale of the world. In this work, a monocular SLAM system for MAVs is presented. To overcome the problem of metric scale, a novel technique for inferring the approximate depth of visual features from an ultrasonic range-finder is developed. Additionally, the altitude of the vehicle is updated using the pressure measurements of a barometer. The proposed approach is supported by theoretical results obtained from a nonlinear observability test. Experiments with both computer simulations and real data are presented to validate the performance of the proposal. The results confirm the theoretical findings and show that the method is able to work with low-cost sensors.
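    The core of the scale-recovery idea is that a metric range measurement (here from the downward-facing ultrasonic range-finder) pins down the otherwise unobservable monocular scale. The snippet below is only an illustrative batch least-squares version of that idea, assuming roughly level flight over flat ground; the paper itself fuses the range and barometer measurements inside its SLAM filter.

```python
import numpy as np

def estimate_metric_scale(slam_altitudes, sonar_ranges):
    """Illustrative least-squares scale estimate, not the paper's filter.

    `slam_altitudes` are up-to-scale camera heights from monocular SLAM and
    `sonar_ranges` the corresponding metric readings of a downward-facing
    ultrasonic range-finder, assuming level flight over flat ground.
    Returns the scale s minimizing sum((s * slam - sonar)^2) in closed form.
    """
    slam = np.asarray(slam_altitudes, dtype=float)
    sonar = np.asarray(sonar_ranges, dtype=float)
    return float(np.dot(slam, sonar) / np.dot(slam, slam))

# Example (made-up numbers): SLAM reports heights in arbitrary units, sonar in meters.
s = estimate_metric_scale([0.8, 1.0, 1.2], [1.6, 2.0, 2.4])
print(s)  # ~2.0, i.e. one SLAM unit corresponds to ~2 m
```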

    ๋กœ๋ฒ„ ํ•ญ๋ฒ•์„ ์œ„ํ•œ ์ž๊ฐ€๋ณด์ • ์˜์ƒ๊ด€์„ฑ ์˜ค๋„๋ฉ”ํŠธ๋ฆฌ

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ๊ธฐ๊ณ„ํ•ญ๊ณต๊ณตํ•™๋ถ€, 2019. 2. ๋ฐ•์ฐฌ๊ตญ.This master's thesis presents a direct visual odometry robust to illumination changes and a self-calibrated visual-inertial odometry for a rover localization using an IMU and a stereo camera. Most of the previous vision-based localization algorithms are vulnerable to sudden brightness changes due to strong sunlight or a variance of the exposure time, that violates Lambertian surface assumption. Meanwhile, to decrease the error accumulation of a visual odometry, an IMU can be employed to fill gaps between successive images. However, extrinsic parameters for a visual-inertial system should be computed precisely since they play an important role in making a bridge between the visual and inertial coordinate frames, spatially as well as temporally. This thesis proposes a bucketed illumination model to account for partial and global illumination changes along with a framework of a direct visual odometry for a rover localization. Furthermore, this study presents a self-calibrated visual-inertial odometry in which the time-offset and relative pose of an IMU and a stereo camera are estimated by using point feature measurements. Specifically, based on the extended Kalman filter pose estimator, the calibration parameters are augmented in the filter state. The proposed visual odometry is evaluated through the open source dataset where images are captured in a Lunar-like environment. In addition to this, we design a rover using commercially available sensors, and a field testing of the rover confirms that the self-calibrated visual-inertial odometry decreases a localization error in terms of a return position by 76.4% when compared to the visual-inertial odometry without the self-calibration.๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋กœ๋ฒ„ ํ•ญ๋ฒ• ์‹œ์Šคํ…œ์„ ์œ„ํ•ด ๊ด€์„ฑ์ธก์ •์žฅ์น˜์™€ ์Šคํ…Œ๋ ˆ์˜ค ์นด๋ฉ”๋ผ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น› ๋ณ€ํ™”์— ๊ฐ•๊ฑดํ•œ ์ง์ ‘ ๋ฐฉ์‹ ์˜์ƒ ์˜ค๋„๋ฉ”ํŠธ๋ฆฌ์™€ ์ž๊ฐ€ ๋ณด์ • ์˜์ƒ๊ด€์„ฑ ํ•ญ๋ฒ• ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ œ์•ˆํ•œ๋‹ค. ๊ธฐ์กด ๋Œ€๋ถ€๋ถ„์˜ ์˜์ƒ๊ธฐ๋ฐ˜ ํ•ญ๋ฒ• ์•Œ๊ณ ๋ฆฌ์ฆ˜๋“ค์€ ๋žจ๋ฒ„์…˜ ํ‘œ๋ฉด ๊ฐ€์ •์„ ์œ„๋ฐฐํ•˜๋Š” ์•ผ์™ธ์˜ ๊ฐ•ํ•œ ํ–‡๋น› ํ˜น์€ ์ผ์ •ํ•˜์ง€ ์•Š์€ ์นด๋ฉ”๋ผ์˜ ๋…ธ์ถœ ์‹œ๊ฐ„์œผ๋กœ ์ธํ•ด ์˜์ƒ์˜ ๋ฐ๊ธฐ ๋ณ€ํ™”์— ์ทจ์•ฝํ•˜์˜€๋‹ค. ํ•œํŽธ, ์˜์ƒ ์˜ค๋„๋ฉ”ํŠธ๋ฆฌ์˜ ์˜ค์ฐจ ๋ˆ„์ ์„ ์ค„์ด๊ธฐ ์œ„ํ•ด ๊ด€์„ฑ์ธก์ •์žฅ์น˜๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ์˜์ƒ๊ด€์„ฑ ์‹œ์Šคํ…œ์— ๋Œ€ํ•œ ์™ธ๋ถ€ ๊ต์ • ๋ณ€์ˆ˜๋Š” ๊ณต๊ฐ„ ๋ฐ ์‹œ๊ฐ„์ ์œผ๋กœ ์˜์ƒ ๋ฐ ๊ด€์„ฑ ์ขŒํ‘œ๊ณ„๋ฅผ ์—ฐ๊ฒฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์‚ฌ์ „์— ์ •ํ™•ํ•˜๊ฒŒ ๊ณ„์‚ฐ๋˜์–ด์•ผ ํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์€ ๋กœ๋ฒ„ ํ•ญ๋ฒ•์„ ์œ„ํ•ด ์ง€์—ญ ๋ฐ ์ „์—ญ์ ์ธ ๋น› ๋ณ€ํ™”๋ฅผ ์„ค๋ช…ํ•˜๋Š” ์ง์ ‘ ๋ฐฉ์‹ ์˜์ƒ ์˜ค๋„๋ฉ”ํŠธ๋ฆฌ์˜ ๋ฒ„ํ‚ท ๋ฐ๊ธฐ ๋ชจ๋ธ์„ ์ œ์•ˆํ•œ๋‹ค. ๋˜ํ•œ, ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์ŠคํŠธ๋ ˆ์˜ค ์นด๋ฉ”๋ผ์—์„œ ์ธก์ •๋œ ํŠน์ง•์ ์„ ์ด์šฉํ•˜์—ฌ ๊ด€์„ฑ์ธก์ •์žฅ์น˜์™€ ์นด๋ฉ”๋ผ๊ฐ„์˜ ์‹œ๊ฐ„ ์˜คํ”„์…‹๊ณผ ์ƒ๋Œ€ ์œ„์น˜ ๋ฐ ์ž์„ธ๋ฅผ ์ถ”์ •ํ•˜๋Š” ์ž๊ฐ€ ๋ณด์ • ์˜์ƒ๊ด€์„ฑ ํ•ญ๋ฒ• ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ œ์‹œํ•œ๋‹ค. ํŠนํžˆ, ์ œ์•ˆํ•˜๋Š” ์˜์ƒ๊ด€์„ฑ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํ™•์žฅ ์นผ๋งŒ ํ•„ํ„ฐ์— ๊ธฐ๋ฐ˜ํ•˜๋ฉฐ ๊ต์ • ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํ•„ํ„ฐ์˜ ์ƒํƒœ๋ณ€์ˆ˜์— ํ™•์žฅํ•˜์˜€๋‹ค. ์ œ์•ˆํ•œ ์ง์ ‘๋ฐฉ์‹ ์˜์ƒ ์˜ค๋„๋ฉ”ํŠธ๋ฆฌ๋Š” ๋‹ฌ ์œ ์‚ฌํ™˜๊ฒฝ์—์„œ ์ดฌ์˜๋œ ์˜คํ”ˆ์†Œ์Šค ๋ฐ์ดํ„ฐ์…‹์„ ํ†ตํ•ด ๊ทธ ์„ฑ๋Šฅ์„ ๊ฒ€์ฆํ•˜์˜€๋‹ค. 
๋˜ํ•œ ์ƒ์šฉ ์„ผ์„œ ๋ฐ ๋กœ๋ฒ„ ํ”Œ๋žซํผ์„ ์ด์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ ๋กœ๋ฒ„๋ฅผ ์„ค๊ณ„ํ•˜์˜€๊ณ , ์ด๋ฅผ ํ†ตํ•ด ์˜์ƒ๊ด€์„ฑ ์‹œ์Šคํ…œ์„ ์ž๊ฐ€ ๋ณด์ • ํ•  ๊ฒฝ์šฐ ๊ทธ๋ ‡์ง€ ์•Š์€ ๊ฒฝ์šฐ ๋ณด๋‹ค ํšŒ๊ธฐ ์œ„์น˜ ์˜ค์ฐจ(return position error)๊ฐ€ 76.4% ๊ฐ์†Œ๋จ์„ ํ™•์ธํ•˜์˜€๋‹ค.Abstract Contents List of Tables List of Figures Chapter 1 Introduction 1.1 Motivation and background 1.2 Objectives and contributions Chapter 2 Related Works 2.1 Visual odometry 2.2 Visual-inertial odometry Chapter 3 Direct Visual Odometry at Outdoor 3.1 Direct visual odometry 3.1.1 Notations 3.1.2 Camera projection model 3.1.3 Photometric error 3.2 The proposed algorithm 3.2.1 Problem formulation 3.2.2 Bucketed illumination model 3.2.3 Adaptive prior weight 3.3 Experimental results 3.3.1 Synthetic image sequences 3.3.2 MAV datasets 3.3.3 Planetary rover datasets Chapter 4 Self-Calibrated Visual-Inertial Odometry 4.1 State representation 4.1.1 IMU state 4.1.2 Calibration parameter state 4.2 State-propagation 4.3 Measurement-update 4.3.1 Point feature measurement 4.3.2 Measurement error modeling 4.4 Experimental results 4.4.1 Hardware setup 4.4.2 Vision front-end design 4.4.3 Rover field testing Chapter 5 Conclusions 5.1 Conclusion and summary 5.2 Future works Bibliography Chapter A Derivation of Photometric Error Jacobian ๊ตญ๋ฌธ ์ดˆ๋กMaste

    StructVIO: Visual-Inertial Odometry with Structural Regularity of Man-made Environments

    We propose a novel visual-inertial odometry approach that adopts the structural regularity of man-made environments. Instead of using the Manhattan world assumption, we use the Atlanta world model to describe such regularity. An Atlanta world is a world that contains multiple local Manhattan worlds with different heading directions. Each local Manhattan world is detected on the fly, and its heading is gradually refined by the state estimator as new observations arrive. By fully exploiting structural lines aligned with each local Manhattan world, our visual-inertial odometry method becomes more accurate and robust, as well as much more flexible to different kinds of complex man-made environments. Extensive benchmark tests and real-world tests show that the proposed approach outperforms existing visual-inertial systems in large-scale man-made environments. Comment: 15 pages, 15 figures
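    The Atlanta world model can be made concrete with a small example: each local Manhattan world contributes a heading about the gravity direction, and a structural line is associated with whichever local frame axis its direction is closest to. The sketch below shows such an assignment for horizontal line directions; the tolerance and headings are illustrative values, not the paper's.

```python
import numpy as np

def classify_line_heading(line_dir_deg, manhattan_headings_deg, tol_deg=10.0):
    """Illustrative assignment of a horizontal line to an Atlanta-world frame.

    Each local Manhattan world k has a heading angle about gravity; a line
    whose direction is within `tol_deg` of that heading (its "x" axis) or of
    the perpendicular direction (its "y" axis) is assigned to frame k.
    Returns (frame index, axis) or None. Example values only.
    """
    d = line_dir_deg % 180.0
    for k, heading in enumerate(manhattan_headings_deg):
        for axis, ref in (("x", heading % 180.0), ("y", (heading + 90.0) % 180.0)):
            diff = abs(d - ref)
            diff = min(diff, 180.0 - diff)   # angular distance on [0, 180)
            if diff <= tol_deg:
                return k, axis
    return None

# Two local Manhattan worlds rotated 0 and 30 degrees about gravity.
print(classify_line_heading(92.0, [0.0, 30.0]))   # (0, 'y'): near frame 0's y axis
print(classify_line_heading(31.5, [0.0, 30.0]))   # (1, 'x'): near frame 1's x axis
```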