Ground-VIO: Monocular Visual-Inertial Odometry with Online Calibration of Camera-Ground Geometric Parameters
Monocular visual-inertial odometry (VIO) is a low-cost solution for high-accuracy, low-drift pose estimation. However, it faces challenges in vehicular scenarios owing to limited dynamics and a lack of stable features. In this paper, we propose Ground-VIO, which utilizes ground features and the specific camera-ground geometry to enhance monocular VIO performance in realistic road environments. In the method, the camera-ground geometry is modeled with vehicle-centered parameters and integrated into an optimization-based VIO framework. These parameters can be calibrated online and, at the same time, improve odometry accuracy by providing stable scale awareness. Besides, a specially designed visual front-end is developed to stably extract and track ground features via the inverse perspective mapping (IPM) technique. Both simulation tests and real-world experiments are conducted to verify the effectiveness of the proposed method. The results show that our implementation dramatically improves monocular VIO accuracy in vehicular scenarios, achieving performance comparable to or even better than that of state-of-the-art stereo VIO solutions. The system can also be used for the auto-calibration of IPM, which is widely used in vehicle perception. A toolkit for ground feature processing, together with the experimental datasets, will be made open-source (https://github.com/GREAT-WHU/gv_tools).
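The scale cue behind this approach can be illustrated with a minimal sketch: if the camera's height above a flat ground and its pitch are known, the metric position of any ground-plane pixel follows from back-projection. The function below is a hypothetical illustration of this camera-ground geometry, not the paper's implementation (which estimates these parameters online inside the VIO optimizer).

```python
import numpy as np

def ipm_ground_point(u, v, fx, fy, cx, cy, cam_height, pitch):
    """Back-project pixel (u, v) onto the ground plane.

    Assumes a pinhole camera at height `cam_height` (meters) above flat
    ground, pitched down by `pitch` radians about its x-axis (no roll).
    Returns metric (forward, lateral) coordinates in a gravity-aligned,
    vehicle-centered frame.
    """
    # Unit-depth ray in the camera frame (z forward, x right, y down).
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate the ray into a gravity-aligned frame (undo the pitch-down).
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    ray = R @ ray_cam
    if ray[1] <= 0:
        raise ValueError("ray points above the horizon; no ground intersection")
    # Scale the ray so its downward component equals the camera height.
    t = cam_height / ray[1]
    return t * ray[2], t * ray[0]  # forward, lateral
```

Because the triangle formed by the camera, the ground point, and the vertical drop is fully determined by `cam_height` and `pitch`, a single calibrated view yields metric distances — exactly the stable scale information a monocular VIO otherwise lacks.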
UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether
This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether attachment device. The two robots are connected by a tether, allowing the UAV to anchor the tether to a structure at the top of steep terrain that is impossible for the UGV to reach. This enhances the poor traversability of the UGV, both by providing a wider range of scanning and mapping from the air and by allowing the UGV to climb steep terrain by winding in the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual-inertial navigation with 3D voxel mapping and obstacle avoidance planning. The UGV makes use of the voxel map and generates an elevation map to execute path planning based on a traversability analysis. Furthermore, we compare the pros and cons of possible tether-anchoring methods from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy in an experiment. Finally, the feasibility and capability of the proposed system were demonstrated in an autonomous field mission experiment involving an obstacle and a cliff.
Comment: 7 pages, 8 figures, accepted to 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
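The UGV's planning step — deriving traversability from an elevation map — can be sketched with a simple slope criterion. This is an illustrative reconstruction under assumed parameters (grid resolution, slope threshold), not the paper's actual analysis:

```python
import numpy as np

def traversability_mask(elevation, cell_size, max_slope_deg=30.0):
    """Mark grid cells a UGV can traverse under a simple slope criterion.

    `elevation` is a 2-D height map (meters) sampled on a grid with
    `cell_size` meters per cell. A cell counts as traversable when the
    local surface slope stays below `max_slope_deg`. The threshold is
    illustrative; real systems also consider roughness and step height.
    """
    # Finite-difference height gradients in both grid directions.
    dzdy, dzdx = np.gradient(elevation, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return slope_deg <= max_slope_deg
```

Under such a criterion a cliff face is marked untraversable for the UGV alone — which is precisely the situation the tether-winding maneuver is designed to overcome.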
Monocular SLAM system for MAVs aided with altitude and range measurements: a GPS-free approach
A typical navigation system for a Micro Aerial Vehicle (MAV) relies primarily on GPS for position estimation. However, for several kinds of applications, the precision of GPS is insufficient, or its signal may even be unavailable. In this context, and owing to their flexibility, monocular Simultaneous Localization and Mapping (SLAM) methods have become a good alternative for implementing vision-based navigation systems for MAVs that must operate in GPS-denied environments.
On the other hand, one of the most important challenges arising from the use of monocular vision is the difficulty of recovering the metric scale of the world. In this work, a monocular SLAM system for MAVs is presented. To overcome the metric-scale problem, a novel technique for inferring the approximate depth of visual features from an ultrasonic range-finder is developed. Additionally, the altitude of the vehicle is updated using the pressure measurements of a barometer. The proposed approach is supported by theoretical results obtained from a nonlinear observability test.
Experiments with both computer simulations and real data are presented to validate the performance of the proposal. The results confirm the theoretical findings and show that the method is able to work with low-cost sensors.
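The core idea — anchoring the arbitrary SLAM scale to metric range readings — can be sketched as a one-parameter least-squares fit. The interface below is hypothetical and much simpler than the paper's filter formulation; it only shows why a single range sensor suffices to make scale observable:

```python
import numpy as np

def estimate_metric_scale(slam_depths, sonar_ranges):
    """Estimate the unknown global scale factor of a monocular SLAM map.

    `slam_depths` are up-to-scale depths of features observed roughly
    along the ultrasonic beam; `sonar_ranges` are the corresponding
    metric readings. Fitting r = s * d in the least-squares sense
    recovers the scale s.
    """
    d = np.asarray(slam_depths, dtype=float)
    r = np.asarray(sonar_ranges, dtype=float)
    # Closed-form least squares for a single multiplicative unknown:
    # s = argmin ||r - s*d||^2 = (d.r) / (d.d)
    return float(d @ r / (d @ d))
```

Each SLAM depth paired with a metric range constrains the same scalar, so even noisy ultrasonic readings accumulated over time pin down the metric scale.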
Self-Calibrated Visual-Inertial Odometry for Rover Navigation
Master's thesis (M.S.), Department of Mechanical and Aerospace Engineering, College of Engineering, Seoul National University, February 2019. Advisor: Chan Gook Park. This master's thesis presents a direct visual odometry robust to illumination changes and a self-calibrated visual-inertial odometry for rover localization using an IMU and a stereo camera. Most previous vision-based localization algorithms are vulnerable to sudden brightness changes caused by strong sunlight or varying exposure times, which violate the Lambertian surface assumption. Meanwhile, to reduce the error accumulation of visual odometry, an IMU can be employed to fill the gaps between successive images. However, the extrinsic parameters of a visual-inertial system must be computed precisely, since they bridge the visual and inertial coordinate frames both spatially and temporally. This thesis proposes a bucketed illumination model that accounts for partial and global illumination changes, along with a direct visual odometry framework for rover localization. Furthermore, it presents a self-calibrated visual-inertial odometry in which the time offset and relative pose between an IMU and a stereo camera are estimated using point-feature measurements. Specifically, the calibration parameters are augmented into the state of an extended-Kalman-filter pose estimator. The proposed visual odometry is evaluated on an open-source dataset whose images were captured in a Lunar-like environment. In addition, a rover was designed using commercially available sensors, and field testing confirms that the self-calibrated visual-inertial odometry reduces the return-position localization error by 76.4% compared to visual-inertial odometry without self-calibration.
Abstract
Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation and background
1.2 Objectives and contributions
Chapter 2 Related Works
2.1 Visual odometry
2.2 Visual-inertial odometry
Chapter 3 Direct Visual Odometry in Outdoor Environments
3.1 Direct visual odometry
3.1.1 Notations
3.1.2 Camera projection model
3.1.3 Photometric error
3.2 The proposed algorithm
3.2.1 Problem formulation
3.2.2 Bucketed illumination model
3.2.3 Adaptive prior weight
3.3 Experimental results
3.3.1 Synthetic image sequences
3.3.2 MAV datasets
3.3.3 Planetary rover datasets
Chapter 4 Self-Calibrated Visual-Inertial Odometry
4.1 State representation
4.1.1 IMU state
4.1.2 Calibration parameter state
4.2 State-propagation
4.3 Measurement-update
4.3.1 Point feature measurement
4.3.2 Measurement error modeling
4.4 Experimental results
4.4.1 Hardware setup
4.4.2 Vision front-end design
4.4.3 Rover field testing
Chapter 5 Conclusions
5.1 Conclusion and summary
5.2 Future works
Bibliography
Chapter A Derivation of Photometric Error Jacobian
Abstract in Korean
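The self-calibration idea of Chapter 4 — appending the IMU-camera time offset and extrinsics to the filter state — can be sketched at the covariance-propagation level. Dimensions and noise values below are illustrative assumptions, not the thesis's actual models:

```python
import numpy as np

IMU_DIM = 15    # attitude, position, velocity, gyro bias, accel bias
CALIB_DIM = 7   # time offset (1) + extrinsic rotation (3) + translation (3)

def propagate_covariance(P, Phi_imu, Q_imu):
    """One EKF covariance propagation step with an augmented state.

    The IMU block evolves through its transition matrix `Phi_imu` with
    process noise `Q_imu`, while the calibration block is modeled as a
    random constant: identity transition and zero process noise. The
    filter can therefore refine the calibration parameters only through
    measurement updates, exactly as intended for self-calibration.
    """
    n = IMU_DIM + CALIB_DIM
    Phi = np.eye(n)
    Phi[:IMU_DIM, :IMU_DIM] = Phi_imu
    Q = np.zeros((n, n))
    Q[:IMU_DIM, :IMU_DIM] = Q_imu
    return Phi @ P @ Phi.T + Q
```

Because the calibration sub-state receives no process noise, its covariance can only shrink as point-feature measurements arrive, which is what drives the reported reduction in return-position error.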
StructVIO: Visual-Inertial Odometry with Structural Regularity of Man-made Environments
We propose a novel visual-inertial odometry approach that adopts the structural regularity of man-made environments. Instead of using the Manhattan world assumption, we use the Atlanta world model to describe such regularity. An Atlanta world is a world that contains multiple local Manhattan worlds with different heading directions. Each local Manhattan world is detected on the fly, and its heading is gradually refined by the state estimator as new observations arrive. By fully exploiting structural lines aligned with each local Manhattan world, our visual-inertial odometry method becomes more accurate and robust, as well as much more flexible in different kinds of complex man-made environments. Extensive benchmark tests and real-world tests show that the proposed approach outperforms existing visual-inertial systems in large-scale man-made environments.
Comment: 15 pages, 15 figures
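The on-the-fly detection of local Manhattan worlds can be sketched as heading clustering: since a Manhattan frame's two horizontal axes are 90 degrees apart, a line direction matches a frame when their headings agree modulo 90 degrees. The function below is a hypothetical illustration of this assignment rule, not StructVIO's estimator (which also refines headings in the filter state):

```python
def assign_manhattan_heading(line_heading, headings, tol_deg=10.0):
    """Assign a horizontal line's heading (degrees) to a local Manhattan world.

    `headings` holds one representative heading per already-detected
    local Manhattan frame. Headings are compared modulo 90 degrees so a
    line may align with either horizontal axis of a frame. If no frame
    fits within `tol_deg` (an illustrative tolerance), a new frame is
    spawned, mirroring the on-the-fly detection in an Atlanta world.
    Returns (frame_index, updated_headings).
    """
    def diff90(a, b):
        # Smallest angular difference modulo 90 degrees.
        d = abs(a - b) % 90.0
        return min(d, 90.0 - d)

    for i, h in enumerate(headings):
        if diff90(line_heading, h) <= tol_deg:
            return i, headings
    return len(headings), headings + [line_heading % 90.0]
```

This captures why the Atlanta model is strictly more flexible than a single Manhattan assumption: lines at 2 degrees and 92 degrees share one frame, while a wing of the building rotated 45 degrees simply spawns a second local frame instead of being discarded.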