1,021 research outputs found
Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters
Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more
cameras are mounted on actuated mechanisms such as a gimbal. Existing methods
for DCC calibration rely on joint angle measurements to resolve the
time-varying transformation between the dynamic and static camera. This
information is usually provided by motor encoders; however, joint angle
measurements are not always readily available on off-the-shelf mechanisms. In
this paper, we present an encoderless approach for DCC calibration which
simultaneously estimates the kinematic parameters of the transformation chain
as well as the unknown joint angles. We also demonstrate the integration of an
encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show
the extensions required to perform simultaneous online estimation of
the joint angles and vehicle localization state. The proposed calibration
approach is validated both in simulation and on a physical DCC composed of a
2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the
calibrated mechanism integrated into the OKVIS VIO package, and demonstrate
successful online joint angle estimation while maintaining localization
accuracy comparable to a standard static multi-camera configuration.
Comment: ICRA 201
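The simultaneous estimation described in the abstract can be viewed as one joint nonlinear least-squares problem over the kinematic parameters and the per-frame joint angles. The sketch below illustrates only that idea, under strong simplifying assumptions (a planar 1-DOF joint, known landmarks, synthetic noisy observations); it is not the paper's formulation, and all names are hypothetical:

```python
# Toy encoderless calibration: jointly fit a fixed lever arm (kinematic
# parameter) and the unknown joint angle of each frame from observations.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def transform(points, lever, theta):
    # Map landmarks from the static frame into the dynamic camera frame:
    # a rotation by the joint angle followed by an unknown lever-arm offset.
    return (rot2(theta) @ points.T).T + lever

lever_true = np.array([0.30, -0.10])               # unknown kinematic parameter
thetas_true = np.deg2rad([0.0, 15.0, 35.0, 60.0])  # unknown joint angles (no encoder)
landmarks = rng.uniform(-1.0, 1.0, size=(20, 2))   # calibration target points

# Noisy observations of the landmarks in the dynamic frame, one set per pose.
obs = [transform(landmarks, lever_true, th) + 0.005 * rng.standard_normal((20, 2))
       for th in thetas_true]

def residuals(x):
    lever, thetas = x[:2], x[2:]
    return np.concatenate([
        (transform(landmarks, lever, th) - o).ravel()
        for th, o in zip(thetas, obs)
    ])

x0 = np.zeros(2 + len(thetas_true))  # encoderless: initialize all angles at zero
sol = least_squares(residuals, x0)
print("lever arm:", sol.x[:2])
print("joint angles (deg):", np.rad2deg(sol.x[2:]))
```

A real DCC calibration would replace these planar residuals with camera reprojection errors through the full SE(3) gimbal chain, but the structure of the problem, shared kinematic parameters plus one joint angle per observation set, is the same.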
Time-Calibrated Visual-Inertial Odometry for Rover Navigation
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, February 2019. Advisor: Chan Gook Park.
This master's thesis presents a direct visual odometry robust to illumination changes and a self-calibrated visual-inertial odometry for rover localization using an IMU and a stereo camera. Most previous vision-based localization algorithms are vulnerable to sudden brightness changes caused by strong sunlight or varying exposure times, which violate the Lambertian surface assumption. Meanwhile, to reduce the error accumulation of visual odometry, an IMU can be employed to fill the gaps between successive images. However, the extrinsic parameters of a visual-inertial system must be computed precisely, since they bridge the visual and inertial coordinate frames both spatially and temporally. This thesis proposes a bucketed illumination model that accounts for partial and global illumination changes, along with a direct visual odometry framework for rover localization. Furthermore, it presents a self-calibrated visual-inertial odometry in which the time offset and relative pose between an IMU and a stereo camera are estimated using point feature measurements. Specifically, the calibration parameters are augmented into the state of an extended Kalman filter pose estimator. The proposed visual odometry is evaluated on an open-source dataset of images captured in a lunar-like environment. In addition, we design a rover using commercially available sensors, and field testing confirms that the self-calibrated visual-inertial odometry reduces the return-position localization error by 76.4% compared to visual-inertial odometry without self-calibration.
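For context on the "bucketed illumination model": direct visual odometry minimizes a photometric error, and a standard way to survive brightness changes is an affine correction of image intensities. Estimating one affine pair per image bucket (a rectangular sub-region) instead of per image lets the model absorb partial as well as global illumination changes. A generic objective of this kind, which may differ from the thesis's exact parameterization in Chapter 3, is:

```latex
E(\xi, \{a_b, b_b\}) =
  \sum_{b} \sum_{i \in \mathcal{P}_b}
  \rho\Big( I_k\big(\pi(T(\xi)\,\pi^{-1}(\mathbf{p}_i, d_i))\big)
          - \big(a_b\, I_{k-1}(\mathbf{p}_i) + b_b\big) \Big)
```

Here \xi is the relative camera pose, \pi the camera projection model, d_i the depth at pixel \mathbf{p}_i, \mathcal{P}_b the set of pixels falling in bucket b, (a_b, b_b) the per-bucket brightness gain and offset, and \rho a robust cost. If all buckets are tied to a single (a, b) pair, the model reduces to a purely global illumination correction.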
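On the self-calibration side, "augmenting the calibration parameters in the filter state" means the extended Kalman filter estimates the camera-IMU relative pose and time offset alongside the usual motion states, so the point-feature updates correct them online. A minimal sketch of such a state layout, with hypothetical field names rather than the thesis's actual code:

```python
# Sketch of an EKF state augmented with camera-IMU calibration parameters.
# Field names and parameterization are illustrative assumptions only.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ImuState:
    q_wi: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # world<-IMU unit quaternion
    p_w: np.ndarray = field(default_factory=lambda: np.zeros(3))  # IMU position in world frame
    v_w: np.ndarray = field(default_factory=lambda: np.zeros(3))  # IMU velocity in world frame
    b_g: np.ndarray = field(default_factory=lambda: np.zeros(3))  # gyroscope bias
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))  # accelerometer bias

@dataclass
class CalibState:
    q_ci: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # camera<-IMU rotation
    p_ci: np.ndarray = field(default_factory=lambda: np.zeros(3))  # camera<-IMU translation
    t_d: float = 0.0  # camera-IMU time offset in seconds

@dataclass
class FilterState:
    imu: ImuState = field(default_factory=ImuState)
    calib: CalibState = field(default_factory=CalibState)
    # Error-state dimension: 15 (IMU) + 6 (extrinsic pose) + 1 (time offset) = 22.

# A feature stamped t by the camera is treated as occurring at t + t_d on the
# IMU clock, so the measurement Jacobian gains a d(residual)/d(t_d) column
# (roughly the feature's image velocity), which is what makes t_d observable.
```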
Abstract
Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation and background
1.2 Objectives and contributions
Chapter 2 Related Works
2.1 Visual odometry
2.2 Visual-inertial odometry
Chapter 3 Direct Visual Odometry in Outdoor Environments
3.1 Direct visual odometry
3.1.1 Notations
3.1.2 Camera projection model
3.1.3 Photometric error
3.2 The proposed algorithm
3.2.1 Problem formulation
3.2.2 Bucketed illumination model
3.2.3 Adaptive prior weight
3.3 Experimental results
3.3.1 Synthetic image sequences
3.3.2 MAV datasets
3.3.3 Planetary rover datasets
Chapter 4 Self-Calibrated Visual-Inertial Odometry
4.1 State representation
4.1.1 IMU state
4.1.2 Calibration parameter state
4.2 State-propagation
4.3 Measurement-update
4.3.1 Point feature measurement
4.3.2 Measurement error modeling
4.4 Experimental results
4.4.1 Hardware setup
4.4.2 Vision front-end design
4.4.3 Rover field testing
Chapter 5 Conclusions
5.1 Conclusion and summary
5.2 Future work
Bibliography
Chapter A Derivation of Photometric Error Jacobian
Abstract (in Korean)
- …