
    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems in which one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to that of a standard static multi-camera configuration.
    Comment: ICRA 201
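    The core idea, treating the unknown joint angles as extra optimization variables alongside the kinematic parameters, can be illustrated with a small nonlinear least-squares sketch. This is a hypothetical toy version, not the paper's formulation: the kinematic chain is reduced to two fixed link offsets plus a yaw-pitch rotation, and the names (residuals, rot_z, pts_static) are ours.

    import numpy as np
    from scipy.optimize import least_squares

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    def residuals(x, K, pts_static, obs):
        """Stacked reprojection errors over all frames.

        x = [t1 (3), t2 (3), then 2 joint angles per frame]: t1/t2 are the
        fixed link offsets of a toy 2-DOF chain, the angles are unknowns.
        pts_static[k]: (N,3) target points in the static-camera frame.
        obs[k]: (N,2) pixel observations in the dynamic camera.
        """
        t1, t2 = x[:3], x[3:6]
        res = []
        for k, (P, u) in enumerate(zip(pts_static, obs)):
            yaw, pitch = x[6 + 2 * k], x[7 + 2 * k]
            R = rot_z(yaw) @ rot_y(pitch)          # gimbal rotation at frame k
            P_dyn = (R.T @ (P - t1).T).T - t2      # static frame -> dynamic camera
            uvw = (K @ P_dyn.T).T                  # pinhole projection
            px = uvw[:, :2] / uvw[:, 2:3]
            res.append((px - u).ravel())
        return np.concatenate(res)

    # Usage outline: seed the link offsets from rough CAD values and the
    # angles with zeros, then refine everything jointly.
    #   x0 = np.concatenate([t1_cad, t2_cad, np.zeros(2 * num_frames)])
    #   sol = least_squares(residuals, x0, args=(K_cam, pts_static, obs))

    Because each frame contributes its own angle pair while the link offsets are shared, enough frames make the joint problem well constrained even without encoder readings.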

    Self-Calibrated Visual-Inertial Odometry for Rover Navigation

    Master's thesis, Department of Mechanical and Aerospace Engineering, Seoul National University, February 2019. Advisor: Chan Gook Park.
    This master's thesis presents a direct visual odometry robust to illumination changes, together with a self-calibrated visual-inertial odometry for rover localization using an IMU and a stereo camera. Most previous vision-based localization algorithms are vulnerable to sudden brightness changes caused by strong sunlight or varying exposure times, which violate the Lambertian surface assumption. Meanwhile, to reduce the error accumulation of visual odometry, an IMU can be employed to fill the gaps between successive images. However, the extrinsic parameters of a visual-inertial system must be computed precisely, since they bridge the visual and inertial coordinate frames both spatially and temporally. This thesis proposes a bucketed illumination model that accounts for partial and global illumination changes within a direct visual odometry framework for rover localization. Furthermore, it presents a self-calibrated visual-inertial odometry in which the time offset and relative pose between an IMU and a stereo camera are estimated from point feature measurements. Specifically, building on an extended Kalman filter pose estimator, the calibration parameters are augmented into the filter state. The proposed visual odometry is evaluated on an open-source dataset of images captured in a lunar-like environment. In addition, we design a rover using commercially available sensors, and field testing confirms that the self-calibrated visual-inertial odometry reduces the return-position localization error by 76.4% compared to the same visual-inertial odometry without self-calibration.
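    To make the "bucketed illumination model" concrete: a direct method estimates pose by minimizing photometric error, and a per-region (bucket) affine gain/bias can absorb brightness changes that affect only part of the image. The sketch below is our reading of that idea with illustrative names and a fixed 4x3 grid; the thesis's exact partitioning, weighting, and joint pose/illumination optimization are not reproduced here.

    import numpy as np

    def bucket_id(u, v, width, height, nx=4, ny=3):
        """Map pixel (u, v) to one of nx*ny rectangular image buckets."""
        bx = min(int(u * nx / width), nx - 1)
        by = min(int(v * ny / height), ny - 1)
        return by * nx + bx

    def photometric_residuals(I_ref, I_cur, pts_ref, pts_warped, gain, bias):
        """r_i = I_cur(warp(p_i)) - (a_b * I_ref(p_i) + b_b), where (a_b, b_b)
        are the affine illumination parameters of the bucket containing p_i.
        Integer pixel coordinates are used for brevity (no interpolation)."""
        h, w = I_ref.shape
        r = np.empty(len(pts_ref))
        for i, ((ur, vr), (uc, vc)) in enumerate(zip(pts_ref, pts_warped)):
            b = bucket_id(ur, vr, w, h)
            r[i] = I_cur[int(vc), int(uc)] - (gain[b] * I_ref[int(vr), int(ur)] + bias[b])
        return r

    A single global gain/bias pair handles exposure changes but not a shadow crossing part of the frame; per-bucket parameters keep the illumination model local while remaining cheap to estimate alongside the pose.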
λ˜ν•œ μƒμš© μ„Όμ„œ 및 λ‘œλ²„ ν”Œλž«νΌμ„ μ΄μš©ν•˜μ—¬ ν…ŒμŠ€νŠΈ λ‘œλ²„λ₯Ό μ„€κ³„ν•˜μ˜€κ³ , 이λ₯Ό 톡해 μ˜μƒκ΄€μ„± μ‹œμŠ€ν…œμ„ μžκ°€ 보정 ν•  경우 그렇지 μ•Šμ€ 경우 보닀 회기 μœ„μΉ˜ 였차(return position error)κ°€ 76.4% κ°μ†Œλ¨μ„ ν™•μΈν•˜μ˜€λ‹€.Abstract Contents List of Tables List of Figures Chapter 1 Introduction 1.1 Motivation and background 1.2 Objectives and contributions Chapter 2 Related Works 2.1 Visual odometry 2.2 Visual-inertial odometry Chapter 3 Direct Visual Odometry at Outdoor 3.1 Direct visual odometry 3.1.1 Notations 3.1.2 Camera projection model 3.1.3 Photometric error 3.2 The proposed algorithm 3.2.1 Problem formulation 3.2.2 Bucketed illumination model 3.2.3 Adaptive prior weight 3.3 Experimental results 3.3.1 Synthetic image sequences 3.3.2 MAV datasets 3.3.3 Planetary rover datasets Chapter 4 Self-Calibrated Visual-Inertial Odometry 4.1 State representation 4.1.1 IMU state 4.1.2 Calibration parameter state 4.2 State-propagation 4.3 Measurement-update 4.3.1 Point feature measurement 4.3.2 Measurement error modeling 4.4 Experimental results 4.4.1 Hardware setup 4.4.2 Vision front-end design 4.4.3 Rover field testing Chapter 5 Conclusions 5.1 Conclusion and summary 5.2 Future works Bibliography Chapter A Derivation of Photometric Error Jacobian κ΅­λ¬Έ 초둝Maste