
    Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline

    In this paper, we show how absolute orientation measurements provided by low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion pipeline. We show that this integration improves the runtime, robustness, and quality of the 3D reconstruction. In particular, we use the orientation data to seed and regularize the ICP registration technique. We also present a technique to filter the pairs of matched 3D points based on the distribution of their distances; this filter is implemented efficiently on the GPU. Estimating the distribution of the distances helps control the number of iterations necessary for the convergence of the ICP algorithm. Finally, we show experimental results that highlight improvements in robustness, a speed-up of almost 12%, and a gain in tracking quality of 53% on the ATE metric for the Freiburg benchmark.
    Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications Based on Location Clues 201
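    A minimal NumPy sketch of the distance-distribution match filter described above; the paper's version runs on the GPU, and the k-sigma gate and function names here are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def filter_matches_by_distance(src_pts, dst_pts, k=2.0):
    """Reject matched 3D point pairs whose distance is far from the
    distribution of all pair distances (CPU sketch of the idea only).

    src_pts, dst_pts: (N, 3) arrays of corresponding 3D points.
    k: width of the acceptance gate in standard deviations (assumed value).
    """
    d = np.linalg.norm(src_pts - dst_pts, axis=1)   # per-pair distances
    mu, sigma = d.mean(), d.std()                   # estimate of the distance distribution
    keep = np.abs(d - mu) <= k * sigma              # k-sigma gate on each pair
    return src_pts[keep], dst_pts[keep], keep
```

    The same distance statistics can also serve as a convergence indicator for the ICP loop, which is how the abstract motivates controlling the number of iterations.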

    Pilot Assisted Inertial Navigation System Aiding Using Bearings-Only Measurements Taken Over Time

    The objective of this work is to develop an INS aiding source other than GPS, while preserving the autonomy of the integrated navigation system. It is proposed to develop a modernized method of aerial navigation using driftmeter measurements from an electro-optical (E/O) system for ground-feature tracking and an independent altitude sensor in conjunction with the INS. The pilot will track a ground feature with the E/O system while the aircraft is on autopilot, holding constant airspeed, altitude, and heading during an INS aiding session. The ground-feature measurements from the E/O system and the INS output form the measurements provided to a linear Kalman filter (KF) running on the navigation computer to accomplish the INS aiding action. Aiding the INS will be repeated periodically, as operationally permissible and at the pilot's discretion. Little to no modeling error will be present when implementing the linear Kalman filter, indicating that the strength of the INS aiding action will be exclusively determined by the prevailing degree of observability.
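    A minimal sketch of one predict/update cycle of the kind of linear Kalman filter that performs the INS aiding; the transition, process-noise, measurement, and measurement-noise matrices (F, Q, H, R) stand in for the aircraft- and sensor-specific models, which are not given in the abstract:

```python
import numpy as np

def kf_step(x, P, F, Q, z, H, R):
    """One linear Kalman filter cycle: propagate the INS error state, then
    correct it with a measurement z derived from the E/O ground-feature track
    and the INS output. All matrices are placeholders with assumed shapes."""
    # Time propagation of state estimate and covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement update
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # corrected state
    P = (np.eye(len(x)) - K @ H) @ P        # corrected covariance
    return x, P
```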

    On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    The ubiquity of GPS has fostered its widespread integration into navigation for a variety of applications, both civilian and military. One alternative for ensuring continued flight operations in GPS-denied environments is vision-aided navigation, an approach that combines visual cues from a camera with an inertial measurement unit (IMU) to estimate the navigation states of a moving body. The majority of vision-based navigation research has been conducted in the electro-optical (EO) spectrum, which has limited operability in certain environments. The aim of this work is to explore how such approaches extend to infrared imaging sensors. In particular, it examines the ability of medium-wave infrared (MWIR) imagery, which can operate at night and with improved visibility through smoke, to expand the breadth of operations that vision-aided navigation can support. The experiments presented here are based on the Minor Area Motion Imagery (MAMI) dataset, which recorded GPS data, inertial measurements, EO imagery, and MWIR imagery captured during flights over Wright-Patterson Air Force Base. The approach applied here combines inertial measurements with EO position estimates from the structure-from-motion (SfM) algorithm. Although precision timing was not available for the MWIR imagery, the EO-based results for the scene demonstrate that trajectory estimates from SfM offer a significant increase in navigation accuracy when combined with inertial data, compared to using an IMU alone. Results also demonstrated that MWIR-based position solutions provide a trajectory reconstruction similar to EO-based solutions for the same scenes. While the MWIR imagery and the IMU could not be combined directly, comparison against the combined solution using EO data supports the conclusion that MWIR imagery, with its unique phenomenology, is capable of expanding the operating envelope of vision-aided navigation.
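    A short OpenCV sketch of the two-view relative-pose step at the core of an SfM front-end like the one used here; the function name, thresholds, and the assumption of pre-matched pixel coordinates are illustrative and do not reproduce the MAMI processing pipeline:

```python
import cv2
import numpy as np

def two_view_pose(pts_prev, pts_curr, K):
    """Estimate relative camera rotation R and (up-to-scale) translation t
    between two frames from matched pixel coordinates.

    pts_prev, pts_curr: (N, 2) float arrays of matched points.
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t, inliers
```

    Because monocular translation is recovered only up to scale, position estimates from such a front-end are typically resolved against a metric sensor before fusion, which is the role the inertial data plays in the combined solution above.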

    Progress on GPS-Denied, Multi-Vehicle, Fixed-Wing Cooperative Localization

    This paper first summarizes recent results of a proposed method for multiple small, fixed-wing aircraft to cooperatively localize in GPS-denied environments. It then provides an extensive future-work discussion that lays out a vision for cooperative navigation. The goal of this work is to show that many small, potentially lower-cost vehicles can collaboratively localize better than a single, more accurate, higher-cost GPS-denied system. This work is guided by a novel methodology called relative navigation, developed in prior work. Initial work focused on developing and testing a monocular visual-inertial odometry front-end for fixed-wing aircraft that accounts for fixed-wing flight characteristics and sensing requirements. The front-end publishes information that enables a back-end in which the odometry from multiple vehicles is combined with inter-vehicle measurements and is communicated and shared between vehicles. Each vehicle is able to build a global, back-end, graph-based map and optimize it as new information is gained and measurements between vehicles over-constrain the graph. These inter-vehicle measurements allow the optimization to remove accumulated drift for more accurate estimates.
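    A minimal 2D sketch of the kind of back-end pose-graph optimization described above, in which per-vehicle odometry edges and inter-vehicle measurements are treated uniformly as relative-pose constraints; the 2D parameterization, residual form, and SciPy solver are simplifying assumptions, not the system's actual formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def relative_pose_residual(xi, xj, meas):
    """Error of one edge: pose of node j expressed in node i's frame,
    minus the measured relative pose (x, y, theta)."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    pred = np.array([c * dx + s * dy, -s * dx + c * dy, xj[2] - xi[2]])
    res = pred - meas
    res[2] = np.arctan2(np.sin(res[2]), np.cos(res[2]))  # wrap the angular error
    return res

def optimize_graph(poses0, edges):
    """poses0: (N, 3) initial keyframe poses for all vehicles, stacked.
    edges: list of (i, j, meas) tuples from per-vehicle odometry and from
    inter-vehicle measurements; the latter over-constrain the graph and
    let the optimizer remove accumulated drift."""
    def cost(flat):
        poses = flat.reshape(-1, 3)
        res = [relative_pose_residual(poses[i], poses[j], m) for i, j, m in edges]
        res.append(poses[0] - poses0[0])    # anchor one pose to fix the global gauge
        return np.concatenate(res)
    return least_squares(cost, poses0.ravel()).x.reshape(-1, 3)
```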

    Self-Calibrated Visual-Inertial Odometry for Rover Navigation

    Master's thesis, Seoul National University Graduate School, College of Engineering, Department of Mechanical and Aerospace Engineering, February 2019. Advisor: λ°•μ°¬κ΅­.
    This master's thesis presents a direct visual odometry robust to illumination changes and a self-calibrated visual-inertial odometry for rover localization using an IMU and a stereo camera. Most previous vision-based localization algorithms are vulnerable to sudden brightness changes caused by strong sunlight or varying exposure times, which violate the Lambertian surface assumption. Meanwhile, to reduce the error accumulation of a visual odometry, an IMU can be employed to fill the gaps between successive images. However, the extrinsic parameters of a visual-inertial system must be computed precisely, since they form the bridge between the visual and inertial coordinate frames, spatially as well as temporally. This thesis proposes a bucketed illumination model that accounts for partial and global illumination changes within a direct visual odometry framework for rover localization. Furthermore, this study presents a self-calibrated visual-inertial odometry in which the time offset and relative pose of an IMU and a stereo camera are estimated using point-feature measurements; specifically, the calibration parameters are augmented into the state of an extended-Kalman-filter pose estimator. The proposed visual odometry is evaluated on an open-source dataset whose images were captured in a lunar-like environment. In addition, a rover was designed using commercially available sensors, and field testing of the rover confirms that the self-calibrated visual-inertial odometry decreases the return-position localization error by 76.4% compared to the same visual-inertial odometry without self-calibration.
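    A rough sketch of a photometric residual with one affine brightness pair per image bucket, which is one plausible reading of the bucketed illumination model named above; the exact parameterization, weighting, and interpolation in the thesis may differ, and all names here are illustrative:

```python
import numpy as np

def bucketed_photometric_residuals(I_ref, I_cur, px_ref, px_warp, bucket_ids, a, b):
    """Per-pixel photometric errors r_i = I_ref(x_i) - (a_k * I_cur(w(x_i)) + b_k),
    where k = bucket_ids[i] selects the affine illumination pair of the image
    bucket containing pixel i (integer pixel coordinates assumed for simplicity).

    I_ref, I_cur: 2D grayscale images.
    px_ref, px_warp: (N, 2) integer (col, row) coordinates before/after warping.
    a, b: per-bucket gain and offset arrays, estimated jointly with the pose.
    """
    ref_vals = I_ref[px_ref[:, 1], px_ref[:, 0]].astype(np.float64)
    cur_vals = I_cur[px_warp[:, 1], px_warp[:, 0]].astype(np.float64)
    return ref_vals - (a[bucket_ids] * cur_vals + b[bucket_ids])
```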
λ˜ν•œ μƒμš© μ„Όμ„œ 및 λ‘œλ²„ ν”Œλž«νΌμ„ μ΄μš©ν•˜μ—¬ ν…ŒμŠ€νŠΈ λ‘œλ²„λ₯Ό μ„€κ³„ν•˜μ˜€κ³ , 이λ₯Ό 톡해 μ˜μƒκ΄€μ„± μ‹œμŠ€ν…œμ„ μžκ°€ 보정 ν•  경우 그렇지 μ•Šμ€ 경우 보닀 회기 μœ„μΉ˜ 였차(return position error)κ°€ 76.4% κ°μ†Œλ¨μ„ ν™•μΈν•˜μ˜€λ‹€.Abstract Contents List of Tables List of Figures Chapter 1 Introduction 1.1 Motivation and background 1.2 Objectives and contributions Chapter 2 Related Works 2.1 Visual odometry 2.2 Visual-inertial odometry Chapter 3 Direct Visual Odometry at Outdoor 3.1 Direct visual odometry 3.1.1 Notations 3.1.2 Camera projection model 3.1.3 Photometric error 3.2 The proposed algorithm 3.2.1 Problem formulation 3.2.2 Bucketed illumination model 3.2.3 Adaptive prior weight 3.3 Experimental results 3.3.1 Synthetic image sequences 3.3.2 MAV datasets 3.3.3 Planetary rover datasets Chapter 4 Self-Calibrated Visual-Inertial Odometry 4.1 State representation 4.1.1 IMU state 4.1.2 Calibration parameter state 4.2 State-propagation 4.3 Measurement-update 4.3.1 Point feature measurement 4.3.2 Measurement error modeling 4.4 Experimental results 4.4.1 Hardware setup 4.4.2 Vision front-end design 4.4.3 Rover field testing Chapter 5 Conclusions 5.1 Conclusion and summary 5.2 Future works Bibliography Chapter A Derivation of Photometric Error Jacobian κ΅­λ¬Έ 초둝Maste