Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline
In this paper, we show how absolute orientation measurements provided by
low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion
pipeline. We show that this integration improves the runtime, robustness, and
quality of the 3D reconstruction. In particular, we use this orientation data
to seed and regularize the ICP registration technique. We also present a
technique to filter the pairs of 3D matched points based on the distribution of
their distances. This filter is implemented efficiently on the GPU. Estimating
the distribution of the distances helps control the number of iterations
necessary for the convergence of the ICP algorithm. Finally, we show
experimental results that highlight improvements in robustness, a speed-up of
almost 12%, and a gain in tracking quality of 53% for the ATE metric on the
Freiburg benchmark.
Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications Based on Location Clues 201
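The distance-distribution filter described above can be sketched as follows. This is a hypothetical CPU/NumPy illustration (the paper's version runs on the GPU), with a mean-plus-k-sigma rejection rule assumed for concreteness:

```python
import numpy as np

def filter_matched_pairs(src, dst, k=2.0):
    """Reject matched 3D point pairs whose distance is an outlier
    with respect to the estimated distance distribution.

    src, dst: (N, 3) arrays of matched points; k: hypothetical
    cutoff in standard deviations above the mean distance.
    """
    d = np.linalg.norm(src - dst, axis=1)   # per-pair distances
    mu, sigma = d.mean(), d.std()
    keep = d <= mu + k * sigma              # assumed rejection rule
    return src[keep], dst[keep], keep
```

The same distance statistics could also drive a convergence test for ICP, e.g. stopping the iterations once the mean pair distance plateaus.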
Pilot Assisted Inertial Navigation System Aiding Using Bearings-Only Measurements Taken Over Time
The objective of this work is to develop an alternative INS aiding source to GPS while preserving the autonomy of the integrated navigation system. It is proposed to develop a modernized method of aerial navigation using driftmeter measurements from an E/O system for ground feature tracking, together with an independent altitude sensor, in conjunction with the INS. The pilot tracks a ground feature with the E/O system while the aircraft is on autopilot, holding constant airspeed, altitude, and heading during an INS aiding session. The ground feature measurements from the E/O system and the INS output form the measurements provided to a linear Kalman filter (KF) running on the navigation computer to accomplish the INS aiding action. Aiding the INS is repeated periodically as operationally permissible, at the pilot's discretion. Little to no modeling error is present when implementing the linear Kalman filter, so the strength of the INS aiding action is determined exclusively by the prevailing degree of observability.
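The aiding loop above centers on a standard linear Kalman filter. A minimal, generic predict/update sketch follows; the actual state, measurement, and noise matrices for the INS error model are not specified in the abstract and would come from the paper:

```python
import numpy as np

def kf_step(x, P, F, Q, z, H, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state estimate and covariance; F, Q: process model;
    z, H, R: measurement, measurement model, and noise covariance."""
    # Predict with the (assumed linear) error dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the E/O-derived measurement
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In the scheme described above, one such cycle would run for each ground-feature measurement taken during an aiding session.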
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
The ubiquitous nature of GPS has fostered its widespread integration into a variety of navigation applications, both civilian and military. One alternative that ensures continued flight operations in GPS-denied environments is vision-aided navigation, an approach that combines visual cues from a camera with an inertial measurement unit (IMU) to estimate the navigation states of a moving body. The majority of vision-based navigation research has been conducted in the electro-optical (EO) spectrum, which is limited in certain operating environments. The aim of this work is to explore how such approaches extend to infrared imaging sensors. In particular, it examines the ability of medium-wave infrared (MWIR) imagery, which can operate at night and see through smoke, to expand the breadth of operations that vision-aided navigation can support. The experiments presented here are based on the Minor Area Motion Imagery (MAMI) dataset, which recorded GPS data, inertial measurements, EO imagery, and MWIR imagery captured during flights over Wright-Patterson Air Force Base. The approach applied here combines inertial measurements with EO position estimates from the structure-from-motion (SfM) algorithm. Although precision timing was not available for the MWIR imagery, the EO-based results for the scene demonstrate that trajectory estimates from SfM offer a significant increase in navigation accuracy when combined with inertial data over using an IMU alone. Results also demonstrate that MWIR-based position solutions provide a trajectory reconstruction similar to EO-based solutions for the same scenes. While the MWIR imagery and the IMU could not be combined directly, comparison with the combined solution using EO data supports the conclusion that MWIR imagery (with its unique phenomenologies) is capable of expanding the operating envelope of vision-aided navigation.
Progress on GPS-Denied, Multi-Vehicle, Fixed-Wing Cooperative Localization
This paper first summarizes recent results of a proposed method for multiple small fixed-wing aircraft cooperatively localizing in GPS-denied environments, and then provides a substantial future-work discussion that lays out a vision for the future of cooperative navigation. The goal of this work is to show that many small, potentially lower-cost vehicles could collaboratively localize better than a single, more accurate, higher-cost GPS-denied system. This work is guided by a novel methodology called relative navigation, developed in prior work. Initial work focused on the development and testing of a monocular visual-inertial odometry for fixed-wing aircraft that accounts for fixed-wing flight characteristics and sensing requirements. The front-end publishes information that enables a back-end in which the odometry from multiple vehicles is combined with inter-vehicle measurements and is communicated and shared between vehicles. Each vehicle is able to create a global, back-end, graph-based map and optimize it as new information is gained and measurements between vehicles overconstrain the graph. These inter-vehicle measurements allow the optimization to remove accumulated drift for more accurate estimates.
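The back-end idea, odometry chains from several vehicles tied together and overconstrained by inter-vehicle measurements, can be illustrated with a hypothetical 1-D toy graph solved by linear least squares. The actual system optimizes full vehicle poses on a graph; every quantity below is invented for illustration:

```python
import numpy as np

# Hypothetical 1-D toy: two vehicles (poses a0..a2 and b0..b2, stacked
# into one vector of six unknowns). Odometry gives relative constraints
# along each chain; one inter-vehicle measurement ties the chains
# together; an anchor fixes the gauge freedom.
N = 6
rows, rhs = [], []

def add_edge(i, j, meas):
    """Add the linear constraint x[j] - x[i] = meas."""
    row = np.zeros(N)
    row[i], row[j] = -1.0, 1.0
    rows.append(row)
    rhs.append(meas)

add_edge(0, 1, 1.0)   # vehicle A odometry: a1 - a0 = 1
add_edge(1, 2, 1.0)   # vehicle A odometry: a2 - a1 = 1
add_edge(3, 4, 1.0)   # vehicle B odometry: b1 - b0 = 1
add_edge(4, 5, 1.0)   # vehicle B odometry: b2 - b1 = 1
add_edge(0, 3, 0.5)   # inter-vehicle measurement: b0 - a0 = 0.5
anchor = np.zeros(N)
anchor[0] = 1.0       # gauge constraint: a0 = 0
rows.append(anchor)
rhs.append(0.0)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
# x now holds the jointly optimized poses of both vehicles
```

With more inter-vehicle edges than unknowns require, the same least-squares machinery averages out accumulated odometry drift, which is the effect the paper exploits.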
Time-Offset-Calibrated Visual-Inertial Odometry for Rover Navigation
Thesis (Master's) -- Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, College of Engineering, February 2019. Advisor: Chan Gook Park. This master's thesis presents a direct visual odometry robust to illumination changes and a self-calibrated visual-inertial odometry for rover localization using an IMU and a stereo camera. Most previous vision-based localization algorithms are vulnerable to sudden brightness changes, caused by strong sunlight or a varying exposure time, that violate the Lambertian surface assumption. Meanwhile, to reduce the error accumulation of a visual odometry, an IMU can be employed to fill the gaps between successive images. However, the extrinsic parameters of a visual-inertial system must be computed precisely, since they form the bridge between the visual and inertial coordinate frames, spatially as well as temporally. This thesis proposes a bucketed illumination model that accounts for partial and global illumination changes within a direct visual odometry framework for rover localization. Furthermore, it presents a self-calibrated visual-inertial odometry in which the time offset and relative pose between the IMU and the stereo camera are estimated using point feature measurements. Specifically, building on an extended Kalman filter pose estimator, the calibration parameters are augmented into the filter state. The proposed visual odometry is evaluated on an open-source dataset whose images were captured in a Lunar-like environment. In addition, a rover was designed using commercially available sensors, and field testing of the rover confirms that the self-calibrated visual-inertial odometry decreases the localization error, in terms of return position, by 76.4% compared to the visual-inertial odometry without self-calibration.
Abstract
Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation and background
1.2 Objectives and contributions
Chapter 2 Related Works
2.1 Visual odometry
2.2 Visual-inertial odometry
Chapter 3 Direct Visual Odometry in Outdoor Environments
3.1 Direct visual odometry
3.1.1 Notations
3.1.2 Camera projection model
3.1.3 Photometric error
3.2 The proposed algorithm
3.2.1 Problem formulation
3.2.2 Bucketed illumination model
3.2.3 Adaptive prior weight
3.3 Experimental results
3.3.1 Synthetic image sequences
3.3.2 MAV datasets
3.3.3 Planetary rover datasets
Chapter 4 Self-Calibrated Visual-Inertial Odometry
4.1 State representation
4.1.1 IMU state
4.1.2 Calibration parameter state
4.2 State-propagation
4.3 Measurement-update
4.3.1 Point feature measurement
4.3.2 Measurement error modeling
4.4 Experimental results
4.4.1 Hardware setup
4.4.2 Vision front-end design
4.4.3 Rover field testing
Chapter 5 Conclusions
5.1 Conclusion and summary
5.2 Future works
Bibliography
Chapter A Derivation of Photometric Error Jacobian
Abstract (in Korean)
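The bucketed illumination model from the thesis abstract above can be sketched as a per-bucket affine brightness fit. This is a simplified, hypothetical stand-in: the thesis estimates the illumination parameters jointly with the pose inside the direct odometry, not as a separate least-squares step, and the bucket layout here is invented:

```python
import numpy as np

def bucket_affine_params(ref, cur, buckets=4):
    """Fit ref = a * cur + b independently over each vertical image
    bucket, so that local (per-bucket) and global brightness changes
    can both be absorbed. Returns one (a, b) pair per bucket."""
    params = []
    for cols in np.array_split(np.arange(ref.shape[1]), buckets):
        x = cur[:, cols].ravel().astype(float)
        y = ref[:, cols].ravel().astype(float)
        A = np.stack([x, np.ones_like(x)], axis=1)  # [pixel, 1] design matrix
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        params.append((a, b))
    return params
```

A single global (a, b) pair would only handle uniform brightness changes; fitting per bucket is what lets the model absorb partial illumination changes such as a shadow crossing one side of the image.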
- β¦