1,764 research outputs found

    ๋‹ฌ ์ฐฉ๋ฅ™ ํ•ญ๋ฒ•์„ ์œ„ํ•œ ์ด๋ฒคํŠธ์™€ ํ”„๋ ˆ์ž„ ๊ธฐ๋ฐ˜ ํŠน์ง•์  ์ถ”์ 

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ํ•ญ๊ณต์šฐ์ฃผ๊ณตํ•™๊ณผ, 2021.8. ๊น€์˜ˆ์ค€.This paper proposes a 2-step feature tracking method (2-step EKLT) using intensity measurements (frames) and a stream of events in lunar landing navigation. Light conditions and motion blur are critical factors for optical navigation in lunar landing. The properties of event cameras, robust to high dynamic range and high dynamic motion scenarios, will give freedom in designing lunar landing scenarios. Built upon the recent seminal work on the event and frame based feature tracker (EKLT), we propose a 2-step strategy to eliminate drift and outliers in feature tracking. The translation of features is updated repeatedly with a stream of events. Then, features are refined using homography after a certain distance. Furthermore, we generate photo-realistic lunar landing frames and events based on an open-source event simulator and verify the feature tracking performance of 2-step EKLT. As a result, the localization performance using 2-step EKLT is improved compared to EKLT.๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋‹ฌ ์ฐฉ๋ฅ™ ํ•ญ๋ฒ•์—์„œ ๋น›์˜ ๋ฐ๊ธฐ (ํ”„๋ ˆ์ž„)์™€ ์ด๋ฒคํŠธ ์ŠคํŠธ๋ฆผ์„ ์ธก์ •์น˜๋กœ ์ด์šฉํ•˜๋Š” 2 ๋‹จ๊ณ„ ํŠน์ง•์  ์ถ”์  ๋ฐฉ๋ฒ• (2-step EKLT)์„ ์ œ์•ˆํ•œ๋‹ค. ๊ด‘์› ์กฐ๊ฑด๊ณผ ๋ชจ์…˜ ๋ธ”๋Ÿฌ๋Š” ๋‹ฌ ์ฐฉ๋ฅ™์‹œ ์ด๋ฏธ์ง€๋ฅผ ์ด์šฉํ•œ ํ•ญ๋ฒ•์˜ ์„ฑ๋Šฅ์„ ๊ฒฐ์ •ํ•˜๋Š” ์š”์†Œ์ด๋‹ค. ๋งŒ์•ฝ ๊ณ ์†์˜ ๋™์  ์ƒํ™ฉ๊ณผ ํ•˜์ด ๋‹ค์ด๋‚˜๋ฏน ๋ ˆ์ธ์ง€ ์ƒํ™ฉ์— ๊ฐ•๊ฑดํ•˜๋‹ค๋Š” ์ด๋ฒคํŠธ ์นด๋ฉ”๋ผ์˜ ํŠน์„ฑ์„ ์ด์šฉํ•œ๋‹ค๋ฉด, ๋‹ฌ ์ฐฉ๋ฅ™ ์‹œ๋‚˜๋ฆฌ์˜ค ์„ค๊ณ„์— ๋” ๋†’์€ ์ž์œ ๋„๋ฅผ ๋ถ€์—ฌ ํ•  ๊ฒƒ์ด๋‹ค. 2-step EKLT๋Š” ์ตœ๊ทผ ์ค‘์š”ํ•œ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ ์ค‘ ํ•˜๋‚˜์ธ ์ด๋ฒคํŠธ ๋ฐ ํ”„๋ ˆ์ž„ ๊ธฐ๋ฐ˜ ํŠน์ง•์  ์ถ”์ ๊ธฐ (EKLT)์— ์™€ํ•‘ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ 2 ๋‹จ๊ณ„์— ๊ฑธ์ณ ๊ณ„์‚ฐํ•˜๋Š” ์ „๋žต์„ ์ ์šฉํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด๋‹ค. ์ œ์•ˆํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํŠน์ง•์ ์˜ ๋“œ๋ฆฌํ”„ํŠธ ๋ฐ ์ด์ƒ์น˜๋ฅผ ์ œ๊ฑฐํ•  ์ˆ˜ ์žˆ๋‹ค. ์ผ์ฐจ์ ์œผ๋กœ ํŠน์ง•์ ์˜ ์œ„์น˜๊ฐ€ ์ด๋ฒคํŠธ ์ŠคํŠธ๋ฆผ๊ณผ ํ•จ๊ป˜ ๋ฐ˜๋ณต์ ์œผ๋กœ ์—…๋ฐ์ดํŠธ ํ•œ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ํŠน์ง•์ ๋“ค์ด ์ผ์ • ๊ฑฐ๋ฆฌ ์ด์ƒ์„ ์›€์ง์ด๋ฉด ํ˜ธ๋ชจ๊ทธ๋ž˜ํ”ผ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ง•์˜ ์œ„์น˜์™€ ๋ชจ์–‘์„ ๋ณด์ •ํ•œ๋‹ค. ์‹คํ—˜์—์„œ๋Š” ์˜คํ”ˆ ์†Œ์Šค ์ด๋ฒคํŠธ ์‹œ๋ฎฌ๋ ˆ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์‹ค์ ์ธ ๋‹ฌ ์ฐฉ๋ฅ™ ํ”„๋ ˆ์ž„ ๋ฐ ์ด๋ฒคํŠธ๋ฅผ ์ƒ์„ฑํ•˜์˜€๊ณ , 2-step EKLT์˜ ํŠน์ง•์  ์ถ”์  ์„ฑ๋Šฅ์„ ๊ฒ€์ฆํ–ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ 2-step EKLT๋ฅผ ์ด์šฉํ–ˆ์„ ๋•Œ ํ•ญ๋ฒ• ์„ฑ๋Šฅ์ด EKLT์— ๋น„ํ•ด ํ–ฅ์ƒ ๋˜์—ˆ๋‹ค.Chapter 1 Introduction 1 1.1 Motivation and background 1 1.2 Objectives and contributions 3 Chapter 2 Related Works 5 2.1 Warping models in event based methods 5 2.2 Homography based navigation of UAVs 8 Chapter 3 EKLT: Event and Frame Based Feature Tracking 10 3.1 Events & DAVIS 10 3.2 Algorithm overview 11 3.3 Optimization framework 14 3.3.1 Brightness-increment measure 14 3.3.2 Brightness-increment estimation 14 3.3.3 Examples of brightness-increment in EKLT 15 Chapter 4 2-step EKLT for Lunar Landing Navigation 17 4.1 Warping models 17 4.2 2-step EKLT 21 4.2.1 1st step 21 4.2.2 2nd step 21 4.2.3 Algorithm flow 22 4.2.4 Discussions 26 4.3 Lunar landing simulation dataset 26 4.3.1 Trajectories 27 4.3.2 ESIM: Open event camera simulator 29 4.4 Simulation results 33 4.4.1 Feature tracking results 33 4.4.2 Visual inertial odometry results 44 4.5 Experiment results 49 Chapter 5 Conclusion 55 5.1 Conclusion and summary 55 5.2 Future works 55 Bibliography 57 ๊ตญ๋ฌธ์ดˆ๋ก 62์„

    Homography-Based State Estimation for Autonomous Exploration in Unknown Environments

    This thesis presents the development of vision-based state estimation algorithms to enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. These state estimation algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates the camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to perform sensor fusion using measurements from an onboard inertial measurement unit (accelerometers and rate gyros) together with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires processing images from a monocular camera to detect and track planar feature points, followed by the computation of homography parameters. The state estimation algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. The state estimation algorithms are implemented using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but it accumulates drift errors over time because the homography provides only a relative measurement of position.
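
    A minimal sketch (assuming OpenCV; not the thesis filter) of how a homography-derived motion measurement could be formed for the EKF update described above: match planar feature points between two frames, estimate the homography, and decompose it into candidate rotations and translation directions.

```python
# Illustrative homography-based motion measurement for an EKF update.
import numpy as np
import cv2

def homography_motion(prev_pts, curr_pts, K):
    """prev_pts, curr_pts: (N, 2) float32 pixel coords of tracked planar features.
    K: 3x3 camera intrinsic matrix. Returns candidate (R, t_dir, plane_normal)."""
    H, mask = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, 2.0)
    if H is None:
        return []
    # decomposeHomographyMat returns up to 4 (R, t, n) candidates; the translation
    # is only known up to scale, which is why the filter fuses it with IMU data.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals))
```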

    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGV) in farming applications. Approaches based solely on visual cues or on low-cost GPS are easily prone to fail in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework, designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...), and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper. Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
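
    As a toy illustration of the pose-graph idea described above (1-D positions only, illustrative noise values; not the authors' system), odometry increments and sparse GPS fixes can be fused in a single weighted least-squares solve, which is what the nonlinear pose graph reduces to in the linear case.

```python
# Odometry edges constrain consecutive poses, GPS readings act as noisy absolute
# priors, and a weighted least-squares solve balances both (1-D toy example).
import numpy as np

def fuse_odometry_and_gps(odom_deltas, gps_fixes, odom_sigma=0.05, gps_sigma=2.0):
    """odom_deltas: increments between consecutive poses (len N-1).
    gps_fixes: dict {pose_index: measured_position}. Returns N optimized positions."""
    n = len(odom_deltas) + 1
    rows, rhs, weights = [], [], []
    # Odometry edges: x[i+1] - x[i] = delta_i
    for i, d in enumerate(odom_deltas):
        r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(d); weights.append(1.0 / odom_sigma)
    # GPS priors: x[k] = z_k
    for k, z in gps_fixes.items():
        r = np.zeros(n); r[k] = 1.0
        rows.append(r); rhs.append(z); weights.append(1.0 / gps_sigma)
    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: slightly biased odometry corrected by sparse GPS fixes
positions = fuse_odometry_and_gps([1.02] * 10, {0: 0.0, 5: 5.0, 10: 10.0})
```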

    Airborne vision-based attitude estimation and localisation

    Vision plays an integral part in a pilot's ability to navigate and control an aircraft. Therefore Visual Flight Rules have been developed around the pilot's ability to see the environment outside of the cockpit in order to control the attitude of the aircraft, to navigate and to avoid obstacles. The automation of these processes using a vision system could greatly increase the reliability and autonomy of unmanned aircraft and flight automation systems. This thesis investigates the development and implementation of a robust vision system which fuses inertial information with visual information in a probabilistic framework with the aim of aircraft navigation. The appearance of the horizon is a strong visual indicator of the attitude of the aircraft. This leads to the first research area of this thesis, visual horizon attitude determination. An image processing method was developed to provide high performance horizon detection and extraction from camera imagery. A number of horizon models were developed to link the detected horizon to the attitude of the aircraft with varying degrees of accuracy. The second area investigated in this thesis was visual localisation of the aircraft. A terrain-aided horizon model was developed to estimate the position and altitude as well as the attitude of the aircraft. This gives rough position estimates with highly accurate attitude information. The visual localisation accuracy was improved by incorporating ground feature-based map-aided navigation. Road intersections were detected using a developed image processing algorithm and then matched to a database to provide positional information. The developed vision system shows comparable performance to other non-vision-based systems while removing the dependence on external systems for navigation. The vision system and techniques developed in this thesis help to increase the autonomy of unmanned aircraft and flight automation systems for manned flight.
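
    A simplified sketch of the horizon-to-attitude link described above (a flat-horizon pinhole model with illustrative sign conventions; the thesis develops more accurate models): the tilt of the fitted horizon line gives roll, and its vertical offset from the image centre gives an approximate pitch.

```python
# Recover roll and an approximate pitch from detected horizon pixels.
import numpy as np

def attitude_from_horizon(horizon_xy, image_size, focal_px):
    """horizon_xy: (N, 2) pixel coordinates of detected horizon points.
    image_size: (width, height). focal_px: focal length in pixels."""
    w, h = image_size
    # Fit y = m*x + c to the horizon pixels
    m, c = np.polyfit(horizon_xy[:, 0], horizon_xy[:, 1], 1)
    roll = np.arctan(m)                                   # horizon tilt -> bank angle
    y_center = m * (w / 2.0) + c                          # horizon height at image centre
    pitch = np.arctan((y_center - h / 2.0) / focal_px)    # offset from the optical axis
    return np.degrees(roll), np.degrees(pitch)
```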

    Deeply-Integrated Feature Tracking for Embedded Navigation

    The Air Force Institute of Technology (AFIT) is investigating techniques to improve aircraft navigation using low-cost imaging and inertial sensors. Stationary features tracked within the image are used to improve the inertial navigation estimate. These features are tracked using a correspondence search between frames. Previous research investigated aiding these correspondence searches using inertial measurements (i.e., stochastic projection). While this research demonstrated the benefits of further sensor integration, it still relied on robust feature descriptors (e.g., SIFT or SURF) to obtain a reliable correspondence match in the presence of rotation and scale changes. Unfortunately, these robust feature extraction algorithms are computationally intensive and require significant resources for real-time operation. Simpler feature extraction algorithms are much more efficient, but their feature descriptors are not invariant to scale, rotation, or affine warping, which limits matching performance during arbitrary motion. This research uses inertial measurements to predict not only the location of the feature in the next image but also the feature descriptor, resulting in robust correspondence matching with low computational overhead. This novel technique, called deeply-integrated feature tracking, is exercised using real imagery. The term deep integration is derived from the fact that inertial information is used to aid the image processing. The navigation experiments presented demonstrate the performance of the new algorithm in relation to the previous work. Further experiments also investigate a monocular camera setup necessary for actual flight testing. Results show that the new algorithm is 12 times faster than its predecessor while still producing an accurate trajectory. Thirty percent more features were initialized using the new tracker over the previous algorithm. However, low-level aiding techniques successfully reduced the number of features initialized, indicating a more robust tracking solution through deep integration.
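
    A hypothetical sketch of the deep-integration idea described above, in which inertially predicted rotation and scale warp the stored template so that a cheap correlation search can stand in for an expensive invariant descriptor (function and parameter names are illustrative, assuming OpenCV; not the AFIT implementation).

```python
# Warp a feature's template by the INS-predicted rotation/scale, then match it
# with plain normalized cross-correlation inside the predicted search window.
import numpy as np
import cv2

def track_with_inertial_prediction(template, search_window, pred_rot_deg, pred_scale):
    """template: small grayscale patch from the previous image.
    search_window: grayscale region around the INS-predicted feature location."""
    h, w = template.shape
    # Warp the template by the inertially predicted rotation and scale
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), pred_rot_deg, pred_scale)
    predicted_patch = cv2.warpAffine(template, M, (w, h))
    # A simple correlation search now suffices for correspondence matching
    response = cv2.matchTemplate(search_window, predicted_patch, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(response)
    return (top_left[0] + w / 2.0, top_left[1] + h / 2.0), score
```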

    Urban Environment Navigation with Real-Time Data Utilizing Computer Vision, Inertial, and GPS Sensors

    The purpose of this research was to obtain a navigation solution that used real data, in a degraded or denied global positioning system (GPS) environment, from low-cost commercial off-the-shelf sensors. The sensors integrated together were a commercial inertial measurement unit (IMU), a monocular camera computer vision algorithm, and GPS. Furthermore, the monocular camera computer vision algorithm had to be robust enough to handle any camera orientation presented to it. This research develops a visual odometry 2-D zero velocity measurement that is derived from both the feature points extracted from a monocular camera and the rotation values given by an IMU. By presenting measurements as 2-D zero velocity measurements, errors associated with scale, which is unobservable by a monocular camera, can be removed from the measurements. The 2-D zero velocity measurements are represented as two normalized velocity vectors that are orthogonal to the vehicle's direction of travel, and are used to determine the error in the INS's measured velocity vector. This error is produced by knowing which directions the vehicle is not moving in, given by the 2-D zero velocity measurements, and comparing them to the direction of travel the vehicle is thought to be moving in. The performance was evaluated by comparing results obtained when different sensor pairings of a commercial IMU, GPS, and monocular computer vision algorithm were used to obtain the vehicle's trajectory. Three separate monocular cameras, each pointed in a different direction, were tested independently. Finally, the solutions provided by the GPS were degraded (i.e., the number of satellites available from the GPS was limited) to determine the effectiveness of adding a monocular computer vision algorithm to a system operating with a degraded GPS solution.
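
    A conceptual sketch of the 2-D zero-velocity measurement described above (illustrative, not the thesis code): the vision-derived direction of travel defines two orthogonal directions that should carry zero velocity, and projecting the INS velocity onto them yields scale-independent error measurements.

```python
# Project the INS velocity onto the two directions orthogonal to the
# vision-estimated direction of travel; nonzero projections are velocity errors.
import numpy as np

def zero_velocity_residual(travel_dir, ins_velocity):
    """travel_dir: 3-vector direction of travel from vision (any scale).
    ins_velocity: 3-vector velocity from the INS. Returns 2 residuals (m/s)."""
    d = travel_dir / np.linalg.norm(travel_dir)
    # Build an orthonormal basis {d, u1, u2}; u1, u2 span the zero-velocity plane
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u1 = np.cross(d, helper); u1 /= np.linalg.norm(u1)
    u2 = np.cross(d, u1)
    # Off-track velocity components indicate INS velocity error
    return np.array([u1 @ ins_velocity, u2 @ ins_velocity])
```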

    Terrain Referenced Navigation Using SIFT Features in LiDAR Range-Based Data

    The use of GNSS in aiding navigation has become widespread in aircraft. The long-term accuracy of an INS is enhanced by frequent updates from the highly precise position estimates that GNSS provides. Unfortunately, operational environments exist where a constant signal or the requisite number of satellites is unavailable, significantly degraded, or intentionally denied. This thesis describes a novel algorithm that uses scanning LiDAR range data, computer vision features, and a reference database to generate aircraft position estimates that update drifting INS estimates. The algorithm uses a single calibrated scanning LiDAR to sample the range and angle to the ground as an aircraft flies, forming a point cloud. The point cloud is orthorectified into a coordinate system common to a previously recorded reference of the flyover region. The point cloud is then interpolated into a Digital Elevation Model (DEM) of the ground. Range-based SIFT features are then extracted from both the airborne and reference DEMs. Features common to both the collected and reference range images are selected using a SIFT descriptor search. Geometrically inconsistent features are filtered out using RANSAC outlier removal, and surviving features are projected back to their source coordinates in the original point cloud. The point cloud features are used to calculate a least-squares correspondence transform that aligns the collected features to the reference features. The correspondence that best aligns the ground features is then applied to the nominal aircraft position, creating a new position estimate. The algorithm was tested on legacy flight data and typically produces position estimates within 10 meters of truth using threshold conditions.
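
    A simplified sketch of the matching stage described above (2-D similarity alignment with illustrative names, assuming OpenCV; the thesis computes a least-squares correspondence transform on the original point cloud): SIFT features from the airborne and reference range images are matched, filtered geometrically, and the resulting shift corrects the drifting position estimate.

```python
# Match SIFT features between airborne and reference range images, fit a robust
# 2-D transform, and apply its translation to the nominal aircraft position.
import numpy as np
import cv2

def dem_position_update(airborne_dem, reference_dem, nominal_xy, meters_per_px):
    """airborne_dem, reference_dem: 8-bit range images resampled onto a common grid."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(airborne_dem, None)
    kp_r, des_r = sift.detectAndCompute(reference_dem, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test
    if len(good) < 4:
        return nominal_xy  # not enough correspondences; keep the INS estimate
    src = np.float32([kp_a[m.queryIdx].pt for m in good])
    dst = np.float32([kp_r[m.trainIdx].pt for m in good])
    # RANSAC similarity fit stands in for the thesis' least-squares alignment
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return nominal_xy
    shift_px = M[:, 2]                      # translation component of the fit
    return np.asarray(nominal_xy) + shift_px * meters_per_px
```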

    Vision-Aided Navigation using Tracked Landmarks

    This thesis presents vision-based state estimation algorithms for autonomous vehicles to navigate within GPS-denied environments. To accomplish this objective, an approach is developed that utilizes a priori information about the environment. In particular, the algorithm leverages recognizable ‘landmarks’ in the environment, the positions of which are known in advance, to stabilize the state estimate. Measurements of the position of one or more landmarks in the image plane of a monocular camera are then filtered using an extended Kalman filter (EKF) with data from a traditional inertial measurement unit (IMU) consisting of accelerometers and rate gyros to produce the state estimate. Additionally, the EKF algorithm is adapted to accommodate a stereo camera configuration to measure the distance to a landmark using parallax. The performance of the state estimation algorithms for both the monocular and stereo camera configurations is tested and compared using simulation studies with a quadcopter UAV model. State estimation results are then presented using flight data from a quadcopter UAV instrumented with an IMU and a GoPro camera. It is shown that the proposed landmark navigation method is capable of preventing IMU drift errors by providing a GPS-like measurement when landmarks can be identified. Additionally, the landmark method pairs well with measurements that do not require a priori information during intervals when landmarks are not available.
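
    A minimal sketch of the stereo parallax measurement described above (illustrative names; not the thesis filter): the disparity between the two cameras gives the range to a known landmark, which can be compared with the range predicted from the current state estimate to form an EKF innovation.

```python
# Stereo parallax range to a known landmark, compared against the range
# predicted from the current position estimate.
import numpy as np

def stereo_landmark_residual(u_left, u_right, focal_px, baseline_m,
                             landmark_world, est_position):
    """u_left, u_right: horizontal pixel coords of the landmark in each camera.
    landmark_world, est_position: 3-vectors in the navigation frame."""
    disparity = u_left - u_right
    measured_range = focal_px * baseline_m / disparity      # range from parallax
    predicted_range = np.linalg.norm(np.asarray(landmark_world) - np.asarray(est_position))
    return measured_range - predicted_range                 # EKF innovation term
```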
    • โ€ฆ
    corecore