9 research outputs found

    Review and classification of vision-based localisation techniques in unknown environments

    Get PDF
    This study presents a review of the state of the art and a novel classification of current vision-based localisation techniques in unknown environments. Thanks to progress in computer vision, vision-based systems can now be considered promising navigation aids that complement traditional navigation sensors such as global navigation satellite systems (GNSSs) and inertial navigation systems. This study reviews techniques that employ a camera as a localisation sensor, provides a classification of these techniques, and introduces schemes that exploit video information within a multi-sensor system. A general model is needed to compare existing techniques, to decide which approach is appropriate, and to identify axes for innovation. In addition, existing classifications only consider vision as a standalone tool and do not treat video as one sensor among others. The focus is on scenarios where no a priori knowledge of the environment is provided. These scenarios are the most challenging, since the system has to cope with objects as they appear in the scene without any prior information about their expected position.
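
    A minimal sketch of one such multi-sensor scheme, assuming a vision-derived position fix is fused with a GNSS fix by inverse-variance weighting (the function name and noise values below are illustrative, not from the study):

        import numpy as np

        def fuse_position(p_gnss, var_gnss, p_vision, var_vision):
            """Fuse two independent position estimates by inverse-variance
            weighting; returns the fused position and its variance."""
            w_gnss = 1.0 / var_gnss
            w_vision = 1.0 / var_vision
            p_fused = (w_gnss * p_gnss + w_vision * p_vision) / (w_gnss + w_vision)
            var_fused = 1.0 / (w_gnss + w_vision)
            return p_fused, var_fused

        # Example: a degraded GNSS fix (10 m std) and a vision fix (2 m std).
        p, var = fuse_position(np.array([12.0, 5.0]), 100.0,
                               np.array([10.5, 4.2]), 4.0)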

    Event-Based Visual-Inertial Odometry Using Smart Features

    Get PDF
    Event-based cameras are a novel type of visual sensor that operates under a unique paradigm, providing asynchronous, per-pixel data on changes in log-scale light intensity. This hardware-level approach to change detection allows these cameras to achieve an ultra-wide dynamic range and high temporal resolution. Furthermore, the advent of convolutional neural networks (CNNs) has led to state-of-the-art navigation solutions that now rival or even surpass human-engineered algorithms. The advantages offered by event cameras and CNNs make them excellent tools for visual odometry (VO). This document presents the implementation of a CNN trained to detect and describe features within an image, as well as the implementation of an event-based visual-inertial odometry (EVIO) pipeline that estimates a vehicle's 6-degree-of-freedom (DOF) pose using an affixed event-based camera with an integrated inertial measurement unit (IMU). The front end of this pipeline uses a neural network to generate image frames from asynchronous event-camera data. These frames are fed into a multi-state constraint Kalman filter (MSCKF) back end that uses the output of the developed CNN to perform measurement updates. The EVIO pipeline was tested on a selection from the Event-Camera Dataset [1] and on a dataset collected from a fixed-wing unmanned aerial vehicle (UAV) flight test conducted by the Autonomy and Navigation Technology (ANT) Center.
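
    The front end's task of turning an asynchronous event stream into frames can be illustrated with a much simpler scheme than the neural network described above: plain event accumulation over a fixed time window (the event tuple layout and array shapes here are assumptions):

        import numpy as np

        def accumulate_events(events, height, width, t_start, t_end):
            """Accumulate events (x, y, timestamp, polarity) falling in
            [t_start, t_end) into a signed event-count frame."""
            frame = np.zeros((height, width), dtype=np.float32)
            for x, y, t, polarity in events:
                if t_start <= t < t_end:
                    # Positive events brighten, negative events darken.
                    frame[int(y), int(x)] += 1.0 if polarity > 0 else -1.0
            return frame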

    Investigations into Increasing the Stability of Image-Based Motion Estimation during Near-Ground Flight

    Get PDF
    Optical navigation is a widely used aid in aircraft navigation. It is frequently employed when the availability of satellite signals for position determination cannot be guaranteed. It is typically used to determine the position of the aircraft and to detect obstacles. Another possible application is measuring flight velocity. If an excessively high velocity during an aircraft's landing goes undetected, it can cause the vehicle to flip over on touchdown. During near-ground flight, phenomena can occur that have so far received little attention in the field of optical aircraft navigation and that can impair its functioning: a restricted field of view caused by stirred-up dust, and the presence of the aircraft's own shadow in the camera image being evaluated. This work describes the development of tools intended to extend the range of application and the reliability of optical navigation methods under these degraded visual conditions near the ground. The goal is to make it possible to detect a flight velocity that is too high for a safe landing. The developed and investigated tools comprise the estimation of aircraft velocity under both unimpaired and impaired visibility, as well as the detection of the shadow cast by the aircraft. The tools are evaluated on recorded flight trials conducted with two aircraft of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt).
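
    As a hedged illustration of the velocity-estimation idea (not the thesis's actual algorithm), the image shift seen by a downward-looking camera can be converted to ground speed with a pinhole model, assuming the altitude above ground and the focal length in pixels are known:

        def ground_speed(shift_px, altitude_m, focal_px, dt_s):
            """Convert an image shift (pixels) between two frames taken dt_s
            apart into ground speed (m/s) for a nadir-pointing camera."""
            displacement_m = shift_px * altitude_m / focal_px
            return displacement_m / dt_s

        # Example: 12 px shift at 5 m altitude, f = 800 px, 50 ms frame gap.
        v = ground_speed(12.0, 5.0, 800.0, 0.05)  # -> 1.5 m/s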

    Development of GNSS/INS/SLAM Algorithms for Navigation in Constrained Environments

    Get PDF
    For land vehicles, the requirements on the navigation solution in terms of accuracy, integrity, continuity, and availability are increasingly stringent, especially with the development of autonomous vehicles. This type of application requires a navigation system capable of continuously providing an accurate and reliable position, velocity, and attitude solution, while also having a reasonable cost. In recent decades, GNSS has been the most widely used navigation system, especially as receiver costs have decreased over the years. However, despite its capability to provide absolute navigation information with long-term accuracy, this system suffers from signal-propagation problems, especially in urban environments where buildings, trees, and other structures hinder the reception of GNSS signals and degrade their quality. This can result in significant positioning errors, in some cases exceeding a kilometer. Many techniques have been proposed in the literature to mitigate these problems and improve GNSS accuracy; unfortunately, all of them have limitations. A possible way to overcome these problems is to fuse “good” GNSS measurements with other sensors having complementary characteristics. By exploiting the complementarity of sensors, hybridization algorithms can improve the navigation solution compared to that provided by each stand-alone sensor. The most widely implemented hybridization algorithms for land vehicles fuse GNSS measurements with inertial and/or odometric data. These dead-reckoning (DR) sensors ensure continuity when GNSS information is unavailable and improve performance when GNSS signals are degraded; in return, GNSS limits the drift of the DR solution when it is available. However, the performance achieved by this hybridization depends heavily on the quality of the DR sensors used, especially when GNSS signals are degraded or unavailable. Therefore, this Ph.D. thesis, which is part of a joint French research project involving two laboratories and three companies, aims to extend the classical hybridization architecture by including other sensors capable of improving navigation performance while having a low cost and being easily embeddable. To this end, the use of vision-based navigation techniques to provide additional information is proposed. Cameras have recently become an attractive positioning sensor with the development of Visual Odometry and Simultaneous Localization and Mapping (SLAM) techniques, which can provide an accurate navigation solution at reasonable cost. In addition, visual navigation solutions perform well in textured environments, where GNSS is likely to perform poorly. Therefore, this work focuses on developing a multi-sensor fusion architecture integrating visual information with the previously mentioned sensors. In particular, the contribution of this information to improving the performance of the vision-free navigation system is highlighted. The proposed architecture respects the project constraints: a versatile, modular, low-cost system capable of continuously providing a good navigation solution, in which each sensor may be easily discarded when its information should not be used in the navigation solution.
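
    A minimal sketch of the loosely coupled GNSS/DR hybridization pattern described above (a one-dimensional Kalman filter with illustrative noise values; the thesis's actual architecture is considerably more elaborate):

        class GnssDrFilter:
            """Dead-reckoning prediction with optional GNSS position updates."""

            def __init__(self, p0, var0, q_dr, r_gnss):
                self.p, self.var = p0, var0  # position estimate and variance
                self.q_dr = q_dr             # DR process noise per step
                self.r_gnss = r_gnss         # GNSS measurement variance

            def predict(self, dr_displacement):
                # DR (e.g. odometry) moves the estimate and inflates uncertainty.
                self.p += dr_displacement
                self.var += self.q_dr

            def update(self, gnss_position):
                # GNSS, when available, bounds the accumulated DR drift.
                k = self.var / (self.var + self.r_gnss)
                self.p += k * (gnss_position - self.p)
                self.var *= (1.0 - k)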

    Correlation-based visual odometry for ground vehicles

    No full text
    Reliable motion estimation is a key component for autonomous vehicles. We present a visual odometry method for ground vehicles using template matching. The method uses a downward-facing camera perpendicular to the ground and estimates the motion of the vehicle by analyzing the image shift from frame to frame. Specifically, an image region (template) is selected, and correlation is used to find the corresponding image region in the next frame. We introduce the use of multi-template correlation matching and suggest template quality measures for estimating the suitability of a template for correlation. Several aspects of the template choice are also discussed. Through an extensive analysis, we derive the expected theoretical error of our system and show its dependence on the template window size and image noise. We also show how a linear forward-prediction filter can be used to limit the search area and significantly increase computational performance. Using a single camera and assuming an Ackerman-steering model, the method has been implemented successfully on a large industrial forklift and a 4×4 vehicle. Over 6 km of field trials from our industrial test site, an off-road area, and an urban environment are presented, illustrating the applicability of the method as an independent sensor for large-vehicle motion estimation at practical velocities.
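
    A minimal sketch of the correlation step using OpenCV's normalized cross-correlation template matching (the single-template simplification, function name, and window size are assumptions; the paper uses multiple templates with quality measures):

        import cv2

        def frame_shift(prev_frame, next_frame, cx, cy, half=32):
            """Estimate the (dx, dy) pixel shift between two grayscale frames
            by matching a template centered at (cx, cy) in prev_frame."""
            template = prev_frame[cy - half:cy + half, cx - half:cx + half]
            scores = cv2.matchTemplate(next_frame, template,
                                       cv2.TM_CCOEFF_NORMED)
            _, _, _, (best_x, best_y) = cv2.minMaxLoc(scores)
            # matchTemplate reports the template's top-left corner.
            dx = best_x - (cx - half)
            dy = best_y - (cy - half)
            return dx, dy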