1,757 research outputs found

    Review and classification of vision-based localisation techniques in unknown environments

    Get PDF
    This study presents a review of the state of the art and a novel classification of current vision-based localisation techniques in unknown environments. Because of the progress made in computer vision, vision-based systems can now be considered a promising navigation means that complements traditional navigation sensors such as global navigation satellite systems (GNSSs) and inertial navigation systems. This study reviews techniques that employ a camera as a localisation sensor, provides a classification of these techniques, and introduces schemes that exploit video information within a multi-sensor system. A general model is needed to compare existing techniques, decide which approach is appropriate, and identify axes for innovation. Moreover, existing classifications consider vision only as a standalone tool and do not treat video as one sensor among others. The focus is on scenarios in which no a priori knowledge of the environment is provided; these are the most challenging, since the system must cope with objects as they appear in the scene without any prior information about their expected position.

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

    Get PDF
    In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles depend on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS may be denied by jamming. This thesis investigates the development of vision-aided navigation algorithms that use processed images from a monocular camera as an alternative to GPS. The approach explored here defines a set of inertial landmarks whose locations within the environment are known, and employs image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two: the continuously adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied to the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that fuse accelerometer and rate-gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
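The surrogate-GPS idea in the abstract above — feeding pixel measurements of landmarks with known world positions into a navigation filter — can be illustrated with a minimal sketch. This is not the thesis implementation: the pinhole model, focal length, landmark position, and noise values are all assumed for illustration.

```python
import numpy as np

f = 500.0                                 # assumed focal length in pixels
landmark = np.array([10.0, 2.0, 1.0])     # known landmark position (m)

def project(p):
    """Pinhole projection of the landmark seen from camera position p
    (camera axes assumed aligned with the world frame, x forward)."""
    d = landmark - p
    return f * np.array([d[1] / d[0], d[2] / d[0]])

def ekf_update(p, P, z, R):
    """One EKF update using the landmark pixel centroid as measurement."""
    eps = 1e-6
    H = np.zeros((2, 3))
    for i in range(3):                    # numerical Jacobian of project()
        dp = np.zeros(3)
        dp[i] = eps
        H[:, i] = (project(p + dp) - project(p - dp)) / (2.0 * eps)
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return p + K @ (z - project(p)), (np.eye(3) - K @ H) @ P

p = np.array([0.5, 0.0, 0.0])             # prior position estimate
P = np.eye(3) * 0.25                      # prior covariance
z = project(np.zeros(3))                  # centroid measured from the true pose
p_new, P_new = ekf_update(p, P, z, np.eye(2))
print(p_new)                              # estimate moves toward the true pose
```

With two pixel coordinates constraining three position states, a single landmark leaves the range direction weakly observable, which is why the filters described above fuse IMU propagation and, where available, several landmarks.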

    GNSS/Multi-Sensor Fusion Using Continuous-Time Factor Graph Optimization for Robust Localization

    Full text link
    Accurate and robust vehicle localization in highly urbanized areas is challenging because sensors are often corrupted in such complicated, large-scale environments. This paper introduces GNSS-FGO, an online and global trajectory estimator that fuses GNSS observations alongside multiple sensor measurements for robust vehicle localization. In GNSS-FGO, we fuse asynchronous sensor measurements into the graph with a continuous-time trajectory representation using Gaussian process regression. This enables querying states at arbitrary timestamps, so sensor observations are fused without requiring strict state and measurement synchronization; the proposed method thus presents a generalized factor graph for multi-sensor fusion. To evaluate and study different GNSS fusion strategies, we fuse GNSS measurements in loose and tight coupling with a speed sensor, an IMU, and lidar odometry. In experimental studies, we employ datasets from measurement campaigns in Aachen, Duesseldorf, and Cologne and present comprehensive discussions of sensor observations, smoother types, and hyperparameter tuning. Our results show that the proposed approach enables robust trajectory estimation in dense urban areas where classic multi-sensor fusion methods fail due to sensor degradation. In a test sequence containing a 17 km route through Aachen, the proposed method achieves a mean 2D positioning error of 0.19 m for loosely coupled GNSS fusion and 0.48 m when fusing raw GNSS observations with lidar odometry in tight coupling.
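The continuous-time trick described above — querying the state at an arbitrary measurement timestamp — can be sketched in one dimension with the white-noise-on-acceleration (WNOA) GP prior commonly used for such trajectories. This is an illustrative sketch, not the GNSS-FGO code; the scalar state and the value of `qc` are assumptions.

```python
import numpy as np

qc = 1.0  # power spectral density of the assumed WNOA prior

def Phi(dt):
    """Constant-velocity state transition for [position, velocity]."""
    return np.array([[1.0, dt], [0.0, 1.0]])

def Q(dt):
    """Process noise accumulated over dt under white-noise-on-acceleration."""
    return qc * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                          [dt**2 / 2.0, dt]])

def interpolate(x0, x1, t0, t1, tau):
    """Query the GP posterior mean at an arbitrary time tau in [t0, t1]."""
    T = t1 - t0
    Psi = Q(tau - t0) @ Phi(t1 - tau).T @ np.linalg.inv(Q(T))
    Lam = Phi(tau - t0) - Psi @ Phi(T)
    return Lam @ x0 + Psi @ x1

# Two estimated states ("knots") one second apart, constant velocity 1 m/s:
x0 = np.array([0.0, 1.0])
x1 = np.array([1.0, 1.0])
# A measurement stamped at t = 0.35 s can be attached to the interpolated
# state instead of forcing a new knot into the graph:
print(interpolate(x0, x1, 0.0, 1.0, 0.35))
```

Because the interpolated state is a closed-form combination of the two neighbouring knots, an asynchronous GNSS, IMU, or lidar-odometry factor can be evaluated at its own timestamp without adding a state variable to the graph.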

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    Get PDF
    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the past decade. During this time, the evolution of technologies relevant to UAVs, such as component miniaturization, increased computational capabilities, and advances in computer vision techniques, has enabled important progress in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs make it possible to develop cutting-edge solutions to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of UAV applications beyond classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey therefore covers artificial perception applications that represent important recent advances in the expert-system field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, addressing fundamental technical problems such as visual odometry, obstacle detection, and mapping and localization, and they are analyzed based on their capabilities and potential utility. Moreover, the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects TRA2015-63708-R and TRA2013-48314-C3-1-R.

    Enhanced Subsea Acoustically Aided Inertial Navigation

    Get PDF

    Real-Time GPS-Alternative Navigation Using Commodity Hardware

    Get PDF
    Modern navigation systems can use the Global Positioning System (GPS) to accurately determine position with precision in some cases bordering on millimeters. Unfortunately, GPS technology is susceptible to jamming, interception, and unavailability indoors or underground. There are several navigation techniques that can be used to navigate during times of GPS unavailability, but there are very few that result in GPS-level precision. One method of achieving high precision navigation without GPS is to fuse data obtained from multiple sensors. This thesis explores the fusion of imaging and inertial sensors and implements them in a real-time system that mimics human navigation. In addition, programmable graphics processing unit technology is leveraged to perform stream-based image processing using a computer's video card. The resulting system can perform complex mathematical computations in a fraction of the time those same operations would take on a CPU-based platform. The resulting system is an adaptable, portable, inexpensive and self-contained software and hardware platform, which paves the way for advances in autonomous navigation, mobile cartography, and artificial intelligence.

    Study of Future On-board GNSS/INS Hybridization Architectures

    Get PDF
    The quick development and densification of air traffic have led to improved approach and landing operations using more flexible flight paths and more demanding minima. Most aircraft navigation operations are currently supported by GNSS augmented with GBAS, SBAS, and ABAS; SBAS and GBAS support navigation operations down to precision approaches. However, these augmentations require an expensive network of reference receivers and constant real-time broadcasts to the airborne user. To overcome these constraints, the ABAS system integrates on-board information provided by an inertial navigation system (INS) to enhance navigation performance. In that scheme, the INS is coupled with a GPS receiver in a GPS/baro-INS hybridization solution already implemented on current commercial aircraft. This solution reaches better performance in terms of accuracy, integrity, availability, and continuity than either system taken separately. However, the most stringent requirements for precision approaches or automatic landings cannot yet be fulfilled by the current hybridization. The main idea of this PhD study is therefore to extend the hybridization process by including other sensors or systems, whether or not currently available on board, and to assess the performance reached by such a global hybridization architecture. The objective is to provide most of the navigation parameters for the most critical operations with the level of performance required by ICAO.
    The operations targeted by this hybridization are precision approaches, with a particular focus on CAT III precision approaches and roll-out on the runway. Particular attention was given to video sensors in this study. Video-based navigation is a fully autonomous navigation option, increasingly used today, that relies only on sensors measuring the vehicle's motion and observing the surrounding scene. Whether it is used to compensate for the loss or degradation of a navigation system or to improve the existing navigation solution during the most critical operations, the benefits of video are numerous.
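As a toy illustration of why baro-INS hybridization helps, consider the vertical channel: accelerometer integration alone drifts quadratically with any uncompensated bias, while periodic barometric updates in a Kalman filter bound the error. This is purely a sketch under assumed noise values and a simulated constant-altitude profile, not the architecture studied in the thesis.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # state: [altitude, vertical velocity]
B = np.array([0.5 * dt**2, dt])          # maps vertical acceleration into state
H = np.array([[1.0, 0.0]])               # the barometer measures altitude only
Q = np.diag([1e-6, 1e-4])                # process noise per step (assumed)
R = np.array([[4.0]])                    # baro variance, ~2 m standard deviation

rng = np.random.default_rng(0)
x = np.zeros(2)                          # filter state starts at the truth
P = np.eye(2)
accel_bias = 0.05                        # uncompensated accelerometer bias (m/s^2)

for k in range(2000):                    # 20 s hover at a true altitude of 0 m
    acc_meas = 0.0 + accel_bias          # true vertical acceleration is zero
    x = F @ x + B * acc_meas             # inertial propagation (drifts with bias)
    P = F @ P @ F.T + Q
    if k % 10 == 0:                      # 10 Hz barometric updates
        z = 0.0 + rng.normal(0.0, 2.0)   # noisy baro altitude around the truth
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K[:, 0] * (z - x[0])
        P = (np.eye(2) - K @ H) @ P

# Pure integration of the bias would drift 0.5*0.05*20^2 = 10 m; the
# hybridized estimate stays bounded at the metre level instead.
print(abs(x[0]))
```

The same bounding effect, generalized to the full state and to GNSS, baro, and additional sensors, is what the hybridization architectures studied in the thesis aim to certify against ICAO requirements.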

    Sky-GVINS: a Sky-segmentation Aided GNSS-Visual-Inertial System for Robust Navigation in Urban Canyons

    Full text link
    Integrating Global Navigation Satellite Systems (GNSS) into Simultaneous Localization and Mapping (SLAM) systems has drawn increasing attention as a route to a global and continuous localization solution. Nonetheless, in dense urban environments, GNSS-based SLAM systems suffer from non-line-of-sight (NLOS) measurements, which can cause a sharp deterioration in localization results. In this paper, we propose to detect the sky area from an up-looking camera to improve GNSS measurement reliability and obtain more accurate position estimates. We present Sky-GVINS, a sky-aware GNSS-visual-inertial system built on a recent work called GVINS. Specifically, we adopt a global thresholding method to segment the sky and non-sky regions in the fish-eye, sky-pointing image, and then project satellites onto the image using the geometric relationship between the satellites and the camera. Satellites falling in non-sky regions are then rejected to eliminate NLOS signals. We investigated various segmentation algorithms for sky detection and found that the Otsu algorithm achieves the highest classification rate and computational efficiency, despite its simplicity and ease of implementation. To evaluate the effectiveness of Sky-GVINS, we built a ground robot and conducted extensive real-world experiments on campus. Experimental results show that our method improves localization accuracy in both open areas and dense urban environments compared to the baseline method. Finally, we provide a detailed analysis and point out possible directions for future research. For details, visit our project website at https://github.com/SJTU-ViSYS/Sky-GVINS.
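The NLOS-rejection pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the Sky-GVINS code: the equidistant fish-eye model, the synthetic image, and the satellite list are all assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def project_equidistant(az, el, cx, cy, f):
    """Map azimuth/elevation (radians) to a pixel via r = f * zenith angle."""
    r = f * (np.pi / 2 - el)
    return int(cx + r * np.sin(az)), int(cy - r * np.cos(az))

# Synthetic 200x200 sky image: bright sky except a dark building to the east.
img = np.full((200, 200), 220, dtype=np.uint8)
img[:, 140:] = 40                         # building occupying the east side
sky_mask = img > otsu_threshold(img)      # True where the pixel is sky

sats = {"G01": (np.deg2rad(0), np.deg2rad(45)),    # north, mid elevation
        "G07": (np.deg2rad(90), np.deg2rad(20))}   # east, low elevation
usable = []
for name, (az, el) in sats.items():
    u, v = project_equidistant(az, el, cx=100, cy=100, f=60)
    if 0 <= v < 200 and 0 <= u < 200 and sky_mask[v, u]:
        usable.append(name)               # line-of-sight satellite kept
print(usable)                             # the east-blocked satellite is dropped
```

Otsu's method needs only the grey-level histogram, which is consistent with the paper's finding that it is both effective and computationally cheap for sky/non-sky separation.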