172 research outputs found

    Optimal Image-Aided Inertial Navigation

    Get PDF
    The utilization of cameras in integrated navigation systems is among the most recent directions in scientific research and high-tech industry development. The research is motivated by the need to calibrate off-the-shelf cameras and to fuse imaging and inertial sensors in poor GNSS environments. The three major contributions of this dissertation are: (1) The development of a structureless camera auto-calibration and system calibration algorithm for a GNSS, IMU and stereo camera system. The auto-calibration bundle adjustment utilizes the scale restraint equation, which is free of object coordinates. The number of parameters to be estimated is significantly reduced in comparison with a self-calibrating bundle adjustment based on the collinearity equations; the proposed method is therefore computationally more efficient. (2) The development of a loosely-coupled visual odometry aided inertial navigation algorithm. The fusion of the two sensors is usually performed using a Kalman filter. The pose changes are pairwise time-correlated, i.e. the measurement noise vector at the current epoch is correlated only with the one from the previous epoch. Time-correlated errors are usually modelled by a shaping filter. The shaping filter developed in this dissertation uses Cholesky factors, derived from the variance and covariance matrices of the measurement noise vectors, as coefficients. Test results showed that the proposed algorithm performs better than the existing ones and provides more realistic covariance estimates. (3) The development of a tightly-coupled stereo multi-frame aided inertial navigation algorithm for reducing position and orientation drifts. Usually, image aiding based on visual odometry uses features tracked only across a pair of consecutive image frames. The proposed method integrates features tracked across multiple overlapping image frames to reduce the position and orientation drifts.
The measurement equation is derived from the SLAM measurement equation system, where the landmark positions in SLAM are algebraically eliminated by time-differencing. However, the derived measurements are time-correlated. Through a sequential de-correlation, the Kalman filter measurement update can be performed sequentially and optimally. The main advantages of the proposed algorithm are the reduction of computational requirements compared to SLAM and a seamless integration into an existing GNSS-aided IMU system.
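The de-correlation idea above can be sketched as follows: whitening a correlated measurement with the Cholesky factor of its noise covariance makes the components uncorrelated, so the Kalman update can run one scalar row at a time. This is a minimal illustration of the standard technique, not the dissertation's implementation; all names are ours.

```python
import numpy as np

def whiten_and_update(x, P, z, H, R):
    """Apply a correlated measurement z = H x + v, v ~ N(0, R), sequentially.

    Whitening with the Cholesky factor of R de-correlates the measurement
    components, so the Kalman update can be done one scalar row at a time.
    """
    L = np.linalg.cholesky(R)          # R = L L^T
    z_w = np.linalg.solve(L, z)        # whitened measurement, unit noise variance
    H_w = np.linalg.solve(L, H)        # whitened design matrix
    for i in range(len(z_w)):          # scalar updates, one row at a time
        h = H_w[i:i+1, :]              # 1 x n row
        s = float(h @ P @ h.T) + 1.0   # innovation variance (whitened noise var = 1)
        k = (P @ h.T) / s              # n x 1 gain
        x = x + (k * (z_w[i] - float(h @ x))).ravel()
        P = P - k @ h @ P
    return x, P
```

The sequence of scalar updates on the whitened measurement is algebraically equivalent to a single batch update with the full covariance R, which is what makes the sequential processing optimal.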

    Combined Stereo Vision and Inertial Navigation for Automated Aerial Refueling

    Get PDF
    This paper describes the design of an EKF to obtain the precise relative position of two aircraft during a refueling maneuver while operating in GPS-denied environments. The EKF uses the INS already present in both aircraft as well as the stereo camera system organic to new tanker systems. The aircraft trajectories are generated according to authentic refueling profiles with flight dynamics software and executed in a 3D virtual environment to enable deterministic simulation of the stereo camera system and to demonstrate the effectiveness of the combined system in a refueling scenario. Results show the system can achieve sufficient accuracy utilizing only SMV and INS measurements, though the system is capable of incorporating GPS measurements when available for an additional increase in accuracy.
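The range measurement a stereo camera system contributes can be sketched with the rectified pinhole model: depth follows from disparity, then the lateral coordinates from back-projection. The focal length, baseline and principal point below are made-up illustrative values, not parameters from the paper.

```python
def stereo_point(u_left, u_right, v, f, baseline, cx, cy):
    """Triangulate a 3D point (left-camera frame) from a rectified stereo pair.

    u_left/u_right are the pixel columns of the same feature in the two
    images, v the pixel row, f the focal length in pixels, baseline the
    distance between camera centers, (cx, cy) the principal point.
    """
    disparity = u_left - u_right        # pixels; larger disparity = closer point
    Z = f * baseline / disparity        # depth along the optical axis
    X = (u_left - cx) * Z / f           # lateral offset from back-projection
    Y = (v - cy) * Z / f
    return X, Y, Z
```

As the abstract's refueling scenario requires, accuracy degrades as disparity shrinks (i.e. with distance), which is why the filter fuses these points with INS predictions.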

    State of the art in vision-based localization techniques for autonomous navigation systems

    Get PDF

    Calibrated uncertainty estimation for SLAM

    Full text link
    The focus of this Master's thesis is the analysis of uncertainty calibration for Simultaneous Localization and Mapping (SLAM) using neural network-based measurement models. SLAM is a fundamental problem in robotics and computer vision, with numerous applications ranging from self-driving cars to augmented reality. At its core, SLAM involves estimating the pose (i.e., position and orientation) of a robot or camera as it moves through an unknown environment while simultaneously constructing a map of the surrounding environment. Visual SLAM, which uses images as input, is a commonly used SLAM framework. However, traditional Visual SLAM methods rely on handcrafted features and can be vulnerable to challenges such as poor lighting and occlusion. Deep learning has emerged as a more scalable and robust approach, with Convolutional Neural Networks (CNNs) becoming the de facto perception system in robotics. To integrate CNN-based methods with SLAM systems, it is necessary to estimate the uncertainty or noise in the perception measurements. Bayesian deep learning has provided various methods for estimating uncertainty in neural networks, including ensembles, distributions over network parameters, and adding variance heads for direct uncertainty prediction. However, it is also essential to ensure that these uncertainty estimates are well-calibrated, i.e. that they accurately reflect the error in the prediction.
In this Master's thesis, we address this challenge by developing a system for SLAM that incorporates a neural network as the measurement model together with calibrated uncertainty estimates. We show that this system performs better than approaches which use the traditional uncertainty estimation method, where uncertainty estimates are simply treated as hyperparameters that are tuned manually. Our results demonstrate the importance of accurately accounting for uncertainty in the SLAM problem, particularly when using a neural network as the measurement model, in order to achieve reliable and robust localization and mapping.
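A common way to check whether predicted uncertainties are well-calibrated in the sense described above is to normalize each error by its predicted standard deviation and see whether the squared values average to about one. A minimal 1-D sketch of such an ANEES-style check (illustrative, not necessarily the thesis's exact metric):

```python
import numpy as np

def calibration_ratio(errors, sigmas):
    """Mean squared normalized error over a validation set.

    ~1 means the predicted sigmas match the observed errors; >1 means the
    network is overconfident (sigmas too small); <1 means underconfident.
    """
    z = np.asarray(errors, dtype=float) / np.asarray(sigmas, dtype=float)
    return float(np.mean(z ** 2))
```

A recalibration step would then rescale the predicted sigmas until this ratio approaches one before feeding them to the SLAM back-end.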

    Humanoid odometric localization integrating kinematic, inertial and visual information

    Get PDF
    We present a method for odometric localization of humanoid robots using standard sensing equipment, i.e., a monocular camera, an inertial measurement unit (IMU), joint encoders and foot pressure sensors. Data from all these sources are integrated using the prediction-correction paradigm of the Extended Kalman Filter. Position and orientation of the torso, defined as the representative body of the robot, are predicted through kinematic computations based on joint encoder readings; an asynchronous mechanism triggered by the pressure sensors is used to update the placement of the support foot. The correction step of the filter uses as measurements the torso orientation, provided by the IMU, and the head pose, reconstructed by a VSLAM algorithm. The proposed method is validated on the humanoid NAO through two sets of experiments: open-loop motions aimed at assessing the accuracy of localization with respect to a ground truth, and closed-loop motions where the humanoid pose estimates are used in real-time as feedback signals for trajectory control.

    Reliable localization methods for intelligent vehicles based on environment perception

    Get PDF
    Mención Internacional en el título de doctor. In the near past, autonomous vehicles and Intelligent Transport Systems (ITS) were seen as a potential future of transportation. Today, thanks to the technological advances of recent years, the feasibility of such systems is no longer in question. Some of these autonomous driving technologies already share our roads, and even commercial vehicles include more Advanced Driver-Assistance Systems (ADAS) every year. As a result, transportation is becoming more efficient and the roads are considerably safer. One of the fundamental pillars of an autonomous system is self-localization. An accurate and reliable estimate of the vehicle's pose in the world is essential for navigation. In the context of outdoor vehicles, the Global Navigation Satellite System (GNSS) is the predominant localization system. However, these systems are far from perfect, and their performance degrades in environments with limited satellite visibility. Additionally, their dependence on the environment can make them unreliable when the environment changes. Accordingly, the goal of this thesis is to exploit perception of the environment to enhance localization systems in intelligent vehicles, with special attention to their reliability. To this end, this thesis presents several contributions. First, a study on exploiting 3D semantic information in LiDAR odometry is presented, providing insights into the contribution of each type of element in the scene to the odometry output. The experimental results have been obtained using a public dataset and validated on a real-world platform. Second, a method to estimate the localization error using landmark detections is proposed, which is later exploited by a landmark placement optimization algorithm. This method, validated in a simulation environment, is able to determine a set of landmarks such that the localization error never exceeds a predefined limit. Finally, a cooperative localization algorithm based on a Genetic Particle Filter is proposed to exploit vehicle detections in order to enhance the estimate provided by GNSS systems. Multiple experiments are carried out in different simulation environments to validate the proposed method. Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid. Secretario: Joshué Manuel Pérez Rastelli.- Secretario: Jorge Villagrá Serrano.- Vocal: Enrique David Martí Muño
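The cooperative step of fusing vehicle detections with GNSS can be sketched as one measurement update of a plain bootstrap particle filter. This is a deliberate simplification (the thesis uses a Genetic Particle Filter, which additionally applies genetic operators to the particle set) and all names below are ours.

```python
import numpy as np

def pf_measurement_update(particles, weights, z, meas_fn, sigma):
    """One measurement update of a basic bootstrap particle filter.

    Reweights each particle by the Gaussian likelihood of observation z,
    e.g. a GNSS fix or a detection of this vehicle by a neighbouring one,
    then renormalizes so the weights sum to 1.
    """
    predicted = np.array([meas_fn(p) for p in particles])   # expected measurement per particle
    likelihood = np.exp(-0.5 * ((z - predicted) / sigma) ** 2)
    new_w = weights * likelihood
    return new_w / new_w.sum()
```

Particles consistent with the detection keep most of the probability mass, which is how an external detection can pull the estimate away from a degraded GNSS solution.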

    Direct Visual-Inertial Odometry using Epipolar Constraints for Land Vehicles

    Get PDF
    Autonomously operating vehicles are being developed to take over human supervision in applications such as search and rescue, surveillance, exploration and scientific data collection. For a vehicle to operate autonomously, it is important for it to estimate its location with respect to its surroundings in order to make decisions about its next movement. Simultaneous Localization and Mapping (SLAM) is a technique that utilizes information from multiple sensors to not only estimate the vehicle's location but also simultaneously build a map of the environment. Substantial research efforts are being devoted to making pose predictions using fewer sensors. Laser scanners, which are expensive, have typically been used as the primary sensor for environment perception, as they measure obstacle distance with good accuracy and generate a point-cloud map of the surroundings. Recently, researchers have used triangulation to generate similar point-cloud maps using only cameras, which are relatively inexpensive. However, point clouds generated from cameras have an unobservable scale factor. To estimate scale, measurements from an additional sensor such as another camera (stereo configuration), laser scanners, wheel encoders, GPS or an IMU can be used. Wheel encoders are known to suffer from inaccuracies and drift, laser scanners are not cost effective, and GPS measurements come with high uncertainty. Therefore, stereo-camera and camera-IMU methods have been topics of constant development over the last decade. A stereo-camera pair is typically used with a graphics processing unit (GPU) to generate a dense environment reconstruction. The scale is estimated from the pre-calculated baseline (distance between camera centers). However, when the environment features are far away, the baseline becomes too small to be used effectively for triangulation, and the stereo configuration reduces to a monocular one. Moreover, when the environment is textureless, visual measurements alone cannot be used. An IMU provides metric measurements but suffers from significant drift. Hence, in a camera-IMU configuration, the IMU is typically used only over short durations, i.e. between two camera frames. This is desirable as it helps not only to estimate the global scale, but also to provide a pose estimate during temporary camera failure. For these reasons, camera-IMU configurations are increasingly used in applications such as Unmanned Aerial Vehicles (UAVs) and Augmented/Virtual Reality (AR/VR). This thesis presents a novel method for visual-inertial odometry for land vehicles which is robust to the unintended but unavoidable bumps encountered when an off-road land vehicle traverses potholes, speed bumps or general changes in terrain. In contrast to tightly-coupled methods for visual-inertial odometry, the joint visual and inertial residuals are split into two separate steps, and the inertial optimization is performed after the direct visual alignment step. All visual and geometric information encoded in a key-frame is utilized by including the inverse-depth variances in the optimization objective, making this method a direct approach. The primary contribution of this work is the use of epipolar constraints, computed from a direct image alignment, to correct the pose prediction obtained by integrating IMU measurements, while simultaneously building a semi-dense map of the environment in real time. Through experiments, both indoor and outdoor, it is shown that the proposed method is robust to sudden spikes in inertial measurements while achieving better accuracy than the state-of-the-art direct, tightly-coupled visual-inertial fusion method. In the future, the proposed method can be augmented with loop closure and re-localization to enhance pose prediction accuracy. Further, semantic segmentation of point clouds can be useful for applications such as object labeling and generating obstacle-free paths.
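The epipolar constraint at the heart of such a correction step is x2^T E x1 = 0 with essential matrix E = [t]_x R: an IMU-predicted pose that disagrees with a tracked feature correspondence yields a nonzero residual to minimize. A minimal sketch of our own (not the thesis code), for normalized homogeneous image coordinates:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(x1, x2, R, t):
    """Residual x2^T E x1 with E = [t]_x R, using the convention P2 = R @ P1 + t.

    x1, x2 are normalized homogeneous coordinates of the same feature in two
    frames; the residual is zero when (R, t) is consistent with the match.
    """
    E = skew(t) @ R
    return float(x2 @ E @ x1)
```

Summing this residual over many tracked features gives an objective in (R, t) alone, with no per-feature depth variables, which is what makes the correction cheap compared to full bundle adjustment.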

    Autonomous flight in unstructured and unknown indoor environments

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 119-126). This thesis presents the design, implementation, and validation of a system that enables a micro air vehicle to autonomously explore and map unstructured and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, and a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking to achieve these capabilities face unique challenges. While there has been recent progress toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-time state estimation techniques that allow our quadrotor helicopter to fly autonomously in indoor, GPS-denied environments. Accomplishing this feat required the development of a large integrated system that brought together many components into a cohesive package. As such, the primary contribution is the development of the complete working system. I show experimental results that illustrate the MAV's ability to navigate accurately in unknown environments, and demonstrate that our algorithms enable the MAV to operate autonomously in a variety of indoor environments. by Abraham Galton Bachrach. S.M.

    Unmanned Aircraft System Navigation in the Urban Environment: A Systems Analysis

    Full text link
    Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/140665/1/1.I010280.pd

    Correntropy: Answer to non-Gaussian noise in modern SLAM applications?

    Get PDF
    The problem of non-Gaussian noise/outliers has been intrinsic to modern Simultaneous Localization and Mapping (SLAM) applications. Despite numerous algorithms in SLAM, it has become crucial to address this problem in the realm of modern robotics applications. This work focuses on addressing the above-mentioned problem by incorporating the use of correntropy in SLAM. Before correntropy, multiple attempts at dealing with non-Gaussian noise were proposed, with significant progress over time, but the underlying assumption of Gaussianity might not be enough in real-life robotics applications. Most modern SLAM algorithms propose the 'best' estimates given a set of sensor measurements. Apart from addressing the non-Gaussian problems in a SLAM system, our work attempts to address two more complex questions concerning SLAM: (a) if one of the sensors gives faulty measurements over time ('faulty' measurements can be non-Gaussian in nature), how should a SLAM framework adapt to such scenarios? (b) in situations where there is manual intervention, or a third-party attacker tries to change the measurements and affect the overall estimate of the SLAM system, how can the SLAM system handle such situations (addressing the self-security aspect of SLAM)? We explore the idea of correntropy in addressing these problems in popular filtering-based approaches like Kalman Filters (KF) and Extended Kalman Filters (EKF), which highlights the 'Localization' part of SLAM. Later on, we propose a framework for fusing the odometries computed individually from a stereo sensor and a Lidar sensor (Iterative Closest Point (ICP) based odometry). We describe the effectiveness of using correntropy in this framework, especially in situations where a third-party attacker attempts to corrupt the Lidar-computed odometry.
We extend the usage of correntropy to the 'Mapping' part of SLAM (registration), which is the highlight of our work. Although registration is a well-established problem, earlier approaches are very inefficient under large rotations and translations. In addition, when the 3D datasets used for alignment are corrupted with non-Gaussian noise (shot/impulse noise), prior state-of-the-art approaches fail. Our work has produced another variant of ICP, which we name Correntropy Similarity Matrix ICP (CoSM-ICP), and which is robust to large translations and rotations as well as to shot/impulse noise. We verify through results how well our variant of ICP outperforms the other variants under large rotations and translations as well as under large outliers/non-Gaussian noise. In addition, we deploy our CoSM algorithm in applications where we compute the extrinsic calibration of a Lidar-Stereo sensor pair as well as Lidar-Camera calibration using a planar checkerboard in a single frame. In general, we verify through results how efficiently our approach of using correntropy can tackle non-Gaussian/shot/impulse noise in robotics applications.
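The robustness mechanism correntropy provides can be seen in how its Gaussian kernel weights residuals: unlike a squared loss, an impulse-noise outlier contributes almost nothing to the estimate. A minimal sketch (the kernel bandwidth sigma is an illustrative choice; this is not the CoSM-ICP implementation):

```python
import numpy as np

def correntropy_weights(residuals, sigma=1.0):
    """Gaussian-kernel correntropy weight exp(-r^2 / (2 sigma^2)) per residual.

    Weights stay near 1 for inliers and shrink exponentially for large
    residuals, so shot/impulse outliers barely influence the estimate.
    """
    r = np.asarray(residuals, dtype=float)
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))
```

In a robust registration loop, such weights would multiply each correspondence's contribution at every iteration, which is the essential difference from a plain least-squares ICP step.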