
    Robot Localization in an Agricultural Environment

    Localization and mapping of autonomous robots in a harsh and unstable environment, such as a steep-slope vineyard, is a challenging research topic. Commonly used dead-reckoning systems can fail under the harsh terrain conditions, and the Global Positioning System can be considerably noisy or not always available. Agriculture is moving towards precision agriculture, with advanced monitoring systems and wireless sensor networks. These systems and wireless sensors are installed in the crop field and can serve as relevant landmarks for robot localization using different types of technologies. In this work, the performance of Pozyx, a low-cost time-of-flight system based on Ultra-Wideband (UWB) technology, is studied and implemented in a range-based localization system on a real robot. First, the error of both the range-only system and the sensor's embedded localization algorithm is characterized. The range measurements are then filtered with an EKF to estimate the robot pose, which is finally compared with the sensor's own localization algorithm. The results are presented and compared with previous work, showing increased redundancy in the robot localization estimate. UWB proves to be a good solution for harsh environments such as agriculture, since its range measurements are little affected by atmospheric conditions. The discussion also presents formulations for improving the Beacons Mapping Procedure (BMP) required for accurate and reliable localization systems.
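As a rough illustration of the filtering stage described above, the sketch below runs a range-only EKF update against a set of fixed UWB anchors; the beacon layout, noise values, and motion model are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hedged sketch (not the paper's implementation) of range-only EKF
# localization against fixed UWB anchors; anchor positions and noise
# values are invented.

BEACONS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known anchor map

def ekf_step(x, P, u, ranges, Q=np.eye(2) * 0.01, r_var=0.05):
    """One predict/update cycle for a 2-D position state."""
    x_pred = x + u                      # predict: additive odometry model
    P_pred = P + Q
    for beacon, z in zip(BEACONS, ranges):
        diff = x_pred - beacon
        d = np.linalg.norm(diff)
        if d < 1e-6:                    # avoid a degenerate Jacobian
            continue
        H = (diff / d).reshape(1, 2)    # Jacobian of h(x) = ||x - beacon||
        S = float(H @ P_pred @ H.T) + r_var
        K = (P_pred @ H.T) / S          # 2x1 Kalman gain
        x_pred = x_pred + (K * (z - d)).ravel()
        P_pred = (np.eye(2) - K @ H) @ P_pred
    return x_pred, P_pred
```

Iterating the update with ranges to three non-collinear anchors pulls an initially uncertain estimate onto the true position.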

    Robot Localization in Tunnel-like Environments

    Confined environments such as pipes, tunnels, and mines are key infrastructures for the economic development of many countries. These infrastructures require regular maintenance and inspection to ensure their structural integrity. Other tasks must also be carried out in these environments, such as rescue missions in case of accidents, and even the construction work itself. The harsh conditions of this type of environment (absence of light, dust, the presence of fluids and even toxic substances) make these tasks tedious and even dangerous for people. All of this, together with continuous advances in robotic technologies, makes robots the most suitable devices for carrying out such tasks. For a robot to perform its mission autonomously, it must be able to localize itself precisely, not only to decide which actions to take but also to unambiguously locate any damage detected during inspection. The localization problem has been widely studied in robotics, and many solutions exist for both indoor and outdoor settings using different sensors and technologies. However, tunnel-like environments present specific characteristics that make localization a real challenge. The absence of illumination and of distinguishable visual and structural features means that traditional localization methods based on laser sensors and cameras do not work correctly. Moreover, because these are confined environments, typical outdoor sensors such as GPS cannot be used.
The presence of fluids and irregular surfaces makes odometry methods based on wheel encoders unreliable. On the other hand, these environments exhibit peculiar behavior with respect to radio-frequency signal propagation. At certain frequencies they act as waveguides, extending the communication range, but the radio signal also suffers strong fadings. Previous work has shown that periodic fadings can be obtained under a particular configuration. Building on these studies, this thesis addresses the localization problem in pipes and tunnels by exploiting the periodic nature of the radio signal. First, a localization method for metallic pipes based on probabilistic techniques is proposed, using the signal propagation model as a radio-frequency map. Next, localization in tunnels is addressed following a similar strategy of exploiting the periodic nature of the signal, and a discrete localization method is presented. Going a step further, and with the aim of improving localization along the tunnel by including other sources of information, a method inspired by the graph-SLAM paradigm is developed that incorporates the results of detecting discrete features provided by the tunnel itself. To this end, a detection system is implemented that provides the absolute position of relevant features of the periodic radio signal. Likewise, a method for detecting structural features of the tunnel (galleries) is developed that returns their known positions.
All these results are incorporated into the graph as information sources. The localization methods developed throughout the thesis have been validated with data collected during experiments carried out with robotic platforms in real scenarios: the Santa Ana pipeline in Castillonroy and the Somport railway tunnel.
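The idea of using the periodic fading pattern as a radio-frequency map can be sketched as a simple 1-D particle filter along the tunnel axis; the propagation model, fading period, and noise figures below are invented for illustration and are not the thesis implementation.

```python
import numpy as np

# Illustrative sketch (not the thesis code): a 1-D particle filter that
# localizes along a tunnel by matching measured RSSI against a propagation
# model with periodic fadings; all model parameters are invented.

def rssi_model(x):
    # path loss with slow attenuation plus a periodic fading term
    return -40.0 - 0.5 * x + 6.0 * np.cos(2.0 * np.pi * x / 8.0)

def pf_step(particles, odom, z, rng, sigma=1.0):
    # predict: move every particle by odometry plus diffusion noise
    particles = particles + odom + rng.normal(0.0, 0.1, particles.size)
    # weight: likelihood of the measured RSSI under the RF map
    w = np.exp(-0.5 * ((rssi_model(particles) - z) / sigma) ** 2)
    w /= w.sum()
    # resample proportionally to the weights
    return particles[rng.choice(particles.size, particles.size, p=w)]
```

Because the fading term repeats every period while the attenuation term drifts monotonically, particle clusters at period-offset positions predict RSSI values that are consistently wrong and die out as the robot moves.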

    Fail-Safe Vehicle Pose Estimation in Lane-Level Maps Using Pose Graph Optimization

    Highly accurate pose estimation of autonomous vehicles, both in HD maps and relative to the lane, is essential for safe vehicle guidance. For series production, highly accurate but expensive individual sensors are deliberately avoided for cost and packaging reasons; instead, a multitude of sensors is used whose data can also be shared with modules other than pose estimation. This work focuses on the uncertainty estimation, assessment, and fusion of these sensor data. In contrast to classical filtering methods such as Kalman or particle filters, pose graph optimization for sensor fusion stands out for its robustness against faulty measurements and its modeling flexibility. Pose graph optimization was first applied on mobile robot platforms to solve so-called SLAM problems. These methods have been continuously developed and, in particular, successfully demonstrated for purely camera-based localization of autonomous vehicles in 3D point clouds. However, for the development and release of safety-relevant systems according to ISO 26262, a statement about the quality and fail-safety of these systems is required in addition to accuracy. Besides estimating the map-relative and lane-relative pose, this work therefore also addresses the estimation of the pose uncertainty and of the mutual integrity of the sensor data. On this basis, an assessment of the fail-safety of the localization module becomes possible. Motivated by the Ko-HAF project, only lane markings are used for localization in HD maps. The memory-efficient representation of these maps enables high-frequency updates of the map contents by a vehicle fleet. The presented approach was prototyped on an Opel Insignia. The test vehicle was extended with front and rear cameras as well as a GNSS receiver.
First, the estimation of the map- and lane-relative vehicle pose, the GNSS signal evaluation, and the vehicle motion estimation are presented. By comparing the estimates against each other, the uncertainties of the individual modules are computed. The localization problem is then solved by an optimizer. Using the computed uncertainties, an assessment of the individual modules is performed in a downstream step. To evaluate the approach, both highly dynamic maneuvers on a test track and drives on public highways were analyzed.
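A toy 1-D analogue of the pose-graph fusion described above can be written as a weighted least-squares problem in information form, with odometry edges between consecutive poses and GNSS-like absolute edges; the weights and measurements are illustrative only.

```python
import numpy as np

# Toy 1-D pose-graph sketch: odometry edges x[i+1]-x[i]=d and absolute
# (GNSS-like) edges x[i]=z, fused by weighted least squares. For this
# linear toy problem a single solve of H x = b is exact.

def solve_pose_graph(n, odom, absolute, w_odom=1.0, w_abs=0.25):
    H = np.zeros((n, n))            # information matrix
    b = np.zeros(n)                 # information vector
    for i, d in enumerate(odom):    # relative edge between pose i and i+1
        J = np.zeros(n); J[i], J[i + 1] = -1.0, 1.0
        H += w_odom * np.outer(J, J)
        b += w_odom * J * d
    for i, z in absolute:           # absolute edge anchoring pose i
        H[i, i] += w_abs
        b[i] += w_abs * z
    return np.linalg.solve(H, b)
```

With consistent measurements the solution reproduces the true poses exactly; with conflicting edges it returns the weighted compromise, which is the sense in which the optimizer "solves the localization problem".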

    Integrating GRU with a Kalman filter to enhance visual inertial odometry performance in complex environments

    To enhance system reliability and mitigate the vulnerabilities of Global Navigation Satellite Systems (GNSS), it is common to fuse an Inertial Measurement Unit (IMU) and visual sensors with the GNSS receiver in the navigation system design, providing compensation through absolute positions and reducing data gaps. To address the shortcomings of a traditional Kalman Filter (KF), such as sensor errors, an imperfect non-linear system model, and KF estimation errors, a GRU-aided Error State Kalman Filter (ESKF) architecture is proposed to enhance positioning performance. This study conducts a Failure Mode and Effect Analysis (FMEA) to identify and prioritize potential faults in the urban environment, facilitating the design of an improved fault-tolerant system architecture. The identified primary fault events are data association errors and navigation environment errors under feature-mismatch conditions, especially in the presence of multiple failure modes. A hybrid federated navigation system architecture is employed in which a Gated Recurrent Unit (GRU) predicts state increments used to update the state vector in the ESKF measurement step. The proposed algorithm's performance is evaluated in a MATLAB simulation environment under multiple visually degraded conditions. Comparative results provide evidence that the GRU-aided ESKF outperforms the standard ESKF and state-of-the-art solutions such as VINS-Mono, End-to-End VIO, and Self-Supervised VIO, exhibiting accuracy improvements in complex environments in terms of root mean square error (RMSE) and maximum error.
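The fusion idea can be caricatured as follows: when vision degrades, a learned model predicts the next state increment, which feeds the filter's measurement step as a pseudo-measurement. In this sketch the trained GRU is replaced by a trivial stand-in, the state is one-dimensional, and all noise values are invented.

```python
import numpy as np

# Conceptual sketch of the fusion idea only: a learned predictor supplies
# the state increment used as a pseudo-measurement in the error-state
# filter when visual updates are unavailable.

def learned_increment(history):
    # Stand-in for the GRU: extrapolate the mean of recent increments.
    return np.mean(np.diff(history))

def eskf_update(x, P, history, q_var=0.01, r_var=0.04):
    P_pred = P + q_var                              # covariance prediction
    z = history[-1] + learned_increment(history)    # predicted next state
    K = P_pred / (P_pred + r_var)                   # scalar Kalman gain
    x_new = x + K * (z - x)                         # measurement update
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

For a state moving at a constant rate, the pseudo-measurement carries the filter through the outage instead of letting the estimate stall at the last vision-aided pose.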

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection, and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular choice for localization on many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among the alternative sensing mechanisms, cameras are an attractive choice for an onboard sensor due to the richness of the information captured, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to a task rather than just one. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution, and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path-planning frameworks. Although these were created as two separate steps, the ideal application would combine both in a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, this dissertation first investigates methods for feature-based localization, with the possibility of fusing two types of localization data: one computed onboard each MAV, and the other derived from relative measurements between the vehicles.
Feature-based methods were preferred over direct methods because tangible data packets can be transferred between vehicles with relative ease, and because feature data requires minimal data transfer compared to full images. Inspired by techniques from multiple-view geometry and structure from motion, this localization algorithm provides a decentralized, full 6-degree-of-freedom pose estimation method, complete with a consistent fusion methodology that obtains robust estimates only at discrete instants, thus not requiring constant communication between vehicles. The method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path but also for ensuring that this path allows for successful localization throughout. As joint multi-robot planning can be computationally intractable, planning was divided into two steps from a vision-aware perspective. Since the first step towards improving localization performance is access to a better feature map, a next-best-multi-view algorithm was developed that computes the best viewpoints for multiple vehicles to improve an existing sparse reconstruction. This algorithm uses a cost function with vision-based heuristics that determines the quality of the images expected from any set of viewpoints; the cost is minimized through an efficient evolutionary strategy known as Covariance Matrix Adaptation (CMA-ES), which can handle very high-dimensional sample spaces.
In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed, which includes similar vision heuristics in an information-gain-based framework in order to drive individual vehicles towards areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated using simulation results.
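The next-best-view step can be sketched as minimizing a viewpoint cost with a simple evolution strategy; the annealed random search below is only a stand-in for CMA-ES, and the feature map, cost terms, and weights are invented for illustration.

```python
import numpy as np

# Hedged sketch of the next-best-view idea: score candidate camera
# positions against a sparse feature map and refine them with a simple
# evolution strategy (a stand-in for the CMA-ES used in the dissertation).

FEATURES = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])  # toy sparse map

def view_cost(viewpoint, ideal_range=2.0):
    # Penalize viewpoints whose distance to the features strays from an
    # "ideal" observation range (one invented vision heuristic).
    d = np.linalg.norm(FEATURES - viewpoint, axis=1)
    return float(np.mean((d - ideal_range) ** 2))

def evolve(x0, iters=60, pop=20, sigma=0.5, rng=None):
    rng = rng or np.random.default_rng(1)
    best, best_c = np.asarray(x0, float), view_cost(x0)
    for _ in range(iters):
        cands = best + rng.normal(0.0, sigma, (pop, 2))   # sample offspring
        costs = np.array([view_cost(c) for c in cands])
        i = costs.argmin()
        if costs[i] < best_c:                             # keep the elite
            best, best_c = cands[i], costs[i]
        sigma *= 0.95                                     # anneal step size
    return best, best_c
```

CMA-ES additionally adapts a full covariance matrix of the sampling distribution, which is what makes it viable in the very high-dimensional joint viewpoint spaces mentioned above.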

    Cooperative methods for vehicle localization

    Embedded intelligence in vehicular applications has attracted great interest over the last two decades. Position estimation is one of the most crucial pieces of information for Intelligent Transportation Systems (ITS). Real-time, accurate, and reliable localization of vehicles has become particularly important for the automotive industry. The significant growth of sensing, communication, and computing capabilities over recent years has opened new fields of application, such as Advanced Driver Assistance Systems (ADAS) and active safety systems, and has brought the ability to exchange information between vehicles. Most of these applications can benefit from more accurate and reliable localization. With the recent emergence of multi-vehicle wireless communication capabilities, cooperative architectures have become an attractive alternative for solving the localization problem. The main goal of cooperative localization is to exploit different sources of information coming from different vehicles within a short-range area in order to enhance positioning system efficiency while keeping the cost at a reasonable level. In this thesis, we aim to propose new and effective methods to improve vehicle localization performance using cooperative approaches. To reach this goal, three new methods for cooperative vehicle localization have been proposed, and their performance has been analyzed. The first proposed method is Cooperative Map Matching (CMM), which aims to estimate and compensate the common error component of GPS positioning by using a cooperative approach and exploiting the communication capability of the vehicles. We then propose the concept of a Dynamic base station DGPS (DDGPS) and use it to generate GPS pseudorange corrections and broadcast them to other vehicles.
Finally, we introduce a cooperative method for improving GPS positioning by incorporating the GPS-measured positions of the vehicles and inter-vehicle distances. This is a decentralized cooperative positioning method based on a Bayesian approach. The detailed derivation of the equations and the simulation results of each algorithm are described in the corresponding chapters. In addition, the sensitivity of the methods to different parameters is studied and discussed. Finally, to validate the simulation results, an experimental validation of the CMM method based on data captured by test vehicles is performed and analyzed. The simulation and experimental results show that cooperative approaches can significantly increase the performance of the positioning methods while keeping the cost reasonable.
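The CMM intuition, estimating the common (correlated) part of the GPS error from map-matched residuals shared by several vehicles, can be sketched as follows; the road geometry, bias, and noise values are invented for illustration.

```python
import numpy as np

# Illustrative sketch of the CMM intuition: vehicles constrained to known
# road positions share their GPS residuals; the average residual estimates
# the common (correlated) GPS error, which each vehicle then subtracts.

def estimate_common_error(gps_fixes, road_points):
    residuals = gps_fixes - road_points   # per-vehicle GPS minus map match
    return residuals.mean(axis=0)         # shared bias estimate

# usage with simulated vehicles
rng = np.random.default_rng(0)
true_bias = np.array([2.0, -1.5])                     # common GPS error
road = rng.uniform(0.0, 100.0, (8, 2))                # map-matched positions
gps = road + true_bias + rng.normal(0.0, 0.3, (8, 2)) # fixes: bias + noise
bias_hat = estimate_common_error(gps, road)
corrected = gps - bias_hat
```

Averaging over vehicles suppresses the independent per-receiver noise while the shared bias survives, which is why cooperation helps even when every individual fix is poor.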

    Interlacing Self-Localization, Moving Object Tracking and Mapping for 3D Range Sensors

    This work presents a solution for autonomous vehicles to detect arbitrary moving traffic participants and to precisely determine the motion of the vehicle. The solution is based on three-dimensional images captured with modern range sensors such as high-resolution laser scanners. As a result, objects are tracked and a detailed 3D model is built for each object as well as for the static environment. The performance is demonstrated in challenging urban environments containing many different objects.

    Recent Advances in Indoor Localization Systems and Technologies

    Despite the enormous technical progress seen in the past few years, the maturity of indoor localization technologies has not yet reached the level of GNSS solutions. The 23 selected papers in this book present the recent advances and new developments in indoor localization systems and technologies, propose novel or improved methods with increased performance, provide insight into various aspects of quality control, and also introduce some unorthodox positioning methods.

    On the Enhancement of the Localization of Autonomous Mobile Platforms

    The focus of many industrial and research entities on achieving full robotic autonomy has increased in the past few years. A fundamental problem in achieving full robotic autonomy is localization: the ability of a mobile platform to determine its position and orientation in the environment. In this thesis, several problems related to the localization of autonomous platforms are addressed, namely visual odometry accuracy and robustness, uncertainty estimation in odometries, and accurate multi-sensor fusion-based localization. Besides localization, the control of mobile manipulators is also tackled. First, a generic image-processing pipeline is proposed which, when integrated with a feature-based Visual Odometry (VO) system, can enhance robustness and accuracy and reduce the accumulation of errors (drift) in the pose estimate. Odometries (e.g., wheel odometry, LiDAR odometry, or VO) suffer from drift due to integration, and such errors need to be quantified in order to achieve accurate localization through multi-sensor fusion schemes (e.g., extended or unscented Kalman filters). A covariance estimation algorithm is therefore proposed which estimates the uncertainty of odometry measurements using another sensor that does not rely on integration. Furthermore, optimization-based multi-sensor fusion techniques are known to achieve better localization results than filtering techniques, but at a higher computational cost. Consequently, an efficient and generic multi-sensor fusion scheme based on Moving Horizon Estimation (MHE) is developed. The proposed scheme can operate with any number of sensors, and it accounts for different sensor measurement rates, missing measurements, and outliers. Moreover, it is built on a multi-threading architecture to reduce its computational cost, making it more feasible for practical applications.
Finally, since the main purpose of accurate localization is navigation, the last part of this thesis focuses on developing a stabilization controller for a 10-DOF mobile manipulator based on Model Predictive Control (MPC). All of the aforementioned work is validated using numerical simulations, real data from the EU Long-term, KITTI, and TUM datasets, and/or experimental sequences using an omnidirectional mobile robot. The results show the efficacy and importance of each part of the proposed work.
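A heavily simplified 1-D sketch of the MHE-style fusion: only the poses inside the window are estimated from odometry increments and absolute fixes, with an odometry-based gate rejecting obvious outlier fixes. The weights, the gate threshold, and the assumption that the first fix anchors the window are all illustrative choices, not the thesis formulation.

```python
import numpy as np

# Simplified sketch of the MHE idea over a 1-D pose window: odometry
# increments plus gated absolute fixes, solved by weighted least squares.

def mhe_window(odom, fixes, w_o=1.0, w_f=0.5, gate=5.0):
    n = len(odom) + 1
    H = np.zeros((n, n)); b = np.zeros(n)
    for i, d in enumerate(odom):                 # edge x[i+1]-x[i] = d
        J = np.zeros(n); J[i], J[i + 1] = -1.0, 1.0
        H += w_o * np.outer(J, J); b += w_o * J * d
    # odometry-only prior, assuming the first fix anchors the window
    x0 = np.cumsum([fixes[0][1]] + list(odom))
    for i, z in fixes:                           # absolute fix x[i] = z
        if abs(z - x0[i]) > gate:                # gate out an outlier fix
            continue
        H[i, i] += w_f; b[i] += w_f * z
    return np.linalg.solve(H, b)
```

Unlike a filter, the whole window is re-solved at every step, which is what buys the robustness to outliers and missing measurements at extra computational cost.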

    Selective combination of visual and thermal imaging for resilient localization in adverse conditions: Day and night, smoke and fire

    Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
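The preselection step can be sketched as scoring each modality's frame with a simple quality proxy (here, histogram entropy, which collapses for washed-out or saturated frames) and keeping only frames that clear a threshold; the scoring function and threshold are invented and are not the paper's evaluation criterion.

```python
import numpy as np

# Minimal sketch of quality-gated modality selection: smoke-blinded visual
# frames or heat-saturated infrared frames score low and are dropped
# before the localization stage ever sees them.

def quality(img):
    # Histogram entropy as a contrast proxy (values assumed in [0, 1]).
    hist, _ = np.histogram(img, bins=32, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_frames(visual, infrared, thresh=2.0):
    chosen = []
    for v, ir in zip(visual, infrared):
        usable = [img for img in (v, ir) if quality(img) >= thresh]
        chosen.append(usable)   # may be both modalities, one, or none
    return chosen
```

Gating before fusion is what distinguishes this from blind combination: a modality that would inject large errors simply contributes nothing for that frame.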