
    Infrastructure Wi-Fi for connected autonomous vehicle positioning: a review of the state-of-the-art

    In order to realize intelligent vehicular transport networks and self-driving cars, connected autonomous vehicles (CAVs) must be able to estimate their position to the nearest centimeter. Traditional positioning in CAVs relies on a global navigation satellite system (GNSS) such as the Global Positioning System (GPS), or on fusing weighted location parameters from a GNSS with an inertial navigation system (INS). In urban environments, where Wi-Fi coverage is ubiquitous and GNSS signals experience blockage, multipath, or non-line-of-sight (NLOS) propagation, enterprise or carrier-grade Wi-Fi networks can be used opportunistically for localization, or fused with GNSS to improve localization accuracy and precision. While GNSS-free localization systems exist in the literature, surveys of vehicle localization from the perspective of a Wi-Fi anchor/infrastructure are limited. Consequently, this review investigates recent technological advances in positioning techniques between an ego vehicle and a vehicular network infrastructure. The paper also analyzes the location accuracy, complexity, and applicability of the surveyed literature with respect to intelligent transportation system requirements for CAVs. It is envisaged that hybrid vehicular localization systems will enable pervasive localization services for CAVs as they travel through urban canyons, dense foliage, or multi-story car parks.
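As a minimal sketch of the Wi-Fi/GNSS fusion idea discussed above, the following example combines two independent position fixes by inverse-variance weighting; the fixes, variances, and function name are hypothetical illustrations, not taken from any of the surveyed systems.

```python
import numpy as np

def fuse_positions(p_gnss, var_gnss, p_wifi, var_wifi):
    """Inverse-variance weighting of two independent 2-D position fixes.

    Each estimate is weighted by the inverse of its error variance, the
    minimum-variance linear combination for independent Gaussian errors.
    """
    w_gnss = 1.0 / var_gnss
    w_wifi = 1.0 / var_wifi
    fused = (w_gnss * np.asarray(p_gnss) + w_wifi * np.asarray(p_wifi)) / (w_gnss + w_wifi)
    fused_var = 1.0 / (w_gnss + w_wifi)
    return fused, fused_var

# A GNSS fix degraded by multipath (variance 25 m^2) fused with a
# Wi-Fi fix from nearby infrastructure (variance 4 m^2):
p, v = fuse_positions([10.0, 20.0], 25.0, [12.0, 21.0], 4.0)
```

Note that the fused variance is always smaller than that of either input, which is the formal sense in which opportunistic Wi-Fi "improves" a degraded GNSS fix.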

    Localisation and tracking of people using distributed UWB sensors

    Indoor localisation and tracking of people in a non-cooperative manner is important in many surveillance and rescue applications. Ultra-wideband (UWB) radar technology is promising for through-wall detection of objects at short to medium distances due to its high temporal resolution and penetration capability. This thesis tackles the problem of localising people in indoor scenarios using UWB sensors. It follows the process from measurement acquisition, multiple-target detection, and range estimation to multiple-target localisation and tracking. Because people reflect weakly compared to the rest of the environment, a background subtraction method is first used for the detection of people. Subsequently, a constant false alarm rate method is applied for detection and range estimation of multiple persons.
For multiple-target localisation using a single UWB sensor, an association method is developed to assign target range estimates to the correct targets. In the presence of multiple targets, a target closer to the sensor can shadow parts of the environment, hindering the detection of other targets. A concept for a distributed UWB sensor network is therefore presented, aiming to extend the field of view of the system by using several sensors with different fields of view. A real-time operational prototype has been developed, taking into consideration sensor cooperation and synchronisation aspects as well as fusion of the information provided by all sensors. Sensor data may be erroneous due to sensor bias and time offset, and incorrect measurements and measurement noise influence the accuracy of the estimation results. Additional insight into the target states can be gained by exploiting temporal information. A multiple-person tracking framework is developed based on the probability hypothesis density filter, and the differences in system performance are highlighted with respect to the information provided by the sensors, i.e., location-information fusion versus range-information fusion. The knowledge that a target should have been detected but was not, because of shadowing induced by other targets, is described as a dynamic occlusion probability. The dynamic occlusion probability is incorporated into the tracking framework, allowing fewer sensors to be used while improving tracker performance in such scenarios. Method selection and development took real-time application requirements for unknown scenarios into consideration at every step. Each investigated aspect of multiple-person localisation within the scope of this thesis has been verified using simulations and measurements in a realistic environment with M-sequence UWB sensors.
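The constant false alarm rate detection step mentioned above can be illustrated with a simple cell-averaging CFAR over a 1-D range profile; the window sizes, threshold factor, and synthetic profile below are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def ca_cfar(x, n_train=8, n_guard=2, alpha=4.0):
    """Cell-averaging CFAR on a 1-D range profile.

    For each cell, the noise level is estimated as the mean of n_train
    training cells on each side (skipping n_guard guard cells next to the
    cell under test); a detection is declared when the cell exceeds
    alpha times that local estimate, keeping the false alarm rate constant
    as the noise floor varies.
    """
    x = np.asarray(x, dtype=float)
    detections = []
    half = n_train + n_guard
    for i in range(half, len(x) - half):
        left = x[i - half : i - n_guard]              # leading training cells
        right = x[i + n_guard + 1 : i + half + 1]     # trailing training cells
        noise = np.mean(np.concatenate([left, right]))
        if x[i] > alpha * noise:
            detections.append(i)
    return detections

# Flat noise floor with one strong echo (e.g. a person) at range bin 30:
profile = np.ones(60)
profile[30] = 12.0
hits = ca_cfar(profile)
```

Because the threshold adapts to the local noise estimate, a fixed global threshold is never needed, which matters after background subtraction leaves residual clutter of varying strength.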

    Robust state estimation methods for robotics applications

    State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to perform other tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by taking data obtained from multiple sensors and fusing them in a probabilistic framework. These include inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy, erroneous, or even missing informative data. Common examples of such situations include wrong feature matching between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), and multipath errors for GNSS (satellite signals bouncing off tall structures such as buildings before reaching the receiver). This work studies existing and new ways in which standard estimation algorithms such as the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal, outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters in wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point-cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
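A common building block behind such robust estimators is a robust cost function such as the Huber loss. The sketch below, on invented data, shows how iteratively reweighted least squares (IRLS) with Huber weights keeps a location estimate near the inliers where a plain mean is dragged away by a single outlier; it is a toy one-dimensional analogue, not any of the thesis's algorithms.

```python
import numpy as np

def huber_mean(y, delta=1.0, iters=50):
    """Robust location estimate via IRLS with Huber weights.

    Residuals within delta get unit weight (quadratic regime); larger
    residuals are downweighted as delta/|r| (linear regime), so a gross
    outlier cannot dominate the estimate the way it does a plain mean.
    """
    y = np.asarray(y, dtype=float)
    mu = np.median(y)  # robust initial guess
    for _ in range(iters):
        r = y - mu
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * y) / np.sum(w)
    return mu

# Inlier cluster around 2.0 plus one gross outlier
# (the analogue of a false loop closure or a multipath range):
samples = [1.9, 2.0, 2.1, 2.05, 1.95, 50.0]
mu = huber_mean(samples)  # stays near 2; the plain mean is 10.0
```

The same reweighting idea underlies robust Kalman filter updates and robust factor-graph optimization: each measurement's influence is scaled down according to how far its residual falls outside the inlier regime.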

    DGNSS-Vision Integration for Robust and Accurate Relative Spacecraft Navigation

    Relative spacecraft navigation based on Global Navigation Satellite Systems (GNSS) has already been performed successfully in low Earth orbit (LEO). Very high accuracy, of the order of a millimeter, has been achieved in postprocessing using carrier-phase differential GNSS (CDGNSS) and recovering the integer number of wavelengths (the ambiguity) between the GNSS transmitters and the receiver. However, the performance achievable on board, in real time, above LEO and the GNSS constellation would be significantly lower due to limited computational resources, weaker signals, and worse geometric dilution of precision (GDOP). At the same time, monocular vision provides lower accuracy than CDGNSS when there is significant spacecraft separation, and its accuracy degrades further for larger baselines and wider fields of view (FOVs). To increase the robustness, continuity, and accuracy of a real-time, on-board, GNSS-based relative navigation solution in GNSS-degraded environments such as geosynchronous and high Earth orbits, we propose a novel navigation architecture based on a tight fusion of carrier-phase GNSS observations and monocular vision-based measurements, which enables fast autonomous relative pose estimation of cooperative spacecraft even under high GDOP and low GNSS visibility, where the GNSS signals are degraded, weak, or cannot be tracked continuously. In this paper we describe the architecture and implementation of the multi-sensor navigation solution and validate the proposed method in simulation. We use a dataset of images synthetically generated according to a chaser/target relative motion in geostationary Earth orbit (GEO), together with realistic carrier-phase and code-based GNSS observations simulated at the receiver positions in the same orbits. We demonstrate that our fusion solution provides higher accuracy, higher robustness, and faster ambiguity resolution under degraded GNSS signal conditions, even when using high-FOV cameras.
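The role of the integer ambiguity in carrier-phase positioning can be sketched as follows: once the integer number of wavelengths is fixed, the carrier measurement yields a range at carrier (millimetre-level) rather than code (metre-level) precision. The single-epoch rounding used here is a toy illustration that assumes the code range is accurate to within half a wavelength; it is not the paper's estimation scheme.

```python
import math

C = 299_792_458.0      # speed of light, m/s
F_L1 = 1_575.42e6      # GPS L1 frequency, Hz
LAM = C / F_L1         # L1 wavelength, roughly 0.19 m

def resolve_ambiguity(phase_cycles, code_range_m):
    """Pick the integer ambiguity N so that (phase + N) * LAM best matches
    the coarse code-based range. Valid only when the code error is below
    half a wavelength (a toy single-epoch scheme)."""
    n = round(code_range_m / LAM - phase_cycles)
    return n, (phase_cycles + n) * LAM

# Hypothetical geometry: 1000 m true range; the code range is off by 5 cm,
# while the carrier phase is precise but ambiguous.
true_range = 1000.0
n_true = math.floor(true_range / LAM)
phase = true_range / LAM - n_true           # fractional carrier phase, cycles
n_hat, fixed_range = resolve_ambiguity(phase, true_range + 0.05)
```

In practice the half-wavelength condition rarely holds for a single epoch, which is why real systems search the integer space over many epochs and satellites; the vision measurements in the proposed fusion shrink that search space and speed up fixing.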

    Cooperative Vehicle Perception and Localization Using Infrastructure-based Sensor Nodes

    Reliable and accurate Perception and Localization (PL) are necessary for safe intelligent transportation systems. Current vehicle-based PL techniques in autonomous vehicles are vulnerable to occlusion and clutter, especially in busy urban driving, which raises safety concerns. To avoid such safety issues, researchers study infrastructure-based PL techniques that augment vehicle sensory systems. Infrastructure-based PL methods rely on sensor nodes, each of which may include camera(s), Lidar(s), radar(s), and computation and communication units for processing and transmitting data. Vehicle-to-Infrastructure (V2I) communication is used to access the sensor node's processed data, which is then fused with the onboard sensor data. In infrastructure-based PL, signal-based techniques, in which sensors like Lidar are used, can provide accurate positioning information, while vision-based techniques can be used for classification. Therefore, to take advantage of both approaches, cameras are used cooperatively with Lidar in the infrastructure sensor node (ISN) in this thesis. ISNs have a wider field of view (FOV) and are less likely to suffer from occlusion. Moreover, they can provide more accurate measurements since they are fixed at known locations. As such, the fusion of both onboard and ISN data has the potential to improve the overall PL accuracy and reliability. This thesis presents a framework for cooperative PL in autonomous vehicles (AVs) that fuses ISN data with onboard sensor data. The ISN includes cameras and Lidar sensors, and the proposed camera-Lidar fusion method combines the sensor node information with vehicle motion models and kinematic constraints to improve PL performance. One of the main goals of this thesis is to develop a wind-induced motion compensation module to address the problem of time-varying extrinsic parameters of the ISNs.
The proposed module compensates for the motion of ISN posts due to wind and other external disturbances. To this end, an unknown input observer is developed that uses the motion model of the light post as well as the sensor data. The outputs of the ISN, the positions of all objects in the FOV, are then broadcast so that autonomous vehicles can access the information via V2I connectivity and fuse it with their onboard sensory data through the proposed cooperative PL framework. In the developed framework, a KCF is implemented as a distributed fusion method to fuse ISN data with onboard data. The introduced cooperative PL incorporates the range-dependent accuracy of the ISN measurements into the fusion to improve the overall PL accuracy and reliability in different scenarios. The results show that using ISN data in addition to onboard sensor data improves the performance and reliability of PL in different scenarios, particularly under occlusion.
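The idea of incorporating range-dependent ISN accuracy into the fusion can be sketched with a simple inverse-variance combination in which the ISN error model grows with distance to the target; the error-model coefficients, function names, and measurements below are hypothetical, not the thesis's calibrated values.

```python
import numpy as np

def isn_variance(range_m, sigma0=0.05, k=0.002):
    """Hypothetical range-dependent error model for an infrastructure
    sensor node: the measurement standard deviation grows linearly with
    the distance between the node and the observed object."""
    return (sigma0 + k * range_m) ** 2

def fuse(estimates):
    """Inverse-variance fusion of (position, variance) pairs coming from
    the onboard sensors and one or more ISNs."""
    w = np.array([1.0 / var for _, var in estimates])
    pos = np.array([p for p, _ in estimates])
    return float(np.sum(w * pos) / np.sum(w))

# Onboard estimate (1 m std) fused with a nearby ISN (20 m from the target)
# and a distant one (150 m away); the distant node carries far less weight.
onboard = (100.0, 1.0 ** 2)
near_isn = (100.4, isn_variance(20.0))
far_isn = (103.0, isn_variance(150.0))
x = fuse([onboard, near_isn, far_isn])
```

Weighting by the range-dependent variance is what lets the framework exploit ISN measurements when the vehicle is close to a node without being corrupted by them at the edge of the node's FOV.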

    Application of computer vision for roller operation management

    Compaction is the last and possibly the most important phase in the construction of asphalt concrete (AC) pavements. Compaction densifies the loose AC mat, producing a stable surface with low permeability, and the process strongly affects the AC performance properties. Too much compaction may cause aggregate degradation and low air-void content, facilitating bleeding and rutting. On the other hand, too little compaction may result in higher air-void content, facilitating oxidation and water-permeability issues, rutting due to further densification by traffic, and reduced fatigue life. Therefore, compaction is a critical issue in AC pavement construction. The common practice for compacting a mat is to establish a roller pattern that determines the number of passes and coverages needed to achieve the desired density. Once the pattern is established, the roller's operator must maintain it uniformly over the entire mat. Despite the importance of uniform compaction for achieving the expected durability and performance of AC pavements, having the roller operator as the only means of managing the operation invites human error. With the advancement of technology in recent years, the concept of intelligent compaction (IC) was developed to assist roller operators and improve construction quality. Commercial IC packages for construction rollers are available from different manufacturers. They can provide precise mapping of a roller's location and give the operator feedback during the compaction process. Although IC packages are able to track roller passes with impressive results, there are also major hindrances: the high cost of acquisition and a potential negative impact on productivity have inhibited implementation of IC. This study applied computer vision technology to build a versatile and affordable system to count and map roller passes. An infrared camera is mounted on top of the roller to capture the operator's view.
Then, in a near-real-time process, image features are extracted and tracked to estimate the incremental rotation and translation of the roller. Image features are categorized into near and distant features based on a user-defined horizon. Optical flow is estimated for the near features located in the region below the horizon, while the change in the roller's heading is constantly estimated from the distant features located in the sky region. Using the roller's rotation angle, the incremental translation between two frames is calculated from the optical flow, and the incremental rotation and translation are combined to develop a tracking map. During system development, it was noted that in environments with thermal uniformity the background of the IR images exhibits fewer features than images captured with optical cameras, which are insensitive to temperature. This issue is more significant overnight, since natural elements no longer reflect heat energy from the sun. Therefore, to improve the roller's heading estimation when few features are available in the sky region, a unique methodology was developed for this research that allows heading detection based on the edges of the asphalt mat. The heading measurements based on the slope of the hot asphalt edges are added to the pool of headings measured from the sky region, and the median of all heading measurements is used as the incremental roller rotation for the tracking analysis. The record of tracking data is used for QC/QA purposes and for verifying proper implementation of the roller pattern throughout a job constructed under roller-pass specifications. The system developed during this research was successful in mapping roller location for the few projects tested; however, the system should be independently validated.
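The heading-and-translation integration described above can be sketched as follows: the per-frame heading changes obtained from many features (sky features plus mat-edge slopes) are reduced to their median, and each frame's rotation and forward translation are dead-reckoned into a pose track. The numbers and function names are synthetic illustrations, not the system's implementation.

```python
import math

def robust_heading(measurements_rad):
    """Median of the per-feature heading-change measurements; the median
    rejects stray outlier features without tuning a threshold."""
    s = sorted(measurements_rad)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def integrate(pose, d_heading, d_forward):
    """Dead-reckon one frame: rotate the heading, then translate along it."""
    x, y, th = pose
    th += d_heading
    return (x + d_forward * math.cos(th), y + d_forward * math.sin(th), th)

# A straight 10-frame pass at 0.2 m per frame; each frame's heading pool
# contains one gross outlier (0.3 rad) that the median discards.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    dth = robust_heading([0.0, 0.001, -0.001, 0.3])
    pose = integrate(pose, dth, 0.2)
```

Accumulating pose increments this way is what turns per-frame optical-flow estimates into the tracking map used for pass counting.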

    Accurate navigation applied to landing maneuvers on mobile platforms for unmanned aerial vehicles

    Drones are developing quickly worldwide, and in Europe in particular. They represent the future of a high percentage of operations currently carried out by manned aviation or satellites. Compared to fixed-wing UAVs, rotary-wing UAVs offer hovering, agile maneuvering, and vertical take-off and landing capabilities, so they are currently the most widely used aerial robotic platforms. In operations from ships and boats, the final approach and the landing maneuver are the phases that involve the highest risk and that require the highest precision in position and velocity estimation, along with a high level of operational robustness. In the framework of the EC-SAFEMOBIL and REAL projects, this thesis is devoted to the development of a guidance and navigation system that allows a rotary-wing UAV (RUAV) to complete an autonomous mission from take-off to landing. More specifically, this thesis focuses on the development of new strategies and algorithms that provide sufficiently accurate motion estimation during autonomous landing on mobile platforms without using the GNSS constellations. On the one hand, for the phases of flight where a centimetric-accuracy solution is not required, a new navigation approach is proposed that extends current estimation techniques by using EGNOS integrity information in the sensor-fusion filter.
This approach improves the accuracy of the estimation solution and the safety of the overall system, and also helps the remote pilot maintain a more complete awareness of the operation status while flying the UAV. On the other hand, for those flight phases where accuracy is a critical factor in the safety of the operation, this thesis presents a precise navigation system that allows rotary-wing UAVs to approach and land safely on moving platforms, without using GNSS at any stage of the landing maneuver, with centimeter-level accuracy and a high level of robustness. This system implements a novel concept in which the relative position and velocity between the aerial vehicle and the landing platform are calculated either from a radio-beacon system installed on both the UAV and the landing platform, or from the angles of a cable that physically connects the two. The use of a cable brings several extra benefits: it increases the precision of the UAV altitude control, helps center the UAV right on top of the expected landing position, and increases the stability of the UAV just after it contacts the landing platform. The proposed guidance and navigation systems have been implemented in an unmanned rotorcraft, and a large number of tests have been carried out under different conditions to measure the accuracy and robustness of the proposed solution. Results showed that the developed system allows landing with centimeter accuracy using only local sensors, and that the UAV is able to follow a mobile landing platform along multiple trajectories at different velocities.
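The cable-based relative positioning concept can be sketched geometrically: with a taut, straight cable of known length, the measured azimuth and inclination angles fix the relative position of the other end. This straight-cable model, which ignores sag and dynamics, is an illustrative assumption rather than the thesis's estimator.

```python
import math

def relative_position(cable_len_m, azimuth_rad, inclination_rad):
    """Relative position of the cable's far end from the cable angles.

    Inclination is measured from the vertical and azimuth in the
    horizontal plane; the platform end hangs below the UAV, so dz < 0.
    Assumes a taut, straight cable of known length (no sag).
    """
    horiz = cable_len_m * math.sin(inclination_rad)
    dx = horiz * math.cos(azimuth_rad)
    dy = horiz * math.sin(azimuth_rad)
    dz = -cable_len_m * math.cos(inclination_rad)
    return dx, dy, dz

# Cable hangs 5 degrees off vertical, pointing 45 degrees in azimuth, 8 m long:
dx, dy, dz = relative_position(8.0, math.radians(45.0), math.radians(5.0))
```

Because the angles are measured locally at the attachment point, this geometry needs no GNSS at all, which is the property the landing system exploits.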

    Task-Driven Integrity Assessment and Control for Vehicular Hybrid Localization Systems

    Throughout the last decade, vehicle localization has attracted significant attention in a wide range of applications, including navigation systems, road tolling, smart parking, and collision avoidance. To deliver on their requirements, these applications need specific localization accuracy. However, current localization techniques lack the required accuracy, especially for mission-critical applications. Although various approaches for improving localization accuracy have been reported in the literature, there is still a need for more efficient and effective measures that can ascribe a level of accuracy to the localization process. Such measures would enable localization systems to manage the localization process and resources so as to achieve the highest accuracy possible and to mitigate the impact of inadequate accuracy on the target application. In this thesis, a framework for fusing different localization techniques is introduced in order to estimate the location of a vehicle along with a location-integrity assessment that captures the impact of measurement conditions on localization quality. Knowledge of estimate integrity allows the system to plan the use of its localization resources to match the target accuracy of the application. The framework provides tools for modeling the impact of operating conditions on estimate accuracy and integrity, and as such enables more robust system performance, in three steps. First, localization system parameters are used to construct a feature space that constitutes probable accuracy classes. Due to the strong overlap among accuracy classes in the feature space, a hierarchical classification strategy is developed to address the class-ambiguity problem via the class unfolding approach (HCCU). The HCCU strategy proves superior to other hierarchical configurations.
Furthermore, a Context-Based Accuracy Classification (CBAC) algorithm is introduced to enhance the performance of the classification process. In this algorithm, knowledge about the surrounding environment is utilized to optimize classification performance as a function of the observation conditions. Second, a task-driven integrity (TDI) model is developed to make the application modules aware of the trust level of the localization output. Typically, this trust level is a function of the measurement conditions; therefore, the TDI model monitors specific parameters of the localization technique and accordingly infers the impact of changes in environmental conditions on the quality of the localization process. A generalized TDI solution is also introduced to handle cases where sufficient information about the sensing parameters is unavailable. Finally, the output of the employed localization techniques (i.e., location estimates, accuracy, and integrity-level assessment) needs to be fused. Nevertheless, these techniques are heterogeneous, and their pieces of information conflict in many situations. Therefore, a novel evidence-structure model, the Spatial Evidence Structure Model (SESM), is developed and used to construct a frame of discernment comprising discretized spatial data. SESM-based fusion paradigms are capable of performing the fusion using the information provided by all employed techniques. Both the location-estimate accuracy and the aggregated integrity resulting from the fusion demonstrate superiority over the individual localization techniques. Furthermore, a context-aware, task-driven resource-allocation mechanism is developed to manage the fusion process. The main objective of this mechanism is to optimize the usage of system resources and achieve task-driven performance. Extensive experimental work is conducted on real-life and simulated data to validate the models developed in this thesis.
It is evident from the experimental results that task-driven integrity assessment and control is applicable and effective for hybrid localization systems.
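Evidence-based fusion over a discretized spatial frame of discernment, as in the SESM approach, can be illustrated with Dempster's rule of combination; the cell labels and mass assignments below are invented for illustration and are not the thesis's actual evidence structures.

```python
def dempster(m1, m2):
    """Dempster's rule of combination over a common frame of discernment.

    Masses are dicts mapping frozenset hypotheses (sets of spatial cells)
    to belief mass. Intersecting hypotheses reinforce each other; disjoint
    ones produce conflict, which is normalized out.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two localization techniques assigning mass to discretized road cells;
# the full frame {A, B, C} represents each technique's residual ignorance.
A, B, FRAME = frozenset("A"), frozenset("B"), frozenset("ABC")
gps = {A: 0.6, B: 0.1, FRAME: 0.3}
wifi = {A: 0.5, B: 0.2, FRAME: 0.3}
fused = dempster(gps, wifi)
```

The fused masses concentrate on cell A more sharply than either source alone, which is the mechanism by which conflicting hybrid techniques are reconciled into a single spatial estimate with an integrity measure.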

    Cooperative methods for vehicle localization

    Abstract: Embedded intelligence in vehicular applications has attracted great interest over the last two decades. Position estimation has been one of the most crucial pieces of information for Intelligent Transportation Systems (ITS), and real-time, accurate, and reliable localization of vehicles has become particularly important for the automotive industry. The significant growth of sensing, communication, and computing capabilities over recent years has opened new fields of application, such as Advanced Driver Assistance Systems (ADAS) and active safety systems, and has brought the ability to exchange information between vehicles. Most of these applications can benefit from more accurate and reliable localization. With the recent emergence of multi-vehicle wireless communication capabilities, cooperative architectures have become an attractive alternative for solving the localization problem. The main goal of cooperative localization is to exploit different sources of information coming from different vehicles within a short-range area, in order to enhance positioning-system efficiency while keeping the cost at a reasonable level. In this thesis, we aim to propose new and effective methods to improve vehicle localization performance using cooperative approaches. To reach this goal, three new methods for cooperative vehicle localization have been proposed and their performance analyzed. The first is a Cooperative Map Matching (CMM) method, which aims to estimate and compensate the common error component of GPS positioning by exploiting the communication capability of the vehicles. We then propose the concept of a Dynamic base-station DGPS (DDGPS) and use it to generate GPS pseudorange corrections and broadcast them to other vehicles.
Finally, we introduce a cooperative method for improving GPS positioning by incorporating the GPS-measured positions of the vehicles and the inter-vehicle distances. This is a decentralized cooperative positioning method based on a Bayesian approach. The detailed derivation of the equations and the simulation results of each algorithm are described in the designated chapters. In addition, the sensitivity of the methods to different parameters is studied and discussed. Finally, to validate the simulation results, an experimental validation of the CMM method is performed using data captured by test vehicles. The simulation and experimental results show that using cooperative approaches can significantly increase the performance of the positioning methods while keeping the cost at a reasonable level.
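The common-error compensation idea behind the CMM method can be sketched as follows: if every vehicle's GPS fix shares a common bias (e.g. from atmospheric delays) plus individual noise, then averaging the residuals between each vehicle's GPS fix and its map-matched road position estimates that common bias. The coordinates below are synthetic and the function is a simplification of the actual CMM algorithm.

```python
import numpy as np

def common_bias(gps_fixes, map_positions):
    """Estimate the error component shared by all cooperating vehicles as
    the mean of the per-vehicle residuals between each GPS fix and the
    corresponding map-matched road position; individual noise averages
    out as more vehicles contribute."""
    residuals = np.asarray(gps_fixes) - np.asarray(map_positions)
    return residuals.mean(axis=0)

# Three cooperating vehicles, each biased by the same (2.0, -1.5) m offset
# plus small individual noise:
gps = [[12.1, 3.4], [45.9, -11.6], [78.0, 20.6]]
road = [[10.0, 5.0], [44.0, -10.0], [76.0, 22.0]]
bias = common_bias(gps, road)
corrected = np.asarray(gps) - bias
```

Broadcasting the estimated bias over V2V links lets every receiver correct its own fix, which is the cooperative gain that neither vehicle could obtain alone.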