    A Review of Radio Frequency Based Localization for Aerial and Ground Robots with 5G Future Perspectives

    Efficient localization plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, and power economy. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities for enhancing the localization of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localization. We review the RF features that can be utilized for localization and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localization for both UAVs and UGVs is examined, and the envisioned role of 5G NR in localization enhancement, as well as future research directions, are explored.
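
    As a small illustration of the range-based category surveyed above, the sketch below estimates a 2D position from noisy distances to anchors at known coordinates via linearized least-squares trilateration. It is a generic sketch, not code from the paper; the anchor layout, noise level, and function names are illustrative assumptions.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from ranges to known anchors (linearized)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    # Subtracting the first range equation from the others removes the quadratic term in p.
    A = 2.0 * (anchors[1:] - a0)
    b = r0**2 - ranges[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative 2D example: four anchors at known positions, noisy UWB-like ranges.
anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
rng = np.random.default_rng(0)
ranges = [np.linalg.norm(true_pos - a) + rng.normal(0, 0.05) for a in anchors]
print(trilaterate(anchors, ranges))  # approximately [3.0, 4.0]
```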

    Anchor Self-Calibrating Schemes for UWB based Indoor Localization

    Traditional indoor localization techniques that use Received Signal Strength or Inertial Measurement Units for dead reckoning suffer from signal attenuation and sensor drift, resulting in inaccurate position estimates. Newly available Ultra-Wideband radio modules can measure distances with centimeter-level accuracy while mitigating the effects of multipath propagation thanks to their very fine time resolution. Known locations of fixed anchor nodes are required to determine the position of tag nodes within an indoor environment. For a large system consisting of several anchor nodes spanning a wide area, physically mapping out the location of each anchor node is tedious, which limits the scalability of such systems. Hence it is important to develop indoor localization systems in which the anchors can self-calibrate by determining their relative positions in 3D Euclidean space with respect to each other. In this thesis, we propose two novel anchor self-calibration algorithms, the Triangle Reconstruction Algorithm (TRA) and Channel Impulse Response Positioning (CIRPos), that improve upon existing range-based implementations and address problems such as flip ambiguity and low node localization success rates. The localization accuracy and scalability of the self-calibrating anchor schemes are tested in a simulated environment based on the ranging accuracy of the Ultra-Wideband modules.
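
    TRA and CIRPos are the thesis's own contributions and are not reproduced here. As a hedged illustration of the underlying idea of recovering anchor geometry from pairwise ranges alone, the sketch below applies classical multidimensional scaling, which returns coordinates only up to a rigid transform and reflection (the flip ambiguity the thesis addresses); all values are illustrative.

```python
import numpy as np

def classical_mds(D, dim=3):
    """Relative coordinates (up to rotation, translation and reflection) from a
    complete pairwise distance matrix D, via classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # double-centered squared distances
    eigval, eigvec = np.linalg.eigh(B)
    idx = np.argsort(eigval)[::-1][:dim]         # keep the largest eigenvalues
    scale = np.sqrt(np.clip(eigval[idx], 0.0, None))
    return eigvec[:, idx] * scale                # n x dim relative coordinates

# Illustrative example: four anchors in 3D, noiseless pairwise ranges.
X = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0], [0, 0, 3]], dtype=float)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(classical_mds(D))  # matches X up to a rigid transform and reflection
```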

    Underwater 3D positioning on smart devices

    The emergence of waterproof mobile and wearable devices (e.g., Garmin Descent and Apple Watch Ultra) designed for underwater activities like professional scuba diving opens up opportunities for underwater networking and localization capabilities on these devices. Here, we present the first underwater acoustic positioning system for smart devices. Unlike conventional systems that use floating buoys as anchors at known locations, we design a system where a dive leader can compute the relative positions of all other divers without any external infrastructure. Our intuition is that in a well-connected network of devices, if we compute the pairwise distances, we can determine the shape of the network topology. By incorporating orientation information about a single diver who is in the visual range of the leader device, we can then estimate the positions of all the remaining divers, even if they are not within sight. We address various practical problems including detecting erroneous distance estimates, addressing rotational and flipping ambiguities, as well as designing a distributed timestamp protocol that scales linearly with the number of devices. Our evaluations show that our distributed system running on underwater deployments of 4-5 commodity smart devices can perform pairwise ranging and localization with median errors of 0.5-0.9 m and 0.9-1.6
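
    The paper's distributed timestamp protocol is not reproduced here; the minimal sketch below shows only the generic two-way-ranging arithmetic that acoustic pairwise ranging builds on, assuming a nominal underwater sound speed of 1500 m/s and purely illustrative timestamps.

```python
SPEED_OF_SOUND_WATER = 1500.0  # m/s, nominal; varies with temperature, salinity and depth

def twr_distance(t_tx, t_rx, t_reply, c=SPEED_OF_SOUND_WATER):
    """Two-way-ranging distance estimate between two devices.

    t_tx    : time the initiator sent its request (initiator clock)
    t_rx    : time the initiator received the response (initiator clock)
    t_reply : responder turnaround time, reported back in the response
    """
    tof = ((t_rx - t_tx) - t_reply) / 2.0  # one-way time of flight
    return c * tof

# Illustrative numbers: 20 ms round trip, 10 ms turnaround -> 5 ms one way -> 7.5 m.
print(twr_distance(0.000, 0.020, 0.010))  # 7.5
```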

    Range-only SLAM schemes exploiting robot-sensor network cooperation

    Simultaneous localization and mapping (SLAM) is a key problem in robotics. A robot with no previous knowledge of the environment builds a map of this environment and localizes itself in that map. Range-only SLAM is a particularization of the SLAM problem which only uses the information provided by range sensors. This PhD Thesis describes the design, integration, evaluation and validation of a set of schemes for accurate and efficient range-only simultaneous localization and mapping exploiting the cooperation between robots and sensor networks. This PhD Thesis proposes a general architecture for range-only simultaneous localization and mapping (RO-SLAM) with cooperation between robots and sensor networks. The adopted architecture has two main characteristics. First, it exploits the sensing, computational and communication capabilities of sensor network nodes. Both the robot and the beacons actively participate in the execution of the RO-SLAM filter. Second, it integrates not only robot-beacon measurements but also range measurements between two different beacons, the so-called inter-beacon measurements. Most reported RO-SLAM methods are executed in a centralized manner in the robot. In these methods all tasks in RO-SLAM are executed in the robot, including measurement gathering, integration of measurements in RO-SLAM and the Prediction stage. These fully centralized RO-SLAM methods impose a high computational burden on the robot and have very poor scalability. This PhD Thesis proposes three different schemes that work under the aforementioned architecture. These schemes exploit the advantages of cooperation between robots and sensor networks and intend to minimize the drawbacks of this cooperation. The first scheme proposed in this PhD Thesis is a RO-SLAM scheme with dynamically configurable measurement gathering. Integrating inter-beacon measurements in RO-SLAM significantly improves map estimation but involves high consumption of resources, such as the energy required to gather and transmit measurements, the bandwidth required by the measurement collection protocol and the computational burden necessary to integrate the larger number of measurements. The objective of this scheme is to reduce the increment in resource consumption resulting from the integration of inter-beacon measurements by adopting a centralized mechanism running in the robot that adapts measurement gathering. The second scheme of this PhD Thesis consists of a distributed RO-SLAM scheme based on the Sparse Extended Information Filter (SEIF). This scheme reduces the increment in resource consumption resulting from the integration of inter-beacon measurements by adopting a distributed SLAM filter in which each beacon is responsible for gathering its measurements to the robot and to other beacons and for computing the SLAM Update stage in order to integrate its measurements in SLAM. Moreover, it inherits the scalability of the SEIF. The third scheme of this PhD Thesis is a resource-constrained RO-SLAM scheme based on the distributed SEIF previously presented. This scheme includes the two mechanisms developed in the previous contributions (measurement gathering control and distribution of the RO-SLAM Update stage between beacons) in order to reduce the increment in resource consumption resulting from the integration of inter-beacon measurements. This scheme exploits robot-beacon cooperation to improve SLAM accuracy and efficiency while meeting a given resource consumption bound. The resource consumption bound is expressed in terms of the maximum number of measurements that can be integrated in SLAM per iteration. The sensing channel capacity used, the beacon energy consumed and the computational capacity employed, among others, are proportional to the number of measurements that are gathered and integrated in SLAM. The performance of the proposed schemes has been analyzed and compared with each other and with existing works. The proposed schemes are validated in real experiments with aerial robots. This PhD Thesis proves that the cooperation between robots and sensor networks provides many advantages for solving the RO-SLAM problem. Resource consumption is an important constraint in sensor networks. The proposed architecture allows the exploitation of the cooperation advantages. On the other hand, the proposed schemes give solutions to the resource limitation without degrading performance.
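
    The SEIF-based distributed schemes themselves are not reproduced here. As a rough sketch of the core operation they distribute, the code below performs a single EKF update of a joint robot-and-beacon state with one robot-to-beacon range measurement; the 2D state layout and noise values are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def ro_slam_range_update(x, P, z, sigma_r=0.1):
    """One EKF update with a robot-to-beacon range measurement z.

    State x = [xr, yr, xb, yb]: robot position followed by one beacon position.
    """
    dx, dy = x[2] - x[0], x[3] - x[1]
    r = np.hypot(dx, dy)                                # predicted range
    H = np.array([[-dx / r, -dy / r, dx / r, dy / r]])  # Jacobian of the range w.r.t. the state
    S = H @ P @ H.T + sigma_r**2                        # innovation covariance (1x1)
    K = P @ H.T / S                                     # Kalman gain (4x1)
    x = x + (K * (z - r)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Illustrative example: well-known robot, poorly known beacon, one 5.0 m range.
x = np.array([0.0, 0.0, 4.0, 2.0])
P = np.diag([0.01, 0.01, 4.0, 4.0])
x, P = ro_slam_range_update(x, P, z=5.0)
print(x, np.diag(P))
```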

    Sensor Fusion for Mobile Robot Localization using UWB and ArUco Markers

    One of the main characteristics required for a robot to be considered truly autonomous is the ability to localize itself, in real time, in its environment, that is, to know its position and orientation. This is a challenging area that has been studied by several researchers around the world. Different methodologies can be used to obtain the localization of a robot. However, some methodologies present problems in certain circumstances, as is the case of odometry, which suffers from error accumulation with the distance traveled by the robot. Another problem in several methodologies is uncertainty in sensing the robot due to noise present in the sensors. In order to obtain a more robust and more fault-tolerant localization of the robot, it is possible to combine several localization systems, thus combining the advantages of each one. In this work, the Pozyx system will be used: a low-cost solution that provides positioning information through Ultra-WideBand Time-of-Flight (UWB ToF) technology. ArUco markers placed in the environment will also be used; by identifying them with a camera, positioning information can likewise be obtained. These two solutions will be studied and implemented on a mobile robot through a beacon-based localization scheme. First, an error characterization of both systems will be performed, since the measurements are not perfect and there is always some noise in them. Next, the measurements provided by the systems will be filtered and fused with the robot's odometry values through the implementation of an Extended Kalman Filter (EKF). In this way, it is possible to obtain the robot's pose (position and orientation), which is compared with the pose provided by a Ground-Truth system also developed for this work with the aid of the ArUco library, thus assessing the accuracy of the developed algorithm. The developed work showed that with the use of the Pozyx system and ArUco markers it is possible to improve the robot localization, meaning that it is an adequate and effective solution for this purpose.
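
    A minimal sketch of the fusion idea described above, assuming a unicycle odometry model and treating the UWB or ArUco output as an absolute (x, y) fix. The motion model, noise matrices and the simulated fix are illustrative assumptions rather than the implementation developed in this work.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict the robot pose [x, y, theta] with unicycle odometry inputs (v, w)."""
    th = x[2]
    x = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0, 1.0]])
    return x, F @ P @ F.T + Q

def ekf_update_position(x, P, z, R):
    """Correct the pose with an absolute (x, y) fix, e.g. from UWB or an ArUco marker."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

# Illustrative loop: drive forward, receive a (noisy, simulated) position fix every 5 steps.
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.diag([0.01, 0.01, 0.005]), np.diag([0.05, 0.05])
for k in range(10):
    x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
    if k % 5 == 4:
        x, P = ekf_update_position(x, P, z=np.array([x[0] + 0.02, x[1] - 0.01]), R=R)
print(x)
```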

    A multi-hypothesis approach for range-only simultaneous localization and mapping with aerial robots

    Range-only SLAM (RO-SLAM) systems aim to build a map formed by the positions of a set of range sensors while simultaneously localizing the robot with respect to that map, using only distance measurements. Range sensors are devices capable of measuring the relative distance between each pair of devices. These sensors are particularly attractive for aerial vehicles because of their small size and weight. Moreover, they can operate indoors or in areas without GPS coverage and, unlike other sensors such as cameras or laser scanners, they do not require a direct line of sight between each pair of devices, allowing continuous data acquisition without occlusions. However, these sensors have a nonlinear observation model that is rank deficient due to the lack of relative orientation information between each pair of sensors. Furthermore, when the dimensionality of the problem is increased from 2D to 3D for application to aerial vehicles, the number of hidden variables in the model grows, making the problem more computationally expensive, especially for multi-hypothesis implementations. This thesis studies and proposes different methods that enable the efficient application of such RO-SLAM systems to ground and aerial vehicles in real environments. To this end, the scalability of the system is studied with respect to the number of hidden variables and the number of devices to be positioned in the map. Unlike other methods described in the RO-SLAM literature, the algorithms proposed in this thesis take into account the correlations between each pair of devices, especially for the integration of static measurements between pairs of map sensors. The thesis also studies the noise and spurious measurements that range sensors may produce, improving the robustness of the proposed algorithms with detection and filtering techniques. Methods are also proposed to integrate measurements from other sensors, such as cameras, altimeters or GPS, to refine the estimates produced by the RO-SLAM system. Other chapters study and propose techniques for integrating the presented RO-SLAM algorithms into multi-robot systems, as well as the use of active perception techniques to reduce the uncertainty of the system on trajectories that lack trilateration between the robot and the static range sensors of the map. All the proposed methods have been validated through simulations and experiments with real systems detailed in this thesis. In addition, all the implemented software systems, as well as the datasets recorded during the experiments, have been published and documented for use by the scientific community.
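
    The multi-hypothesis algorithms proposed in the thesis are not reproduced here. The sketch below only illustrates the common starting point in range-only SLAM: a first range constrains the beacon to a circle around the robot (a sphere in 3D), so candidate hypotheses are seeded along it and pruned as ranges from new poses arrive; all numbers and the gating threshold are illustrative.

```python
import numpy as np

def init_beacon_hypotheses(robot_xy, first_range, n_hyp=16):
    """Seed beacon hypotheses uniformly on the circle implied by the first range (2D case)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_hyp, endpoint=False)
    return robot_xy + first_range * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def prune_hypotheses(hyps, robot_xy, new_range, sigma_r=0.2, gate=3.0):
    """Keep only hypotheses consistent with a later range taken from another robot pose."""
    predicted = np.linalg.norm(hyps - robot_xy, axis=1)
    return hyps[np.abs(predicted - new_range) < gate * sigma_r]

# Illustrative run: true beacon at (6, 2), ranges measured from two robot poses.
true_beacon = np.array([6.0, 2.0])
hyps = init_beacon_hypotheses(np.array([0.0, 0.0]), np.linalg.norm(true_beacon))
hyps = prune_hypotheses(hyps, np.array([4.0, 0.0]),
                        np.linalg.norm(true_beacon - np.array([4.0, 0.0])))
print(hyps)  # survivors cluster near the true beacon and its mirror (the flip ambiguity)
```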

    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Along the signal processing chain from radar detections to vehicle actuation, this work discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. The radar segmentation of the (static) environment is achieved with a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph SLAM SERALOC. Based on the semantic radar SLAM map, an exemplary autonomous parking functionality is implemented in a real test vehicle. Along a recorded reference path, the function parks solely on the basis of radar perception, with previously unattained positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labeled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets at this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation network RadarNet reaches 28.97% mIoU on six classes. In addition, an automated radar labeling framework, SeRaLF, is presented, which supports radar labeling multimodally by means of reference cameras and LiDAR. For coherent mapping, a radar signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph SLAM front end adapted specifically to radar, with radar odometry edges between submaps and semantically separated NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby improved, and the first semantic radar graph SLAM for arbitrary static environments is realized. Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph SLAM is evaluated by means of a purely radar-based autonomous parking functionality. Averaged over 42 autonomous parking maneuvers (mean speed 3.73 km/h, mean maneuver length 172.75 m), a median absolute pose error of 0.235 m and an end-pose error of 0.2443 m are achieved, surpassing comparable radar localization results by approximately 50%. The map accuracy at changed, re-mapped locations over a mapping distance of 165 m on average yields a map consistency of approximately 56% with a deviation of 0.163 m on average. For autonomous parking, an existing trajectory planner and control approach was used.
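
    The dissertation reports segmentation quality as mIoU over six classes. As a small generic companion (not the dissertation's evaluation code), the sketch below computes per-class intersection-over-union and its mean from predicted and ground-truth label arrays; the tiny label map is invented for illustration.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Per-class IoU and its mean over classes that appear in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return np.array(ious), float(np.mean(ious))

# Illustrative example with 3 classes on a tiny flattened label map.
gt = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
per_class, miou = mean_iou(pred, gt, num_classes=3)
print(per_class, miou)  # [0.333... 0.666... 0.5] 0.5
```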

    Algorithmic Aspects of Communication and Localization in Wireless Sensor Networks


    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums and other public institutions. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For the mapping of indoor environments and the localization of autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized in order to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
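
    Among the radio localization approaches mentioned above, fingerprinting is often realized as a nearest-neighbour match of an observed signal-strength vector against a previously surveyed database. The sketch below shows that idea with an invented three-access-point survey; the access-point count, RSSI values and survey positions are purely illustrative.

```python
import numpy as np

def knn_fingerprint(rssi, db_rssi, db_pos, k=3):
    """Estimate a position as the average of the k survey points whose stored
    RSSI vectors are closest (in Euclidean distance) to the observed vector."""
    d = np.linalg.norm(db_rssi - rssi, axis=1)
    nearest = np.argsort(d)[:k]
    return db_pos[nearest].mean(axis=0)

# Illustrative survey: RSSI (dBm) from 3 access points recorded at 4 calibration points.
db_rssi = np.array([[-40, -70, -80],
                    [-55, -55, -75],
                    [-70, -45, -60],
                    [-80, -60, -45]], dtype=float)
db_pos = np.array([[0, 0], [5, 0], [10, 0], [10, 5]], dtype=float)
print(knn_fingerprint(np.array([-52.0, -58.0, -74.0]), db_rssi, db_pos))
```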