1,522 research outputs found

    AoA-aware Probabilistic Indoor Location Fingerprinting using Channel State Information

    With the rapid development of wireless communications, location fingerprinting (LF) has nurtured considerable indoor location-based services (ILBSs) in the field of the Internet of Things (IoT). Most pattern-matching LF solutions either appeal to the simple received signal strength (RSS), which suffers dramatic performance degradation under complex environmental dynamics, or rely on fine-grained physical-layer channel state information (CSI), whose intricate structure increases computational complexity. Meanwhile, harsh indoor environments can also breed similar radio signatures among certain predefined reference points (RPs), which may be randomly distributed in the area of interest, severely hampering location-mapping accuracy. To resolve these dilemmas, during the offline site survey we first adopt the autoregressive (AR) modeling entropy of the CSI amplitude as the location fingerprint, which shares the structural simplicity of RSS while preserving the most location-specific statistical channel information. Moreover, an additional angle-of-arrival (AoA) fingerprint can be accurately retrieved from the CSI phase through an enhanced subspace-based algorithm, which serves to further eliminate error-prone RP candidates. In the online phase, by exploiting both CSI amplitude and phase information, a novel bivariate kernel regression scheme is proposed to precisely infer the target's location. Results from extensive indoor experiments validate the superior localization performance of the proposed system over previous approaches.
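The flavor of a bivariate kernel regression localizer can be illustrated with a toy Nadaraya-Watson estimator over two fingerprint features. This is a minimal sketch, not the paper's actual scheme: the feature names, bandwidths, and function signature below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(d, h):
    # Gaussian kernel with bandwidth h
    return np.exp(-(d ** 2) / (2.0 * h ** 2))

def kernel_regression_locate(fp_entropy, fp_aoa, rp_coords,
                             q_entropy, q_aoa, h_e=1.0, h_a=10.0):
    """Nadaraya-Watson style position estimate: a kernel-weighted
    average of reference-point (RP) coordinates, using two features
    per RP -- a CSI-amplitude entropy value and an AoA (degrees)."""
    w = (gaussian_kernel(fp_entropy - q_entropy, h_e) *
         gaussian_kernel(fp_aoa - q_aoa, h_a))
    w = w / w.sum()            # normalize weights over all RPs
    return w @ rp_coords       # weighted (x, y) estimate
```

A query whose entropy and AoA closely match one RP's fingerprint collapses the weighted average onto that RP's coordinates; wider bandwidths blend neighboring RPs instead.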

    Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera

    The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, focusing on its role in simultaneous localization and mapping (SLAM) methodologies and on LiDAR-as-a-camera-based tracking of Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset with additional data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground-truth maps, and we supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations, pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
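The ICP step at the heart of the ground-truth map fusion iteratively matches points between clouds and solves for a rigid transform. A minimal point-to-point ICP in NumPy (brute-force nearest neighbours, SVD/Kabsch alignment) sketches the idea; it is not the thesis's actual pipeline, which fuses multiple LiDARs and SLAM priors.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning paired 3-D
    points src -> dst via the SVD (Kabsch) method."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP with brute-force nearest-neighbour
    correspondences; returns src aligned onto dst."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        R, t = best_fit_transform(cur, dst[d.argmin(1)])
        cur = cur @ R.T + t
    return cur
```

Production pipelines replace the brute-force matching with a k-d tree and add outlier rejection, but the alternation of matching and closed-form alignment is the same.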

    Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review

    Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement on-board radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep-learning-based object detection technologies have gradually been applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches, comparing the advantages and drawbacks of each technique across four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed and compared, with highly influential public datasets introduced as benchmarks to verify the effectiveness of these approaches.
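Of the traditional techniques listed, background subtraction is the simplest to sketch: maintain a running per-pixel background model and flag pixels that deviate from it by more than a few standard deviations. The sketch below is a generic running-average variant in NumPy, assumed for illustration; it is not drawn from any specific method surveyed in the article.

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, k=3.0):
    """Running-average background subtraction. For each frame after
    the first, flag pixels whose squared deviation from the running
    background mean exceeds k^2 times the running variance, then
    update the mean and variance with learning rate alpha."""
    bg = frames[0].astype(float)
    var = np.full_like(bg, 25.0)       # initial variance guess
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        diff = f - bg
        masks.append(diff ** 2 > k ** 2 * var)   # foreground mask
        bg += alpha * diff                       # update mean
        var += alpha * (diff ** 2 - var)         # update variance
    return masks
```

On open water the background model absorbs slow illumination changes, while a boat or buoy entering the scene produces a compact foreground blob.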

    A Review of Indoor Millimeter Wave Device-based Localization and Device-free Sensing Technologies and Applications

    The commercial availability of low-cost millimeter wave (mmWave) communication and radar devices is starting to improve the penetration of such technologies in consumer markets, paving the way for large-scale and dense deployments in fifth-generation (5G)-and-beyond as well as 6G networks. At the same time, pervasive mmWave access will enable device localization and device-free sensing with unprecedented accuracy, especially with respect to sub-6 GHz commercial-grade devices. This paper surveys the state of the art in device-based localization and device-free sensing using mmWave communication and radar devices, with a focus on indoor deployments. We first overview key concepts about mmWave signal propagation and system design. Then, we provide a detailed account of approaches and algorithms for localization and sensing enabled by mmWaves. We consider several dimensions in our analysis, including the main objectives, techniques, and performance of each work, whether each research reached some degree of implementation, and which hardware platforms were used for this purpose. We conclude by discussing that better algorithms for consumer-grade devices, data fusion methods for dense deployments, as well as an educated application of machine learning methods are promising, relevant and timely research directions. Comment: 43 pages, 13 figures. Accepted in IEEE Communications Surveys & Tutorials (IEEE COMST).
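Subspace methods such as MUSIC are a staple of the angle-of-arrival estimation literature this survey covers. As a hedged illustration (not drawn from any specific surveyed system), a minimal MUSIC estimator for a uniform linear array in NumPy:

```python
import numpy as np

def music_aoa(X, n_src, d=0.5, grid=np.arange(-90, 90.5, 0.5)):
    """MUSIC angle-of-arrival estimate for a uniform linear array.
    X: (n_antennas, n_snapshots) complex snapshots; n_src: number of
    sources; d: element spacing in wavelengths. Returns the grid
    angle (degrees) maximizing the MUSIC pseudospectrum."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : n - n_src]                # noise subspace
    k = np.arange(n)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(grid)))
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    return grid[np.argmax(P)]
```

The pseudospectrum peaks where candidate steering vectors are (nearly) orthogonal to the noise subspace; real mmWave systems add calibration and wideband handling on top of this core.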

    Development of high-precision snow mapping tools for Arctic environments

    Abstract: Snow is highly variable in time and space and thus many observation points are needed to describe the present state of the snowpack accurately. This description of the state of the snowpack is necessary to validate and improve snow modeling efforts and remote sensing applications. The traditional snowpit analysis delivers a highly detailed picture of the present state of the snow in a particular location but lacks the distribution in space and time as it is a time-consuming method.
On the opposite end of the spatial scale are orbital solutions covering the surface of the Earth at regular intervals, but at the cost of a much lower resolution. To improve the ability to collect spatial snow data efficiently during a field campaign, we developed a custom-made remotely piloted aircraft system (RPAS) that delivers snow depth maps over a few hundred square meters using Structure-from-Motion (SfM). The RPAS is capable of flying in extremely low temperatures where no commercial solutions are available. The system achieves a horizontal resolution of 6 cm with a snow depth RMSE of 39% without vegetation (48.5% with vegetation). As the SfM method does not distinguish between different snow layers, I developed an algorithm for a frequency-modulated continuous-wave (FMCW) radar that distinguishes between the two main snow layers found regularly in the Arctic: depth hoar and wind slab. The distinction is important because the differing characteristics of these layers determine the amount of water stored in the snow that will be available to the ecosystem during the melt season. Depending on site conditions, the radar estimates the snow depth with an RMSE between 13% and 39%. Finally, I equipped the radar with a high-precision geolocation system. With this setup, the geolocation uncertainty of the radar is on average < 5 cm. From the radar measurement, the distance to the top and the bottom of the snowpack can be extracted. In addition to snow depth, this also delivers data points to interpolate an elevation model of the underlying solid surface. I used the Triangular Irregular Network (TIN) method for all interpolation. The system can be mounted on an RPAS or a snowmobile and thus offers great flexibility. These tools will assist snow modeling as they provide data from an area instead of a single point. The data can be used to force or validate the models.
Improved models will help to predict the size, health, and movements of ungulate populations, whose survival depends on snow quality (Langlois et al., 2017). Similar to the validation of snow models, the presented tools allow comparison and validation of other remote sensing data (e.g., satellite) and improve our understanding of their limitations. Finally, the resulting maps can be used by ecologists to better assess the state of the ecosystem, as they give a more complete picture of the snow cover on a larger scale than could be achieved with traditional snowpits.
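The core of an FMCW layer-detection algorithm is a range profile: the beat signal's spectrum has a peak at each dielectric interface (air/wind slab, layer boundary, ground), and peak spacing gives layer thickness. The sketch below is a generic beat-frequency analysis in NumPy, with made-up radar parameters; it is not the thesis's actual algorithm.

```python
import numpy as np

def interface_ranges(beat, fs, slope, c=3e8, n_peaks_min_frac=0.5):
    """Estimate interface ranges from an FMCW beat signal.
    beat: real-valued beat samples; fs: sample rate (Hz); slope:
    chirp slope (Hz/s). A beat frequency f maps to range
    R = f * c / (2 * slope). Peaks are local spectral maxima above
    a fraction of the global maximum (crude peak picking)."""
    n = len(beat)
    spec = np.abs(np.fft.rfft(beat * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    pk = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) &
                  (spec[1:-1] > n_peaks_min_frac * spec.max()))[0] + 1
    return np.sort(freqs[pk] * c / (2.0 * slope))
```

With two interfaces the function returns two ranges; their difference is the thickness of the intervening layer, which (with a refractive-index correction the sketch omits) separates wind slab from depth hoar.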

    NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge

    This paper presents and discusses algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying), in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition. Peer reviewed.
    Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J. Postprint (published version).
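Reasoning in the belief space means maintaining a probability distribution over states rather than a single estimate. As a hedged illustration of that idea (a textbook discrete Bayes filter, not NeBula's implementation), a single predict-then-correct step can be written as:

```python
import numpy as np

def belief_update(belief, transition, likelihood):
    """One predict-then-correct step of a discrete Bayes filter over
    n states. belief[i] = P(cur = i); transition[i, j] =
    P(next = j | cur = i); likelihood[j] = P(observation | state = j)."""
    predicted = belief @ transition          # motion / prediction step
    corrected = predicted * likelihood       # measurement update
    return corrected / corrected.sum()       # renormalize to sum to 1
```

Planning in belief space then scores candidate actions by the distributions they induce, which is what lets a framework like NeBula trade off information gain against risk.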