1,621 research outputs found

    Systematic Literature Review of Role of Applied Geomatics in Mapping and Tracking Corona Virus

    Get PDF
    This review paper focuses on the role of applied geomatics in mapping the dispersion of the coronavirus and highlights the important studies on the topic, as well as the literature on coronavirus tracking. It also covers the definition, conceptualization, and measurement of coronavirus mapping and tracking, and surveys a number of studies that link applied geomatics to the mapping and tracking of the coronavirus. The authors explore the literature on applied geomatics, mapping, and tracking from 2009 to the end of 2019 in order to investigate how these two geomatics techniques were born, how they have developed, which features they share, and how they play an important role in the novel coronavirus pandemic. This systematic review of the current literature on applied geomatics and the coronavirus provides insight into an initial, proposed framework for integrating geomatics to track and map the virus.

    RGB-D Mapping and Tracking in a Plenoxel Radiance Field

    Full text link
    Building on the success of Neural Radiance Fields (NeRFs), recent years have seen significant advances in the domain of novel view synthesis. These models capture the scene's volumetric radiance field, creating highly convincing dense photorealistic models through the use of simple, differentiable rendering equations. Despite their popularity, these algorithms suffer from severe ambiguities in visual data inherent to the RGB sensor, which means that although images generated with view synthesis can appear very believable, the underlying 3D model will often be wrong. This considerably limits the usefulness of these models in practical applications like robotics and extended reality (XR), where an accurate dense 3D reconstruction would otherwise be of significant value. In this technical report, we present the vital differences between view synthesis models and 3D reconstruction models. We also comment on why a depth sensor is essential for modeling accurate geometry in general outward-facing scenes using the current paradigm of novel view synthesis methods. Focusing on the structure-from-motion task, we practically demonstrate this need by extending the Plenoxel radiance field model, presenting an analytical differential approach for dense mapping and tracking with radiance fields based on RGB-D data, without a neural network. Our method achieves state-of-the-art results in both the mapping and tracking tasks while also being faster than competing neural-network-based approaches.
    Comment: The two authors contributed equally to this paper.
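    For context, the differentiable rendering at the core of both NeRFs and Plenoxels composites per-sample densities and colors along each camera ray, and an RGB-D variant can render an expected depth from the same weights and compare it to the sensor's measurement. The sketch below is a minimal, generic illustration of that compositing with invented names, not the report's or the Plenoxel codebase's implementation:

        import numpy as np

        def render_ray(sigmas, colors, ts):
            """Composite color and expected depth along one camera ray.

            sigmas: (N,) densities sampled from the voxel grid along the ray
            colors: (N, 3) RGB values sampled at the same points
            ts:     (N,) distances of the samples along the ray
            """
            deltas = np.diff(ts, append=ts[-1] + 1e10)   # spacing between samples
            alphas = 1.0 - np.exp(-sigmas * deltas)      # per-sample opacity
            trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
            weights = trans * alphas                     # contribution of each sample
            rgb = (weights[:, None] * colors).sum(axis=0)
            depth = (weights * ts).sum()                 # expected depth along the ray
            return rgb, depth

    A depth camera constrains exactly this rendered depth, which is what resolves the RGB-only ambiguity the report describes: many density distributions along a ray can reproduce the same color while implying very different geometry.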

    DynaQuadric: Dynamic Quadric SLAM for Quadric Initialization, Mapping, and Tracking

    Get PDF
    Dynamic SLAM is a key technology for autonomous driving and robotics, and accurate pose estimation of surrounding objects is important for semantic perception tasks. Current quadric SLAM methods are based on the assumption of a static environment and can only reconstruct static quadrics in the scene, which limits their applicability in complex dynamic scenarios. In this paper, we propose a visual SLAM system that is capable of reconstructing dynamic objects as quadrics, with a unified framework for jointly optimizing pose estimation, multi-object tracking (MOT), and quadric parameters. We propose a robust object-centric quadric initialization algorithm for both static and moving objects, which decouples the prior estimation of the object pose from the quadric parameters: the object is initialized with a coarse sphere, and the quadric parameters are then refined. We design a novel factor graph that tightly optimizes camera pose, object pose, map points, and quadric parameters within a sliding-window optimization. To the best of our knowledge, we are the first to propose a dynamic SLAM system that combines quadric representations and MOT in a tightly coupled optimization. We perform qualitative and quantitative experiments on both simulated and real-world datasets, demonstrating robustness and accuracy in terms of camera localization, dynamic quadric initialization, mapping, and tracking. Our system demonstrates the potential of object perception with quadric representations in complex dynamic scenes.
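    The abstract does not spell out the factors, but the dual-quadric machinery quadric SLAM systems commonly build on is compact enough to sketch. Below, a coarse sphere initialization and the usual bounding-box reprojection residual mirror the initialize-then-refine scheme described above; the function names and simplifications are hypothetical, not the paper's code:

        import numpy as np

        def sphere_dual_quadric(center, r):
            """Coarse initialization: a sphere of radius r at center, as a dual quadric."""
            Q = np.diag([r**2, r**2, r**2, -1.0])   # sphere at the origin, up to scale
            T = np.eye(4)
            T[:3, 3] = center                       # rotation is irrelevant for a sphere
            return T @ Q @ T.T                      # dual quadrics transform as T Q T^T

        def bbox_from_dual_conic(C):
            """Axis-aligned box of the projected ellipse, given the dual conic
            C = P Q P^T for camera matrix P. Its mismatch with a detected 2D box
            is the standard quadric observation residual."""
            x = (C[0, 2] + np.array([-1.0, 1.0]) * np.sqrt(C[0, 2]**2 - C[0, 0] * C[2, 2])) / C[2, 2]
            y = (C[1, 2] + np.array([-1.0, 1.0]) * np.sqrt(C[1, 2]**2 - C[1, 1] * C[2, 2])) / C[2, 2]
            return np.array([x.min(), y.min(), x.max(), y.max()])

    In a factor graph, such residuals would sit alongside point-reprojection factors and per-object motion factors, which is the tight coupling of camera pose, object pose, map points, and quadric parameters the abstract refers to.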

    LiDAR based multi-sensor fusion for localization, mapping, and tracking

    Get PDF
    The development of fully autonomous driving vehicles has become a key focus for both industry and academia over the past decade, fostering significant progress in situational awareness abilities and sensor technology. Among various types of sensors, the LiDAR sensor has emerged as a pivotal component in many perception systems due to its long-range detection capabilities, precise 3D range information, and reliable performance in diverse environments. With advancements in LiDAR technology, more reliable and cost-effective sensors have shown great potential for improving situational awareness abilities in widely used consumer products. By leveraging these novel LiDAR sensors, researchers now have a diverse set of powerful tools to effectively tackle the persistent challenges in localization, mapping, and tracking within existing perception systems.
    This thesis explores LiDAR-based sensor fusion algorithms to address perception challenges in autonomous systems, with a primary focus on dense mapping and global localization using diverse LiDAR sensors. The research integrates novel LiDARs, IMU, and camera sensors to create a comprehensive dataset essential for developing advanced sensor fusion and general-purpose localization and mapping algorithms. Innovative methodologies for global localization across varied environments are introduced. These methodologies include modular multi-LiDAR odometry and mapping, a robust multi-modal LiDAR-inertial odometry system, and a dense mapping framework, which enhance mapping precision and situational awareness. The study also integrates solid-state LiDARs with camera-based deep-learning techniques for object tracking, refining mapping accuracy in dynamic environments. These advancements significantly enhance the reliability and efficiency of autonomous systems in real-world scenarios. The thesis commences with an introduction to the novel sensors and the data collection platform. It proceeds by presenting an open-source dataset designed for the evaluation of advanced SLAM algorithms, utilizing a unique ground-truth generation method. Subsequently, the study tackles two challenging localization environments, forest and urban. Furthermore, it presents the MM-LOAM dense mapping framework. Additionally, the research explores object-tracking tasks, employing both camera and LiDAR technologies for human and micro-UAV tracking.
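    The core loop of a LiDAR-inertial odometry system of the kind described above alternates IMU dead reckoning with scan-to-map correction. The sketch below assumes a generic scan matcher passed in as the register argument (for example, point-to-plane ICP) and omits bias estimation and uncertainty handling; it illustrates the fusion pattern only, not the thesis' actual pipeline:

        import numpy as np

        def so3_exp(w):
            """Rodrigues' formula: rotation matrix from an axis-angle vector."""
            th = np.linalg.norm(w)
            if th < 1e-9:
                return np.eye(3)
            k = w / th
            K = np.array([[0.0, -k[2], k[1]],
                          [k[2], 0.0, -k[0]],
                          [-k[1], k[0], 0.0]])
            return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

        def lidar_inertial_step(R, p, v, imu, dt, scan, local_map, register):
            """One odometry cycle: predict with IMU samples, correct with the scan."""
            g = np.array([0.0, 0.0, -9.81])
            for a, w in imu:                             # dead-reckon between scans
                R = R @ so3_exp(w * dt)
                v = v + (R @ a + g) * dt
                p = p + v * dt
            dR, dp = register(scan, local_map, (R, p))   # scan-to-map refinement
            return R @ dR, p + dp, v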

    S²MAT: Simultaneous and Self-Reinforced Mapping and Tracking in Dynamic Urban Scenarios

    Full text link
    Despite the increasing prevalence of robots in daily life, their navigation capabilities are still limited to environments with prior knowledge, such as a global map. To fully unlock the potential of robots, it is crucial to enable them to navigate in large-scale, unknown, and changing unstructured scenarios. This requires the robot to construct an accurate static map in real time as it explores, while filtering out moving objects to ensure mapping accuracy and, if possible, achieving high-quality pedestrian tracking and collision avoidance. While existing methods can achieve the individual goals of spatial mapping or dynamic object detection and tracking, there has been limited research on effectively integrating these two tasks, which are in fact coupled and reciprocal. In this work, we propose a solution called S²MAT (Simultaneous and Self-Reinforced Mapping and Tracking) that integrates a front-end dynamic object detection and tracking module with a back-end static mapping module. S²MAT leverages the close and reciprocal interplay between these two modules to efficiently and effectively solve the open problem of simultaneous tracking and mapping in highly dynamic scenarios. We conducted extensive experiments using widely used datasets and simulations, providing both qualitative and quantitative results to demonstrate S²MAT's state-of-the-art performance in dynamic object detection, tracking, and high-quality static structure mapping. Additionally, we performed long-range robotic navigation in real-world urban scenarios spanning over 7 km, which included challenging obstacles like pedestrians and other traffic agents. The successful navigation provides a comprehensive test of S²MAT's robustness, scalability, efficiency, and quality, and of its ability to benefit autonomous robots in the wild without pre-built maps.
    Comment: homepage: https://sites.google.com/view/smat-na
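    The reciprocity described above can be made concrete with a toy occupancy scheme: voxels accumulate static evidence over time, points falling in not-yet-confirmed space are treated as dynamic candidates for the tracking front-end, and only confirmed-static structure remains in the map. The voxel hashing and confirmation threshold below are invented stand-ins, not S²MAT's actual detection, tracking, or mapping modules:

        import numpy as np

        def mapping_tracking_cycle(scan, static_map, voxel=0.2, hits_to_confirm=3):
            """One self-reinforcing cycle over a world-frame LiDAR scan of shape (N, 3).

            static_map is a dict from voxel index to hit count. Moving objects
            rarely revisit the same voxel, so only static structure accumulates
            enough hits to pass the confirmation threshold.
            """
            keys = [tuple(k) for k in np.floor(scan / voxel).astype(int)]
            confirmed = np.array([static_map.get(k, 0) >= hits_to_confirm for k in keys],
                                 dtype=bool)
            dynamic_candidates = scan[~confirmed]   # handed to the tracking front-end
            for k in keys:
                static_map[k] = static_map.get(k, 0) + 1
            return dynamic_candidates, static_map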

    RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System

    Full text link
    Simultaneous Localization and Mapping using RGB-D cameras has been a fertile research topic in the last decade, due to the suitability of such sensors for indoor robotics. In this paper we propose a direct RGB-D SLAM algorithm with state-of-the-art accuracy and robustness at a low cost. Our experiments on the TUM RGB-D dataset [34] show better accuracy and robustness in CPU real time than direct RGB-D SLAM systems that make use of the GPU. Our approach has two key ingredients. The first is the combination of a semi-dense photometric and a dense geometric error for pose tracking (see Figure 1), which we demonstrate to be the most accurate alternative. The second is a model of the multi-view constraints and their errors in the mapping and tracking threads, which adds extra information over other approaches. We release an open-source implementation of our approach, and the reader is referred to a video of our results for a more illustrative visualization of its performance.
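    The combined error the tracking relies on can be sketched as a single scalar cost: photometric residuals restricted to high-gradient pixels (the semi-dense part) plus depth residuals wherever the sensor reading is valid (the dense geometric part). The function below is an illustrative shape under those assumptions, with invented names and a plain quadratic weighting instead of the paper's exact formulation:

        import numpy as np

        def joint_tracking_cost(I_ref, I_cur_w, D_pred, D_cur_w, grad_mag, tau=8.0, lam=0.5):
            """Semi-dense photometric + dense geometric cost for one pose guess.

            I_ref:    reference-frame intensities
            I_cur_w:  current intensities warped into the reference by the pose guess
            D_pred:   depths predicted by the map under the same pose guess
            D_cur_w:  measured depths warped the same way
            grad_mag: image-gradient magnitude selecting the semi-dense pixel set
            """
            photometric = (I_ref - I_cur_w)[grad_mag > tau]   # high-gradient pixels only
            geometric = (D_pred - D_cur_w)[D_cur_w > 0]       # pixels with a valid reading
            return np.sum(photometric**2) + lam * np.sum(geometric**2)

    Minimizing such a cost over the camera pose, typically with Gauss-Newton on a coarse-to-fine pyramid, is the standard direct-tracking recipe; the paper's claim is that this particular combination of terms is the most accurate alternative.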