
    VSLAM and Navigation System of Unmanned Ground Vehicle Based on RGB-D Camera

    In this thesis, ROS (Robot Operating System) is used as the software platform and a simple unmanned ground vehicle, designed and constructed by the author, is used as the hardware platform. The most critical issues in the navigation of unmanned ground vehicles in unknown environments, SLAM (Simultaneous Localization and Mapping) and autonomous navigation, are studied. Through an analysis of the principle and structure of visual SLAM, a visual simultaneous localization and mapping algorithm is built and then accelerated through hardware replacement and software optimization. A RealSense D435 is used as the VSLAM camera sensor. The algorithm extracts features from the depth-camera data, computes the odometry of the unmanned vehicle by matching features between adjacent images, and then updates the vehicle's pose and the map from this odometry. While the visual SLAM algorithm is running normally, the generated 3D map is also projected into a real-time 2D map so that it can be used by the navigation algorithm. Autonomous navigation and obstacle avoidance are then realized by controlling the driving speed and direction of the vehicle through a navigation algorithm operating on the 2D projection map. Path planning for unmanned ground vehicles consists of two parts: global path planning, which plans the optimal path to the destination, and local path planning, which controls the speed and heading of the UGV. The thesis analyzes and compares Dijkstra's algorithm and the A* algorithm; considering compatibility with ROS, Dijkstra's algorithm is adopted for global path planning, and the DWA (Dynamic Window Approach) is used for local path planning. Under the control of Dijkstra's algorithm and the DWA, the unmanned ground vehicle can automatically plan the optimal path to the target point while avoiding obstacles. The thesis also describes the design and construction of a simple unmanned ground vehicle as an experimental platform and a simple control method based on a differential-wheeled drive, and finally realizes autonomous navigation and obstacle avoidance through the visual SLAM and navigation algorithms. Finally, the main contributions and shortcomings of the thesis are summarized, and the prospects and open challenges of unmanned ground vehicle research are presented.
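
    As a rough illustration of the global-planning step described above (not the thesis's own code), the sketch below runs Dijkstra's algorithm on a small 2D occupancy grid of the kind obtained by projecting the 3D SLAM map; the grid encoding (0 = free, 1 = occupied), 4-connectivity and unit edge costs are assumptions made for the example.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 2D occupancy grid (0 = free, 1 = occupied).

    `grid` is a list of lists, `start`/`goal` are (row, col) tuples.
    This mirrors the role of the global planner that the thesis
    selects Dijkstra's algorithm for.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    if goal != start and goal not in prev:
        return None  # no path found
    # Reconstruct the path by walking predecessors back to the start.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Example: plan around a single obstacle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))
```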

    SLAMBench2: Multi-Objective Head-to-Head Benchmarking for Visual SLAM

    SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is a problem since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. SLAMBench2 is a benchmarking framework to evaluate existing and future SLAM systems, both open- and closed-source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing SLAM algorithms and datasets is supported, e.g. ElasticFusion, InfiniTAM, ORB-SLAM2 and OKVIS, and integrating new ones is straightforward and clearly specified by the framework. SLAMBench2 is a publicly available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs across SLAM systems.
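
    The core idea of a unified SLAM interface plus shared performance metrics can be sketched as follows; this is a hypothetical Python-style wrapper written to illustrate the concept, not SLAMBench2's actual API.

```python
import abc
import time

class SLAMSystem(abc.ABC):
    """Hypothetical unified wrapper around one SLAM implementation."""

    @abc.abstractmethod
    def process_frame(self, frame):
        """Consume one sensor frame and update the internal state."""

    @abc.abstractmethod
    def current_pose(self):
        """Return the latest estimated camera pose (4x4 matrix)."""

def run_benchmark(system, frames, ground_truth_poses, pose_error):
    """Feed a dataset through `system`, collecting timing and accuracy."""
    frame_times, errors = [], []
    for frame, gt_pose in zip(frames, ground_truth_poses):
        start = time.perf_counter()
        system.process_frame(frame)
        frame_times.append(time.perf_counter() - start)
        errors.append(pose_error(system.current_pose(), gt_pose))
    return {
        "mean_frame_time_s": sum(frame_times) / len(frame_times),
        "mean_pose_error": sum(errors) / len(errors),
    }

class IdentitySLAM(SLAMSystem):
    """Toy stand-in that always reports the identity pose."""
    def process_frame(self, frame):
        pass
    def current_pose(self):
        return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

print(run_benchmark(IdentitySLAM(), range(10), [None] * 10,
                    lambda est, gt: 0.0))
```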

    A cooperative navigation system with distributed architecture for multiple unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs) have been widely used in many applications due to, among other features, their versatility, reduced operating cost, and small size. These applications increasingly demand features related to autonomous navigation, such as mapping. However, limited resources such as battery and hardware (memory and processing units) can hinder the development of these applications on UAVs. The collaborative use of multiple UAVs for mapping can therefore be used as an alternative to solve this problem, through a cooperative navigation system. This system requires that individual local maps be transmitted and merged into a global map in a distributed manner. In this scenario, there are two main problems to be addressed: the transmission of maps among the UAVs and the merging of the local maps in each UAV. In this context, this work describes the design, development, and evaluation of a cooperative navigation system with a distributed architecture to be used by multiple UAVs. The system uses proposed structures to store the 3D occupancy grid maps. Furthermore, maps are compressed and transmitted between UAVs using algorithms specially proposed for these purposes, and the local 3D maps are then merged in each UAV. In this map merging system, maps are pre-processed and merged in pairs using algorithms suited to 3D occupancy grid map data. In addition, keypoint orientation properties are obtained from potential field gradients, and proposed filters are used to improve the parameters of the transformations among maps. To validate the proposed solution, simulations were performed in six different environments, outdoors and indoors, and with different layout characteristics. The obtained results demonstrate the effectiveness of the system in the construction, sharing, and merging of maps, and also highlight the extreme complexity of map merging systems.
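
    As a hedged sketch of the pairwise merging step described above (not the thesis's actual data structures or algorithms), the snippet below brings the occupied voxels of one sparse 3D log-odds occupancy map into another map's frame using an estimated rigid transform and fuses the evidence by summing log-odds; the sparse dict representation, voxel resolution and fusion rule are assumptions.

```python
import numpy as np

def transform_cells(cells, T, resolution):
    """Map voxel indices from one local map frame into another.

    `cells` is an (N, 3) array of integer voxel indices, `T` a 4x4 rigid
    transform between the two map frames, `resolution` the voxel edge
    length in metres. Returns integer voxel indices in the target frame.
    """
    pts = cells * resolution                      # voxel index -> metric point
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (T @ pts_h.T).T[:, :3]               # apply the rigid transform
    return np.round(mapped / resolution).astype(int)

def merge_pair(map_a, map_b, T_ab, resolution=0.1):
    """Merge two sparse log-odds occupancy maps (dict: voxel -> log-odds).

    Cells from map_b are moved into map_a's frame with T_ab and their
    log-odds evidence is summed (independent-measurement fusion).
    """
    merged = dict(map_a)
    cells_b = np.array(list(map_b.keys()))
    moved = transform_cells(cells_b, T_ab, resolution)
    for cell, log_odds in zip(moved, map_b.values()):
        key = tuple(int(v) for v in cell)
        merged[key] = merged.get(key, 0.0) + log_odds
    return merged

# Example: two one-voxel maps related by a 1 m translation along x.
T = np.eye(4)
T[0, 3] = 1.0
print(merge_pair({(10, 0, 0): 0.7}, {(0, 0, 0): 0.7}, T))
```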

    Data Flow ORB-SLAM for Real-time Performance on Embedded GPU Boards

    The use of embedded boards on robots, including unmanned aerial and ground vehicles, is increasing thanks to the availability of low-cost, GPU-equipped embedded boards on the market. Porting algorithms originally designed for desktop CPUs to those boards is not straightforward due to hardware limitations. In this paper, we present how we modified and customized the open-source SLAM algorithm ORB-SLAM2 to run in real time on the NVIDIA Jetson TX2. We adopted a data flow paradigm to process the images, obtaining an efficient CPU/GPU load distribution that results in a processing speed of about 30 frames per second. Quantitative experimental results on four different sequences of the KITTI dataset demonstrate the effectiveness of the proposed approach. The source code of our data flow ORB-SLAM2 algorithm is publicly available on GitHub.
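
    The data flow idea, splitting frame processing into stages that run concurrently so that CPU and GPU work overlap, can be illustrated with the minimal thread-per-stage sketch below; the two toy stages and queue sizes are illustrative assumptions, not the paper's actual decomposition of ORB-SLAM2.

```python
import queue
import threading

def pipeline(frames, stages):
    """Run `stages` (a list of callables) as a thread-per-stage pipeline.

    Each stage pulls items from its input queue, processes them, and
    pushes the result to the next queue, so a slow stage (e.g. GPU
    feature extraction) overlaps in time with the others.
    """
    queues = [queue.Queue(maxsize=4) for _ in range(len(stages) + 1)]
    results = queues[-1]

    def worker(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:          # poison pill: shut the stage down
                q_out.put(None)
                return
            q_out.put(stage(item))

    threads = [threading.Thread(target=worker, args=(s, queues[i], queues[i + 1]))
               for i, s in enumerate(stages)]
    for t in threads:
        t.start()
    for frame in frames:
        queues[0].put(frame)
    queues[0].put(None)
    out = []
    while (item := results.get()) is not None:
        out.append(item)
    for t in threads:
        t.join()
    return out

# Toy usage: "extract" then "track" on a stream of fake frames.
print(pipeline(range(5), [lambda f: f * 2, lambda f: f + 1]))
```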

    O2S: Open-source open shuttle

    Currently, commercially available intelligent transport robots that are capable of carrying up to 90 kg of load can cost $5000 or even more. This makes real-world experimentation prohibitively expensive and limits the applicability of such systems to everyday home or industrial tasks. Aside from their high cost, the majority of commercially available platforms are either closed-source, platform-specific or use difficult-to-customize hardware and firmware. In this work, we present a low-cost, open-source and modular alternative, referred to herein as "open-source open shuttle (O2S)". O2S utilizes off-the-shelf (OTS) components, additive manufacturing technologies, aluminium profiles, and a consumer hoverboard with high-torque brushless direct current (BLDC) motors. O2S is fully compatible with the Robot Operating System (ROS), has a maximum payload of 90 kg, and costs less than $1500. Furthermore, O2S offers a simple yet robust framework for contextualizing simultaneous localization and mapping (SLAM) algorithms, an essential prerequisite for autonomous robot navigation. The robustness and performance of O2S were validated through real-world and simulation experiments. All the design, construction and software files are freely available online under the GNU GPL v3 license at https://doi.org/10.17605/OSF.IO/K83X7. A descriptive video of O2S can be found at https://osf.io/v8tq2

    waveSLAM: Empowering accurate indoor mapping using off-the-shelf millimeter-wave self-sensing

    Proceedings of: 2023 IEEE 98th Vehicular Technology Conference: VTC2023-Fall, 10-13 October 2023, Hong Kong.
    This paper presents the design, implementation and evaluation of waveSLAM, a low-cost mobile robot system that uses millimetre wave (mmWave) communication devices to enhance the indoor mapping process, targeting environments with reduced visibility or glass/mirror walls. A unique feature of waveSLAM is that it only leverages existing Commercial-Off-The-Shelf (COTS) hardware (Lidar and mmWave radios) mounted on mobile robots to improve the accurate indoor mapping achieved with optical sensors. The key intuition behind the waveSLAM design is that, while the mobile robot moves freely, the mmWave radios can periodically exchange angle and distance estimates between themselves (self-sensing) by bouncing the signal off the environment, thus enabling accurate estimates of the target object/material surface. Our experiments verify that waveSLAM can achieve cm-level accuracy, with errors below 22 cm and 20° in angle orientation, which is comparable with Lidar when building indoor maps. This work has been partially funded by the European Union's Horizon Europe research and innovation programme under grant agreement No 101095759 (Hexa-X-II) and by the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-Next Generation EU through the UNICO 5G I+D 6G-EDGEDT.
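
    Under a deliberately simplified geometry (co-located radios and a single specular bounce, assumptions made for illustration rather than waveSLAM's actual estimator), the surface point implied by one such self-sensing measurement could be computed as follows.

```python
import math

def reflection_point(robot_pose, bearing_rad, path_len_m):
    """Estimate where a bounced mmWave signal hit a surface.

    Simplified geometry (an assumption, not waveSLAM's method): the two
    radios are treated as co-located, so the reflection point lies at
    half the measured path length along the measured bearing, expressed
    in the global frame using the robot pose (x, y, heading).
    """
    x, y, heading = robot_pose
    theta = heading + bearing_rad          # bearing is relative to the robot
    r = path_len_m / 2.0                   # out-and-back path, halved
    return (x + r * math.cos(theta), y + r * math.sin(theta))

# A wall point straight ahead of a robot at the origin, 3 m of total path.
print(reflection_point((0.0, 0.0, 0.0), 0.0, 3.0))   # -> (1.5, 0.0)
```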

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting from the technology needed to analyse the scene: vision sensors. The first part of the thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference frame and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, the architecture is modified to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM. Once the SLAM algorithm is shown to be usable in an anatomical environment, it is improved by adding semantic segmentation so that dynamic features can be distinguished from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
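
    As a minimal sketch of the dynamic-feature filtering described above (the class labels and data layout are illustrative assumptions, not the thesis's implementation), a per-pixel semantic mask can be used to discard keypoints that fall on moving objects such as surgical instruments.

```python
import numpy as np

def filter_static_keypoints(keypoints, semantic_mask, dynamic_labels):
    """Keep only keypoints that fall on static anatomy.

    `keypoints` is an (N, 2) array of (u, v) pixel coordinates,
    `semantic_mask` an (H, W) array of per-pixel class labels, and
    `dynamic_labels` the set of classes treated as moving (e.g. surgical
    instruments). Label IDs here are illustrative assumptions.
    """
    u = np.clip(keypoints[:, 0].astype(int), 0, semantic_mask.shape[1] - 1)
    v = np.clip(keypoints[:, 1].astype(int), 0, semantic_mask.shape[0] - 1)
    labels = semantic_mask[v, u]                 # class of each keypoint's pixel
    keep = ~np.isin(labels, list(dynamic_labels))
    return keypoints[keep]

# Toy usage: a 4x4 mask where label 1 marks an instrument region.
mask = np.zeros((4, 4), dtype=int)
mask[:, 2:] = 1
kps = np.array([[0, 0], [3, 1], [1, 3]], dtype=float)
print(filter_static_keypoints(kps, mask, {1}))   # drops the keypoint on the instrument
```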