18 research outputs found

    Assistive Navigation Using Deep Reinforcement Learning Guiding Robot With UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

    Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. In this study, a deep reinforcement learning (DRL)-based assistive guiding robot with ultra-wideband (UWB) beacons was designed to navigate routes with designated waypoints. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were equipped with an audio interface to provide environmental information. The on-handle and on-beacon verbal feedback provides points of interest and turn-by-turn information to BVI users. BVI users were recruited in this study to conduct navigation tasks in different scenarios. A route was designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation might be affected by dynamic obstacles, and vision-based trail following may suffer from occlusions caused by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians, in which systems based on existing SLAM algorithms have failed.

    A Review on IoT Deep Learning UAV Systems for Autonomous Obstacle Detection and Collision Avoidance

    Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, offer unprecedented opportunities to boost a wide array of large-scale Internet of Things (IoT) applications. Nevertheless, UAV platforms still face important limitations, mainly related to autonomy and weight, that impact their remote sensing capabilities when capturing and processing the data required for developing autonomous and robust real-time obstacle detection and avoidance systems. In this regard, Deep Learning (DL) techniques have arisen as a promising alternative for improving real-time obstacle detection and collision avoidance for highly autonomous UAVs. This article reviews the most recent developments in DL Unmanned Aerial Systems (UASs) and provides a detailed explanation of the main DL techniques. Moreover, the latest DL-UAV communication architectures are studied and their most common hardware is analyzed. Furthermore, this article enumerates the most relevant open challenges for current DL-UAV solutions, thus allowing future researchers to define a roadmap for devising the new generation of affordable autonomous DL-UAV IoT solutions.
    Funding: Xunta de Galicia (ED431C 2016-045; ED431C 2016-047; ED431G/01); Centro Singular de Investigación de Galicia (PC18/01); Agencia Estatal de Investigación de España (TEC2016-75067-C4-1-)

    An indoor positioning system based on ultra wideband measurements for planar cable-driven robots localization during payload analysis

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Tecnológico, Engenharia Eletrônica. This work addresses the development of a complete indoor positioning system for payload analysis in planar cable-driven robots. First, a positioning system is specified by comparing the technologies used in this area. An ultra-wideband system is then proposed, based on the development kit from the company Pozyx, modified for installation on the robot's structure. The two positioning algorithms provided by the manufacturer, TRACKING and UWB ONLY, are analyzed, and a combination of the two is proposed for this application. The performance of the algorithms is evaluated in tests carried out in a laboratory with a 4x4-meter area; the proposed algorithm was selected for the final version, as it produced better results than the others, with the highest probability of errors below 6 cm and a mean error of less than 5 cm at the measured points. Finally, a printed circuit board and a mechanical structure are developed so that the system is complete and ready to be installed on the robot.
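The Pozyx kit's positioning algorithms are proprietary, but the underlying idea of computing a planar position from UWB ranges to fixed anchors can be sketched as a linearized least-squares multilateration. The anchor layout, function name, and test point below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 2-D position from UWB ranges to known anchor positions.

    Subtracting the first anchor's range equation from the others cancels
    the quadratic terms, leaving a small linear least-squares problem.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # For each anchor i > 0:
    #   2(xi - x0) x + 2(yi - y0) y = r0^2 - ri^2 + xi^2 + yi^2 - x0^2 - y0^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four anchors at the corners of a 4 m x 4 m area, as in the test setup.
anchors = [(0, 0), (4, 0), (4, 4), (0, 4)]
true_pos = np.array([1.5, 2.0])
ranges = [np.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
print(multilaterate(anchors, ranges))  # recovers [1.5, 2.0] for noiseless ranges
```

With real UWB ranges the system is overdetermined and noisy, which is exactly why the least-squares formulation (rather than solving three equations exactly) is the usual choice.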

    MODELING OF INNOVATIVE LIGHTER-THAN-AIR UAV FOR LOGISTICS, SURVEILLANCE AND RESCUE OPERATIONS

    An unmanned aerial vehicle (UAV) is an aircraft that can operate without pilots on board, either through remote control or automated systems. The first part of the dissertation provides an overview of the various types of UAVs and their design features. The second section delves into specific experiences using UAVs as part of an automated monitoring system to identify potential problems, such as pipeline leaks or equipment damage, by conducting airborne surveys. Lighter-than-air UAVs, such as airships, can be used for various applications, from aerial photography and terrain surveying to security monitoring, weather observation, and surveillance. The third part covers the applications of UAVs in assisting search and rescue operations in disaster situations and in transporting natural gas. Using PowerSim software, a model of airship behaviour was created to analyze the sprint-and-drift concept and to study methods of increasing the operational time of airships while lowering environmental impact compared to a constantly running engine. The analysis provided a reliable probability of finding the victim during patrolling operations, although it did not account for victim behaviour. The study also showed that airships may serve as a viable alternative to pipeline transportation for natural gas. The technology has the potential to revolutionize natural gas transportation, optimizing efficiency and reducing environmental impact. Additionally, airships have a unique advantage in accessing remote and otherwise inaccessible areas, providing significant benefits in the energy sector. The technology was found to be effective in specific scenarios, and continued study is worthwhile for its positive impact on society and the environment.

    Clock and Power-Induced Bias Correction for UWB Time-of-Flight Measurements

    Ultra-Wide Band (UWB) communication systems can be used to design low-cost, power-efficient, and precise navigation systems for mobile robots by measuring the Time of Flight (ToF) of messages traveling between on-board UWB transceivers to infer their locations. Theoretically, decimeter-level positioning accuracy or better should be achievable, at least in benign propagation environments where Line-of-Sight (LoS) between the transceivers can be maintained. Yet, in practice, even in such favorable conditions, one often observes significant systematic errors (bias) in the ToF measurements, depending, for example, on the hardware configuration and relative poses between robots. This letter proposes a ToF error model that includes a standard transceiver clock offset term and an additional term that varies with the received signal power (RxP). We show experimentally that, after fine correction of the clock offset term using clock skew measurements available on modern UWB hardware, much of the remaining pose-dependent error in LoS measurements can be captured by the (appropriately defined) RxP-dependent term. This leads us to propose a simple bias compensation scheme that only requires on-board measurements (clock skew and RxP) to remove most of the observed bias in LoS ToF measurements and reliably achieve cm-level ranging accuracy. Because the calibrated ToF bias model does not depend on any extrinsic information such as receiver distances or poses, it can be applied before any additional error correction scheme that requires more information about the robots and their environment.
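The letter's exact bias model is not reproduced here, but the general shape of the proposed scheme (subtract a calibrated, RxP-dependent bias from clock-corrected ranges) can be sketched as follows. The calibration samples and the polynomial form are invented for illustration only:

```python
import numpy as np

# Hypothetical calibration data: received signal power (dBm) vs. the
# ranging bias (m) observed against ground truth. Values are made up.
rxp_dbm = np.array([-95.0, -90.0, -85.0, -80.0, -77.0])
bias_m = np.array([0.18, 0.12, 0.07, 0.03, 0.01])

# Fit a low-order polynomial bias(RxP); the letter's actual model may differ.
coeffs = np.polyfit(rxp_dbm, bias_m, deg=2)

def correct_range(raw_range_m, rxp):
    """Subtract the fitted RxP-dependent bias from a clock-corrected range."""
    return raw_range_m - np.polyval(coeffs, rxp)

# A raw 5.12 m reading at -85 dBm is corrected downward by the fitted bias.
print(correct_range(5.12, -85.0))
```

The key property the abstract emphasizes carries over: the correction uses only on-board quantities (RxP here; clock skew is assumed already compensated upstream), so it needs no knowledge of the robots' poses.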

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or in teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are manifold. Significantly higher degrees of situational awareness and understanding of the environment can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Sub-T competition. This thesis delves into the wide topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management, and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework for collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner.
    I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new algorithms for formation control that require zero to minimal communication, provided a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning algorithms and UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.

    An open-source autopilot and bio-inspired source localisation strategies for miniature blimps

    An Uncrewed Aerial Vehicle (UAV) is an airborne vehicle with no people on board, controlled either remotely via radio signals or through autonomous capability. This thesis highlights the feasibility of using a bio-inspired miniature lighter-than-air UAV for indoor applications. While multicopters are the most widely used type of UAV, the smaller multicopters used indoors have short flight times and are fragile, making them vulnerable to collisions. For tasks such as gas source localisation, where the agent would be deployed to detect a gas plume, the amount of air disturbance they create is a disadvantage. Miniature blimps are another type of UAV that are better suited to indoor applications due to their significantly higher collision tolerance. This thesis focuses on the development of a bio-inspired miniature blimp, called FishBlimp. A blimp generally creates significantly less disturbance to the airflow because it does not have to support its own weight, which also usually enables much longer flight times. Using fins instead of propellers for propulsion further reduces the air disturbance, as the air velocity is lower. FishBlimp has four fins attached in different orientations along the perimeter of a helium-filled spherical envelope, enabling it to move along the cardinal axes and yaw. Support for this new vehicle type was added to the open-source flight control firmware ArduPilot. Manual control and autonomous functions were developed for this platform to enable position hold and velocity control modes, implemented using a cascaded PID controller. Flight tests revealed that FishBlimp achieved position control with a maximum overshoot of about 0.28 m and a maximum flight speed of 0.3 m/s. FishBlimp was then applied to source localisation, first as a single agent seeking a plume source using a modified Cast & Surge algorithm.
    FishBlimp was also developed in simulation to perform source localisation with multiple blimps using a Particle Swarm Optimisation (PSO) algorithm, enabling them to work cooperatively to reduce the time taken to find the source. This shows the potential of a platform like FishBlimp to carry out successful indoor source localisation missions.
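The cascaded PID structure mentioned above can be sketched as an outer position loop that produces a velocity setpoint for an inner velocity loop. The gains and function below are illustrative assumptions, not FishBlimp's firmware; only the 0.3 m/s limit echoes the reported maximum flight speed:

```python
class PID:
    """Minimal PID controller with a clamped integral term."""
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral = max(-self.i_limit,
                            min(self.i_limit, self.integral + error * dt))
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Cascade: outer loop maps position error to a velocity setpoint,
# inner loop maps velocity error to an actuator command. Gains invented.
pos_pid = PID(kp=0.8, ki=0.0, kd=0.1)
vel_pid = PID(kp=1.5, ki=0.2, kd=0.0)

def step(target_pos, pos, vel, dt):
    vel_sp = pos_pid.update(target_pos - pos, dt)
    vel_sp = max(-0.3, min(0.3, vel_sp))   # respect the 0.3 m/s speed limit
    return vel_pid.update(vel_sp - vel, dt)
```

Clamping the outer loop's output is what keeps the commanded speed within the platform's envelope regardless of how large the position error is, which is one common reason for choosing a cascade over a single position PID.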

    NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge

    This paper presents and discusses algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
    Peer reviewed. Postprint (published version). Authors: Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J.

    Collaborative Multi-Robot Search and Rescue: Planning, Coordination, Perception, and Active Vision

    Search and rescue (SAR) operations can benefit significantly from the support of autonomous or teleoperated robots and multi-robot systems. These can aid in mapping and situational assessment, monitoring and surveillance, establishing communication networks, or searching for victims. This paper provides a review of multi-robot systems supporting SAR operations, with system-level considerations and a focus on the algorithmic perspectives of multi-robot coordination and perception. This is, to the best of our knowledge, the first survey paper to cover (i) heterogeneous SAR robots in different environments and (ii) active perception in multi-robot systems, while (iii) giving two complementary points of view from the multi-agent perception and control perspectives. We also discuss the most significant open research questions: shared autonomy, sim-to-real transferability of existing methods, awareness of victims' conditions, coordination and interoperability in heterogeneous multi-robot systems, and active perception. The different topics in the survey are put in the context of the challenges and constraints that various types of robots (ground, aerial, surface, or underwater) encounter in different SAR environments (maritime, urban, wilderness, or other post-disaster scenarios). The objective of this survey is to serve as an entry point to the various aspects of multi-robot SAR systems for researchers in both the machine learning and control fields by giving a global overview of the main approaches being taken in the SAR robotics area.