
    Multi-robot active perception for fast and efficient scene reconstruction.

    The efficiency of path-planning in robot navigation is crucial in tasks such as search-and-rescue and disaster surveying, and even more so for multi-rotor aerial robots due to their limited battery and flight time. In this spirit, this Master's thesis proposes an efficient, hierarchical planner to achieve comprehensive visual coverage of large-scale outdoor scenarios with small drones. Following an initial reconnaissance flight, a coarse map of the scene is built in real-time. Then, regions of the map that were not appropriately observed are identified and grouped by a novel perception-aware clustering process that enables the generation of continuous trajectories (sweeps) to cover them efficiently. Thanks to this partitioning of the map into a set of tasks, we can generalize the planning to an arbitrary number of drones and perform a well-balanced workload distribution among them. We compare our approach against a state-of-the-art exploration method and show the advantages of our pipeline in terms of efficiency for obtaining coverage in large environments.
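
    As a minimal, illustrative sketch of the perception-aware clustering step described above (assuming a simple proximity-plus-viewing-direction grouping criterion, which is not necessarily the thesis' actual one), under-observed map cells could be grouped greedily as follows:

        import numpy as np

        def cluster_underobserved(cells, normals, pos_thresh=5.0, ang_thresh=0.5):
            """Greedily group under-observed map cells into clusters coverable
            by a single continuous sweep: a cell joins a cluster only if it is
            close in space AND shares a similar preferred viewing direction.
            Illustrative criterion only; thresholds are assumptions."""
            clusters = []
            for cell, normal in zip(cells, normals):
                normal = normal / np.linalg.norm(normal)
                placed = False
                for c in clusters:
                    centroid = np.mean(c["cells"], axis=0)
                    mean_n = c["normal"] / np.linalg.norm(c["normal"])
                    ang = np.arccos(np.clip(np.dot(normal, mean_n), -1.0, 1.0))
                    if np.linalg.norm(cell - centroid) < pos_thresh and ang < ang_thresh:
                        c["cells"].append(cell)
                        c["normal"] = c["normal"] + normal
                        placed = True
                        break
                if not placed:
                    clusters.append({"cells": [cell], "normal": normal.copy()})
            return clusters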

    Perceptual Factors for Environmental Modeling in Robotic Active Perception

    Accurately assessing the potential value of new sensor observations is a critical aspect of planning for active perception. This task is particularly challenging when reasoning about high-level scene understanding using measurements from vision-based neural networks. Due to appearance-based reasoning, the measurements are susceptible to several environmental effects such as the presence of occluders, variations in lighting conditions, and redundancy of information due to similarity in appearance between nearby viewpoints. To address this, we propose a new active perception framework incorporating an arbitrary number of perceptual effects in planning and fusion. Our method models the correlation with the environment by a set of general functions termed perceptual factors to construct a perceptual map, which quantifies the aggregated influence of the environment on candidate viewpoints. This information is seamlessly incorporated into the planning and fusion processes by adjusting the uncertainty associated with measurements to weigh their contributions. We evaluate our perceptual maps in a simulated environment that reproduces environmental conditions common in robotics applications. Our results show that, by accounting for environmental effects within our perceptual maps, we improve state estimation by correctly selecting viewpoints and by correctly modeling the measurement noise when it is affected by environmental factors. We furthermore deploy our approach on a ground robot to showcase its applicability for real-world active perception missions.
    Comment: 7 pages, 9 figures, under review for IEEE ICRA 202
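
    As a rough illustration of the uncertainty-weighting idea described above (not the paper's actual formulation), the sketch below aggregates per-viewpoint perceptual factors into a scalar that inflates each measurement's variance before a standard inverse-variance fusion; the factor values, aggregation by product, and Gaussian fusion rule are all assumptions:

        import numpy as np

        def fused_estimate(measurements, base_sigma, factors):
            """Fuse scalar measurements whose noise is inflated by perceptual
            factors. Each factor in [0, 1] scores how favourable a viewpoint's
            conditions are (occlusion, lighting, ...); small aggregated factors
            mean larger variance, so poor viewpoints contribute less."""
            z = np.asarray(measurements, dtype=float)
            # Aggregate each viewpoint's factors, here simply as a product.
            agg = np.prod(np.asarray(factors, dtype=float), axis=1)
            # Inflate variance for unfavourable viewpoints.
            var = (base_sigma ** 2) / np.clip(agg, 1e-6, 1.0)
            w = 1.0 / var
            return np.sum(w * z) / np.sum(w)  # inverse-variance weighted mean

        # Example: the third viewpoint is heavily occluded, so its outlier
        # measurement barely moves the estimate.
        print(fused_estimate([1.0, 1.2, 3.0], 0.1,
                             [[0.9, 0.8], [0.9, 0.9], [0.1, 0.2]]))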

    Robust Fusion for Bayesian Semantic Mapping

    The integration of semantic information in a map allows robots to better understand their environment and make high-level decisions. In the last few years, neural networks have shown enormous progress in their perception capabilities. However, when fusing multiple observations from a neural network in a semantic map, the network's inherent overconfidence on unknown data gives too much weight to outliers and decreases the robustness of the resulting map. In this work, we propose a novel robust fusion method to combine multiple Bayesian semantic predictions. Our method uses the uncertainty estimation provided by a Bayesian neural network to calibrate the way in which the measurements are fused. This is done by regularizing the observations to mitigate the problem of overconfident outlier predictions and by using the epistemic uncertainty to weigh their influence in the fusion, resulting in a different formulation of the probability distributions. We validate our robust fusion strategy by performing experiments on photo-realistic simulated environments and real scenes. In both cases, we use a network trained on different data to expose the model to varying data distributions. The results show that considering the model's uncertainty and regularizing the probability distributions of the observations result in better semantic segmentation performance and more robustness to outliers, compared with other methods.
    Comment: 7 pages, 7 figures, under review at IEEE IROS 202
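
    A rough sketch of the kind of fusion described above; the uniform-mixture regularization, the tempering of each observation by its epistemic uncertainty, and the log-linear update are assumptions, not the paper's exact formulation:

        import numpy as np

        def robust_semantic_fusion(prior, obs_probs, epistemic, eps=0.05):
            """Fuse per-class probability vectors into a cell's posterior.
            Each observation is (1) regularized toward a uniform distribution
            to soften overconfident outlier predictions, and (2) tempered by
            its epistemic uncertainty u in [0, 1], so uncertain observations
            contribute less. Illustrative weighting scheme only."""
            n_classes = len(prior)
            log_post = np.log(np.asarray(prior, dtype=float))
            for p, u in zip(obs_probs, epistemic):
                p = np.asarray(p, dtype=float)
                p = (1.0 - eps) * p + eps / n_classes   # regularization
                log_post += (1.0 - u) * np.log(p)       # tempered update
            post = np.exp(log_post - log_post.max())
            return post / post.sum()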

    Clonal chromosomal mosaicism and loss of chromosome Y in elderly men increase vulnerability for SARS-CoV-2

    The COVID-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), had an estimated overall case fatality ratio of 1.38% (pre-vaccination), 53% higher in males and increasing exponentially with age. Among 9578 individuals diagnosed with COVID-19 in the SCOURGE study, we found 133 cases (1.42%) with detectable clonal mosaicism for chromosome alterations (mCA) and 226 males (5.08%) with acquired loss of chromosome Y (LOY). Individuals with clonal mosaic events (mCA and/or LOY) showed a 54% increase in the risk of COVID-19 lethality. LOY is associated with transcriptomic biomarkers of immune dysfunction, pro-coagulation activity, and cardiovascular risk. Interferon-induced genes involved in the initial immune response to SARS-CoV-2 are also down-regulated in LOY. Thus, mCA and LOY underlie at least part of the sex-biased severity and mortality of COVID-19 in aging patients. Given its potential therapeutic and prognostic relevance, evaluation of clonal mosaicism should be implemented as a biomarker of COVID-19 severity in elderly people.

    Vision Based Control for Industrial Robots : Research and implementation

    The automation revolution means that many tasks are now performed by robots. The increasing complexity of problems involving robot manipulators requires new approaches and alternatives to solve them. This project comprises research into available software for implementing easy and fast visual servoing tasks with a robot manipulator, focusing on out-of-the-box solutions. The tools found are then applied to implement a solution for controlling an arm from Universal Robots; the task is to follow a moving object on a plane with the robot manipulator. The research compares the most popular software and state-of-the-art alternatives, especially in computer vision and robot control. The implementation aims to be a proof of concept of a system divided by functionality (computer vision, path generation, and robot control) in order to allow software modularity and exchangeability. The results show various options for each subsystem to take into consideration. The implementation is completed successfully, showing the effectiveness of the alternatives examined. The chosen software is MATLAB and Simulink for computer vision and trajectory calculation, interfacing with the Robot Operating System (ROS). ROS is used for controlling a UR3 arm through the ros_control and ur_modern_driver packages. Both the research and the implementation present a first approach for further applications and an understanding of current technologies for visual servoing tasks. These alternatives offer easy, fast, and flexible methods to confront complex computer vision and robot control problems.
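
    To illustrate the modular split described above, the sketch below shows a minimal ROS1 control node that consumes object positions from a separate vision module and publishes velocity commands. The topic names ("/object_position", "/ee_velocity_cmd") and the proportional control law are assumptions for illustration; the thesis itself interfaces MATLAB/Simulink with ros_control rather than using a standalone Python node.

        #!/usr/bin/env python
        import rospy
        from geometry_msgs.msg import PointStamped, Twist

        GAIN = 0.5  # proportional gain for the planar tracking law

        class PlanarFollower(object):
            def __init__(self):
                self.pub = rospy.Publisher("/ee_velocity_cmd", Twist, queue_size=1)
                rospy.Subscriber("/object_position", PointStamped, self.on_target)

            def on_target(self, msg):
                # The object position is assumed to be expressed in the
                # end-effector frame, so it doubles as the tracking error.
                cmd = Twist()
                cmd.linear.x = GAIN * msg.point.x
                cmd.linear.y = GAIN * msg.point.y
                self.pub.publish(cmd)

        if __name__ == "__main__":
            rospy.init_node("planar_follower")
            PlanarFollower()
            rospy.spin()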

    Sweep-Your-Map: Efficient Coverage Planning for Aerial Teams in Large-Scale Environments

    The efficiency of path-planning in robot navigation is crucial in tasks such as search-and-rescue and disaster surveying, but this is emphasized even more when considering multi-rotor aerial robots due to the limited battery and flight time. In this spirit, this work proposes an efficient, hierarchical planner to achieve comprehensive visual coverage of large-scale outdoor scenarios for small drones. Following an initial reconnaissance flight, a coarse map of the scene gets built in real-time. Then, regions of the map that were not appropriately observed are identified and grouped by a novel perception-aware clustering process that enables the generation of continuous trajectories (sweeps) to cover them efficiently. Thanks to this partitioning of the map into a set of tasks, we can generalize the planning to an arbitrary number of drones and perform a well-balanced workload distribution among them. We compare our approach against a state-of-the-art method for exploration and show the advantages of our pipeline in terms of efficiency for obtaining coverage in large environments.
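
    The well-balanced workload distribution mentioned above can be illustrated with a standard greedy heuristic. The sketch below assigns sweep tasks by estimated flight cost using longest-processing-time scheduling; this heuristic is an assumption for illustration, not necessarily the paper's documented allocation strategy:

        import heapq

        def balance_sweeps(sweep_costs, n_drones):
            """Assign sweep tasks (given as estimated flight costs) to drones
            so the maximum per-drone workload stays small: longest tasks
            first, each to the currently least-loaded drone."""
            # Min-heap of (accumulated cost, drone index).
            loads = [(0.0, d) for d in range(n_drones)]
            heapq.heapify(loads)
            assignment = {d: [] for d in range(n_drones)}
            for task, cost in sorted(enumerate(sweep_costs), key=lambda t: -t[1]):
                load, d = heapq.heappop(loads)
                assignment[d].append(task)
                heapq.heappush(loads, (load + cost, d))
            return assignment

        # Example: six sweeps of uneven cost spread over two drones.
        print(balance_sweeps([8.0, 5.0, 4.0, 3.0, 3.0, 2.0], 2))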

    The UMA-SAR Dataset: Multimodal data collection from a ground vehicle during outdoor disaster response training exercises

    [The full description of the dataset can be found at: https://www.uma.es/robotics-and-mechatronics/info/124594/sar-datasets/] The UMA-SAR dataset is a collection of multimodal raw data captured from a manned all-terrain vehicle in the course of two realistic outdoor search and rescue (SAR) exercises for actual emergency responders, conducted in Málaga (Spain) in 2018 and 2019. The sensor suite, applicable to unmanned ground vehicles (UGVs), consisted of overlapping visible-light (RGB) and thermal infrared (TIR) forward-looking monocular cameras, a Velodyne HDL-32 three-dimensional (3D) lidar, an inertial measurement unit (IMU), and two global positioning system (GPS) receivers as ground truth. Our mission was to collect a wide range of data from the SAR domain, including persons, vehicles, debris, and SAR activity on unstructured terrain. In particular, four data sequences were collected following closed-loop routes during the exercises, with a total path length of 5.2 km and a total time of 77 min. In addition, we provide three more sequences of the empty site for comparison purposes (an extra 4.9 km and 46 min). Furthermore, the data is offered both in human-readable format and as rosbag files, and two specific software tools are provided for extracting and adapting this dataset to the users' preferences. A review of previously published disaster robotics repositories indicates that this dataset can help fill a gap regarding visual and thermal datasets, and can serve as a research tool for cross-cutting areas such as multispectral image fusion, machine learning for scene understanding, person and object detection, and localization and mapping in unstructured environments. This work was performed in the frame of the project "TRUST-ROB: Towards Resilient UGV and UAV Manipulator Teams for Robotic Search and Rescue Tasks," funded by the Spanish Government (grant number RTI2018-093421-B-I00), and project UMA18-FEDERJA-090, funded by the Andalusian Regional Government (Junta de Andalucía).
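
    Since the dataset is distributed as rosbag files, a minimal way to inspect a sequence is the standard ROS1 rosbag API, as sketched below. The bag file name and topic names are placeholders; check the dataset description page for the actual ones.

        import rosbag

        with rosbag.Bag("umasar_sequence1.bag") as bag:  # hypothetical file name
            # List what the bag actually contains before assuming topic names.
            for topic, info in bag.get_type_and_topic_info().topics.items():
                print(topic, info.msg_type, info.message_count)
            # Iterate over messages of selected topics (names are assumptions).
            for topic, msg, t in bag.read_messages(topics=["/rgb/image_raw"]):
                print(t.to_sec(), topic)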