
    Multi-Sensor Fusion for the Mason Framework

    Nowadays, mobile robots are equipped with combinations of complementary sensor systems that enable the robot to perceive its surrounding environment. These sensor systems can, for example, be stereo vision systems, RGB-D cameras, and 3D or spinning 2D laser scanners, each providing different capabilities for environment sensing. A sufficient world model estimate is crucial for the robot's ability to perform complex tasks such as footstep planning, collision avoidance, path planning, or manipulation. In this thesis, the sensor fusion capability of the new sensor fusion framework Mason is developed. Mason is designed to be deployed on multi-sensor systems and is capable of fusing measurements from an arbitrary number of sensors in order to provide accurate and dense world models. To gain flexibility, the framework supports loading shared libraries at runtime to add functionality dynamically. For sensor fusion, the spatially hashed truncated signed distance function (TSDF) was chosen, as it stores only the regions of the environment near surfaces and therefore reduces computation and memory consumption. The presented work is based on the OpenCHISEL library, which was improved and integrated into Mason. The thesis investigates how to combine multiple local TSDF estimates from different sensors into a global TSDF representation. Afterwards, we demonstrate how different world model representations, for example an elevation map, are created from the TSDF data; the approach is tested in simulation with the multi-sensor head of the THOR-MANG humanoid robot.
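
    The core of the fusion step amounts to a per-voxel weighted average over the local TSDF volumes. The following is a minimal Python sketch of that general idea, not Mason's or OpenCHISEL's actual API; the spatial hash is modelled as a plain dictionary, and the function names and truncation distance are assumptions.

        from collections import defaultdict

        TRUNCATION = 0.3  # truncation distance in metres (assumed value)

        def fuse_tsdf_volumes(local_volumes):
            """Combine local TSDF volumes {voxel_key: (sdf, weight)} into a global one."""
            acc = defaultdict(lambda: [0.0, 0.0])  # key -> [weighted sdf sum, weight sum]
            for volume in local_volumes:
                for key, (sdf, weight) in volume.items():
                    acc[key][0] += weight * sdf
                    acc[key][1] += weight
            global_volume = {}
            for key, (sdf_sum, weight_sum) in acc.items():
                if weight_sum > 0.0:
                    # Weighted average, clamped back into the truncation band.
                    sdf = max(-TRUNCATION, min(TRUNCATION, sdf_sum / weight_sum))
                    global_volume[key] = (sdf, weight_sum)
            return global_volume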

    Autonomous Assistance for Versatile Grasping with Rescue Robots

    The deployment of mobile robots in urban search and rescue (USAR) scenarios often requires manipulation abilities, for example, for clearing debris or opening a door. Conventional teleoperated control of mobile manipulator arms with many degrees of freedom in unknown and unstructured environments is highly challenging and error-prone. Thus, flexible semi-autonomous manipulation capabilities promise valuable support for the operator and may also prevent failures during missions. However, most existing approaches are not flexible enough, as they either assume a priori known objects or object classes or require manual selection of grasp poses. In this paper, an approach is presented that combines a segmented 3D model of the scene with grasp pose detection. It enables grasping arbitrary rigid objects based on a geometric segmentation approach that divides the scene into objects. Antipodal grasp candidates sampled by the grasp pose detection are ranked to ensure a robust grasp. The human remotely operating the robot controls the grasping process using two short interactions in the user interface. Our real-robot experiments demonstrate the capability to grasp various objects in cluttered environments.
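
    The abstract does not spell out the ranking criterion; as a hedged illustration of how antipodal grasp candidates are commonly scored, the sketch below checks the two contact normals against a friction cone around the grasp axis. All names and the friction coefficient are assumptions, not the paper's method.

        import numpy as np

        def antipodal_score(p1, n1, p2, n2, mu=0.4):
            """Return a score in [0, 1]; 0 if the grasp violates the friction cone.

            p1, p2: contact points; n1, n2: inward-pointing unit surface normals.
            mu: assumed friction coefficient defining the cone half-angle.
            """
            axis = p2 - p1
            axis = axis / np.linalg.norm(axis)
            cone = np.cos(np.arctan(mu))   # cosine of the friction-cone half-angle
            c1 = np.dot(n1, axis)          # inward normal at p1 should point along +axis
            c2 = np.dot(n2, -axis)         # inward normal at p2 should point along -axis
            if c1 < cone or c2 < cone:
                return 0.0
            return 0.5 * (c1 + c2)         # higher = better-aligned contacts

    Sampled candidates could then be ranked with, e.g., max(candidates, key=lambda g: antipodal_score(*g)).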

    Pose Prediction for Mobile Ground Robots Evaluation Dataset

    This dataset provides ground-truth robot trajectories in rough terrain for the evaluation of pose prediction approaches for mobile ground robots. It is composed of six datasets in four different scenarios of the RoboCup Rescue Robot League (RRL):

    * Continuous Ramps: series of double ramps
    * Curb: three 10 × 10 cm bars on flat ground
    * Hurdles: steps of varying heights
    * Elevated Ramps: boxes of varying heights with sloped tops

    Four datasets were created in the Gazebo simulator and two were recorded on a real robot platform in the DRZ Living Lab. Each dataset contains the ground-truth robot poses along a path through the arena. In Gazebo, the ground-truth poses are provided by the simulator; in the DRZ Living Lab, a high-performance Qualisys optical motion capture system was used. The data was recorded using the tracked robot "Asterix", a highly mobile platform with main tracks, coupled flippers at the front and back, and a chassis footprint of 72 × 52 cm. The data is provided as bag files for ROS and is intended to be used with the package hector_pose_prediction_benchmark. This dataset is published as part of the publication: Oehler, Martin, et al. "Accurate Pose Prediction on Signed Distance Fields for Mobile Ground Robots in Rough Terrain." 2023 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2023. See the provided README for further information.
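
    A minimal sketch for inspecting one of the bag files with the standard ROS 1 rosbag API is shown below; the file name, topic name, and message type (geometry_msgs/PoseStamped) are assumptions for illustration, so consult the dataset README for the actual ones.

        import rosbag

        # Hypothetical file and topic names; the real ones are listed in the README.
        with rosbag.Bag('curb_real.bag') as bag:
            for topic, msg, t in bag.read_messages(topics=['/ground_truth/pose']):
                # Assuming geometry_msgs/PoseStamped messages on this topic.
                p = msg.pose.position
                print('%.3f: x=%.3f y=%.3f z=%.3f' % (t.to_sec(), p.x, p.y, p.z))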

    HectorGrapher: Continuous-time Lidar SLAM with Multi-resolution Signed Distance Function Registration for Challenging Terrain

    For deployment in previously unknown, unstructured, and GPS-denied environments, autonomous mobile rescue robots need to localize themselves and create a map of the environment using a simultaneous localization and mapping (SLAM) approach. Continuous-time SLAM approaches represent the pose as a time-continuous estimate, which provides high accuracy and allows correcting for distortions induced by motion during scan capture. To enable robust and accurate real-time SLAM in challenging terrain, we propose HectorGrapher, which achieves accurate localization through continuous-time pose estimation and robust scan registration based on multi-resolution signed distance functions. We evaluate the method on multiple publicly available real-world datasets, as well as a dataset from the RoboCup 2021 Rescue League, where the proposed method won the Best-in-Class "Exploration and Mapping" Award.
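
    To make the continuous-time idea concrete: a common minimal approximation interpolates the pose between scan start and scan end and undistorts each point with the pose at its own capture time. The sketch below illustrates only this general principle, not HectorGrapher's implementation; all names are assumptions.

        import numpy as np
        from scipy.spatial.transform import Rotation, Slerp

        def deskew_scan(points, stamps, start, end, t0, t1):
            """points: (N, 3) in the sensor frame; stamps: (N,) capture times.
            start/end: (translation (3,), scipy Rotation) world-from-sensor
            poses at scan start time t0 and scan end time t1."""
            trans0, rot0 = start
            trans1, rot1 = end
            slerp = Slerp([t0, t1], Rotation.concatenate([rot0, rot1]))
            alpha = (stamps - t0) / (t1 - t0)
            deskewed = np.empty_like(points)
            for i, (p, a, s) in enumerate(zip(points, alpha, stamps)):
                rot = slerp(s)                           # interpolated rotation
                trans = (1.0 - a) * trans0 + a * trans1  # interpolated translation
                deskewed[i] = rot.apply(p) + trans       # point in the world frame
            return deskewed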

    Robust Multisensor Fusion for Reliable Mapping and Navigation in Degraded Visual Conditions

    We address the problem of robust simultaneous mapping and localization in degraded visual conditions using low-cost off-the-shelf radars. Current methods often use high-end radar sensors or are tightly coupled to specific sensors, limiting their applicability to new robots. In contrast, we present a sensor-agnostic processing pipeline based on a novel forward sensor model for accurate updates of signed distance function-based maps, combined with robust optimization techniques to obtain robust and accurate pose estimates. Our evaluation demonstrates accurate mapping and pose estimation in indoor environments under poor visual conditions, and higher accuracy compared to existing methods on publicly available benchmark data.
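
    As a rough illustration of how a signed distance map can be updated from a single range return, the sketch below walks along one beam and blends a truncated signed distance into a hashed grid, weighted by a per-measurement confidence. This is a generic scheme for illustration, not the paper's forward sensor model; the voxel size and truncation band are assumed values.

        import numpy as np

        VOXEL = 0.1   # voxel edge length in metres (assumed)
        TRUNC = 0.5   # truncation band around the measured surface (assumed)

        def update_beam(grid, origin, direction, measured_range, confidence=1.0):
            """grid: dict mapping a voxel index tuple to (sdf, weight)."""
            direction = direction / np.linalg.norm(direction)
            # Sample the ray from the sensor up to slightly behind the return.
            for s in np.arange(0.0, measured_range + TRUNC, VOXEL):
                point = origin + s * direction
                key = tuple(np.floor(point / VOXEL).astype(int))
                sdf = np.clip(measured_range - s, -TRUNC, TRUNC)  # signed distance
                old_sdf, old_w = grid.get(key, (0.0, 0.0))
                w = old_w + confidence
                grid[key] = ((old_sdf * old_w + sdf * confidence) / w, w)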

    Development of an Autonomy-Focused, Highly Mobile Ground Robot System for Disaster Response

    Mobile rescue robots allow human operators to handle tasks in high-risk environments from a safe distance. Because the complex and previously unknown deployment scenarios involve unstructured environments, the currently common teleoperation of such robot systems places a high cognitive load on the robot operator, which quickly leads to fatigue. Intelligent autonomous assistance functions can relieve the operators, reducing the likelihood of operating errors and increasing the efficiency of robot deployment. However, these innovative assistance functions require a mechatronic design whose hardware and software requirements are closely coordinated and implemented to form an effective overall system. The development of a highly mobile, autonomy-focused ground robot with modular sensor payloads provides the operator with comprehensive situational awareness as well as support for navigation and manipulation. The evaluation of the overall system and of individual components analyzes the fulfilment of the requirements catalogue and thus demonstrates the system's suitability for (semi-)autonomous rescue robotics missions.

    IoT-Based Activity Recognition for Process Assistance in Human-Robot Disaster Response

    Mobile robots such as drones or ground vehicles can be a valuable addition to emergency response teams because they reduce the risk and the burden for human team members. However, the need to manage and coordinate human-robot team operations during ongoing missions adds another dimension to an already complex and stressful situation. Business process management (BPM) approaches can help to visualize and document the disaster response processes underlying a mission. In this paper, we show how data from a ground robot's reconnaissance run can be used to provide process assistance to the officers in charge. By automatically recognizing executed activities and structuring them as an ad-hoc process instance, we are able to document the executed process and provide real-time information about the mission status. The resulting mission-progress process model can be used for additional services, such as officer training or mission documentation. Our approach is implemented as a prototype and demonstrated using data from an ongoing research project on rescue robotics.
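
    A toy sketch of the recognition-and-structuring step: classify telemetry events into activity labels and collapse consecutive repeats into an ordered, ad-hoc process trace. The rules, event fields, and activity names below are invented for illustration and are not the prototype's implementation.

        def recognise(event):
            """Map a telemetry event to an activity label (hypothetical rules)."""
            if event.get('speed', 0.0) > 0.1:
                return 'drive_to_area'
            if event.get('camera_active'):
                return 'visual_inspection'
            if event.get('gas_sensor_active'):
                return 'hazard_measurement'
            return 'idle'

        def build_process_trace(events):
            """Collapse consecutive identical activities into one trace step."""
            trace = []
            for event in events:
                activity = recognise(event)
                if not trace or trace[-1] != activity:
                    trace.append(activity)
            return trace

        # e.g. build_process_trace(stream) -> ['drive_to_area', 'visual_inspection', ...]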