Planning Hybrid Driving-Stepping Locomotion on Multiple Levels of Abstraction
Navigating in search and rescue environments is challenging, since a variety of terrains have to be considered. Hybrid driving-stepping locomotion, as provided by our robot Momaro, is a promising approach. Like other locomotion methods, it incorporates many degrees of freedom, offering high flexibility but making planning computationally expensive for larger environments.
We propose a navigation planning method that unifies different levels of representation in a single planner. In the vicinity of the robot, it provides plans with a fine resolution and a high robot state dimensionality. With increasing distance from the robot, plans become coarser and the robot state dimensionality decreases. We compensate for this loss of information by enriching coarser representations with additional semantics. Experiments show that the proposed planner provides plans for large, challenging scenarios in feasible time.

Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 2018
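The multi-resolution idea described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the level boundaries, cell sizes, and state descriptions are invented for the example.

```python
def representation_level(distance_m):
    """Pick a planning representation from the distance to the robot.

    Near the robot: fine grid cells and a high-dimensional robot state.
    Farther away: coarser cells and a lower-dimensional state enriched
    with semantic terrain information instead of exact geometry.
    All concrete numbers below are illustrative assumptions.
    """
    levels = [
        # (max distance in m, cell size in m, abstract state description)
        (2.0, 0.025, "base pose + individual foot placements"),
        (4.0, 0.05, "base pose + foot-pair placements"),
        (float("inf"), 0.1, "base pose + semantic terrain class"),
    ]
    for max_dist, cell_size, state in levels:
        if distance_m <= max_dist:
            return {"cell_size": cell_size, "state": state}
```

A planner can then expand each search node with the representation returned for its distance, so that the fine, expensive state space is only searched near the robot.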
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to provide the human supervisor and human interveners with a reduced but relevant overview of the ground, including the robots and human rescue workers therein.
UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether
This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle
(UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a
flying sensor but also as a tether attachment device. Two robots are connected
with a tether, allowing the UAV to anchor the tether to a structure located at the top of steep terrain that is impossible for the UGV to reach. This enhances the poor traversability of the UGV, not only by providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework
for the collaborative navigation and tether attachment in an unknown
environment. The UAV employs visual inertial navigation with 3D voxel mapping
and obstacle avoidance planning. The UGV makes use of the voxel map and
generates an elevation map to execute path planning based on a traversability
analysis. Furthermore, we compare the pros and cons of possible tether-anchoring methods from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy in an experiment. Finally, the feasibility and capability of the proposed system were demonstrated in an autonomous field mission experiment involving an obstacle and a cliff.

Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
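The elevation-map-based traversability analysis mentioned above can be sketched as follows. This is an illustrative assumption rather than the paper's code: a cell is marked traversable if the slope to each of its four neighbours stays below a threshold.

```python
def traversability_mask(elevation, cell_size=0.1, max_slope=0.6):
    """Boolean grid: True where every 4-neighbour slope (rise/run)
    is at most max_slope. Cell size and threshold are assumptions."""
    rows, cols = len(elevation), len(elevation[0])
    ok = [[True] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    rise = abs(elevation[nr][nc] - elevation[r][c])
                    if rise / cell_size > max_slope:
                        ok[r][c] = False
    return ok
```

A path planner can then restrict its search to cells marked True; with the tether attached, steeper cells along the tether direction could additionally be accepted.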
Command and Control Systems for Search and Rescue Robots
The novel application of unmanned systems in the domain of humanitarian Search and Rescue (SAR) operations has created a need to develop specific multi-robot Command and Control (RC2) systems. This societal application of robotics requires human-robot interfaces for controlling a large fleet of heterogeneous robots deployed in multiple domains of operation (ground, aerial and marine). This chapter provides an overview of the Command, Control and Intelligence (C2I) system developed within the scope of Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS). The life cycle of the system begins with a description of the use cases and deployment scenarios developed in collaboration with SAR teams as end-users. This is followed by an illustration of the system design and architecture, the core technologies used in implementing the C2I, and the iterative integration phases with field deployments for evaluating and improving the system. The main subcomponents consist of a central Mission Planning and Coordination System (MPCS), field Robot Command and Control (RC2) subsystems with a portable force-feedback exoskeleton interface for robot arm tele-manipulation, and field mobile devices. The distribution of these C2I subsystems with their communication links for unmanned SAR operations is described in detail. Field demonstrations of the C2I system with SAR personnel assisted by unmanned systems provide an outlook for implementing such systems into mainstream SAR operations in the future.