4,435 research outputs found

    SkiMap: An Efficient Mapping Framework for Robot Navigation

    We present a novel mapping framework for robot navigation which features a multi-level querying system capable of rapidly producing representations as diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These are inherently embedded into a memory- and time-efficient core data structure organized as a Tree of SkipLists. Compared to the well-known Octree representation, our approach exhibits better time efficiency, thanks to its simple and highly parallelizable computational structure, and a similar memory footprint when mapping large workspaces. Uniquely within the realm of mapping for robot navigation, our framework supports real-time erosion and re-integration of measurements upon reception of optimized poses from the sensor tracker, so as to continuously improve the accuracy of the map. Comment: Accepted by the International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version; the final published version may be slightly different.
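    To make the data-structure idea concrete, the following is a minimal Python sketch, not the authors' implementation: nested ordered maps keyed by integer voxel indices stand in for the Tree of SkipLists (plain dicts are used here for brevity where SkiMap would use skip lists), and the 2.5D height map, the 2D occupancy grid, and the erosion/re-integration of measurements are all derived from the same 3D structure.

    ```python
    # Illustrative sketch only: nested maps keyed by integer voxel indices,
    # x -> y -> z -> hit count. SkiMap keeps each level in a skip list; a
    # plain dict is substituted here to keep the example short.

    class VoxelGrid:
        def __init__(self, resolution=0.05):
            self.resolution = resolution      # metres per voxel
            self.root = {}                    # x -> {y -> {z -> hit count}}

        def _index(self, coord):
            return int(coord // self.resolution)

        def integrate(self, x, y, z, weight=1):
            ix, iy, iz = map(self._index, (x, y, z))
            column = self.root.setdefault(ix, {}).setdefault(iy, {})
            column[iz] = column.get(iz, 0) + weight

        def erode(self, x, y, z, weight=1):
            # Erosion / re-integration: a measurement can be subtracted again
            # once an optimized pose arrives, keeping the map consistent.
            ix, iy, iz = map(self._index, (x, y, z))
            column = self.root.get(ix, {}).get(iy, {})
            if iz in column:
                column[iz] -= weight
                if column[iz] <= 0:
                    del column[iz]

        def height_map(self):
            # 2.5D view: highest occupied voxel per (x, y) column.
            return {(ix, iy): max(zs) * self.resolution
                    for ix, ys in self.root.items()
                    for iy, zs in ys.items() if zs}

        def occupancy_grid(self):
            # 2D view: a column is occupied if any voxel in it has hits.
            return {(ix, iy) for ix, ys in self.root.items()
                    for iy, zs in ys.items() if zs}
    ```

    The abstract attributes the speed advantage over Octrees to this simple, highly parallelizable structure (per-column updates are independent); the sketch above only mirrors the query interface, not that parallelism.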

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced yet relevant overview of the ground and of the robots and human rescue workers operating therein.

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors and predicts where the mobile robot can find buildings and potentially drivable ground.
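    As a rough illustration of one way such fusion could work, assuming the robot's GPS pose lets 'building' cells of the ground-level semantic map be projected into aerial-image pixel coordinates and used as segmentation seeds (the region-growing criterion, tolerance, and names below are assumptions for illustration, not the paper's method):

    ```python
    import numpy as np
    from collections import deque

    def grow_building_regions(aerial, seeds, tol=15.0):
        """Toy region growing: expand from geo-referenced seed pixels that the
        ground-level semantic map labelled 'building', adding 4-connected
        neighbours whose intensity stays close to the region's running mean."""
        h, w = aerial.shape
        labels = np.zeros((h, w), dtype=bool)
        for r0, c0 in seeds:
            queue = deque([(r0, c0)])
            mean = float(aerial[r0, c0])
            count = 1
            while queue:
                r, c = queue.popleft()
                if not (0 <= r < h and 0 <= c < w) or labels[r, c]:
                    continue
                if abs(float(aerial[r, c]) - mean) > tol:
                    continue
                labels[r, c] = True
                count += 1
                mean += (float(aerial[r, c]) - mean) / count
                queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return labels
    ```

    The point of the sketch is only the direction of information flow: ground-level semantics constrain the aerial segmentation, which in turn extends the prediction beyond the robot's sensing range.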

    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and show how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight into hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments. Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.

    UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether

    This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether-attachment device. The two robots are connected with a tether, allowing the UAV to anchor the tether to a structure located at the top of steep terrain that is impossible for the UGV to reach. This enhances the poor traversability of the UGV, not only by providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual-inertial navigation with 3D voxel mapping and obstacle-avoidance planning. The UGV makes use of the voxel map and generates an elevation map to execute path planning based on a traversability analysis. Furthermore, we compared the pros and cons of possible methods for tether anchoring from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy in an experiment. Finally, the feasibility and capability of our proposed system were demonstrated by an autonomous mission experiment in the field with an obstacle and a cliff. Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
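    A rough sketch of the UGV-side chain described here (occupied voxels reduced to an elevation map, then a slope-based traversability check); the column-maximum reduction and the slope threshold are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def elevation_map(occupied_voxels, grid_shape, resolution):
        """Reduce a set of occupied voxel indices (ix, iy, iz) to a 2.5D
        elevation map: the height of the topmost voxel in each (x, y) column."""
        elev = np.full(grid_shape, np.nan)
        for ix, iy, iz in occupied_voxels:
            z = iz * resolution
            if np.isnan(elev[ix, iy]) or z > elev[ix, iy]:
                elev[ix, iy] = z
        return elev

    def traversable(elev, resolution, max_slope_deg=30.0):
        """Mark a cell traversable if the height step to every 4-connected
        neighbour stays below a slope threshold (illustrative criterion only)."""
        max_step = np.tan(np.radians(max_slope_deg)) * resolution
        h, w = elev.shape
        ok = np.zeros((h, w), dtype=bool)
        for r in range(h):
            for c in range(w):
                if np.isnan(elev[r, c]):
                    continue
                steps = [abs(elev[r, c] - elev[rr, cc])
                         for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                         if 0 <= rr < h and 0 <= cc < w and not np.isnan(elev[rr, cc])]
                ok[r, c] = all(s <= max_step for s in steps)
        return ok
    ```

    In the paper's setting, cells that fail such a check on a cliff face are exactly where the tether (anchored by the UAV above) would let the UGV proceed anyway.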