378 research outputs found

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisors and interveners are provided with a condensed but relevant overview of the ground and of the robots and human rescue workers operating there.

    Information-driven 6D SLAM based on ranging vision

    This paper presents a novel solution for building three-dimensional dense maps in unknown and unstructured environments with reduced computational cost. This is achieved by giving the robot the 'intelligence' to select, out of the steadily collected data, the maximally informative observations to be used in estimating the robot's location and its surroundings. We show that, although evaluating the information gain of each frame introduces an additional computational cost, the overall efficiency is significantly increased by keeping the information matrix compact. The noticeable advantage of this strategy is that the continuously gathered data is not heuristically segmented prior to being input to the filter. Quite the opposite: the scheme lends itself to being statistically optimal and is capable of handling large data sets collected at realistic sampling rates. The strategy is generic to any 3D feature-based simultaneous localization and mapping (SLAM) algorithm in the information form, but in the work presented here it is closely coupled to a proposed novel appearance-based sensory package. The package consists of a conventional camera and a range imager, which provide range, bearing and elevation inputs to visually salient features as commonly used by three-dimensional point-based SLAM, and it is particularly well adapted to lightweight mobile platforms such as those commonly employed for Urban Search and Rescue (USAR), chosen here to demonstrate the strengths of the proposed strategy. ©2008 IEEE
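The frame-selection idea in this abstract can be illustrated with a small sketch. This is an illustrative toy, not the paper's implementation: a 2x2 information-form state, a scalar measurement per frame, and a hand-picked threshold are all assumptions. The information gain of a candidate observation is measured as the change in log-determinant of the information matrix, and only frames whose gain clears the threshold are fused.

```python
import math

def logdet2(m):
    # Log-determinant of a 2x2 matrix given as [[a, b], [c, d]].
    return math.log(m[0][0] * m[1][1] - m[0][1] * m[1][0])

def add_update(y, h, r_inv):
    # Information-form update Y' = Y + H^T R^-1 H for a 2x2 state,
    # a 1x2 observation Jacobian h and scalar inverse noise r_inv.
    return [[y[i][j] + h[i] * r_inv * h[j] for j in range(2)] for i in range(2)]

def information_gain(y, h, r_inv):
    # Expected entropy reduction: 0.5 * (logdet(Y') - logdet(Y)).
    return 0.5 * (logdet2(add_update(y, h, r_inv)) - logdet2(y))

def select_informative(frames, y, r_inv, threshold):
    # Keep only frames whose observation reduces entropy enough;
    # frames is a list of (frame_id, jacobian_row) pairs.
    kept = []
    for frame_id, h in frames:
        if information_gain(y, h, r_inv) >= threshold:
            kept.append(frame_id)
            y = add_update(y, h, r_inv)  # fuse the accepted observation
    return kept, y
```

A second observation along an already well-constrained direction yields a low gain and is discarded, which is what keeps the information matrix compact.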

    Local Generating Map System Using Rviz ROS and Kinect Camera for Rescue Robot Application

    This paper presents a system for generating a 3D model of a room, where room mapping is necessary to determine the actual conditions on site; the model is intended for application on a rescue robot. To address this, the researchers created a 3D room-mapping system that uses a Kinect camera and Rviz on ROS. The camera captures images of the surrounding area, and the results are processed in the ROS system across several nodes and topics, after which the output is sent to and displayed in Rviz. The tests that have been carried out show that the designed system can create a 3D model from the Kinect camera capture using the Rviz functionality in ROS. From this model, every corner of the room can be mapped and modeled in 3D.
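The core geometric step in such a pipeline, turning a depth frame into 3D points for display, can be sketched with the standard pinhole model. The intrinsics fx, fy, cx, cy are assumed calibration values; in a real ROS node the result would typically be published as a sensor_msgs/PointCloud2 message rather than returned as a list.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a depth image (rows of metres; 0 = no return)
    # into 3D points in the camera frame using the pinhole model.
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # skip invalid depth readings
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```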

    SegMap: 3D Segment Mapping using Data-Driven Descriptors

    When performing localization and mapping, working at the level of structure can be advantageous in terms of robustness to environmental changes and differences in illumination. This paper presents SegMap: a map representation solution to the localization and mapping problem based on the extraction of segments in 3D point clouds. In addition to facilitating the computationally intensive task of processing 3D point clouds, working at the level of segments addresses the data compression requirements of real-time single- and multi-robot systems. While current methods extract descriptors for the single task of localization, SegMap leverages a data-driven descriptor in order to extract meaningful features that can also be used for reconstructing a dense 3D map of the environment and for extracting semantic information. This is particularly interesting for navigation tasks and for providing visual feedback to end-users such as robot operators, for example in search and rescue scenarios. These capabilities are demonstrated in multiple urban driving and search and rescue experiments. Our method leads to an increase of 28.3% in area under the ROC curve over the current state of the art using eigenvalue-based features. We also obtain very similar reconstruction capabilities to a model specifically trained for this task. The SegMap implementation will be made available open-source along with easy-to-run demonstrations at www.github.com/ethz-asl/segmap. A video demonstration is available at https://youtu.be/CMk4w4eRobg
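The localization step of a segment-based map, matching query segments to map segments in descriptor space, can be sketched with a brute-force nearest-neighbour search. This is a crude stand-in for SegMap's k-NN retrieval over learned descriptors; the descriptor vectors and distance threshold here are illustrative assumptions.

```python
def match_segments(query, candidates, max_dist):
    # Match each query segment descriptor to its nearest candidate
    # within max_dist (Euclidean); returns (query_id, map_id) pairs.
    matches = []
    for qid, qd in query:
        best, best_d = None, max_dist
        for mid, md in candidates:
            d = sum((a - b) ** 2 for a, b in zip(qd, md)) ** 0.5
            if d <= best_d:
                best, best_d = mid, d
        if best is not None:
            matches.append((qid, best))
    return matches
```

In the full system the resulting candidate matches would be verified geometrically before being used as localization constraints.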

    Microdrone-Based Indoor Mapping with Graph SLAM

    Unmanned aerial vehicles offer a safe and fast approach to the production of three-dimensional spatial data on the surrounding space. In this article, we present a low-cost SLAM-based drone for creating exploration maps of building interiors. The focus is on emergency response mapping in inaccessible or potentially dangerous places. For this purpose, we used a quadcopter microdrone equipped with six laser rangefinders (1D scanners) and an optical sensor for mapping and positioning. The employed SLAM is designed to map indoor spaces with planar structures through graph optimization. It performs loop-closure detection and correction to recognize previously visited places, and to correct the accumulated drift over time. The proposed methodology was validated for several indoor environments. We investigated the performance of our drone against a multilayer LiDAR-carrying macrodrone, a vision-aided navigation helmet, and ground truth obtained with a terrestrial laser scanner. The experimental results indicate that our SLAM system is capable of creating quality exploration maps of small indoor spaces, and handling the loop-closure problem. The accumulated drift without loop closure was on average 1.1% (0.35 m) over a 31-m-long acquisition trajectory. Moreover, the comparison results demonstrated that our flying microdrone provided a comparable performance to the multilayer LiDAR-based macrodrone, given the low deviation between the point clouds built by both drones. Approximately 85% of the cloud-to-cloud distances were less than 10 cm.
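The drift-correction effect of loop closure can be illustrated with the simplest possible scheme: distributing the loop-closure residual linearly along the pose chain. This hand-rolled 1-D toy is not the graph optimization the article uses, which jointly minimizes all constraint errors over the pose graph, but it shows why revisiting a known place lets accumulated drift be corrected retroactively.

```python
def distribute_loop_error(poses, loop_error):
    # poses: accumulated 1-D odometry estimates, poses[0] fixed at start.
    # loop_error: (estimated - measured) position at the revisited place.
    # Spread the residual proportionally along the chain so the final
    # pose lands on the loop-closure measurement.
    n = len(poses) - 1
    return [p - loop_error * i / n for i, p in enumerate(poses)]
```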

    Micro Aerial Vehicles (MAV) Assured Navigation in Search and Rescue Missions: Robust Localization, Mapping and Detection

    This Master's Thesis describes developments in robust localization, mapping and detection algorithms for Micro Aerial Vehicles (MAVs). The localization method proposes a seamless indoor-outdoor multi-sensor architecture. This algorithm is capable of using all or a subset of its sensor inputs to determine a platform's position, velocity and attitude (PVA). It relies on the Inertial Measurement Unit as the core sensor and monitors the status and observability of the secondary sensors to select the optimal estimator strategy for each situation. Furthermore, it ensures a smooth transition between filter structures. This document also describes the integration mechanism for a set of common sensors such as GNSS receivers, laser scanners, and stereo and mono cameras. The mapping algorithm provides a fully automated, fast aerial mapping pipeline. It speeds up the process by pre-selecting the images using the flight plan and the onboard localization. Furthermore, it relies on Structure from Motion (SfM) techniques to produce an optimized 3D reconstruction of camera locations and sparse scene geometry. These outputs are used to compute the perspective transformations that project the raw images onto the ground and produce a geo-referenced map. Finally, these maps are fused with other domains in a collaborative UGV and UAV mapping algorithm. The real-time aerial detection of victims is based on a thermal camera. The algorithm is composed of three steps. First, the image is normalized to remove the background and extract the regions of interest. Then, the victim detection and tracking steps produce the real-time geo-referenced locations of the detections. The thesis also proposes the concept of a MAV Copilot, a payload composed of a set of sensors and algorithms that enhances the capabilities of any commercial MAV. To develop and validate these contributions, a prototype of a search and rescue MAV and the Copilot has been developed.
These developments have been validated in three large-scale demonstrations of search and rescue operations in the context of the European project ICARUS: a shipwreck in Lisbon (Portugal), an earthquake in Marche (Belgium), and the Fukushima nuclear disaster scenario of the euRathlon 2015 competition in Piombino (Italy).
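The normalization-and-ROI stage of the thermal detection pipeline can be sketched as a z-score threshold over the frame followed by connected-component grouping into bounding boxes. The threshold value and box format are illustrative assumptions; the thesis's actual detector and tracker are not reproduced here.

```python
from collections import deque

def detect_hotspots(img, z_thresh=2.0):
    # Normalize a thermal frame (z-score), threshold it, and return
    # bounding boxes (umin, vmin, umax, vmax) of connected hot regions.
    h, w = len(img), len(img[0])
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    std = (sum((p - mean) ** 2 for p in flat) / len(flat)) ** 0.5 or 1.0
    hot = [[(p - mean) / std > z_thresh for p in row] for row in img]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for v in range(h):
        for u in range(w):
            if not hot[v][u] or seen[v][u]:
                continue
            # flood-fill one connected hot region (4-connectivity)
            q = deque([(v, u)])
            seen[v][u] = True
            vmin = vmax = v
            umin = umax = u
            while q:
                y, x = q.popleft()
                vmin, vmax = min(vmin, y), max(vmax, y)
                umin, umax = min(umin, x), max(umax, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and hot[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            boxes.append((umin, vmin, umax, vmax))
    return boxes
```

Each returned box would then be handed to the tracking step, which associates detections across frames before geo-referencing them.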

    EFFECTIVE NAVIGATION AND MAPPING OF A CLUTTERED ENVIRONMENT USING A MOBILE ROBOT

    Today, the as-is three-dimensional point cloud acquisition process for understanding scenes of interest, monitoring construction progress, and detecting safety hazards uses a laser scanning system mounted on a mobile robot, which makes the process faster and more automated, but there is still room for improvement. The main disadvantage of data collection using laser scanners is that point cloud data is only collected along the scanner's line of sight, so regions in three-dimensional space that are occluded by objects are not observable. To solve this problem and obtain a complete reconstruction of sites without information loss, scans must be taken from multiple viewpoints. This thesis describes how such a solution can be integrated into a fully autonomous mobile robot capable of generating a high-resolution three-dimensional point cloud of a cluttered and unknown environment without a prior map. First, the mobile platform estimates the unevenness of the terrain and the surrounding environment. Second, it finds the occluded regions in the currently built map and determines the most effective next scan location. Then, it moves to that location using a grid-based path planner and the unevenness estimation results. Finally, it performs a high-resolution scan of that area to fill out the point cloud map. This process repeats until the designated scan region is filled with scanned point cloud data. The mobile platform also keeps scanning for navigation and obstacle avoidance purposes, calculates its relative location, and builds the surrounding map while moving and scanning, a process known as simultaneous localization and mapping. The proposed approaches and the system were tested and validated on an outdoor construction site and in a simulated disaster environment with promising results.
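The "find the occluded region, pick the next scan location" loop described above can be sketched as a frontier search over a 2D occupancy grid: candidate scan locations are free cells bordering unknown space, and the nearest one is chosen greedily. The cell labels and the Manhattan-distance greedy rule are illustrative simplifications of the thesis's next-best-scan selection.

```python
def next_scan_location(grid, robot):
    # grid: 2D list of 'free', 'occupied', or 'unknown' cells.
    # robot: (row, col) of the current robot position.
    # Return the free cell adjacent to unknown space (a frontier)
    # that is closest to the robot in Manhattan distance.
    best, best_d = None, float("inf")
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell != "free":
                continue
            adjacent_unknown = any(
                0 <= r + dr < len(grid) and 0 <= c + dc < len(row)
                and grid[r + dr][c + dc] == "unknown"
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            if adjacent_unknown:
                d = abs(r - robot[0]) + abs(c - robot[1])
                if d < best_d:
                    best, best_d = (r, c), d
    return best  # None once no frontiers remain, i.e. the region is covered
```

Returning None when no frontier cell exists gives a natural termination condition for the scan-move-scan loop.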

    Intuitive 3D Maps for MAV Terrain Exploration and Obstacle Avoidance

    Recent developments have shown that Micro Aerial Vehicles (MAVs) are nowadays capable of autonomously taking off at one point and landing at another using only a single camera as an exteroceptive sensor. During the flight and landing phases, however, the MAV and user have little knowledge of the whole terrain and potential obstacles. In this paper we show a new solution for real-time dense 3D terrain reconstruction. This can be used for efficient unmanned MAV terrain exploration and yields a solid base for standard autonomous obstacle avoidance algorithms and path planners. Our approach is based on a textured 3D mesh built on sparse 3D point features of the scene. We use the same feature points to localize and control the vehicle in 3D space as we do to build the 3D terrain reconstruction mesh. This enables us to reconstruct the terrain without significant additional cost and thus in real time. Experiments show that the MAV is easily guided through an unknown, GPS-denied environment. Obstacles are recognized in the iteratively built 3D terrain reconstruction and are thus well avoided.
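The "mesh on sparse 3D point features" idea can be sketched by sampling the sparse features onto a regular grid and triangulating that grid. The nearest-neighbour height lookup and the fixed grid (nx, ny >= 2) are illustrative assumptions, not the paper's meshing method, which triangulates the feature points themselves.

```python
def terrain_mesh(features, nx, ny):
    # features: sparse 3D points (x, y, z) from the SLAM map.
    # Rasterize heights onto an nx-by-ny grid (nx, ny >= 2) by
    # nearest-neighbour lookup, then emit two triangles per grid cell.
    # Returns (vertices, triangles) with triangles as index triples.
    xs = [f[0] for f in features]
    ys = [f[1] for f in features]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    verts = []
    for j in range(ny):
        for i in range(nx):
            x = x0 + (x1 - x0) * i / (nx - 1)
            y = y0 + (y1 - y0) * j / (ny - 1)
            # nearest-neighbour height from the sparse features
            z = min(features, key=lambda f: (f[0] - x) ** 2 + (f[1] - y) ** 2)[2]
            verts.append((x, y, z))
    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            a = j * nx + i          # cell corners: a--b
            b, c, d = a + 1, a + nx, a + nx + 1  # c--d
            tris += [(a, b, c), (b, d, c)]
    return verts, tris
```

Because the same feature points already exist for localization, a mesh built this way adds little cost on top of the SLAM pipeline, which is the efficiency argument the abstract makes.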