
    S-AVE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots

    Semantic mapping is fundamental to enabling cognition and high-level planning in robotics. It is a difficult task because methods must generalize across different scenarios and types of sensory data. Hence, most techniques do not obtain a rich and accurate semantic map of the environment and of the objects therein. To tackle this issue, we present a novel approach that exploits active vision and drives environment exploration with the aim of improving the quality of the semantic map.
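
    The abstract gives only the high-level idea, so the sketch below is a hypothetical illustration of active-vision view selection rather than the paper's method: it scores candidate viewpoints by the semantic uncertainty (entropy) of the map cells they would observe and picks the most informative one. The map layout and the helper names (semantic_map, visible_cells_fn) are assumptions, not from the paper.

```python
import numpy as np

def semantic_entropy(label_probs):
    """Per-cell entropy of the semantic label distribution (higher = more uncertain)."""
    p = np.clip(label_probs, 1e-9, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def select_next_view(candidate_views, semantic_map, visible_cells_fn):
    """Pick the viewpoint whose visible cells carry the most semantic uncertainty.

    candidate_views  -- iterable of candidate robot poses (hypothetical format)
    semantic_map     -- H x W x C array of per-cell label probabilities (assumed layout)
    visible_cells_fn -- callable(pose) -> list of (row, col) cells expected to be visible
    """
    cell_entropy = semantic_entropy(semantic_map)
    best_view, best_gain = None, -np.inf
    for view in candidate_views:
        gain = sum(cell_entropy[r, c] for r, c in visible_cells_fn(view))
        if gain > best_gain:
            best_view, best_gain = view, gain
    return best_view
```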

    Efficient exploration of unknown indoor environments using a team of mobile robots

    Whenever multiple robots have to solve a common task, they need to coordinate their actions to carry out the task efficiently and to avoid interference between individual robots. This is especially the case when considering the problem of exploring an unknown environment with a team of mobile robots. To achieve efficient terrain coverage with the robots' sensors, one first needs to identify unknown areas in the environment. Second, one has to assign target locations to the individual robots so that they gather new and relevant information about the environment with their sensors. This assignment should distribute the robots over the environment in such a way that they avoid redundant work and do not interfere with each other by, for example, blocking each other's paths. In this paper, we address the problem of efficiently coordinating a large team of mobile robots. To better distribute the robots over the environment and to avoid redundant work, we take into account the type of place a potential target is located in (e.g., a corridor or a room). This knowledge allows us to improve the distribution of robots over the environment compared to approaches lacking this capability. To autonomously determine the type of a place, we apply a classifier learned using the AdaBoost algorithm. The resulting classifier takes laser range data as input and classifies the current location with high accuracy. We additionally use a hidden Markov model to account for the spatial dependencies between nearby locations. Our approach to incorporating information about the type of places into the assignment process has been implemented and tested in different environments. The experiments illustrate that our system effectively distributes the robots over the environment and allows them to accomplish their mission faster than approaches that ignore the place labels.
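
    The abstract describes the pipeline (an AdaBoost place classifier on laser range data, HMM smoothing, and label-aware target assignment) but not the exact cost function used in the assignment. Below is a minimal greedy-assignment sketch under the assumption that corridor targets are favoured via a discount on their travel cost; the data layout and the corridor_discount parameter are hypothetical, not taken from the paper.

```python
def assign_targets(robots, targets, path_cost, place_label, corridor_discount=0.7):
    """Greedy robot-to-target assignment that prefers corridor targets.

    robots            -- list of robot ids
    targets           -- list of candidate target locations (e.g., frontiers)
    path_cost         -- dict (robot, target) -> estimated travel cost
    place_label       -- dict target -> 'corridor' or 'room' (from a place classifier)
    corridor_discount -- hypothetical factor < 1 that favours corridor targets
    """
    assignment = {}
    free_robots, free_targets = set(robots), set(targets)
    while free_robots and free_targets:
        best = None
        for r in free_robots:
            for t in free_targets:
                cost = path_cost[(r, t)]
                if place_label.get(t) == 'corridor':
                    cost *= corridor_discount  # pull robots toward corridors first
                if best is None or cost < best[0]:
                    best = (cost, r, t)
        _, r, t = best
        assignment[r] = t
        free_robots.remove(r)
        free_targets.remove(t)
    return assignment
```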

    Directed Exploration using a Modified Distance Transform

    Mobile robots operating in unknown environments need to build maps. To do so, they must have an exploration algorithm to plan a path. This algorithm should guarantee that the whole environment, or at least some designated area, will be mapped. The path should also be optimal in some sense and not simply a "random walk", which is clearly inefficient. When multiple robots are involved, the algorithm also needs to take advantage of the fact that the robots can share the task. In this paper we discuss a modification to the well-known distance transform that satisfies these requirements.
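
    The abstract does not detail the paper's specific modification, so the sketch below shows only the standard baseline it builds on: a wavefront distance transform over an occupancy grid in which unexplored cells are seeded with distance zero, so a robot that always steps to the lowest-valued free neighbour is pulled toward unexplored space. The cell labels and grid layout are assumptions for illustration.

```python
from collections import deque

FREE, UNKNOWN, OBSTACLE = 0, 1, 2  # assumed cell labels

def exploration_distance_transform(grid):
    """Wavefront distance transform over a 2D grid map.

    Unknown (unexplored) cells act as goals with distance 0; distances then
    propagate outward through free space. Following the steepest descent of
    the resulting field drives the robot toward the nearest unexplored area.
    grid -- 2D list of FREE / UNKNOWN / OBSTACLE cells
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == UNKNOWN:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == FREE:
                if dist[r][c] + 1 < dist[nr][nc]:
                    dist[nr][nc] = dist[r][c] + 1
                    queue.append((nr, nc))
    return dist
```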

    From Monocular SLAM to Autonomous Drone Exploration

    Micro aerial vehicles (MAVs) are strongly limited in their payload and power capacity. In order to implement autonomous navigation, algorithms are therefore desirable that use sensory equipment that is as small, lightweight, and power-efficient as possible. In this paper, we propose a method for autonomous MAV navigation and exploration using a low-cost consumer-grade quadrocopter equipped with a monocular camera. Our vision-based navigation system builds on LSD-SLAM, which estimates the MAV trajectory and a semi-dense reconstruction of the environment in real time. Since LSD-SLAM only determines depth at high-gradient pixels, texture-less areas are not directly observed, so previous exploration methods that assume dense map information cannot be applied directly. We propose an obstacle mapping and exploration approach that takes the properties of our semi-dense monocular SLAM system into account. In experiments, we demonstrate our vision-based autonomous navigation and exploration system with a Parrot Bebop MAV.
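
    As a rough illustration of the kind of obstacle mapping the abstract describes (not the authors' implementation), the sketch below projects semi-dense SLAM points into a 2D occupancy grid, marks the camera-to-point rays as free, leaves texture-less regions unknown, and extracts frontier cells as candidate exploration targets. The grid layout, point format, and max_height threshold are assumptions.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1  # assumed cell labels

def update_occupancy(grid, origin, resolution, camera_xy, points, max_height=1.5):
    """Project semi-dense SLAM points into a 2D occupancy grid.

    Only high-gradient pixels yield depth, so most cells stay UNKNOWN; cells hit
    by a point below max_height become OCCUPIED, and cells along the coarse
    camera-to-point ray become FREE. Assumes the camera cell lies inside the grid.
    grid       -- 2D int array initialised to UNKNOWN
    origin     -- (x, y) world coordinates of grid cell (0, 0)
    resolution -- metres per cell
    points     -- iterable of (x, y, z) reconstructed points (hypothetical layout)
    """
    cam_r = int((camera_xy[1] - origin[1]) / resolution)
    cam_c = int((camera_xy[0] - origin[0]) / resolution)
    for x, y, z in points:
        if z > max_height:
            continue
        r = int((y - origin[1]) / resolution)
        c = int((x - origin[0]) / resolution)
        if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]):
            continue
        n = max(abs(r - cam_r), abs(c - cam_c), 1)
        for i in range(n):  # coarse ray sampling between camera and point
            rr = cam_r + (r - cam_r) * i // n
            cc = cam_c + (c - cam_c) * i // n
            if grid[rr, cc] == UNKNOWN:
                grid[rr, cc] = FREE
        grid[r, c] = OCCUPIED
    return grid

def frontier_cells(grid):
    """Free cells bordering unknown space -- candidate exploration targets."""
    frontiers = []
    for r in range(1, grid.shape[0] - 1):
        for c in range(1, grid.shape[1] - 1):
            if grid[r, c] == FREE and UNKNOWN in (grid[r - 1, c], grid[r + 1, c],
                                                  grid[r, c - 1], grid[r, c + 1]):
                frontiers.append((r, c))
    return frontiers
```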