    Efficient exploration of unknown indoor environments using a team of mobile robots

    Whenever multiple robots have to solve a common task, they need to coordinate their actions to carry out the task efficiently and to avoid interference between individual robots. This is especially the case when considering the problem of exploring an unknown environment with a team of mobile robots. To achieve efficient terrain coverage with the sensors of the robots, one first needs to identify unknown areas in the environment. Second, one has to assign target locations to the individual robots so that they gather new and relevant information about the environment with their sensors. This assignment should lead to a distribution of the robots over the environment such that they avoid redundant work and do not interfere with each other by, for example, blocking each other's paths. In this paper, we address the problem of efficiently coordinating a large team of mobile robots. To better distribute the robots over the environment and to avoid redundant work, we take into account the type of place a potential target is located in (e.g., a corridor or a room). This knowledge allows us to improve the distribution of robots over the environment compared to approaches lacking this capability. To autonomously determine the type of a place, we apply a classifier learned using the AdaBoost algorithm. The resulting classifier takes laser range data as input and is able to classify the current location with high accuracy. We additionally use a hidden Markov model to consider the spatial dependencies between nearby locations. Our approach to incorporating the information about the type of places in the assignment process has been implemented and tested in different environments. The experiments illustrate that our system effectively distributes the robots over the environment and allows them to accomplish their mission faster compared to approaches that ignore the place labels.
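The classification step described in this abstract can be sketched as follows. Everything here is illustrative: the features, class labels, and transition probability are invented stand-ins rather than the paper's actual setup; only the overall shape (a boosted classifier on laser-derived features, followed by HMM smoothing over a sequence of nearby poses) follows the text.

```python
# Illustrative sketch: place-type classification from laser features with
# AdaBoost, followed by Viterbi smoothing over a trajectory (a 2-state HMM
# whose emissions are the classifier's per-scan probabilities).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Toy "laser range" features: mean range, range variance, gap count.
# Class 0 = corridor (long, low-variance ranges), class 1 = room.
def make_features(label, n):
    mu = [6.0, 0.5, 1.0] if label == 0 else [3.0, 2.0, 4.0]
    return rng.normal(mu, 0.5, size=(n, 3))

X = np.vstack([make_features(0, 200), make_features(1, 200)])
y = np.array([0] * 200 + [1] * 200)
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)

def viterbi_smooth(probs, stay=0.9):
    """Smooth per-scan class probabilities with a sticky transition model."""
    T = np.array([[stay, 1 - stay], [1 - stay, stay]])
    n, k = probs.shape
    delta = np.log(probs[0] + 1e-12)        # log-score of best path so far
    back = np.zeros((n, k), dtype=int)      # backpointers for decoding
    for t in range(1, n):
        scores = delta[:, None] + np.log(T)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(probs[t] + 1e-12)
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Smooth the labels along a simulated corridor-then-room trajectory.
traj = np.vstack([make_features(0, 20), make_features(1, 20)])
smoothed = viterbi_smooth(clf.predict_proba(traj))
```

The sticky transition matrix is what encodes the abstract's "spatial dependencies between nearby locations": a single noisy scan cannot flip the place label unless the evidence persists.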

    Categorization of indoor places by combining local binary pattern histograms of range and reflectance data from laser range finders

    This paper presents an approach to categorizing typical places in indoor environments using 3D scans provided by a laser range finder. Examples of such places are offices, laboratories, or kitchens. In our method, we combine the range and reflectance data from the laser scan for the final categorization of places. Range and reflectance images are transformed into histograms of local binary patterns and combined into a single feature vector. This vector is then classified using support vector machines. The results of the presented experiments demonstrate the capability of our technique to categorize indoor places with high accuracy. We also show that the combination of range and reflectance information improves the final categorization results in comparison with a single modality.
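The feature pipeline in this abstract can be sketched in a few lines. The LBP operator below is a hand-rolled 8-neighbour variant and the "range" and "reflectance" images are synthetic, so this only mirrors the stated shape: one LBP histogram per modality, concatenated into a single vector, fed to an SVM.

```python
# Illustrative sketch: LBP histograms computed separately on range and
# reflectance images, concatenated, and classified with an SVM.
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(img, bins=256):
    """8-neighbour local binary pattern histogram of a 2D image."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()                        # normalised histogram

rng = np.random.default_rng(1)

def sample(cls):
    # Synthetic class-dependent texture standing in for a real scan pair.
    base = rng.normal(0, 1, (32, 32))
    if cls == 0:
        base = np.cumsum(base, axis=1)  # smoother horizontal texture
    range_img = base
    refl_img = base + rng.normal(0, 0.1, base.shape)
    # Concatenate the two per-modality histograms into one feature vector.
    return np.concatenate([lbp_histogram(range_img), lbp_histogram(refl_img)])

X = np.array([sample(c) for c in [0] * 40 + [1] * 40])
y = np.array([0] * 40 + [1] * 40)
clf = SVC().fit(X, y)
```

Because each modality contributes a normalised 256-bin histogram, every feature vector has length 512 and sums to 2, which keeps the two modalities on equal footing before the SVM sees them.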

    Efficiently learning metric and topological maps with autonomous service robots

    Models of the environment are needed for a wide range of robotic applications, from search and rescue to automated vacuum cleaning. Learning maps has therefore been a major research focus in the robotics community over the last few decades. In general, one distinguishes between metric and topological maps. Metric maps model the environment based on grids or geometric representations, whereas topological maps model the structure of the environment using a graph. The contribution of this paper is an approach that learns a metric as well as a topological map based on laser range data obtained with a mobile robot. Our approach consists of two steps. First, the robot solves the simultaneous localization and mapping problem using an efficient probabilistic filtering technique. In a second step, it acquires semantic information about the environment using machine learning techniques. This semantic information allows the robot to distinguish between different types of places, such as corridors or rooms. This enables the robot to construct annotated metric as well as topological maps of the environment. All techniques have been implemented and thoroughly tested using a real mobile robot in a variety of environments.
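A common concrete form of the metric map mentioned in this abstract is a log-odds occupancy grid. The sketch below shows only that update rule with invented sensor-model parameters; the paper's SLAM filter and semantic labeling step are omitted.

```python
# Sketch of a log-odds occupancy-grid update: cells a beam passes through
# accumulate "free" evidence, the cell where it ends accumulates "occupied"
# evidence. Parameters (0.7 / 0.3) are illustrative, not from the paper.
import numpy as np

L_OCC = np.log(0.7 / 0.3)    # log-odds increment for an occupied observation
L_FREE = np.log(0.3 / 0.7)   # log-odds increment for a free observation

grid = np.zeros((50, 50))    # log-odds 0 corresponds to p(occupied) = 0.5

def integrate_beam(grid, cells_free, cell_hit):
    """Fold one laser beam into the grid."""
    for (r, c) in cells_free:
        grid[r, c] += L_FREE
    grid[cell_hit] += L_OCC

# One beam travelling along row 10 and hitting an obstacle at column 20.
integrate_beam(grid, [(10, c) for c in range(10, 20)], (10, 20))

# Recover occupancy probabilities from log-odds for inspection.
prob = 1 - 1 / (1 + np.exp(grid))
```

The additive form is what makes the filtering efficient: each beam is a constant-time update per cell, and repeated observations of the same cell accumulate evidence instead of overwriting it.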

    Knowledge Representation for Robots through Human-Robot Interaction

    The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing "knowledgeable" robots is to rely on interaction with the user. We propose a multi-modal interaction framework that allows the robot to effectively acquire knowledge about the environment in which it operates. In particular, in this paper we present a rich representation framework that can be built automatically from the metric map annotated with the indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to reach the target locations.
    Comment: Knowledge Representation and Reasoning in Robotics Workshop at ICLP 201

    Fast indoor scene classification using 3D point clouds

    A representation of space that includes both geometric and semantic information enables a robot to perform high-level tasks in complex environments. Identifying and categorizing environments based on onboard sensors is essential in these scenarios. The Kinect™, a low-cost 3D sensor, is appealing here as it can provide rich information. The downside is the large amount of data, which can lead to higher computational complexity. In this paper, we propose a methodology to efficiently classify indoor environments into semantic categories using Kinect™ data. With a fast feature extraction method, an efficient feature selection algorithm (DEFS), and a support vector machine (SVM) classifier, we realize a fast scene classification algorithm. Experimental results in an indoor scenario are presented, including comparisons with the counterpart based on commonly available 2D laser range finder data.
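The pipeline shape this abstract describes (features, then feature selection, then an SVM) can be sketched as below. Note the substitution: DEFS is replaced by a simple univariate selector purely for illustration, and the data is synthetic, not Kinect point-cloud features.

```python
# Illustrative pipeline sketch: feature selection followed by an SVM.
# SelectKBest stands in for the DEFS algorithm named in the abstract.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))               # 30 raw features per "scene"
y = (X[:, 3] + X[:, 7] > 0).astype(int)      # only two dimensions are informative

# Keep the 5 most discriminative features, then classify with an SVM.
model = make_pipeline(SelectKBest(f_classif, k=5), SVC()).fit(X, y)
```

Pruning the feature vector before the SVM is what buys the speed the abstract claims: the classifier's cost at query time scales with the retained dimensionality, not with everything the sensor produced.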