
    View-Invariant Regions and Mobile Robot Self-Localization

    This paper addresses the problem of mobile robot self-localization given a polygonal map and a set of observed edge segments. The standard approach to this problem uses interpretation tree search with pruning heuristics to match observed edges to map edges. Our approach introduces a preprocessing step in which the map is decomposed into 'view-invariant regions' (VIRs). The VIR decomposition captures information about map edge visibility and can be used for a variety of robot navigation tasks. Basing self-localization search on VIRs greatly reduces the branching factor of the search tree and thereby simplifies the search task. In this paper we define the VIR decomposition and give algorithms for its computation and for self-localization search. We present results of simulations comparing standard and VIR-based search, and discuss the application of the VIR decomposition to other problems in robot navigation.
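    The pruning idea lends itself to a compact illustration. Below is a minimal Python sketch (not the paper's implementation) of interpretation-tree search in which each observation's candidate set is restricted to the map edges visible from a hypothesized region; the names, the dict-based candidate sets, and the angle-only consistency test are simplifying assumptions.

```python
import math

def consistent(d_obs, d_map, tol=0.05):
    # The relative angle between two observed edges must match the
    # relative angle between their hypothesized map edges; this test is
    # independent of the robot's unknown pose.
    d = d_obs - d_map
    return abs(math.atan2(math.sin(d), math.cos(d))) < tol

def interpretation_tree(obs, candidates, partial=()):
    """Depth-first interpretation-tree search with pairwise pruning.

    obs        -- orientations of the observed edge segments (robot frame)
    candidates -- candidates[i]: dict {map_edge_id: orientation} allowed
                  for observation i; with a VIR decomposition this holds
                  only the edges visible from the hypothesized region,
                  which is what shrinks the branching factor
    Yields complete assignments as tuples of (map_edge_id, orientation).
    """
    i = len(partial)
    if i == len(obs):
        yield partial
        return
    for edge_id, theta in candidates[i].items():
        if all(consistent(obs[j] - obs[i], prev - theta)
               for j, (_, prev) in enumerate(partial)):
            yield from interpretation_tree(obs, candidates,
                                           partial + ((edge_id, theta),))

# Toy usage: two observed edge headings, candidates limited to the two
# map edges visible from one hypothesized region.
obs = [0.0, 1.57]
vir_candidates = [{0: 0.3, 1: 1.9}, {0: 0.3, 1: 1.9}]
print(list(interpretation_tree(obs, vir_candidates)))
```

    Without the VIR restriction, every map edge would appear in each candidate set; limiting the sets to one region's visible edges is exactly the branching-factor reduction the abstract describes.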

    Foresighted People Finding and Following

    Mobile service robots are needed in several applications (e.g., transportation systems, autonomous shopping carts, or household activities). In such scenarios, the robot aids the user with tasks that require it to move freely through the environment, in addition to interacting directly with the user at certain times. Such a robot therefore needs a strategy to quickly find the user whenever needed, as well as a strategy to reason about the user's intended destination so that it can follow the user in a foresighted manner whenever its help is needed at that destination. In this dissertation, we tackle each of these problems separately, in a divide-and-conquer manner.

    We present an approach to learn optimal navigation actions for assistance tasks in which the robot aims at efficiently reaching the final navigation goal of a human where service has to be provided. Always following the human at a close distance can result in inefficient trajectories, since people regularly do not move along the shortest path to their destination (e.g., they may detour to grab the phone or make a note). A service robot should therefore infer the human's intended navigation goal and compute its own motion based on that prediction. We propose to predict the human's future movements and use this information in a reinforcement learning framework to generate foresighted navigation actions for the robot. Since obstacles and the robot's constrained field of view lead to frequent occlusions of the human, the estimate of the human's position and the prediction of the next destination are affected by uncertainty. Our approach deals with such situations by explicitly considering occlusions in the reward function, so that the robot automatically considers executing actions that bring the human back into its field of view. We show in simulated and real-world experiments that our technique leads to significantly shorter paths compared to an approach in which the robot always tries to closely follow the user and, additionally, that it can handle occlusions.

    On the other hand, an autonomous robot that directly helps users with certain tasks often first has to quickly find a user, especially when this person moves around frequently. A greedy search method that does not predict the user's most likely location, even when provided with background information about the user's frequently visited destinations, might not be the best option. In this dissertation, we propose to compute the likelihood of observing the user at each possible location in the environment based on simulations that rely on hidden Markov model based predictions. As the robot needs time to reach the search locations, we take this travel time into account, as well as the visibility constraints. In this way we aim at selecting effective search locations so that the robot finds the user as fast as possible. As our experiments in various simulated environments show, our approach leads to significantly shorter search times than the greedy approach.
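    The search-location selection can be sketched roughly as follows: propagate an HMM belief over the user's whereabouts by the time the robot needs to reach each candidate location, then weight by what is visible from there. All names, the transition matrix, and the binary visibility mask below are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

def propagate(belief, T, steps):
    """Push the belief over the user's location `steps` time steps ahead
    using the HMM transition matrix T (T[a, b] = P(next=b | now=a))."""
    for _ in range(steps):
        belief = belief @ T
    return belief

def best_search_location(belief, T, travel_time, visible):
    """Score each candidate search location by the probability of
    observing the user once the robot arrives there: propagate the
    belief by the travel time, then weight it by the visibility mask
    (visible[s, u] = 1 if cell u can be seen from location s)."""
    scores = [propagate(belief, T, t) @ visible[s]
              for s, t in enumerate(travel_time)]
    return int(np.argmax(scores))

# Example with 3 cells: the robot reaches cell 0 quickly, but the user
# is drifting toward cell 2, which is worth the longer drive.
T = np.array([[0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8],
              [0.0, 0.1, 0.9]])
belief = np.array([1.0, 0.0, 0.0])   # user last seen in cell 0
travel_time = [1, 2, 4]              # robot steps to reach each cell
visible = np.eye(3)                  # each cell only sees itself
print(best_search_location(belief, T, travel_time, visible))  # -> 2
```

    A purely greedy searcher would head for cell 0, where the user was last seen; accounting for travel time and the predicted motion sends the robot to cell 2 instead, which is the abstract's core argument against the greedy baseline.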

    An Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data

    In this thesis, we introduce a novel architecture called Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data (iARTEC). The proposed architecture integrates different terrain characterization and classification methods with other robotic system components. Within iARTEC, we consider the problem of having a legged robot autonomously learn to identify different terrains. Robust terrain identification can be used to enhance the capabilities of legged robot systems, both in terms of locomotion and navigation. For example, a robot that has learned to differentiate sand from gravel can autonomously modify its path (or even select a different one) in favor of traversing better terrain. The same knowledge of the terrain type can also be used to guide a robot to avoid specific terrains. To tackle this problem, we developed four approaches for terrain characterization, classification, path planning, and control for a mobile legged robot.

    First, we developed a particle-system-inspired approach to estimate the robot's foot-ground contact interaction forces. The approach is derived from Bekker's well-known theory and estimates the contact forces based on its point-contact model concepts; it realistically models real-time three-dimensional contact behavior between rigid bodies and the soil. For a real-time capable implementation, the approach is reformulated to use a lookup table generated from simple contact experiments of the robot foot with the terrain. Second, we introduce a short-range terrain classifier that uses the robot's embodied data. The classifier is based on a supervised machine learning approach that optimizes the classifier parameters and trains the classifier on proprioceptive sensor measurements. The learning framework preprocesses the sensor data through channel reduction and filtering, so that the classifier is trained on feature vectors closely associated with the terrain class. Third, for long-range terrain type prediction from the robot's exteroceptive data, we present an online visual terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm that is robust to changes in illumination and viewpoint. For this algorithm, we extract local terrain features using Speeded-Up Robust Features (SURF), encode them using the Bag of Words (BoW) technique, and classify the resulting representations using Support Vector Machines (SVMs). Fourth, we describe a terrain-dependent navigation and path planning approach that is based on the E* planner and employs a proposed metric specifying the navigation costs associated with each terrain type. The generated path naturally avoids obstacles and favors terrains with lower values of the metric. At the low level, a proportional input-scaling controller is designed and implemented to autonomously steer the robot along the desired path in a stable manner.

    iARTEC was tested and validated experimentally using several different sensing modalities (proprioceptive and exteroceptive) on the six-legged robotic platform CREX. The results show that the proposed architecture, integrating the aforementioned approaches with the robotic system, allowed the robot to learn both robot-terrain interaction and remote terrain perception models, as well as the relations linking those models, using only the robot's own embodied data. Based on the available knowledge, the approach uses the detected remote terrain classes to predict the most probable navigation behavior, and the assigned metric predicts the robot's performance on a given terrain, allowing the learned models to influence the robot's navigation. Finally, we believe that iARTEC and the methods proposed in this thesis could likely also be implemented on other robot types (such as wheeled robots), although we did not test this option in our work.
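    The visual pipeline (local features, visual words, SVM) can be reconstructed compactly. The snippet below is an illustrative sketch with OpenCV and scikit-learn, not the thesis code; SIFT is swapped in for SURF because SURF ships only with opencv-contrib builds, and the dataset loader names are hypothetical.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(images, k=64):
    """Cluster local descriptors from the training images into k visual
    words. SIFT stands in for SURF here; the pipeline is otherwise the
    same (detect local features, quantize them into a vocabulary)."""
    sift = cv2.SIFT_create()
    descriptors = []
    for img in images:
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            descriptors.append(desc)
    vocab = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptors))
    return sift, vocab

def bow_histogram(sift, vocab, img):
    """Encode one image as a normalized histogram of visual-word counts."""
    _, desc = sift.detectAndCompute(img, None)
    hist = np.bincount(vocab.predict(desc),
                       minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical training flow: fit an SVM on BoW histograms of labeled
# grayscale terrain patches (images, labels from some dataset loader).
# sift, vocab = build_vocabulary(images)
# X = np.array([bow_histogram(sift, vocab, im) for im in images])
# clf = SVC(kernel="rbf").fit(X, labels)
# predicted_terrain = clf.predict([bow_histogram(sift, vocab, new_patch)])
```

    Because the BoW histogram discards keypoint positions and SIFT/SURF descriptors are largely invariant to scale and illumination, the classifier tolerates the viewpoint and lighting changes the abstract highlights.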