84 research outputs found

    On the completeness of ensembles of motion planners for decentralized planning

    We provide a set of sufficient conditions for establishing the completeness of an ensemble of motion planners, that is, a set of loosely coupled motion planners that produce a unified result. The planners are assumed to divide the total planning problem across some parameter space(s), such as task space, state space, action space, or time. Robotic applications have employed ensembles of planners for decades, although the concept had not been formally unified or analyzed until now. We focus on applications in multi-robot navigation and collision avoidance. We show that individual resolution- or probabilistically-complete planners that meet certain communication criteria constitute a (respectively, resolution- or probabilistically-) complete ensemble of planners. This ensemble of planners, in turn, guarantees that the robots are free of deadlock, livelock, and starvation. Boeing Company

    IkeaBot: An autonomous multi-robot coordinated furniture assembly system

    We present an automated assembly system that directs the actions of a team of heterogeneous robots in the completion of an assembly task. From an initial user-supplied geometric specification, the system applies reasoning about the geometry of individual parts in order to deduce how they fit together. The task is then automatically transformed into a symbolic description of the assembly: a sort of blueprint. A symbolic planner generates an assembly sequence that can be executed by a team of collaborating robots. Each robot fulfills one of two roles: parts delivery or parts assembly. The latter are equipped with specialized tools to aid in the assembly process. Additionally, the robots engage in coordinated co-manipulation of large, heavy assemblies. We provide details of an example furniture kit assembled by the system. Boeing Company
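    The blueprint-to-sequence step described above amounts to ordering assembly actions so that each action's geometric prerequisites are satisfied first. A minimal sketch of that idea, using Python's standard-library topological sorter on a hypothetical table-assembly blueprint (the step names and dependencies are illustrative, not taken from the paper):

    ```python
    from graphlib import TopologicalSorter

    # Hypothetical symbolic blueprint: each step maps to the steps it depends on.
    blueprint = {
        "place_tabletop": set(),
        "attach_leg_1": {"place_tabletop"},
        "attach_leg_2": {"place_tabletop"},
        "flip_table": {"attach_leg_1", "attach_leg_2"},
    }

    # static_order() yields one valid assembly sequence, prerequisites first.
    order = list(TopologicalSorter(blueprint).static_order())
    ```

    Any ordering returned this way respects the dependency structure, which is the property a symbolic assembly planner must guarantee before dispatching steps to delivery and assembly robots.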

    Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning

    We introduce a method for following high-level navigation instructions by mapping directly from images, instructions, and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully-differentiable neural network architecture that builds an explicit semantic map in the world reference frame by incorporating a pinhole camera projection model within the network. The information stored in the map is learned from experience, while the local-to-world transformation is computed explicitly. We train the model using DAggerFM, a modified variant of DAgger that trades tabular convergence guarantees for improved training speed and memory use. We test GSMN in virtual environments on a realistic quadcopter simulator and show that incorporating explicit mapping and grounding modules allows GSMN to outperform strong neural baselines and nearly match expert policy performance. Finally, we analyze the learned map representations and show that using an explicit map leads to an interpretable instruction-following model. Comment: To appear in Robotics: Science and Systems (RSS), 201
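    The DAgger family of algorithms referenced above shares one core loop: roll out a mixture of expert and learner, have the expert relabel every visited state, aggregate the labels into one growing dataset, and retrain. A minimal sketch of that loop under assumed toy interfaces (`expert`, `learner_fit`, `rollout` are hypothetical, and this is plain DAgger, not the paper's DAggerFM variant):

    ```python
    import random

    def dagger(expert, learner_fit, rollout, iterations=5, beta0=1.0):
        """Plain DAgger sketch: expert-mixed rollouts, expert relabeling,
        dataset aggregation, and retraining at each iteration."""
        dataset = []
        policy = expert  # the first rollout follows the expert
        for i in range(iterations):
            beta = beta0 * (0.5 ** i)  # decaying probability of expert control
            mixed = lambda s: expert(s) if random.random() < beta else policy(s)
            states = rollout(mixed)                         # states visited this iteration
            dataset += [(s, expert(s)) for s in states]     # expert relabels every state
            policy = learner_fit(dataset)                   # retrain on aggregated data
        return policy
    ```

    DAggerFM, per the abstract, modifies this scheme to improve training speed and memory use at the cost of the tabular convergence guarantees the aggregation step normally provides.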

    Single assembly robot in search of human partner: Versatile grounded language generation

    We describe an approach for enabling robots to recover from failures by asking for help from a human partner. For example, if a robot fails to grasp a needed part during a furniture assembly task, it might ask a human partner to “Please hand me the white table leg near you.” After receiving the part from the human, the robot can recover from its grasp failure and continue the task autonomously. This paper describes an approach for enabling a robot to automatically generate a targeted natural language request for help from a human partner. The robot generates a natural language description of its need by minimizing the entropy of the command with respect to its model of language understanding for the human partner, a novel approach to grounded language generation. Our long-term goal is to compare targeted requests for help to more open-ended requests where the robot simply asks “Help me,” demonstrating that targeted requests are more easily understood by human partners.
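    The entropy-minimization idea above can be sketched compactly: among candidate descriptions, pick the one whose predicted grounding distribution (under the robot's model of the listener) is least ambiguous, i.e., has the lowest entropy. A minimal sketch under assumed interfaces (`listener_model` is a hypothetical stand-in for the paper's language-understanding model):

    ```python
    import math

    def entropy(probs):
        """Shannon entropy of a discrete distribution, in nats."""
        return -sum(p * math.log(p) for p in probs if p > 0)

    def best_request(candidates, listener_model):
        # listener_model(desc) returns the distribution over objects the
        # human is predicted to ground the description to; the least
        # ambiguous description minimizes that distribution's entropy.
        return min(candidates, key=lambda d: entropy(listener_model(d)))
    ```

    A description like “the white table leg near you” concentrates the listener's grounding distribution on one object, so it scores a lower entropy than a generic “the leg” and would be selected.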