
    Impact of Human Communication in a Multi-teacher, Multi-robot Learning by Demonstration System

    No full text
    A wide range of architectures has been proposed within the areas of learning by demonstration and multi-robot coordination. These areas share a common issue: how humans and robots exchange information and knowledge. This paper analyses the impact of communication between human teachers during simultaneous demonstration of task execution in the novel Multi-robot Learning by Demonstration domain, using the MRLbD architecture. Performance is analysed in terms of time to task completion, as well as the quality of the multi-robot joint action plans. Participants with different skill levels taught real robots solutions to a furniture-moving task through teleoperation. The experimental results provide evidence that explicit communication between teachers does not necessarily reduce the time to complete a task, but contributes to the synchronisation of manoeuvres, thus enhancing the quality of the joint action plans generated by the MRLbD architecture.

    A Framework for Learning by Demonstration in Multi-teacher Multi-robot Scenarios

    No full text
    As robots become more accessible to humans, more intuitive and human-friendly ways of programming them with interactive and group-aware behaviours are needed. This thesis addresses the gap between learning by demonstration and multi-robot systems. In particular, it tackles the fundamental problem of learning multi-robot cooperative behaviour from concurrent multi-teacher demonstrations, a problem that had not been addressed prior to this work. The core contribution of this thesis is the design and implementation of a novel, multi-layered framework for multi-robot learning from simultaneous demonstrations, capable of deriving control policies at two levels of abstraction. The lower level learns models of joint actions at the trajectory level, adapting such models to new scenarios via feature mapping. The higher level extracts the structure of cooperative tasks at the symbolic level, generating a sequence of robot actions composing multi-robot plans. To the best of the author's knowledge, the proposed framework is the first learning by demonstration system to enable multiple human demonstrators to simultaneously teach group behaviour to multiple robot learners. A series of experimental tests was conducted using real robots in a real human workspace environment. The results obtained from a comprehensive comparison confirm the applicability of the joint-action model adaptation method utilised. Moreover, the results of several trials provide evidence that the proposed framework effectively extracts reasonable multi-robot plans from demonstrations. In addition, a case study of the impact of human communication when using the proposed framework was conducted, finding no evidence that communication affects the time to completion of a task, though it may have a positive effect on the extraction of multi-robot plans. Furthermore, a multifaceted user study was conducted to analyse user workload and focus of attention, as well as to evaluate the usability of the teleoperation system, highlighting which parts needed improvement.
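
    The thesis itself is not included in this listing, but to make the higher, symbolic level concrete, here is a minimal sketch, under assumptions of my own rather than the thesis implementation, of how a multi-robot plan composed of joint actions might be represented; every class name and action label below is hypothetical.

```python
# Hypothetical sketch of a symbolic multi-robot plan: an ordered sequence of
# joint actions, each mapping robot ids to primitive action labels. This is
# an illustration only, not the thesis's actual data model.
from dataclasses import dataclass, field

@dataclass
class JointAction:
    """One symbolic step executed cooperatively by several robots."""
    name: str                      # e.g. "move_furniture"
    robot_actions: dict[str, str]  # robot id -> primitive action label

@dataclass
class MultiRobotPlan:
    """Higher-level output: an ordered sequence of joint actions."""
    steps: list[JointAction] = field(default_factory=list)

    def add_step(self, name: str, robot_actions: dict[str, str]) -> None:
        self.steps.append(JointAction(name, robot_actions))

plan = MultiRobotPlan()
plan.add_step("approach", {"r1": "navigate_to_table", "r2": "navigate_to_table"})
plan.add_step("move_furniture", {"r1": "push_left_edge", "r2": "push_right_edge"})
for i, step in enumerate(plan.steps):
    print(i, step.name, step.robot_actions)
```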

    Advancing Robot Autonomy for Long-Horizon Tasks

    Full text link
    Autonomous robots have real-world applications in diverse fields, such as mobile manipulation and environmental exploration, and many such tasks benefit from a hands-off approach in terms of human user involvement over a long task horizon. However, the level of autonomy achievable by a deployment is limited in part by the problem definition or task specification required by the system. Task specifications often require technical, low-level information that is unintuitive to describe and may result in generic solutions, burdening the user technically both before and after task completion. In this thesis, we aim to advance task specification abstraction toward the goal of increasing robot autonomy in real-world scenarios. We do so by tackling problems that address several different angles of this goal. First, we develop a way to automatically discover optimal transition points between subtasks in the context of constrained mobile manipulation, removing the need for the human to hand-specify these in the task specification. We further propose a way to automatically describe constraints on robot motion by using demonstrated data as opposed to manually defined constraints. Then, within the context of environmental exploration, we propose a flexible task specification framework requiring from the user just a set of quantiles of interest, which allows the robot to directly suggest locations in the environment for the user to study. We next systematically study the effect of including a robot team in the task specification and show that multi-robot teams can improve performance under certain specification conditions, including enabling inter-robot communication. Finally, we propose methods for a communication protocol that autonomously selects useful but limited information to share with the other robots.

    Comment: PhD dissertation, 160 pages.
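
    As an illustration of the quantile-based specification idea (a sketch under my own assumptions; the dissertation's actual interface is not shown in this abstract), the user supplies only a set of quantiles of some environmental quantity, and the robot suggests measured locations whose values best match those quantiles:

```python
# Minimal sketch, assumptions mine: the entire task specification is a list of
# quantiles of interest over a scalar environmental quantity; the robot then
# proposes the measured locations closest to each estimated quantile value.
import numpy as np

rng = np.random.default_rng(0)
locations = rng.uniform(0, 100, size=(200, 2))   # candidate (x, y) points
values = rng.gamma(2.0, 3.0, size=200)           # e.g. measured concentration

quantiles_of_interest = [0.5, 0.9, 0.99]         # the whole task specification
targets = np.quantile(values, quantiles_of_interest)

# For each quantile, suggest the location whose measurement is closest to it.
for q, t in zip(quantiles_of_interest, targets):
    idx = int(np.argmin(np.abs(values - t)))
    print(f"q={q}: value~{values[idx]:.2f} at location {locations[idx]}")
```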

    Towards Rapid Multi-robot Learning from Demonstration at the RoboCup Competition

    Full text link
    We describe our previous and current efforts towards achieving an unusual personal RoboCup goal: to train a full team of robots directly through demonstration, on the field of play at the RoboCup venue, how to collaboratively play soccer, and then use this trained team in the competition itself. Using our method, HiTAB, we can train teams of collaborative agents via demonstration to perform nontrivial joint behaviors in the form of hierarchical finite-state automata. We discuss HiTAB, our previous efforts in using it at RoboCup 2011 and 2012, recent experimental work, and our current efforts for 2014, then suggest a new RoboCup Technical Challenge problem in learning from demonstration. Imagine that you are at an unfamiliar disaster site with a team of robots and are faced with a previously unseen task for them to do. The robots have only rudimentary but useful utility behaviors implemented. You are not a programmer. Without coding them, you have only a few hours to get your robots doing useful collaborative work in this new environment. How would you do this?
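
    The abstract does not detail HiTAB's internals, but the general shape of a hierarchical finite-state automaton over behaviors can be sketched as follows; the transition function here is a hand-written stub standing in for whatever model is trained from demonstrations, and all behavior names are invented:

```python
# Generic sketch of a hierarchical finite-state automaton (HFA): states are
# basic behaviors (or nested HFAs), and a transition function, in practice
# learned from demonstrations, selects the next state from observations.

class Behavior:
    """Leaf behavior: runs one primitive action per tick."""
    def __init__(self, name, action):
        self.name, self.action = name, action
    def tick(self, obs):
        self.action(obs)

class HFA:
    """An automaton whose states are behaviors or nested HFAs."""
    def __init__(self, states, transition, start):
        self.states = states          # name -> Behavior | HFA
        self.transition = transition  # (current name, obs) -> next name
        self.current = start
    def tick(self, obs):
        self.current = self.transition(self.current, obs)
        self.states[self.current].tick(obs)

# Stub standing in for a learned transition model (not HiTAB's learner).
def transition(state, obs):
    if obs["ball_visible"] and state == "search":
        return "approach"
    if obs["ball_close"] and state == "approach":
        return "kick"
    return state

states = {
    "search": Behavior("search", lambda o: print("searching")),
    "approach": Behavior("approach", lambda o: print("approaching")),
    "kick": Behavior("kick", lambda o: print("kicking")),
}
fsa = HFA(states, transition, "search")
for obs in [{"ball_visible": False, "ball_close": False},
            {"ball_visible": True, "ball_close": False},
            {"ball_visible": True, "ball_close": True}]:
    fsa.tick(obs)
```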

    Spatial representation for planning and executing robot behaviors in complex environments

    Get PDF
    Robots are already improving our well-being and productivity in different applications such as industry, health care and indoor service applications. However, we are still far from developing (and releasing) a fully functional robotic agent that can autonomously survive in tasks that require human-level cognitive capabilities. Robotic systems on the market, in fact, are designed to address specific applications, and can only run pre-defined behaviors to robustly repeat a few tasks (e.g., assembling object parts, vacuum cleaning). Their internal representation of the world is usually constrained to the task they are performing, and does not allow for generalization to other scenarios. Unfortunately, such a paradigm applies only to a very limited set of domains, where the environment can be assumed to be static and its dynamics can be handled before deployment. Additionally, robots configured in this way will eventually fail if their "handcrafted" representation of the environment does not match the external world. Hence, to enable more sophisticated cognitive skills, we investigate how to design robots to properly represent the environment and behave accordingly. To this end, we formalize a representation of the environment that enhances the robot's spatial knowledge to explicitly include a representation of its own actions. Spatial knowledge constitutes the core of the robot's understanding of the environment, but it is not sufficient to represent what the robot is capable of doing in it. To overcome this limitation, we formalize SK4R, a spatial knowledge representation for robots which enhances spatial knowledge with a novel, "functional" point of view that explicitly models robot actions. To this end, we exploit the concept of affordances, introduced to express opportunities (actions) that objects offer to an agent. To encode affordances within SK4R, we define the "affordance semantics" of actions, which is used to annotate an environment and to represent to what extent robot actions support goal-oriented behaviors. We demonstrate the benefits of a functional representation of the environment in multiple robotic scenarios that traverse and contribute to different research topics: robot knowledge representations, social robotics, multi-robot systems, and robot learning and planning. We show how a domain-specific representation that explicitly encodes affordance semantics provides the robot with a more concrete understanding of the environment and of the effects that its actions have on it. The goal of our work is to design an agent that will no longer execute an action out of mere pre-defined routine; rather, it will execute an action because it "knows" that the resulting state leads one step closer to success in its task.
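
    As a rough illustration of annotating spatial knowledge with affordance semantics (my own simplification, not the SK4R formalization), a map region can record which actions it supports and to what extent:

```python
# Illustrative sketch: each region of the map is annotated with the actions it
# affords and a support score in [0, 1]. Region and action names are invented.

affordance_map = {
    "kitchen/counter": {"place_object": 0.9, "grasp_object": 0.8},
    "corridor":        {"navigate": 1.0},
    "charging_dock":   {"dock": 0.95, "navigate": 0.7},
}

def regions_affording(action, threshold=0.5):
    """Return regions where `action` is supported above `threshold`."""
    return [r for r, affs in affordance_map.items()
            if affs.get(action, 0.0) >= threshold]

print(regions_affording("navigate"))  # ['corridor', 'charging_dock']
```

    A planner built on such a structure can then prefer goal locations whose affordance scores support the next action in the task, which is the sense in which the representation is "functional" rather than purely geometric.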

    Human-robot Interaction For Multi-robot Systems

    Get PDF
    Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary through increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best-performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
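
    The two-out-of-three expertise rule stated above is simple enough to sketch directly; the axis names come from the abstract, while the threshold values and scores below are placeholders of mine, not the dissertation's calibration data:

```python
# Sketch of the expertise rule from the abstract: a participant is rated
# expert when they score above threshold on at least two of the three axes.
# Threshold values are illustrative placeholders.

THRESHOLDS = {"navigation": 0.7, "manipulation": 0.7, "coordination": 0.7}

def is_expert(scores: dict) -> bool:
    above = sum(scores[axis] > THRESHOLDS[axis] for axis in THRESHOLDS)
    return above >= 2

print(is_expert({"navigation": 0.9, "manipulation": 0.4, "coordination": 0.8}))  # True
print(is_expert({"navigation": 0.6, "manipulation": 0.5, "coordination": 0.8}))  # False
```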

    Learning Dynamic Priority Scheduling Policies with Graph Attention Networks

    Get PDF
    The aim of this thesis is to develop novel graph attention network-based models to automatically learn scheduling policies for effectively solving resource optimization problems, covering both deterministic and stochastic environments. The policy learning methods utilize imitation learning when expert demonstrations are accessible at low cost, and reinforcement learning otherwise, when reward engineering is feasible. By parameterizing the learner with graph attention networks, the framework is computationally efficient and results in scalable resource optimization schedulers that adapt to various problem structures. This thesis addresses the problem of multi-robot task allocation (MRTA) under temporospatial constraints. Initially, robots with deterministic and homogeneous task performance are considered with the development of the RoboGNN scheduler. Then, I develop ScheduleNet, a novel heterogeneous graph attention network model, to efficiently reason about coordinating teams of heterogeneous robots. Next, I address problems under the more challenging stochastic setting in two parts. Part 1) Scheduling with stochastic and dynamic task completion times: the MRTA problem is extended by introducing human coworkers with dynamic learning curves and stochastic task execution. HybridNet, a hybrid network structure that utilizes a heterogeneous graph-based encoder and a recurrent schedule propagator, is developed to carry out fast schedule generation in multi-round settings. Part 2) Scheduling with stochastic and dynamic task arrival and completion times: with an application in failure-predictive plane maintenance, I develop a heterogeneous graph-based policy optimization (HetGPO) approach to enable learning robust scheduling policies in highly stochastic environments. Through extensive experiments, the proposed framework has been shown to outperform prior state-of-the-art algorithms in different applications. My research contributes several key innovations in designing graph-based learning algorithms for operations research.
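
    None of the architectures named above is specified in this abstract, so the following is only a generic single-head graph-attention sketch in PyTorch of the kind of building block such schedulers rest on, scoring robot-task pairs; it is not RoboGNN, ScheduleNet, HybridNet, or HetGPO:

```python
# Generic single-head graph-attention sketch: project robot and task node
# features, score every robot-task edge, and normalise per robot so that the
# attention weights can be read as soft task priorities.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_robots, n_tasks, d = 3, 5, 8

robot_feats = torch.randn(n_robots, d)     # e.g. position, capability features
task_feats = torch.randn(n_tasks, d)       # e.g. location, deadline features

W = torch.nn.Linear(d, d, bias=False)      # shared node projection
a = torch.nn.Linear(2 * d, 1, bias=False)  # attention scorer over edge pairs

h_r, h_t = W(robot_feats), W(task_feats)
pairs = torch.cat([h_r.unsqueeze(1).expand(-1, n_tasks, -1),
                   h_t.unsqueeze(0).expand(n_robots, -1, -1)], dim=-1)
scores = F.leaky_relu(a(pairs).squeeze(-1))  # (n_robots, n_tasks)
attn = torch.softmax(scores, dim=1)          # soft task priorities per robot

# Naive greedy decode: each robot takes its highest-attention task.
print(attn.argmax(dim=1))
```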

    BWIBots: A platform for bridging the gap between AI and human–robot interaction research

    Get PDF
    Recent progress in both AI and robotics has enabled the development of general-purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially human–robot interaction for service robots. Called BWIBots, the robots were designed as part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then presents an overview of various research contributions that have enabled the BWIBots to better (a) execute action sequences to complete user requests, (b) efficiently ask questions to resolve user requests, (c) understand human commands given in natural language, and (d) understand human intention from afar. The article concludes with a look toward future research opportunities and applications enabled by the BWIBot platform.