
    Efficient and intuitive teaching of redundant robots in task and configuration space

    Emmerich C. Efficient and intuitive teaching of redundant robots in task and configuration space. Bielefeld: Universität Bielefeld; 2016. A major goal of current robotics research is to enable robots to become co-workers that learn from and collaborate with humans efficiently. This is of particular interest for small and medium-sized enterprises, where small batch sizes and frequent changes in production needs demand high flexibility in the manufacturing processes. A commonly adopted approach to accomplish this goal is the utilization of recently developed lightweight, compliant and kinematically redundant robot platforms in combination with state-of-the-art human-robot interfaces. However, the increased complexity of these robots is not well reflected in most interfaces, as the work at hand points out. Plain kinesthetic teaching, a typical attempt to enable lay users to program a robot by physically guiding it through a motion demonstration, not only imposes a high cognitive load on the tutor, particularly in the presence of strong environmental constraints. It also neglects the possible reuse of (task-independent) constraints on the redundancy resolution, as these have to be demonstrated repeatedly or modeled explicitly, which reduces the efficiency of these methods when targeted at non-expert users. In contrast, this thesis promotes a different view, investigating human-robot interaction schemes not only from the learner’s but also from the tutor’s perspective. A two-staged interaction structure is proposed that enables lay users to transfer their implicit knowledge about task and environmental constraints incrementally and independently of each other to the robot, and to reuse this knowledge by means of assisted programming controllers. In addition, a path planning approach is derived by properly exploiting the knowledge transfer, enabling autonomous navigation in a possibly confined workspace without any cameras or other external sensors. All derived concepts are implemented and evaluated thoroughly on a system prototype utilizing the 7-DoF KUKA Lightweight Robot IV. Results of a large user study conducted in the context of this thesis confirm that the staged interaction reduces the complexity of teaching redundant robots and show that teaching redundancy resolutions is feasible also for non-expert users. Utilizing properly tailored machine learning algorithms, the proposed approach is completely data-driven. Hence, apart from the required forward kinematic mapping of the manipulator, the entire approach is model-free, allowing the derived concepts to be implemented on a variety of currently available robot platforms.
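
    The thesis's assisted-programming controllers are not spelled out in the abstract, so the following is only a minimal sketch of how kinematic redundancy is classically exploited on a 7-DoF arm: a Jacobian-pseudoinverse task controller with a secondary posture objective projected into the task nullspace. All names and gains are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch (not the thesis's controllers): classical redundancy
# resolution for a 7-DoF arm via the Jacobian pseudoinverse. A secondary
# objective (here, staying close to a preferred posture q_pref) is projected
# into the nullspace of the task so it never disturbs the end-effector motion.
import numpy as np

def redundancy_resolution(J, dx, q, q_pref, k0=1.0):
    """J: 6x7 task Jacobian, dx: desired task-space velocity (6,),
    q, q_pref: current and preferred joint configurations (7,)."""
    J_pinv = np.linalg.pinv(J)                # 7x6 pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J       # nullspace projector (7x7)
    dq_secondary = -k0 * (q - q_pref)         # gradient of the posture objective
    return J_pinv @ dx + N @ dq_secondary     # joint velocity command
```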

    Path planning for a redundant robot manipulator using sparse demonstration data

    Seidel D. Path planning for a redundant robot manipulator using sparse demonstration data. Bielefeld: Bielefeld University; 2014. The ability to plan and execute movements to accomplish tasks is a fundamental requirement for all types of robots, whether in industrial or in research applications. This Master's thesis addresses path planning for redundant robot platforms. The research targets two major goals. The first is to bypass the need for an explicit representation of a robot's environment, which demands sophisticated computations as well as expert knowledge. This bypass allows considerably more flexible use of a robot, which can adapt its path planning data to an arbitrary new environment within minutes. The second goal is to provide a real-time capable path planning method that utilizes the advantages of redundant robot platforms and handles the increased complexity of such systems. These goals are achieved by introducing kinesthetic teaching into path planning, which has already proven to be a successful improvement for single-task methods dealing with redundancy resolution. The thesis proposes an approach utilizing a topological neural network algorithm to construct an internal representation of a robot's workspace based on input data obtained from physical guidance of the robot by a user. In order to create feasible and safe movements, information from both the configuration space of the robot and the task space is employed. The algorithm is extended by heuristics to improve its results for the intended scenario. This modified network construction algorithm constructs a navigation graph similar to classical approaches with explicit modeling. It can be processed by means of conventional search algorithms from graph theory to generate paths between two arbitrary points in the workspace.
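
    The abstract does not detail the topological network itself, so the sketch below only illustrates the general idea of turning demonstrated configurations into a navigation graph and planning over it with a conventional graph search; the distance threshold, the use of `networkx`, and the function names are assumptions, not the thesis's implementation.

```python
# Minimal sketch, not the thesis's topological network: build a roadmap graph
# by connecting demonstrated configurations that lie within a distance
# threshold, then plan with Dijkstra's algorithm from graph theory.
import numpy as np
import networkx as nx

def build_roadmap(samples, max_dist=0.5):
    """samples: (N, d) array of demonstrated joint configurations."""
    G = nx.Graph()
    for i, q in enumerate(samples):
        G.add_node(i, q=q)
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            d = float(np.linalg.norm(samples[i] - samples[j]))
            if d <= max_dist:
                G.add_edge(i, j, weight=d)
    return G

def plan(G, start, goal):
    # Shortest path over the demonstrated workspace representation.
    return nx.dijkstra_path(G, start, goal, weight="weight")
```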

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and with planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work have been tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.
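
    As a toy illustration of the scaled-autonomy idea described above (not the thesis's controllers), one can think of blending an operator command with an autonomous policy output, weighted by an autonomy level chosen from task and network conditions; the function below is a hypothetical sketch.

```python
# Hypothetical sketch of scaled autonomy: blend a human teleoperation command
# with an autonomous controller's command, weighted by alpha in [0, 1].
# alpha near 0 -> mostly human control; alpha near 1 -> mostly autonomous.
def blend_command(u_human, u_auto, alpha):
    alpha = min(max(alpha, 0.0), 1.0)
    return (1.0 - alpha) * u_human + alpha * u_auto
```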

    Interactive Imitation Learning in Robotics: A Survey

    Interactive Imitation Learning (IIL) is a branch of Imitation Learning (IL) in which human feedback is provided intermittently during robot execution, allowing online improvement of the robot's behavior. In recent years, IIL has increasingly started to carve out its own space as a promising data-driven alternative for solving complex robotic tasks. The advantages of IIL are its data efficiency, as the human feedback guides the robot directly towards an improved behavior, and its robustness, as the distribution mismatch between the teacher and learner trajectories is minimized by providing feedback directly over the learner's trajectories. Nevertheless, despite the opportunities that IIL presents, its terminology, structure, and applicability are neither clear nor unified in the literature, slowing down its development and, therefore, the research of innovative formulations and discoveries. In this article, we attempt to facilitate research in IIL and lower entry barriers for new practitioners by providing a survey of the field that unifies and structures it. In addition, we aim to raise awareness of its potential, of what has been accomplished, and of what research questions remain open. We organize the most relevant works in IIL in terms of human-robot interaction (i.e., types of feedback), interfaces (i.e., means of providing feedback), learning (i.e., models learned from feedback and function approximators), user experience (i.e., human perception of the learning process), applications, and benchmarks. Furthermore, we analyze similarities and differences between IIL and RL, providing a discussion on how the concepts of offline, online, off-policy, and on-policy learning should be transferred to IIL from the RL literature. We particularly focus on robotic applications in the real world and discuss their implications, limitations, and promising future areas of research.
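
    To make the intermittent-feedback setting concrete, here is a schematic, DAgger-style IIL loop; `env`, `policy`, and `human` are hypothetical interfaces used only for illustration and are not an API from the survey.

```python
# Schematic DAgger-style loop, shown only to illustrate intermittent human
# feedback on the learner's own trajectories; all interfaces are hypothetical.
def interactive_imitation_learning(env, policy, human, episodes=50):
    dataset = []
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            action = policy.act(obs)
            # The human intervenes only when the learner's behavior needs
            # correcting, so labels are collected on the learner's own
            # state distribution, reducing distribution mismatch.
            if human.wants_to_correct(obs, action):
                action = human.correct(obs)
            dataset.append((obs, action))
            obs, done = env.step(action)  # hypothetical env returns (obs, done)
        policy.fit(dataset)               # online improvement after each episode
    return policy
```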

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more of the manipulation tasks that are still done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks such as peg-in-hole there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, then observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
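
    As one concrete example of explicitly controlling contact force (a generic technique, not taken from the survey), a simple one-dimensional admittance law maps the force error into a motion correction:

```python
# Hedged illustration of one common way to regulate contact force: a 1-D
# admittance law that treats the force error as driving a virtual mass-damper.
def admittance_step(x, xd, f_measured, f_desired, M=1.0, D=20.0, dt=0.001):
    """x, xd: current position and velocity along one axis; returns (x, xd)."""
    f_err = f_measured - f_desired
    xdd = (f_err - D * xd) / M   # virtual mass-damper driven by the force error
    xd = xd + xdd * dt
    x = x + xd * dt
    return x, xd
```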

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, whether optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into seven categories, described next.

    Spatial and Temporal Learning in Robotic Pick-and-Place Domains via Demonstrations and Observations

    Traditional methods for Learning from Demonstration require users to train the robot through the entire process or to provide feedback throughout a given task. These previous methods have proved successful in a selection of robotic domains; however, many are limited by the ability of the user to effectively demonstrate the task. In many cases, noisy demonstrations or a failure to understand the underlying model prevent these methods from working with a wider range of non-expert users. My insight is that in many mobile pick-and-place domains, teaching is done at too fine-grained a level. In many such tasks, users are solely concerned with the end goal. This implies that the complexity and time associated with training and teaching robots through the entirety of the task are unnecessary. The robotic agent needs to know (1) a probable search location from which to retrieve the task's objects and (2) how to arrange the items to complete the task. This thesis develops new techniques for obtaining such data from high-level spatial and temporal observations and demonstrations, which can later be applied in new, unseen environments. This thesis makes the following contributions: (1) This work is built on a crowd robotics platform and, as such, we contribute the development of efficient data streaming techniques to further these capabilities. By doing so, users can more easily interact with robots on a number of platforms. (2) The presentation of new algorithms that can learn pick-and-place tasks from a large corpus of goal templates. My work contributes algorithms that produce a metric which ranks the appropriate frame of reference for each item based solely on spatial demonstrations. (3) An algorithm which can enhance the above templates with ordering constraints using coarse and noisy temporal information. Such a method eliminates the need for a user to explicitly specify such constraints and searches for an optimal ordering and placement of items. (4) A novel algorithm which is able to learn probable search locations of objects based solely on sparsely made temporal observations. For this, we introduce persistence models of objects customized to a user's environment.
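
    The abstract does not give the frame-of-reference metric itself, so the following is only a plausible sketch of the idea: express demonstrated item placements relative to each candidate frame and rank frames by how consistent the placements are. The function name and the variance-based score are assumptions.

```python
# Hedged sketch of one plausible frame-of-reference ranking (not necessarily
# the thesis's metric): a frame against which placements show low variance
# across demonstrations is likely the frame the item is placed relative to.
import numpy as np

def rank_reference_frames(placements_world, frame_poses):
    """placements_world: (N, 2) item positions from N demonstrations.
    frame_poses: dict frame_name -> (N, 2) positions of that frame per demo."""
    scores = {}
    for name, origins in frame_poses.items():
        relative = placements_world - origins          # item relative to frame
        scores[name] = float(np.var(relative, axis=0).sum())
    # Lower variance = more consistent = better candidate reference frame.
    return sorted(scores, key=scores.get)
```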

    Affordance-Based Human-Robot Interaction With Reinforcement Learning

    Planning precise manipulation in robotics to perform grasp- and release-related operations while interacting with humans is a challenging problem. Reinforcement learning (RL) has the potential to give robots this capability. In this paper, we propose an affordance-based human-robot interaction (HRI) framework, aiming to reduce the size of the action space, which would otherwise considerably impede the exploration efficiency of the agent. The framework is based on a new algorithm called Contextual Q-learning (CQL). We first show that the proposed algorithm trains in a reduced amount of time (2.7 seconds) and reaches an 84% success rate. This makes the robot’s learning efficient enough to observe the current scenario configuration and learn to solve it. Then, we empirically validate the framework for implementation in real-world HRI scenarios. During the HRI, the robot uses semantic information from the state and the optimal policy of the last training step to search for relevant changes in the environment that may trigger the generation of a new policy.
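
    The abstract does not specify CQL's update rule, so the sketch below only illustrates the general idea of combining tabular Q-learning with an affordance function that restricts the action set per state; all names and hyperparameters are assumptions, not the paper's algorithm.

```python
# Generic Q-learning with an affordance-restricted action set (illustrative
# only; not the paper's Contextual Q-learning). Q is any dict-like mapping
# (state, action) -> float with a 0.0 default, e.g. collections.defaultdict(float).
import random

def q_learning_step(Q, state, action, reward, next_state, affordances,
                    alpha=0.1, gamma=0.95):
    """affordances(state) returns only the actions applicable in that state,
    which shrinks the space the agent has to explore."""
    next_actions = affordances(next_state)
    best_next = max(Q[(next_state, a)] for a in next_actions) if next_actions else 0.0
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(Q, state, affordances, eps=0.1):
    # Explore among afforded actions with probability eps, otherwise exploit.
    actions = affordances(state)
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```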