
    Efficient and intuitive teaching of redundant robots in task and configuration space

    Emmerich C. Efficient and intuitive teaching of redundant robots in task and configuration space. Bielefeld: Universität Bielefeld; 2016. A major goal of current robotics research is to enable robots to become co-workers that learn from and collaborate with humans efficiently. This is of particular interest for small and medium-sized enterprises, where small batch sizes and frequent changes in production needs demand high flexibility in the manufacturing processes. A commonly adopted approach to accomplish this goal is the utilization of recently developed lightweight, compliant and kinematically redundant robot platforms in combination with state-of-the-art human-robot interfaces. However, the increased complexity of these robots is not well reflected in most interfaces, as the work at hand points out. Plain kinesthetic teaching, a typical attempt to enable lay users to program a robot by physically guiding it through a motion demonstration, not only imposes high cognitive load on the tutor, particularly in the presence of strong environmental constraints. It also neglects the possible reuse of (task-independent) constraints on the redundancy resolution, as these have to be demonstrated repeatedly or modeled explicitly, reducing the efficiency of these methods when targeted at non-expert users. In contrast, this thesis promotes a different view, investigating human-robot interaction schemes not only from the learner’s but also from the tutor’s perspective. A two-staged interaction structure is proposed that enables lay users to transfer their implicit knowledge about task and environmental constraints incrementally and independently of each other to the robot, and to reuse this knowledge by means of assisted programming controllers. In addition, a path planning approach is derived by properly exploiting the knowledge transfer, enabling autonomous navigation in a possibly confined workspace without any cameras or other external sensors. All derived concepts are implemented and evaluated thoroughly on a system prototype utilizing the 7-DoF KUKA Lightweight Robot IV. Results of a large user study conducted in the context of this thesis attest that the staged interaction reduces the complexity of teaching redundant robots, and show that teaching redundancy resolutions is feasible also for non-expert users. Utilizing properly tailored machine learning algorithms, the proposed approach is completely data-driven. Hence, apart from a required forward kinematic mapping of the manipulator, the entire approach is model-free, allowing the derived concepts to be implemented on a variety of currently available robot platforms.
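    The abstract gives no code, but the redundancy resolution it discusses is conventionally realized by projecting a secondary posture objective into the nullspace of the task Jacobian. A minimal Python sketch of that classical scheme follows; the `jacobian` function, the learned preferred posture `q_preferred` and the gain `k0` are illustrative assumptions, not the thesis's actual controller:

```python
import numpy as np

def redundancy_resolved_step(q, x_dot_task, jacobian, q_preferred, k0=1.0):
    """One velocity-control step for a kinematically redundant arm.

    Tracks the task-space velocity x_dot_task while the nullspace of
    the task Jacobian pulls the joints toward a preferred posture,
    e.g. one learned from the user's configuration-space teaching.
    """
    J = jacobian(q)                      # task Jacobian at q, shape (m, n), m < n
    J_pinv = np.linalg.pinv(J)           # Moore-Penrose pseudo-inverse
    N = np.eye(len(q)) - J_pinv @ J      # projector onto the Jacobian's nullspace
    q_dot_secondary = k0 * (q_preferred - q)   # secondary objective: posture attraction
    return J_pinv @ x_dot_task + N @ q_dot_secondary
```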

    Optimizing Programming by Demonstration for in-contact task models by Incremental Learning

    Despite the increasing usage of robots for industrial applications, many aspects prevent robots from being used in daily life. One of these aspects is that extensive knowledge in programming a robot is necessary to make the robot achieve a desired task. Conventional robot programming is complex, time-consuming and expensive, as every aspect of a task has to be considered. Novel, intuitive and easy-to-use methods to program robots are necessary to facilitate their usage in daily life. This thesis proposes an approach that allows a novice user to program a robot by demonstration and provides assistance to incrementally refine the trained skill. The user utilizes kinesthetic teaching to provide an initial demonstration to the robot. Based on the information extracted from this demonstration, the robot starts executing the demonstrated task. The assistance system allows the user to train the robot during the execution and thus refine the model of the task. Experiments with a KUKA LWR4+ industrial robot evaluate the performance of the assistance system and its advantages over unassisted approaches. Furthermore, a user study is performed to evaluate the interaction between a novice user and the robot.
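    As a rough sketch of the incremental-refinement idea described above (not the thesis's actual task model), a stored reference trajectory can be blended with a kinesthetic correction recorded during execution; the blending weight `alpha` is an assumed learning rate:

```python
import numpy as np

def refine_model(reference, correction, alpha=0.3):
    """Blend a user correction into the stored task model.

    reference, correction: arrays of shape (T, d), the current model
    and the corrected execution sampled at the same time steps.
    alpha = 0 keeps the old model; alpha = 1 adopts the correction.
    """
    reference = np.asarray(reference, dtype=float)
    correction = np.asarray(correction, dtype=float)
    return (1.0 - alpha) * reference + alpha * correction
```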

    A user study on personalized stiffness control and task specificity in physical Human-Robot Interaction

    Gopinathan S, Ötting SK, Steil JJ. A user study on personalized stiffness control and task specificity in physical Human-Robot Interaction. Frontiers in Robotics and AI. 2017;4:58. An ideal physical human–robot interaction (pHRI) should offer users robotic systems that are easy to handle, intuitive to use, ergonomic and adaptive to human habits and preferences. But the variance in user behavior is often high and rather unpredictable, which hinders the development of such systems. This article introduces a Personalized Adaptive Stiffness controller for pHRI that is calibrated to the user’s force profile and validates its performance in an extensive user study with 49 participants on two different tasks. The user study compares the new scheme to conventional fixed-stiffness and gravity compensation controllers on the 7-DOF KUKA LWR IVb by employing two typical joint-manipulation tasks. The results clearly point out the importance of considering task-specific and human-specific parameters while designing control modes for pHRI. The analysis shows that for simpler tasks a standard fixed controller may perform sufficiently well, and that the respective task dependency strongly prevails over individual differences. In the more complex task, quantitative and qualitative results reveal differences between the respective control modes, where the Personalized Adaptive Stiffness controller excels in terms of both performance gain and user preference. Further analysis shows that human and task parameters can be combined and quantified by considering the manipulability of a simplified human arm model. The analysis of users’ interaction force profiles confirms this finding.
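    The article's controller is calibrated to each user's force profile; a heavily simplified sketch of that idea follows (the linear mapping and the stiffness bounds are assumptions for illustration, not the published control law):

```python
import numpy as np

def personalized_stiffness(f_meas, f_max_user, k_min=200.0, k_max=1200.0):
    """Map the measured guiding force to a stiffness value.

    f_meas     : magnitude of the current interaction force (N)
    f_max_user : maximum force recorded in the user's calibration phase
    Forces that are large relative to this user's calibrated range
    lower the stiffness, making the robot easier to guide.
    """
    ratio = np.clip(f_meas / f_max_user, 0.0, 1.0)
    return k_max - (k_max - k_min) * ratio
```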

    Personalization and Adaptation in Physical Human-Robot Interaction

    Gopinathan S. Personalization and Adaptation in Physical Human-Robot Interaction. Bielefeld: Universität Bielefeld; 2019. Recent advancements in physical human-robot interaction (pHRI) make it possible for compliant robots to assist their human counterparts while closely working together. An ideal control mode designed for pHRI should be easy to handle, intuitive to use, ergonomic and adaptive to human habits and preferences. The major stumbling block in achieving this is that each user has varying physical capabilities and characteristics. This variance in user behavior and other features is often high and rather unpredictable, which hinders the development of such systems. To tackle this problem, the idea of personalized adaptive stiffness control for pHRI is introduced in this thesis. Extensive user studies are conducted in the scope of this thesis, and various control modes for pHRI are proposed and evaluated using appropriate user studies. Both naive and expert users were considered in the user studies, and inferences from each study were used to improve the control mode to be better suited for pHRI. The thesis follows a meticulous research plan: an initial user study confirms the importance of pHRI and kinesthetic guidance in industrial tasks. Subsequently, user interactive force based adaptation is proposed, and a second user study is conducted where it is compared with standard control modes for pHRI. The importance of task-specific parameters and the need for combining task and human factors emerged from the results of the second user study. In the next phase, manipulability-based approaches which combine both task and human parameters are proposed and validated by conducting a third user study. In the final phase, a fourth user study is conducted where the proposed control modes are compared against more complex methods that have been proposed in the literature. The importance of human physical factors and the need for human-centered systems for pHRI is validated in this thesis. The results show that including these human factors not only improves performance but also improves the interaction quality and reduces the complexity of pHRI.

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in Robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses such a topic and its application to an unexplored field, namely learning force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions. The first issue this thesis tackles is to extract the input information that is relevant for learning the task at hand, which is also known as the “what to imitate?” problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A Mutual Information analysis is used for selecting the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task. Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance, sequential information, uncertainty, constraints, etc. This issue is the next problem addressed in this thesis. Here, a probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of such a framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. Therefore, the resulting learning structure is able to robustly encode and reproduce different manipulation tasks. Afterwards, this thesis goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information for encoding the data compactly, and that it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing such a system is estimated, allowing the robot to shape its compliance.
    This approach permits extending the learning paradigm to fields beyond common trajectory following. The proposed frameworks are tested in three scenarios, namely, (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results evidence the importance of using force perceptions as well as the usefulness and strengths of the methods.
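    The “what to imitate?” step above selects inputs by their Mutual Information with the robot's motion. A minimal sketch of that selection with scikit-learn follows (synthetic stand-in data; the dissertation's own estimator and sensory channels may differ):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # stand-in for logged sensory channels
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)  # one motion dimension

mi = mutual_info_regression(X, y)    # MI estimate per input channel
relevant = np.argsort(mi)[::-1][:2]  # keep the most informative channels
print("selected channels:", relevant)
```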

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work have been tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with. Firstly, the separation of robot and human workspaces imposes an additional financial burden. Secondly, re-programming costs are significant when products change, especially in Small and Medium-sized Enterprises (SMEs). Therefore, there is a significant need to reduce the programming effort required to enable robots to perform various tasks while sharing the same space with a human operator. Hence, the robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use their various senses, such as vision, smell and taste, to perform tasks. One sense that plays a significant role in human activity is ‘touch’ or ‘force’. For example, holding a cup of tea, or making fine adjustments while inserting a key, requires haptic information to achieve the task successfully. In all these examples, force and torque data are crucial for the successful completion of the activity. Also, this information implicitly conveys data about contact force, object stiffness, and many other properties. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to deal with force perceptions and then learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks using LfD, it is essential to extract task-relevant features from the force information. Then, knowledge must be extracted and encoded from the task-relevant features so that the captured skills can be reproduced in a new scenario. In this thesis, these elements of LfD were achieved using different approaches based on the demonstrated task. Four robotics problems were addressed using the LfD framework. The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second challenge was the recognition of the Contact State (CS) during assembly tasks. To tackle this challenge, a symbol-based approach was proposed, in which the force/torque signals of a demonstrated assembly task were encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task based on LfD. In this case, an ensemble machine learning approach was proposed to capture such a skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation of the parts. Hence, a new learning approach based on Artificial Potential Fields (APF) was proposed to learn a Peg-in-Hole (PiH) assembly task, which includes non-contact and contact phases. To sum up, this thesis focuses on the use of data-driven approaches to learning force-based tasks in an industrial context. Hence, different machine learning approaches were implemented, developed and evaluated in different scenarios, and their performance was compared with approaches based on mathematical modelling.
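    The APF-based PiH approach mentioned above rests on the standard attractive/repulsive potential fields. A generic Python sketch of the resulting velocity command follows; the gains, influence distance and point-obstacle representation are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

def apf_velocity(x, x_goal, obstacles, k_att=1.0, k_rep=0.05, rho0=0.02):
    """Velocity command from an artificial potential field.

    The attractive term pulls the peg tip x toward the hole x_goal;
    repulsive terms push it away from obstacle points (hole rim,
    contact surfaces) closer than the influence distance rho0.
    """
    v = -k_att * (x - x_goal)               # gradient of the attractive potential
    for o in obstacles:
        d = np.linalg.norm(x - o)
        if 1e-9 < d < rho0:                 # only nearby obstacles repel
            v += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (x - o) / d
    return v
```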

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more of the manipulation tasks that are still done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks earlier left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
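    One common “implicit” way of controlling contact force is admittance control, where a force error is turned into a motion reference for an inner position loop. A minimal 1-DoF sketch under assumed virtual dynamics parameters (not a scheme taken from the survey itself):

```python
def admittance_step(x, x_dot, f_meas, f_des, dt, M=1.0, D=50.0, K=0.0):
    """One integration step of a 1-DoF admittance controller.

    Simulates the virtual dynamics  M*x_ddot + D*x_dot + K*x = f_meas - f_des
    and returns the updated motion reference for an inner position loop,
    so the contact force is regulated implicitly through motion.
    """
    x_ddot = (f_meas - f_des - D * x_dot - K * x) / M
    x_dot = x_dot + x_ddot * dt
    x = x + x_dot * dt
    return x, x_dot
```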

    Robot manipulator skill learning and generalising through teleoperation

    Robot manipulators have been widely used for simple, repetitive and accurate tasks in industrial plants, such as pick-and-place, assembly and welding, but they are still hard to deploy in human-centred environments for dexterous manipulation tasks, such as medical examination and robot-assisted healthcare. These tasks are related not only to motion planning and control but also to the compliant interaction behaviour of robots, e.g. motion control, force regulation and impedance adaptation simultaneously under dynamic and unknown environments. Recently, with the development of collaborative robots (cobots) and machine learning, robot skill learning and generalising have attracted increasing attention from the robotics, machine learning and neuroscience communities. Nevertheless, learning complex and compliant manipulation skills, such as manipulating deformable objects, scanning the human body and folding clothes, is still challenging for robots. On the other hand, teleoperation, also known as remote operation or telerobotics, has been a research area since the 1950s, with applications such as space exploration, telemedicine, marine vehicles and emergency response. One of its advantages is combining the precise control of robots with human intelligence to perform dexterous and safety-critical tasks from a distance. In addition, telepresence allows remote operators to feel the actual interaction between the robot and the environment, including visual, auditory and haptic feedback. Especially with the development of augmented reality (AR), virtual reality (VR) and wearable devices, intuitive and immersive teleoperation has received increasing attention from the robotics and computer science communities. Thus, various human-robot collaboration (HRC) interfaces based on the above technologies were developed to integrate robot control and telemanipulation by human operators, so that robots can learn skills from human beings. In this context, robot skill learning can benefit teleoperation by automating repetitive and tedious tasks, while teleoperated demonstration and interaction by human teachers allow the robot to learn progressively and interactively. Therefore, in this dissertation, we study human-robot skill transfer and generalising through intuitive teleoperation interfaces for contact-rich manipulation tasks, including medical examination, manipulating deformable objects, grasping soft objects and composite layup in manufacturing. The introduction, motivation and objectives of this thesis are presented in Chapter 1. In Chapter 2, a literature review on manipulation skill acquisition through teleoperation is carried out, and the motivation and objectives of this thesis are discussed subsequently. Overall, the main contents of this thesis have three parts. Part 1 (Chapter 3) introduces the development and controller design of teleoperation systems with multimodal feedback, which is the foundation of this project for robot learning from human demonstration and interaction. In Part 2 (Chapters 4, 5, 6 and 7), we studied primitive skill library theory, a behaviour-tree-based modular method, and a perception-enhanced method to improve the generalisation capability of learning from human demonstrations, and several applications were employed to evaluate the effectiveness of these methods. In Part 3 (Chapter 8), we studied deep multimodal neural networks to encode manipulation skills, especially multimodal perception information; this part conducted physical experiments on robot-assisted ultrasound scanning applications. Chapter 9 summarises the contributions and potential directions of this thesis. Keywords: Learning from demonstration; Teleoperation; Multimodal interface; Human-in-the-loop; Compliant control; Human-robot interaction; Robot-assisted sonography.
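    The telepresence aspect discussed above is often realized with a position-position bilateral coupling, where the tracking mismatch is reflected back to the operator as haptic feedback. A minimal sketch (ideal channel, no delay compensation; gains are assumptions, not the thesis's controller):

```python
def bilateral_coupling(x_master, x_slave, v_master=0.0, v_slave=0.0,
                       k=300.0, d=5.0):
    """Position-position coupling for bilateral teleoperation.

    The follower is driven toward the operator's device by a virtual
    spring-damper; the same mismatch is reflected to the master, so
    contact on the remote side is felt by the operator.
    """
    f_slave = k * (x_master - x_slave) + d * (v_master - v_slave)
    f_master = -f_slave          # reflected haptic feedback
    return f_master, f_slave
```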

    Robotic Trajectory Tracking: Position- and Force-Control

    This thesis employs a bottom-up approach to develop robust and adaptive learning algorithms for trajectory tracking: position and torque control. In a first phase, the focus is on following a freeform surface in a discontinuous manner. In addition to the resulting switching constraints, disturbances and uncertainties, the case of unknown robot models is addressed. In a second phase, once contact has been established between surface and end effector and the freeform path is followed, a desired force is applied. In order to react to changing circumstances, the manipulator needs to show the features of an intelligent agent, i.e. it needs to learn and adapt its behaviour based on a combination of constant interaction with its environment and preprogrammed goals or preferences. The robotic manipulator mimics human behaviour based on bio-inspired algorithms. In this way, the know-how and experience of human operators is exploited, as their knowledge is translated into robot skills. A selection of promising concepts is explored, developed and combined to extend the application areas of robotic manipulators from monotonous, basic tasks in stiff environments to complex constrained processes. Conventional concepts (Sliding Mode Control, PID) are combined with bio-inspired learning (BELBIC, reinforcement-based learning) for robust and adaptive control. Independence of robot parameters is guaranteed through approximated robot functions using a Neural Network with online update laws and model-free algorithms. The performance of the concepts is evaluated through simulations and experiments. In complex freeform trajectory tracking applications, excellent absolute mean position errors (<0.3 rad) are achieved. Position and torque control are combined in a parallel concept with minimized absolute mean torque errors (<0.1 Nm).
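    As a generic illustration of the Sliding Mode Control ingredient named above (not the thesis's tuned controller), here is a single-joint tracking term with a boundary layer to soften chattering; all gains are assumed values:

```python
import numpy as np

def smc_torque(e, e_dot, lam=5.0, k=2.0, phi=0.05):
    """Sliding-mode tracking term for one joint.

    s = e_dot + lam * e defines the sliding surface; the control drives
    the tracking error toward s = 0. tanh(s / phi) replaces sign(s)
    (a boundary layer) to reduce chattering.
    """
    s = e_dot + lam * e
    return -k * np.tanh(s / phi)
```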