227 research outputs found

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user and to integrate the perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge from different conceptual spaces and to perform complex interactions with the human operator.

    Robot Learning from Human Demonstrations for Human-Robot Synergy

    Human-robot synergy enables new developments in industrial and assistive robotics research. In recent years, collaborative robots have become able to work together with humans to perform a task while sharing the same workplace. However, the teachability of robots is a crucial factor in establishing the role of robots as human teammates. Robots require certain abilities, such as easily learning diversified tasks and adapting to unpredicted events. The most feasible method that currently utilizes a human teammate to teach robots how to perform a task is Robot Learning from Demonstrations (RLfD). The goal of this method is to allow non-expert users to program a robot by simply guiding it through a task. The focus of this thesis is the development of a novel framework for Robot Learning from Demonstrations that enhances the robot's ability to learn and perform sequences of actions for object manipulation tasks (high-level learning) and, simultaneously, to learn and adapt the necessary trajectories for object manipulation (low-level learning). A method that automatically segments demonstrated tasks into sequences of actions is developed in this thesis. Subsequently, the generated sequences of actions are employed by a Reinforcement Learning (RL) from human demonstration approach to enable high-level robot learning. The low-level robot learning consists of a novel method that selects similar demonstrations (in case of multiple demonstrations of a task) and the Gaussian Mixture Model (GMM) method. The developed robot learning framework allows learning from single and multiple demonstrations. As soon as the robot has knowledge of a demonstrated task, it can perform the task in cooperation with the human. However, the need to adapt the learned knowledge may arise during human-robot synergy. Firstly, Interactive Reinforcement Learning (IRL) is employed as a decision support method to predict the sequence of actions in real time, to keep the human in the loop and to learn the user's preferences. Subsequently, a novel method that modifies the learned Gaussian Mixture Model (m-GMM) is developed in this thesis. This method allows the robot to cope with changes in the environment, such as objects placed in a pose different from the demonstrated one, or obstacles introduced by the human teammate. The modified Gaussian Mixture Model is further used by Gaussian Mixture Regression (GMR) to generate a trajectory that can efficiently control the robot. The developed framework for Robot Learning from Demonstrations was evaluated on two different robotic platforms: a dual-arm industrial robot and an assistive robotic manipulator. For both robotic platforms, small studies were performed for industrial and assistive manipulation tasks, respectively. Several Human-Robot Interaction (HRI) methods, such as kinesthetic teaching, a gamepad or "hands-free" control via head gestures, were used to provide the robot demonstrations. The "hands-free" HRI enables individuals with severe motor impairments to provide a demonstration of an assistive task. The experimental results demonstrate the potential of the developed robot learning framework to enable continuous human-robot synergy in industrial and assistive applications.
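
    The low-level pipeline summarised above (a GMM fitted over demonstrations, then GMR to regenerate a trajectory) follows a well-established recipe. Below is a minimal sketch of that baseline, assuming demonstrations stored as (time, position) arrays and using scikit-learn's GaussianMixture; the thesis's m-GMM modification and demonstration-selection step are not reproduced here.

```python
# Baseline GMM/GMR sketch; not the thesis's m-GMM variant.
import numpy as np
from sklearn.mixture import GaussianMixture

def gauss(t, mu, var):
    """Scalar Gaussian density N(t; mu, var)."""
    return np.exp(-0.5 * (t - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def fit_gmm(demos, n_components=5):
    """Fit a GMM over stacked (t, x) samples from all demonstrations."""
    data = np.vstack(demos)  # each demo: (T, 1 + dof) array, column 0 = time
    return GaussianMixture(n_components=n_components,
                           covariance_type="full", random_state=0).fit(data)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: blended conditional mean E[x | t]."""
    out = []
    for t in np.atleast_1d(t_query):
        # responsibility of each component for the input time t
        h = np.array([w * gauss(t, m[0], c[0, 0])
                      for w, m, c in zip(gmm.weights_, gmm.means_,
                                         gmm.covariances_)])
        h /= h.sum()
        # per-component conditional mean, blended by responsibility
        x = sum(hk * (m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]))
                for hk, m, c in zip(h, gmm.means_, gmm.covariances_))
        out.append(x)
    return np.array(out)

# usage: two noisy demonstrations of a 1-DoF reaching motion
t = np.linspace(0, 1, 100)[:, None]
demos = [np.hstack([t, np.sin(t) + 0.01 * np.random.randn(*t.shape)])
         for _ in range(2)]
trajectory = gmr(fit_gmm(demos), np.linspace(0, 1, 50))
```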

    Incremental motor skill learning and generalization from human dynamic reactions based on dynamic movement primitives and fuzzy logic system

    Different from previous work on single-skill learning from human demonstrations, an incremental motor skill learning, generalization and control method based on dynamic movement primitives (DMP) and broad learning system (BLS) is proposed for extracting both ordinary skills and instant reactive skills from demonstrations, the latter of which are usually generated to avoid a sudden danger (e.g., touching a hot cup). The method is completed in three steps. First, ordinary skills are learned from demonstrations in normal cases by using DMP. Then the incremental learning idea of BLS is combined with DMP to achieve multi-stylistic reactive skill learning, such that the forcing function of the ordinary skills is extended into multiple stylistic functions by adding enhancement terms and updating the weights of the radial basis function (RBF) kernels. Finally, electromyography (EMG) signals are collected from human muscles and processed to obtain stiffness factors. By using a fuzzy logic system (FLS), the two kinds of learned skills are integrated and generalized to new cases, such that not only the start, end and scaling factors but also the environmental conditions, robot reactive strategies and impedance control factors are generalized to produce various reactions. To verify the effectiveness of the proposed method, an obstacle avoidance experiment is conducted that enables robots to approach destinations flexibly in various situations with barriers.
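
    For orientation, the sketch below shows a minimal one-dimensional discrete DMP with an RBF-kernel forcing function, i.e. the structure the paper extends with enhancement terms. All parameter values are illustrative assumptions; fitting the weights to a demonstration, and the BLS extension itself, are omitted.

```python
# Minimal discrete DMP sketch; parameters are illustrative assumptions.
import numpy as np

class DMP:
    def __init__(self, n_rbf=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.alpha, self.beta, self.alpha_s = alpha, beta, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_rbf))  # RBF centres
        self.h = 1.0 / np.gradient(self.c) ** 2               # RBF widths
        self.w = np.zeros(n_rbf)  # forcing-function weights (to be learned)

    def _forcing(self, s):
        """Weighted RBF mixture, gated by the phase variable s."""
        psi = np.exp(-self.h * (s - self.c) ** 2)
        return s * (psi @ self.w) / (psi.sum() + 1e-10)

    def rollout(self, x0, g, tau=1.0, dt=0.01, T=1.0):
        """Integrate the transformation and canonical systems."""
        x, v, s, traj = x0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            f = self._forcing(s) * (g - x0)                  # scaled forcing
            a = self.alpha * (self.beta * (g - x) - v) + f   # spring-damper
            v += a * dt / tau
            x += v * dt / tau
            s -= self.alpha_s * s * dt / tau                 # canonical system
            traj.append(x)
        return np.array(traj)

# with zero weights the system simply converges from x0 to the goal g
path = DMP().rollout(x0=0.0, g=1.0)
```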

    Intuitive Instruction of Industrial Robots : A Knowledge-Based Approach

    With more advanced manufacturing technologies, small and medium-sized enterprises can compete with low-wage labor by providing customized and high-quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots. The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The two following papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.
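
    To make the representation concrete, the following sketch illustrates the general idea of a skill stored as constrained motions plus a semantic description, from which low-level statements are generated. The class and field names, and the generated syntax, are hypothetical; they do not reproduce the thesis's ontology or code generator.

```python
# Hypothetical skill record in the spirit of the knowledge-based approach.
from dataclasses import dataclass, field

@dataclass
class ConstrainedMotion:
    frame: str       # task frame the constraint is expressed in
    constraint: str  # e.g. "force_z <= 5.0" or "align(axis_z, hole_axis)"

@dataclass
class Skill:
    name: str
    description: str  # semantic annotation enabling reuse and reasoning
    motions: list = field(default_factory=list)

    def to_low_level(self) -> str:
        """Generate low-level controller statements from the skill."""
        return "\n".join(f"move({m.frame}, '{m.constraint}')"
                         for m in self.motions)

peg_in_hole = Skill(
    name="peg_in_hole",
    description="insert peg using force-controlled search",
    motions=[ConstrainedMotion("peg_tip", "align(axis_z, hole_axis)"),
             ConstrainedMotion("peg_tip", "force_z <= 5.0")],
)
print(peg_in_hole.to_low_level())
```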

    Robot manipulator skill learning and generalising through teleoperation

    Robot manipulators have been widely used for simple, repetitive and accurate tasks in industrial plants, such as pick-and-place, assembly and welding, but they are still hard to deploy in human-centred environments for dexterous manipulation tasks, such as medical examination and robot-assisted healthcare. These tasks involve not only motion planning and control but also the compliant interaction behaviour of robots, e.g. motion control, force regulation and impedance adaptation simultaneously under dynamic and unknown environments. Recently, with the development of collaborative robots (cobots) and machine learning, robot skill learning and generalising have attracted increasing attention from the robotics, machine learning and neuroscience communities. Nevertheless, learning complex and compliant manipulation skills, such as manipulating deformable objects, scanning the human body and folding clothes, is still challenging for robots. On the other hand, teleoperation, also known as remote operation or telerobotics, has been a research area since the 1950s, with applications such as space exploration, telemedicine, marine vehicles and emergency response. One of its advantages is that it combines the precise control of robots with human intelligence to perform dexterous and safety-critical tasks from a distance. In addition, telepresence allows remote operators to feel the actual interaction between the robot and the environment, including vision, sound and haptic feedback. Especially with the development of augmented reality (AR), virtual reality (VR) and wearable devices, intuitive and immersive teleoperation has received increasing attention from the robotics and computer science communities. Thus, various human-robot collaboration (HRC) interfaces based on the above technologies were developed to integrate robot control and telemanipulation by human operators for robot skill learning from humans. In this context, robot skill learning can benefit teleoperation by automating repetitive and tedious tasks, and teleoperated demonstration and interaction by human teachers also allow the robot to learn progressively and interactively. Therefore, in this dissertation, we study human-robot skill transfer and generalisation through intuitive teleoperation interfaces for contact-rich manipulation tasks, including medical examination, manipulating deformable objects, grasping soft objects and composite layup in manufacturing. The introduction, motivation and objectives of this thesis are presented in Chapter 1. In Chapter 2, a literature review on manipulation skill acquisition through teleoperation is carried out, and the motivation and objectives of this thesis are discussed. Overall, the main contents of this thesis have three parts. Part 1 (Chapter 3) introduces the development and controller design of teleoperation systems with multimodal feedback, which is the foundation of this project for robot learning from human demonstration and interaction. In Part 2 (Chapters 4, 5, 6 and 7), we study a primitive skill library, a behaviour-tree-based modular method and a perception-enhanced method to improve the generalisation capability of learning from human demonstrations; several applications are employed to evaluate the effectiveness of these methods. In Part 3 (Chapter 8), we study deep multimodal neural networks to encode manipulation skills, especially multimodal perception information; this part conducts physical experiments on robot-assisted ultrasound scanning applications. Chapter 9 summarises the contributions and potential directions of this thesis.
    Keywords: Learning from demonstration; Teleoperation; Multimodal interface; Human-in-the-loop; Compliant control; Human-robot interaction; Robot-assisted sonography
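
    As a concrete illustration of the behaviour-tree-based modular method mentioned for Part 2, the sketch below composes primitive skills with a fail-fast Sequence node. The primitives and their names are assumptions made for this sketch, not the thesis's skill library.

```python
# Hypothetical primitive skills composed by a behaviour-tree Sequence node.
from typing import Callable

class Sequence:
    """Tick children left to right; fail fast, succeed only if all succeed."""
    def __init__(self, *children: Callable[[], bool]):
        self.children = children

    def tick(self) -> bool:
        return all(child() for child in self.children)  # all() short-circuits

def approach() -> bool:
    print("approach object")
    return True

def grasp() -> bool:
    print("close gripper")
    return True

def retract() -> bool:
    print("retract arm")
    return True

pick = Sequence(approach, grasp, retract)
pick.tick()  # runs each primitive in order; stops at the first failure
```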

    Robot Learning from Demonstration in Robotic Assembly: A Survey

    Learning from demonstration (LfD) has been used to help robots perform manipulation tasks autonomously, in particular to learn manipulation behaviors by observing the motions executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is placed on how to demonstrate example behaviors to the robot in assembly operations, and how to extract manipulation features for robot learning and for generating imitative behaviors. Diverse metrics are analyzed to evaluate the performance of robot imitation learning. Specifically, the application of LfD in robotic assembly is a focal point of this paper.

    GRASP News, Volume 8, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory, edited by Thomas Lindsay.

    A Framework for Coordinated Control of Multi-Agent Systems

    Multi-agent systems represent a group of agents that cooperate to solve common tasks in a dynamic environment. Multi-agent control systems have been widely studied in the past few years. The control of multi-agent systems relates to synthesizing control schemes for systems that are inherently distributed and composed of multiple interacting entities. Because of the wide application of multi-agent theories in large and complex control systems, it is necessary to develop a framework that simplifies the process of developing control schemes for multi-agent systems. In this study, a framework is proposed for the distributed control and coordination of multi-agent systems. In the proposed framework, the control of multi-agent systems is regarded as achieving decentralized control and coordination of agents. Each agent is modeled as a Coordinated Hybrid Agent (CHA), which is composed of an intelligent coordination layer and a hybrid control layer. The intelligent coordination layer takes the coordination input, plant input and workspace input. After processing the coordination primitives, the intelligent coordination layer outputs the desired action to the hybrid control layer. In the proposed framework, we describe the coordination mechanism in a domain-independent way, as simple abstract primitives in a coordination rule base for certain dependency relationships between the activities of different agents. The intelligent coordination layer deals with the planning, coordination, decision-making and computation of the agent. The hybrid control layer of the proposed framework takes the output of the intelligent coordination layer and generates discrete and continuous control signals to control the overall process. In order to verify the feasibility of the proposed framework, experiments for both heterogeneous and homogeneous Multi-Agent Systems (MASs) are implemented. In addition, the stability of systems modeled using the proposed framework is analyzed, and conditions for asymptotic stability and exponential stability of a CHA system are given. In order to optimize a Multi-Agent System (MAS), a hybrid approach is proposed to address the optimization problem for a MAS modeled using the CHA framework. Both the event-driven dynamics and the time-driven dynamics are included in the formulation of the optimization problem. A generic formula is given for the optimization of the framework, and a direct identification algorithm is discussed to solve the optimization problem.
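
    A minimal sketch of the two-layer CHA structure described above follows: a coordination layer maps coordination, plant and workspace inputs to a desired action via a rule base, and a hybrid control layer turns that action into a discrete mode and a continuous signal. All names and the rule-base encoding are illustrative assumptions, not the framework's actual interfaces.

```python
# Illustrative two-layer CHA structure; names are assumptions.
class CoordinationLayer:
    def __init__(self, rule_base):
        self.rule_base = rule_base  # abstract coordination primitives

    def decide(self, coordination_in, plant_in, workspace_in):
        """Map the three inputs to a desired action (default: idle)."""
        return self.rule_base.get((coordination_in, plant_in, workspace_in),
                                  "idle")

class HybridControlLayer:
    def control(self, action):
        """Generate a discrete mode and a continuous control signal."""
        discrete = {"move": "MODE_TRACK", "idle": "MODE_HOLD"}[action]
        continuous = 1.0 if action == "move" else 0.0  # e.g. velocity setpoint
        return discrete, continuous

class CoordinatedHybridAgent:
    def __init__(self, rule_base):
        self.coord = CoordinationLayer(rule_base)
        self.ctrl = HybridControlLayer()

    def step(self, coordination_in, plant_in, workspace_in):
        action = self.coord.decide(coordination_in, plant_in, workspace_in)
        return self.ctrl.control(action)

agent = CoordinatedHybridAgent({("follow", "ok", "clear"): "move"})
print(agent.step("follow", "ok", "clear"))  # ('MODE_TRACK', 1.0)
```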

    Emergent coordination between humans and robots

    Emergent coordination or movement synchronization is an often-observed phenomenon in human behavior. Humans synchronize their gait when walking next to each other, they synchronize their postural sway when standing closely, and they also synchronize their movement behavior in many other situations of daily life. Why humans do this is an important question of ongoing research in many disciplines: apparently movement synchronization plays a role in children's development and learning; it is related to our social and emotional behavior in interaction with others; it is an underlying principle in the organization of communication by means of language and gesture; and finally, models explaining movement synchronization between two individuals can also be extended to group behavior. Overall, one can say that movement synchronization is an important principle of human interaction behavior. Besides interacting with other humans, in recent years humans increasingly interact with technology. This was first expressed in the interaction with machines in industrial settings, was taken further in human-computer interaction, and is now facing a new challenge: the interaction with active and autonomous machines, the interaction with robots. If the vision of today's robot developers comes true, in the near future robots will be fully integrated not only into our workplaces but also into our private lives. They are supposed to support humans in activities of daily living and even care for them. These circumstances, however, require the development of interactional principles which the robot can apply in direct interaction with humans. In this dissertation the problem of robots entering human society will be outlined and the need for the exploration of human interaction principles that are transferable to human-robot interaction will be emphasized. Furthermore, an overview of human movement synchronization as a very important phenomenon in human interaction will be given, ranging from neural correlates to social behavior. The argument of this dissertation is that human movement synchronization is a simple but striking human interaction principle that can be applied in human-robot interaction to support human activities of daily living, demonstrated on the example of pick-and-place tasks. This argument is based on five publications. In the first publication, human movement synchronization is explored in goal-directed tasks which bear requirements similar to pick-and-place tasks in activities of daily living. In order to explore whether a merely repetitive action of the robot is sufficient to encourage human movement synchronization, the second publication reports a human-robot interaction study in which a human interacts with a non-adaptive robot. Here, however, movement synchronization between human and robot does not emerge, which underlines the need for adaptive mechanisms. Therefore, in the third publication, human adaptive behavior in goal-directed movement synchronization is explored. In order to make the findings from the previous studies applicable to human-robot interaction, the fourth publication outlines the development of an interaction model based on dynamical systems theory which is ready for implementation on a robotic platform. Following this, a brief overview of a first human-robot interaction study based on the developed interaction model is provided.
The last publication describes an extension of the previous approach which also accounts for the human tendency to adapt movements to external events. Here, a first human-robot interaction study is also reported which confirms the applicability of the model. The dissertation concludes with a discussion of the presented findings in the light of human-robot interaction and psychological aspects of joint action research, as well as the problem of mutual adaptation.
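
    For intuition, the sketch below couples two phase oscillators (standing in for human and robot) so that mutual adaptation pulls them into synchrony. The Kuramoto-style coupling is a standard stand-in chosen here for illustration, not the dissertation's exact dynamical-systems model.

```python
# Two mutually coupled phase oscillators; illustrative model only.
import numpy as np

def simulate(w_h=1.0, w_r=1.1, k=0.5, dt=0.01, T=30.0):
    ph, pr = 0.0, np.pi  # initial phases: human, robot (anti-phase)
    for _ in range(int(T / dt)):
        dph = w_h + k * np.sin(pr - ph)  # human adapts to robot
        dpr = w_r + k * np.sin(ph - pr)  # robot adapts to human
        ph, pr = ph + dph * dt, pr + dpr * dt
    # residual phase difference, wrapped to (-pi, pi]
    return (pr - ph + np.pi) % (2 * np.pi) - np.pi

# with |w_r - w_h| < 2k the phases lock; here they settle near 0.10 rad
print(f"phase difference after 30 s: {simulate():.3f} rad")
```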