99 research outputs found

    AR-Enhanced Human-Robot-Interaction - Methodologies, Algorithms, Tools

    By using Augmented Reality in Human-Robot-Interaction scenarios it is possible to improve training, programming, maintenance and process monitoring. AR-Enhanced Human-Robot Interaction means that activities can be conducted not only in a training facility with physical robot(s) but also in a completely virtual environment. When virtual environments are used, only a computer and possibly a Head-Mounted Display are required. This reduces the bottlenecks associated with overbooked physical training facilities. A physical environment for activities with robot(s) will still be required; however, also using virtual environments increases flexibility, and the human operator can focus on training more complicated tasks. (C) 2016 The Authors. Published by Elsevier B.V. Partially funded by FP7 EU project LIAA (http://www.project-leanautomation.eu/).

    Survey: Robot Programming by Demonstration

    Robot PbD started about 30 years ago and has grown in importance during the past decade. The rationale for moving from purely preprogrammed robots to very flexible user-based interfaces for training the robot to perform a task is three-fold. First and foremost, PbD, also referred to as imitation learning, is a powerful mechanism for reducing the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution, either by starting the search from the observed good solution (a local optimum) or, conversely, by eliminating from the search space what is known to be a bad solution. Imitation learning is, thus, a powerful tool for enhancing and accelerating learning in both animals and artifacts. Second, imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated. Imitation learning is thus a "natural" means of interacting with a machine that would be accessible to lay people. And third, studying and modeling the coupling of perception and action, which is at the core of imitation learning, helps us to understand the mechanisms by which the self-organization of perception and action could arise during development. The reciprocal interaction of perception and action could explain how competence in motor control can be grounded in a rich structure of perceptual variables and, vice versa, how the processes of perception can develop as a means to create successful actions. The promises of PbD were thus multiple. On the one hand, one hoped that it would make learning faster, in contrast to tedious reinforcement learning or trial-and-error methods. On the other hand, one expected that the methods, being user-friendly, would enhance the application of robots in human daily environments. Recent progress in the field, which we review in this chapter, shows that the field has made a leap forward over the past decade toward these goals and that these promises may be fulfilled very soon.
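
    To make the first rationale concrete, the sketch below contrasts a search warm-started from a demonstrated solution with one started from a random guess, using a generic trajectory-cost minimization. This is a minimal illustration assuming a simple quadratic smoothness cost and a hypothetical demonstrated trajectory; it is not the survey's method, only an instance of seeding the search from an observed good solution.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical task: find a smooth 1-D trajectory of T waypoints that starts
# at 0, ends at 1, and minimizes squared accelerations (a stand-in for any
# costly-to-search skill parameterization).
T = 20

def cost(traj):
    traj = np.concatenate(([0.0], traj, [1.0]))   # clamp the endpoints
    acc = np.diff(traj, n=2)                      # discrete acceleration
    return float(np.sum(acc ** 2))

# A demonstrated (roughly good) solution: a slightly noisy linear interpolation.
demo = np.linspace(0.0, 1.0, T + 2)[1:-1] + 0.01 * np.random.randn(T)

# Warm start from the demonstration vs. a random initial guess.
warm = minimize(cost, demo, method="L-BFGS-B")
cold = minimize(cost, np.random.rand(T), method="L-BFGS-B")

print("iterations warm-started from demo:", warm.nit)
print("iterations from random start:     ", cold.nit)
```

    In this convex toy problem both runs converge; the point is only that the demonstration supplies the starting point of the search, which in realistic, non-convex skill spaces both speeds learning and biases it toward the demonstrated basin.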

    Imitation


    Robot skill learning through human demonstration and interaction

    Nowadays robots are increasingly involved in more complex and less structured tasks. Therefore, it is highly desirable to develop new approaches to fast robot skill acquisition. This research aims to develop an overall framework for robot skill learning through human demonstration and interaction. Through low-level demonstration and interaction with humans, the robot can learn basic skills. These basic skills are treated as primitive actions. In high-level learning, the complex skills demonstrated by the human can be automatically translated into skill scripts which are executed by the robot. This dissertation summarizes my major research activities in robot skill learning. First, a framework for Programming by Demonstration (PbD) with reinforcement learning for human-robot collaborative manipulation tasks is described. With this framework, the robot can learn low-level skills such as collaborating with a human to lift a table successfully and efficiently. Second, to develop a high-level skill acquisition system, we explore the use of a 3D sensor to recognize human actions. A Kinect-based action recognition system is implemented which considers both object/action dependencies and sequential constraints. Third, we extend the action recognition framework by fusing information from multimodal sensors, which makes it possible to recognize fine assembly actions. Fourth, a Portable Assembly Demonstration (PAD) system is built which can automatically generate skill scripts from human demonstration. Each skill script includes the object type, the tool, the action used, and the assembly state. Finally, the generated skill scripts are executed by a dual-arm robot. The proposed framework was experimentally evaluated.
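
    As a rough illustration of the kind of skill script described above, the sketch below defines a minimal record holding the object type, tool, action, and resulting assembly state, plus a trivial executor stub. The field names and the executor are assumptions for illustration, not the dissertation's actual PAD format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SkillStep:
    """One step of a skill script extracted from a human demonstration."""
    object_type: str      # e.g. "bracket"
    tool: str             # e.g. "screwdriver" or "bare_hand"
    action: str           # e.g. "pick", "insert", "fasten"
    assembly_state: str   # symbolic state reached after the step

def execute_script(script: List[SkillStep]) -> None:
    # Placeholder executor: a real system would dispatch each step to the
    # dual-arm robot's primitive actions learned at the low level.
    for i, step in enumerate(script, start=1):
        print(f"step {i}: {step.action} {step.object_type} "
              f"with {step.tool} -> state {step.assembly_state}")

demo_script = [
    SkillStep("bracket", "bare_hand", "pick", "bracket_in_hand"),
    SkillStep("bracket", "bare_hand", "place", "bracket_on_base"),
    SkillStep("screw", "screwdriver", "fasten", "bracket_fixed"),
]
execute_script(demo_script)
```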

    Nonprehensile Manipulation via Multisensory Learning from Demonstration

    The dexterous manipulation problem concerns the control of a robot hand so as to manipulate an object in a desired manner. While classical dexterous manipulation strategies are based on stable grasping (or force closure), many human-like manipulation tasks do not maintain grasp stability and often exploit the intrinsic dynamics of the object rather than a closed-form kinematic relation between the object and the robotic fingers. Such manipulation strategies are referred to as nonprehensile or dynamic dexterous manipulation in the literature. Nonprehensile manipulation typically involves fast and agile movements such as throwing and flipping. Due to the complexity of such motions (which may involve impulsive dynamics) and the uncertainties associated with them, it has been challenging to realize nonprehensile manipulation tasks in a reliable way. In this paper, we propose a new control strategy to realize practical nonprehensile manipulation tasks using a robot hand. The main idea of our control strategy is two-fold. First, we make explicit use of multiple modalities of sensory data in the design of the control law. Specifically, force data is employed for feedforward control while position data is used for feedback (i.e. reactive) control. Second, the control signals (both feedback and feedforward) are obtained from multisensory learning from demonstration (LfD) experiments which are designed and performed for the specific nonprehensile manipulation tasks of concern. We utilize various LfD frameworks, such as Gaussian mixture model and Gaussian mixture regression (GMM/GMR) and hidden Markov model and GMR (HMM/GMR), to reproduce generalized motion profiles from the human expert's demonstrations. The proposed control strategy has been verified by experimental results on a dynamic spinning task using a sensory-rich two-finger robotic hand. The control performance (i.e. the speed and accuracy of the spinning task) has also been compared with that of classical dexterous manipulation based on finger gaiting.
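
    For readers unfamiliar with GMM/GMR, the sketch below shows the standard regression step: a Gaussian mixture is fit over joint (time, position) data from several demonstrations, and the conditional expectation of position given time is used as the reproduced motion profile. This is a generic illustration built on scikit-learn, under the assumption of a 1-D time input and a 1-D output; it is not the paper's exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for demonstrations: noisy copies of a reference profile.
t = np.tile(np.linspace(0, 1, 100), 5)
x = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
data = np.column_stack([t, x])                 # joint (input, output) samples

gmm = GaussianMixture(n_components=5, covariance_type="full").fit(data)

def gmr(t_query: float) -> float:
    """Gaussian mixture regression: E[x | t] under the fitted joint GMM."""
    mu_t, mu_x = gmm.means_[:, 0], gmm.means_[:, 1]
    s_tt = gmm.covariances_[:, 0, 0]
    s_xt = gmm.covariances_[:, 1, 0]
    # Responsibility of each component for the query input.
    lik = np.exp(-0.5 * (t_query - mu_t) ** 2 / s_tt) / np.sqrt(2 * np.pi * s_tt)
    h = gmm.weights_ * lik
    h /= h.sum()
    # Component-wise conditional means, blended by responsibility.
    cond = mu_x + s_xt / s_tt * (t_query - mu_t)
    return float(np.dot(h, cond))

reproduced = [gmr(tq) for tq in np.linspace(0, 1, 20)]
print(np.round(reproduced, 2))
```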

    Metrics to Evaluate Human Teaching Engagement From a Robot's Point of View

    This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that can maximize the benefit to the robot in learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see whether a representative example of research laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality in an attempt to establish measurement metrics that can be transferred as rules or algorithms that are beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study then reviewed further literature on the detailed elements of engagement and immediacy. The study also explored physical effort as a possible metric for measuring the level of engagement of the teachers. An investigatory experiment was conducted to evaluate which modality participants prefer when teaching a robot that can be taught using voice, gesture demonstration, or physical manipulation. The findings from this experiment suggested that the participants appeared to have no preference in terms of human effort for completing the task. However, there was a significant difference in human enjoyment preferences of input modality and a marginal difference in the robot's perceived ability to imitate. A main experiment was conducted to study the detailed elements that might be used by a robot in identifying a "good" teacher. The main experiment was conducted in two sub-experiments: the first part recorded the teacher's activities, and the second part analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results from the main experiment suggested that in human teaching of a robot (human-robot interaction), humans (the evaluators) also look for some of the immediacy cues that occur in human-human interaction when evaluating engagement.

    Collection and Conceptualization of Plan-Based Robotic Activity Experiences for Long-Term Skill Improvement

    Robot learning is a prominent research direction in intelligent robotics. Robotics involves dealing with the issue of integrating multiple technologies, such as sensing, planning, acting, and learning. In robot learning, the long-term goal is to develop robots that learn to perform tasks and continuously improve their knowledge and skills through observation and exploration of the environment and interaction with users. While significant research has been performed in the area of learning motor behavior primitives, the topic of learning high-level representations of activities, and of classes of activities that decompose into sequences of actions, has not been sufficiently addressed. Learning at the task level is key to increasing the robots' autonomy and flexibility. High-level task knowledge is essential for intelligent robotics since it makes robot programs less dependent on the platform and eases knowledge exchange between robots with different kinematics. The goal of this thesis is to contribute to the development of cognitive robotic capabilities, including supervised experience acquisition through human-robot interaction, high-level task learning from the acquired experiences, and task planning using the acquired task knowledge. A framework containing the required cognitive functions for learning and reproduction of high-level aspects of experiences is proposed. In particular, we propose and formalize the notion of Experience-Based Planning Domains (EBPDs) for long-term learning and planning. A human-robot interaction interface is used to provide a robot with step-by-step instructions on how to perform tasks. Approaches to recording plan-based robot activity experiences, including relevant perceptions of the environment and actions taken by the robot, are presented. A conceptualization methodology is presented for acquiring task knowledge in the form of activity schemata from experiences. The conceptualization approach is a combination of different techniques, including deductive generalization, different forms of abstraction, and feature extraction; it also includes loop detection, scope inference, and goal inference. Problem solving in EBPDs is achieved using a two-layer problem solver comprising an abstract planner, which derives an abstract solution for a given task problem by applying a learned activity schema, and a concrete planner, which refines the abstract solution towards a concrete solution. The architecture and the learning and planning methods are applied and evaluated in several real and simulated world scenarios. Finally, the developed learning methods are compared, and the conditions under which each of them has better applicability are discussed.
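
    A very small sketch of the two-layer idea described above is given below: an abstract planner instantiates a learned activity schema into a sequence of abstract steps, and a concrete planner refines each step into executable actions. All names (the schema contents and the refinement table) are hypothetical illustrations, not the thesis's EBPD formalization.

```python
from typing import Dict, List

# A learned activity schema: an abstract step sequence for a class of tasks.
clear_table_schema: List[str] = [
    "approach(obj)", "grasp(obj)", "transport(obj, bin)", "release(obj)",
]

# Hypothetical refinement rules mapping abstract steps to concrete actions.
refinements: Dict[str, List[str]] = {
    "approach(obj)": ["move_base_near(obj)", "move_arm_pregrasp(obj)"],
    "grasp(obj)": ["open_gripper", "move_arm_grasp(obj)", "close_gripper"],
    "transport(obj, bin)": ["lift_arm", "move_base_near(bin)"],
    "release(obj)": ["open_gripper", "retract_arm"],
}

def abstract_plan(schema: List[str], obj: str) -> List[str]:
    """Abstract planner: instantiate the learned schema for a concrete object."""
    return [step.replace("obj", obj) for step in schema]

def concrete_plan(abstract_steps: List[str], obj: str) -> List[str]:
    """Concrete planner: refine each abstract step into executable actions."""
    plan: List[str] = []
    for step in abstract_steps:
        template = step.replace(obj, "obj")
        plan.extend(a.replace("obj", obj) for a in refinements[template])
    return plan

steps = abstract_plan(clear_table_schema, "cup1")
print(concrete_plan(steps, "cup1"))
```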

    Machine learning for improving heuristic optimisation

    Heuristics, metaheuristics and hyper-heuristics are search methodologies which have been preferred by many researchers and practitioners for solving computationally hard combinatorial optimisation problems, whenever exact methods fail to produce high quality solutions in a reasonable amount of time. In this thesis, we introduce an advanced machine learning technique, namely tensor analysis, into the field of heuristic optimisation. We show how the relevant data should be collected in tensorial form, analysed and used during the search process. Four case studies are presented to illustrate the capability of single- and multi-episode tensor analysis, processing data at high and low abstraction levels, for improving heuristic optimisation. A single-episode tensor analysis using data at a high abstraction level is employed to improve an iterated multi-stage hyper-heuristic for cross-domain heuristic search. The empirical results across six different problem domains from a hyper-heuristic benchmark show that significant overall performance improvement is possible. A similar approach embedding a multi-episode tensor analysis is applied to the nurse rostering problem and evaluated on a benchmark of a diverse collection of instances obtained from different hospitals across the world. The empirical results indicate the success of the tensor-based hyper-heuristic, improving upon the best-known solutions for four particular instances. The genetic algorithm is a nature-inspired metaheuristic which uses a population of multiple interacting solutions during the search. Mutation is the key variation operator in a genetic algorithm and adjusts the diversity in a population throughout the evolutionary process. Often, a fixed mutation probability is used to perturb the value at each locus, where a locus represents a unique component of a given solution. A single-episode tensor analysis using data with a low abstraction level is applied to an online bin packing problem, generating locus-dependent mutation probabilities. The tensor approach significantly improves the performance of a standard genetic algorithm on almost all instances. A multi-episode tensor analysis using data with a low abstraction level is embedded into a multi-agent cooperative search approach. The empirical results once again show the success of the proposed approach on a benchmark of flow shop problem instances, as compared to the approach which does not make use of tensor analysis. Tensor analysis can handle data at different levels of abstraction, leading to a learning approach which can be used within different types of heuristic optimisation methods based on different underlying design philosophies, improving their overall performance.
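
    To illustrate the locus-dependent mutation idea mentioned above, the sketch below mutates a bit-string solution using a per-locus probability vector instead of a single fixed rate. The probability values here are arbitrary placeholders; in the thesis they would come from the tensor analysis of search data, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_fixed(bits: np.ndarray, p: float) -> np.ndarray:
    """Standard GA mutation: every locus flips with the same probability p."""
    flips = rng.random(bits.size) < p
    return np.where(flips, 1 - bits, bits)

def mutate_per_locus(bits: np.ndarray, p_locus: np.ndarray) -> np.ndarray:
    """Locus-dependent mutation: locus i flips with probability p_locus[i]."""
    flips = rng.random(bits.size) < p_locus
    return np.where(flips, 1 - bits, bits)

solution = rng.integers(0, 2, size=16)

# Placeholder per-locus rates (e.g. higher mutation on loci judged less settled).
p_locus = np.full(16, 0.02)
p_locus[4:8] = 0.2

print("original       :", solution)
print("fixed rate     :", mutate_fixed(solution, 0.05))
print("locus-dependent:", mutate_per_locus(solution, p_locus))
```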

    A Biosymtic (Biosymbiotic Robotic) Approach to Human Development and Evolution. The Echo of the Universe.

    In the present work we demonstrate that the current Child-Computer Interaction paradigm is not potentiating human development to its fullest: it is associated with several physical and mental health problems and appears not to be maximizing children's cognitive performance and cognitive development. In order to potentiate children's physical and mental health (including cognitive performance and cognitive development), we have developed a new approach to human development and evolution. This approach proposes a particular synergy between the developing human body, computing machines and natural environments. It emphasizes that children should be encouraged to interact with challenging physical environments offering multiple possibilities for sensory stimulation and increased physical and mental stress to the organism. We created and tested a new set of computing devices in order to operationalize our approach: Biosymtic (Biosymbiotic Robotic) devices "Albert" and "Cratus". In two initial studies we were able to observe that the main goal of our approach is being achieved. We observed that interaction with the Biosymtic device "Albert" in a natural environment triggered a different neurophysiological response (increases in sustained attention levels) and tended to optimize episodic memory performance in children, compared to interaction with a sedentary screen-based computing device in an artificially controlled environment (indoors), making it a promising solution to promote cognitive performance and development; and that interaction with the Biosymtic device "Cratus" in a natural environment instilled vigorous physical activity levels in children, making it a promising solution to promote physical and mental health.