
    A robot learning from demonstration framework to perform force-based manipulation tasks

    This paper proposes an end-to-end learning-from-demonstration framework for teaching force-based manipulation tasks to robots. The strengths of this work are manifold. First, we deal with the problem of learning through force perceptions exclusively. Second, we propose to exploit haptic feedback both as a means of improving teacher demonstrations and as a human–robot interaction tool, establishing a bidirectional communication channel between the teacher and the robot, in contrast to works using kinesthetic teaching. Third, we address the well-known "what to imitate?" problem from a different point of view, based on the mutual information between perceptions and actions. Lastly, the teacher's demonstrations are encoded using a Hidden Markov Model, and the robot execution phase is implemented with a modified version of Gaussian Mixture Regression that uses implicit temporal information from the probabilistic model, needed when tackling tasks with ambiguous perceptions. Experimental results show that the robot is able to learn and reproduce two different manipulation tasks, with a performance comparable to the teacher's.
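
    The execution phase described above, Gaussian Mixture Regression over a learned probabilistic model, can be sketched as follows. This is a minimal illustration under the assumption of a joint Gaussian model over [perception, action] pairs; in the paper the component weights would come from the HMM's temporal information rather than the static priors used here, and all numbers below are toy values.

```python
import numpy as np

def gmr(x, weights, means, covs, in_dim):
    """Blend the conditional means of each Gaussian, weighted by
    how well each component explains the input x."""
    preds, resp = [], []
    for w, mu, S in zip(weights, means, covs):
        mu_i, mu_o = mu[:in_dim], mu[in_dim:]
        S_ii, S_oi = S[:in_dim, :in_dim], S[in_dim:, :in_dim]
        # Conditional mean of the action given the perception x
        preds.append(mu_o + S_oi @ np.linalg.solve(S_ii, x - mu_i))
        # Responsibility: prior weight times input likelihood
        diff = x - mu_i
        lik = np.exp(-0.5 * diff @ np.linalg.solve(S_ii, diff)) \
              / np.sqrt(np.linalg.det(2 * np.pi * S_ii))
        resp.append(w * lik)
    resp = np.array(resp) / np.sum(resp)
    return sum(r * p for r, p in zip(resp, preds))

# Two toy components: near perception x=0 the action is 0, near x=2 it is 1.
means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
covs = [np.eye(2) * 0.1, np.eye(2) * 0.1]
action = gmr(np.array([2.0]), [0.5, 0.5], means, covs, in_dim=1)
```

    Querying at x = 2 yields an action near 1, since the second component dominates the responsibilities there.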

    Robot manipulation in human environments: Challenges for learning algorithms

    Summary of the work presented at the Dagstuhl Seminar 2014, held in Dagstuhl (Germany) on 17–21 February 2014. Supported by the European projects PACO-PLUS, GARNICS and IntellAct, the Spanish projects PAU and PAU+, and the Catalan grant SGR-155.

    From the Turing test to science fiction: the challenges of social robotics

    The Turing test (1950) sought to distinguish whether a speaker engaged in a computer conversation was a human or a machine [6]. Science fiction has immortalized several humanoid robots full of humanity, and it nowadays speculates about the roles the human being and the machine may play in this "pas de deux" in which we are inescapably engaged [12]. Where is current robotics research heading? Industrial robots are giving way to social robots designed to aid in healthcare, education, entertainment and services. In the near future, robots will assist disabled and elderly people, do chores, act as playmates for youngsters and adults, and even work as nannies and reinforcement teachers. This places new requirements on robotics research, since social robots must be easy to program by non-experts [10], intrinsically safe [3], able to perceive and manipulate deformable objects [2, 8], tolerant to inaccurate perceptions and actions [4, 7] and, above all, endowed with a strong learning capacity [1, 9] and a high adaptability [14] to non-predefined and dynamic environments. Taking as an example projects developed at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC), some of the scientific, technological and ethical challenges [5, 11, 13] that this robotic evolution entails will be showcased.

    Software tools for the cognitive development of autonomous robots

    Robotic systems are evolving towards higher degrees of autonomy. This paper reviews the cognitive tools available to robots nowadays for pursuing abstract or long-term goals, as well as for learning and modifying their behaviour.

    Confluència de ciència i ficció en la robòtica actual

    Industrial robots and the humanoids of science fiction, so different until now, are beginning to converge thanks to the rapid development of assistive and service robotics. Robots are being designed to interact with people, whether caring for the disabled and the elderly, working as receptionists or shop assistants in commercial centres, or acting as reinforcement teachers or nannies. These so-called social robots raise a broader range of ethical questions than their industrial predecessors. In this context, the robotics research community has drawn closer to the humanities and, in particular, has taken an interest in the moral dilemmas often posed in works of science fiction. After a brief review of the current state of robotics research, the methodological difficulties of scientifically predicting the evolution of a technological society will be outlined, and some novels and films dealing with human–robot interaction and its possible influence on human thought, relationships and feelings will be highlighted.

    Service robots for citizens of the future

    Robots are no longer confined to factories; they are progressively spreading to urban, social and assistive domains. In order to become handy co-workers and helpful assistants, they must be endowed with quite different abilities from their industrial ancestors. Research on service robots aims to make them intrinsically safe to people, easy to teach by non-experts, able to manipulate not only rigid but also deformable objects, and highly adaptable to non-predefined and dynamic environments. Robots worldwide will share object and environmental models, their acquired knowledge and experiences through global databases and, together with the internet of things, will strongly change the citizens’ way of life in so-called smart cities. This raises a number of social and ethical issues that are now being debated not only within the Robotics community but by society at large.

    Symbolic-based recognition of contact states for learning assembly skills

    Imitation learning is gaining attention because it enables robots to learn skills from human demonstrations. One of the major industrial activities that can benefit from imitation learning is the learning of new assembly processes. An essential characteristic of an assembly skill is its different contact states (CSs), which determine how movements must be adjusted to perform the assembly task successfully. Humans recognise CSs through haptic feedback and execute complex assembly tasks accordingly. Hence, CSs are generally recognised using force and torque information. This process is not straightforward due to variations in assembly tasks, signal noise and ambiguity in interpreting force/torque (F/T) information. In this research, an investigation was conducted to recognise the CSs during an assembly process with a geometrical variation on the mating parts. The F/T data collected from several human trials were pre-processed, segmented and represented as symbols. These symbols were used to train a probabilistic model, which was then validated on unseen datasets. The proposed approach aims to improve recognition accuracy and reduce computational effort by employing symbolic and probabilistic methods. The model successfully recognised CSs based only on force information, showing that such models can assist in imitation learning.
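
    The segment-symbolize-classify pipeline described above can be sketched roughly as follows. This is an illustrative assumption of how such a pipeline might look (SAX-style quantile binning and a per-state symbol-frequency model), not the authors' implementation; the bin edges, segment count and contact-state names are all toy values.

```python
import numpy as np

def symbolize(signal, n_segments=8, bins=(-0.5, 0.5)):
    """Piecewise-aggregate a force trace and quantize each segment mean
    into a discrete symbol ('a', 'b' or 'c')."""
    segments = np.array_split(np.asarray(signal), n_segments)
    return ''.join('abc'[np.searchsorted(bins, seg.mean())]
                   for seg in segments)

def fit_model(sequences):
    """Per-contact-state unigram symbol frequencies."""
    counts = {}
    for seq in sequences:
        for s in seq:
            counts[s] = counts.get(s, 0) + 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def score(symbols, model):
    """Log-likelihood of a symbol string under a frequency model."""
    return sum(np.log(model.get(s, 1e-6)) for s in symbols)

# Toy training symbol strings for two contact states.
train = {'free': ['aaaabbbb', 'aaaaabbb'], 'contact': ['bbcccccc', 'bccccccc']}
models = {state: fit_model(seqs) for state, seqs in train.items()}

# Classify a new trace: low force, then a sustained contact force.
query = symbolize(np.concatenate([np.zeros(40), np.full(40, 1.0)]))
best = max(models, key=lambda state: score(query, models[state]))
```

    Segmentation smooths the signal noise mentioned in the abstract, and the discrete symbols keep the probabilistic model small and cheap to evaluate.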

    Robot Learning From Human Observation Using Deep Neural Networks

    Industrial robots have gained traction over the last twenty years and have become an integral component of every sector embracing automation. Specifically, the automotive industry deploys a wide range of industrial robots across a multitude of assembly lines worldwide. These robots perform tasks with the utmost repeatability and incomparable speed. It is that speed and consistency that has always made the robotic task an upgrade over the same task completed by a human. The cost savings are a strong return on investment, leading corporations to automate and deploy robotic solutions wherever feasible. The cost of commissioning and set-up is the largest deterring factor in any decision regarding robotics and automation. Currently, robots are programmed by robotic technicians in a manual process within a well-structured environment. This thesis examines the option of eliminating the programming and commissioning portion of robotic integration. If the environment is dynamic and can undergo various iterations of parts, changes in lighting, and changes in part placement in the cell, then the robot will struggle to function because it cannot adapt to these variables. If cameras are introduced to capture the operator’s motions and part variability, then Learning from Demonstration (LfD) can be implemented to address this prevalent issue in today’s automotive industry. With assistance from machine learning algorithms, deep neural networks and transfer learning, LfD can thrive and become a viable solution. A robotic cell was developed that can learn from demonstration. The proposed approach is based on computer vision to observe human actions and deep learning to perceive the demonstrator’s actions and the manipulated objects.
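
    The vision-to-action idea above can be illustrated in miniature: a single convolution-and-ReLU stage extracts a feature map from a camera frame, and a simple spatial readout maps it to a discrete robot action. This is a hedged sketch with hand-picked weights and hypothetical action names (`reach_left`, `reach_right`); the thesis uses trained deep networks and transfer learning rather than anything this small.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution followed by a ReLU, as in one CNN stage."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def policy(frame, kernel):
    """Pick an action by comparing feature mass in each image half."""
    feat = conv2d(frame, kernel)
    half = feat.shape[1] // 2
    left_mass, right_mass = feat[:, :half].sum(), feat[:, half:].sum()
    return 'reach_left' if left_mass > right_mass else 'reach_right'

# Toy 8x8 frames with a bright object on the left or on the right.
left = np.zeros((8, 8));  left[3:5, 1:3] = 1.0
right = np.zeros((8, 8)); right[3:5, 5:7] = 1.0
kernel = np.ones((2, 2))  # crude brightness detector
```

    Calling `policy(left, kernel)` reaches left and `policy(right, kernel)` reaches right; a real cell would replace the hand-built kernel and readout with learned layers.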

    Deep learning based approaches for imitation learning.

    Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities, as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy over situations unseen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and offer no information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. Firstly, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Secondly, we investigate combining learning from demonstrations and reinforcement learning; a deep reward shaping method is proposed that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks addresses the dependency between the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning and reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
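
    The active data-aggregation contribution can be sketched in a few lines: the learner acts on its own, but whenever its confidence (here, the gap between its two highest action scores) falls below a threshold, it queries the demonstrator and stores the labelled state for retraining. The toy score table, the `left`/`right` actions and the threshold are illustrative assumptions, not the thesis implementation.

```python
def action_scores(policy, state):
    """Scores for each candidate action in a given state (0 if unseen)."""
    return {a: policy.get((state, a), 0.0) for a in ('left', 'right')}

def collect_queries(policy, expert, states, threshold=0.2):
    """Visit states; ask the demonstrator wherever the policy is unsure."""
    dataset = []
    for s in states:
        ranked = sorted(action_scores(policy, s).values(), reverse=True)
        confidence = ranked[0] - ranked[1]   # margin between top two actions
        if confidence < threshold:           # low confidence: query the expert
            dataset.append((s, expert(s)))
    return dataset

# Toy demonstrator: go left for negative states, right otherwise.
expert = lambda s: 'left' if s < 0 else 'right'
# The policy is confident only about state -1.
policy = {(-1, 'left'): 0.9, (-1, 'right'): 0.1}
queries = collect_queries(policy, expert, states=[-1, 0, 2])
```

    Only the unfamiliar states trigger demonstrator queries, so labelling effort concentrates exactly where the policy generalizes worst.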