207 research outputs found

    Incremental Learning of Object Models From Natural Human-Robot Interactions

    In order to perform complex tasks in realistic human environments, robots need to be able to learn new concepts in the wild, incrementally, and through their interactions with humans. This article presents an end-to-end pipeline to learn object models incrementally during human-robot interaction (HRI). The pipeline we propose consists of three parts: 1) recognizing the interaction type; 2) detecting the object that the interaction is targeting; and 3) incrementally learning object models from data recorded by the robot's sensors. Our main contributions lie in the target object detection, guided by the recognized interaction, and in the incremental object learning. The novelty of our approach is the focus on natural, heterogeneous, and multimodal HRIs to incrementally learn new object models. Throughout the article, we highlight the main challenges associated with this problem, such as a high degree of occlusion and clutter, domain change, low-resolution data, and interaction ambiguity. This article shows the benefits of using multiview approaches and of combining visual and language features, and our experimental results outperform standard baselines.
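    The three-stage pipeline described in this abstract can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the class names, method names, and the dictionary-of-models representation are all assumptions made for exposition.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    """A hypothetical incremental object model: accumulates feature samples."""
    label: str
    samples: list = field(default_factory=list)

    def update(self, features):
        # Incremental update: fold a new observation into the model
        # without retraining from scratch.
        self.samples.append(features)

class IncrementalLearner:
    """Sketch of the three pipeline stages named in the abstract."""

    def __init__(self):
        self.models = {}  # label -> ObjectModel

    def recognize_interaction(self, frame):
        # Stage 1: classify the interaction type (e.g. pointing, showing).
        ...

    def detect_target(self, frame, interaction):
        # Stage 2: use the recognized interaction to localize the target object.
        ...

    def learn(self, label, features):
        # Stage 3: create the model on first sight, then update it incrementally.
        model = self.models.setdefault(label, ObjectModel(label))
        model.update(features)
        return model

# Usage: two observations of the same object grow one model incrementally.
learner = IncrementalLearner()
learner.learn("mug", [0.1, 0.4])
learner.learn("mug", [0.2, 0.3])
```

    The key property of the sketch is in `learn`: a previously unseen label creates a new model, while a known label only updates the existing one, so learning never requires revisiting earlier classes.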

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    Learning from human-robot interaction

    In recent years it has become increasingly common to see robots in homes. Robotics is ever more present in many aspects of our daily lives: home-assistance devices, autonomous cars, and personal assistants. The interaction between these assistant robots and their users is one of the key aspects of service robotics. This interaction needs to be comfortable and intuitive for its use to be effective. These interactions with users are necessary for the robot to naturally learn and update both its model of the world and its capabilities. Within service robotic systems, many components are needed for proper operation. This thesis focuses on the visual perception system of such systems. For humans, visual perception is one of the most essential components, enabling tasks such as recognizing objects or other people, or estimating 3D information. The great achievements of recent years in automatic recognition tasks rely on machine-learning approaches, in particular deep learning techniques. Most current work focuses on models trained a priori on very large datasets. However, these models, although trained on large amounts of data, cannot in general cope with the challenges that arise when dealing with real data in domestic environments. For example, it is common to encounter new objects that did not exist when the models were trained. Another challenge comes from the sparsity of objects: some objects appear very rarely, so there were very few, or no, examples in the training data available when the model was created. This thesis was developed within the context of the IGLU (Interactive Grounded Language Understanding) project.
    Within the project and its objectives, the main goal of this doctoral thesis is to investigate novel methods for a robot to learn incrementally through multimodal interaction with the user. In pursuit of this main goal, the principal works developed during this thesis were:
    - Creating a benchmark better suited to the task of learning through natural user-robot interaction. For example, most datasets for the object recognition task focus on photos of different scenes with multiple classes per photo. A dataset that combines user-robot interaction with object learning is needed.
    - Improving existing object-learning systems and adapting them for learning from multimodal human interaction. Object detection work focuses on detecting all learned objects in an image; our goal is to use the interaction to find the referred object and learn it incrementally.
    - Developing incremental learning methods that can be used in incremental scenarios, e.g., the appearance of a new object class or changes within an object class over time. Our goal is to design a system that can learn classes from scratch and update them as new data appears.
    - Building a complete prototype for incremental and multimodal learning using human-robot interaction, integrating the methods developed as part of the other objectives and evaluating it.

    Automatic extraction of constraints in manipulation tasks for autonomy and interaction

    Tasks routinely executed by humans involve sequences of actions performed with high dexterity and coordination. Fully specifying these actions such that a robot could replicate the task is often difficult. Furthermore, the uncertainties introduced by the use of different tools or changing configurations demand that the specification be generic while emphasizing the important task aspects, i.e. the constraints. The first challenge of this thesis is therefore inferring these constraints from repeated demonstrations. In addition, humans explaining a task to another person rely on that person's ability to apprehend missing or implicit information. Observations therefore contain user-specific cues alongside knowledge of performing the task. Thus, our second challenge is correlating the task constraints with the user's behavior to improve the robot's performance. We address these challenges using a Programming by Demonstration framework. In the first part of the thesis we describe an approach for decomposing demonstrations into actions and extracting task-space constraints as continuous features that apply throughout each action. The constraints consist of: (1) the reference frame for performing manipulation, (2) the variables of interest relative to this frame, allowing a decomposition into force and position control, and (3) a stiffness gain modulating the contribution of force and position. We then extend this approach to asymmetrical bimanual tasks by extracting features that enable arm coordination: the master-slave role that establishes precedence, and the motion-motion or force-motion coordination that facilitates physical interaction through an object. The set of constraints and the time-independent encoding of each action form a task prototype, used to execute the task.
    In the second part of the thesis we focus on discovering additional features implicit in the demonstrations with respect to two aspects of the teaching interactions: (1) characterizing the user's performance and (2) improving the user's behavior. For the first goal we assess the skill of the user, and implicitly the quality of the demonstrations, using objective task-specific metrics related directly to the constraints. We further analyze ways of making the user aware of the robot's state during teaching by providing task-related feedback. The feedback has a direct influence on both the teaching efficiency and the user's perception of the interaction. We evaluated our approaches in robotic experiments encompassing daily activities, using two 7-degree-of-freedom KUKA LWR robotic arms and a 53-degree-of-freedom iCub humanoid robot.
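    The three per-action constraints enumerated in this abstract (reference frame, controlled variables, stiffness gain) can be illustrated with a small data structure. This is a sketch under assumptions: the field names and the linear blending of force and position commands are chosen here for exposition, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class ActionConstraints:
    """Hypothetical encoding of one action's task-space constraints."""
    reference_frame: str     # (1) frame in which the manipulation is performed
    control_variables: list  # (2) variables of interest relative to that frame
    stiffness_gain: float    # (3) modulates force vs. position contribution

    def blended_command(self, pos_cmd, force_cmd):
        # Higher stiffness -> position control dominates;
        # lower stiffness -> force control dominates.
        k = self.stiffness_gain
        return [k * p + (1.0 - k) * f for p, f in zip(pos_cmd, force_cmd)]

# Usage: a mostly-position-controlled action in an object-attached frame.
c = ActionConstraints("object_frame", ["x", "z"], stiffness_gain=0.75)
cmd = c.blended_command([1.0, 2.0], [0.0, 4.0])  # -> [0.75, 2.5]
```

    A sequence of such `ActionConstraints` objects, one per segmented action, would correspond to the "task prototype" the abstract describes.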

    Emergent coordination between humans and robots

    Emergent coordination, or movement synchronization, is an often-observed phenomenon in human behavior. Humans synchronize their gait when walking next to each other, they synchronize their postural sway when standing closely, and they synchronize their movement behavior in many other situations of daily life. Why humans do this is an important question of ongoing research in many disciplines: movement synchronization apparently plays a role in children's development and learning; it is related to our social and emotional behavior in interaction with others; it is an underlying principle in the organization of communication by means of language and gesture; and finally, models explaining movement synchronization between two individuals can also be extended to group behavior. Overall, movement synchronization is an important principle of human interaction behavior. Besides interacting with other humans, humans increasingly interact with technology. This was first expressed in the interaction with machines in industrial settings, was carried further into human-computer interaction, and now faces a new challenge: the interaction with active and autonomous machines, the interaction with robots. If the vision of today's robot developers comes true, in the near future robots will be fully integrated not only into our workplaces but also into our private lives. They are supposed to support humans in activities of daily living and even care for them. These circumstances, however, require the development of interactional principles which the robot can apply to direct interaction with humans. This dissertation outlines the problem of robots entering human society and emphasizes the need to explore human interaction principles that are transferable to human-robot interaction.
    Furthermore, an overview of human movement synchronization as a very important phenomenon in human interaction is given, ranging from neural correlates to social behavior. The argument of this dissertation is that human movement synchronization is a simple but striking human interaction principle that can be applied in human-robot interaction to support human activities of daily living, demonstrated on the example of pick-and-place tasks. This argument is based on five publications. In the first publication, human movement synchronization is explored in goal-directed tasks which bear similar requirements to pick-and-place tasks in activities of daily living. In order to explore whether a merely repetitive action of the robot is sufficient to encourage human movement synchronization, the second publication reports a human-robot interaction study in which a human interacts with a non-adaptive robot. Here, however, movement synchronization between human and robot does not emerge, which underlines the need for adaptive mechanisms. Therefore, in the third publication, human adaptive behavior in goal-directed movement synchronization is explored. In order to make the findings from the previous studies applicable to human-robot interaction, the fourth publication outlines the development of an interaction model based on dynamical systems theory, ready for implementation on a robotic platform. Following this, a brief overview of a first human-robot interaction study based on the developed interaction model is provided. The last publication describes an extension of the previous approach which also includes the human tendency to align movements with events. Here, a first human-robot interaction study is also reported, which confirms the applicability of the model.
    The dissertation concludes with a discussion of the presented findings in the light of human-robot interaction and psychological aspects of joint action research, as well as the problem of mutual adaptation.

    From Constraints to Opportunities: Efficient Object Detection Learning for Humanoid Robots

    Reliable perception and efficient adaptation to novel conditions are priority skills for robots that function in ever-changing environments. Indeed, operating autonomously in real-world scenarios raises the need to identify different context states and act accordingly. Moreover, the requested tasks might not be known a priori, requiring the system to update online. Robotic platforms allow various types of perceptual information to be gathered thanks to the multiple sensory modalities they are equipped with. Nonetheless, recent results in computer vision motivate a particular interest in visual perception. Specifically, in this thesis, I mainly focused on the object detection task, since it can be the basis of more sophisticated capabilities. The vast advances in recent computer vision research, brought by deep learning methods, are appealing in a robotic setting. However, their adoption in applied domains is not straightforward, since adapting them to new tasks is highly demanding in terms of annotated data, optimization time, and computational resources. These requirements do not generally meet current robotics constraints. Nevertheless, robotic platforms, and especially humanoids, present opportunities that can be exploited. The sensors they are equipped with represent precious sources of additional information. Moreover, their embodiment in the workspace and their motion capabilities allow for natural interaction with the environment. Motivated by these considerations, in this Ph.D. project I mainly aimed at devising and developing solutions able to integrate the worlds of computer vision and robotics, focusing on the task of object detection. Specifically, I dedicated a large amount of effort to alleviating the requirements of state-of-the-art methods in terms of annotated data and training time, while preserving their accuracy by exploiting the opportunities offered by robotics.