56 research outputs found

    Acquiring and Maintaining Knowledge by Natural Multimodal Dialog

    Intelligent Management of Hierarchical Behaviors Using a NAO Robot as a Vocational Tutor

    To create an intelligent system that can hold an interview using the NAO robot as an interviewer playing the role of a vocational tutor, twenty behaviors were classified and categorized within five personality profiles. Five basic emotions are considered: anger, boredom, interest, surprise, and joy. The selected behaviors are grouped according to these five emotions. Common behaviors (e.g., movements or body postures) used by the robot during vocational guidance sessions are based on a theory of personality traits called the "Five Factor Model". In this context, the robot asks a predefined set of questions about the person's vocational preferences according to a theoretical model called the "Orientation Model". NAO can therefore react as appropriately as possible during the interview, according to the score of the answer given to each question and the person's personality type. Additionally, a vocational profile is established from the answers to these questions, and the robot can issue a recommendation about the person's vocation. The results show that the intelligent selection of behaviors can be successfully achieved through the proposed approach, making the interaction between a human and a robot friendlier.
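    As a rough illustration of the emotion-based behavior selection summarized above, the following sketch maps an answer score to one of the five emotions and picks a matching behavior. All names, behavior pools, and thresholds are hypothetical and not taken from the thesis.

```python
# Illustrative sketch only: emotion-based behavior selection for an interviewer robot.
import random

# Each emotion maps to a small pool of gestures/postures the robot could play (hypothetical).
BEHAVIORS = {
    "anger":    ["cross_arms", "shake_head"],
    "boredom":  ["slow_nod", "look_away"],
    "interest": ["lean_forward", "open_palms"],
    "surprise": ["raise_arms", "step_back"],
    "joy":      ["clap_hands", "energetic_nod"],
}

def infer_emotion(answer_score: float) -> str:
    """Map an answer score in [0, 1] to one of the five emotions (toy thresholds)."""
    if answer_score < 0.2:
        return "anger"
    if answer_score < 0.4:
        return "boredom"
    if answer_score < 0.7:
        return "interest"
    if answer_score < 0.9:
        return "surprise"
    return "joy"

def select_behavior(answer_score: float) -> str:
    """Pick one behavior from the pool associated with the inferred emotion."""
    return random.choice(BEHAVIORS[infer_emotion(answer_score)])

print(select_behavior(0.85))  # e.g. "raise_arms"
```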

    ACII 2009: Affective Computing and Intelligent Interaction. Proceedings of the Doctoral Consortium 2009

    Nonlinear Storytelling Approach to Developing Computational Thinking Skills

    Current methods for developing computational thinking skills usually take a technical and programming-centric approach and are not suitable for all people. In this research, the use of nonlinear storytelling as an educational method was examined. The specific interest was to analyze its relationship with the concept of computational thinking and to investigate whether nonlinear storytelling can be used as a low-threshold method for teaching fundamental computational thinking skills. This research situates itself in computer science education and consists of four independent studies. Study I investigates how nonlinear storytelling can be integrated into an adult education course for developing basic information technology skills, with special attention to the role of storytelling in the process. The result of this study was a method that integrates nonlinear storytelling into educational game development. Study II examined the relationship between nonlinear stories and computational thinking by studying how typical computer programs can be implemented using stories. The study shows that nonlinear stories are best suited to implementing finite state machine programs and programs that include interaction. This natural applicability indicates that nonlinear storytelling can improve students' readiness for learning programming skills. Study III reports experiences and observations made at the end of the aforementioned adult education course. The technical quality of the collected stories (N = 14) was investigated, and common challenges in the storytelling process, such as understanding hyperlinking and its purpose in gamification, were identified. In this study, a practical classification of storytelling software and metrics for analyzing stories were developed. Finally, study IV investigated whether the concept of computational thinking allows broader interpretations than its traditional use. The concept was explored using the Extended Mind thesis by Clark and Chalmers. The analysis showed that it is reasonable to expand the concept beyond the traditional computer programming-based interpretation. The connection between nonlinear stories and computational thinking had not been studied before, so the topic is new. Based on this research, nonlinear storytelling can be applied to practicing computational thinking; this approach, however, challenges the concept of computational thinking as it has traditionally been understood through programming.
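    To make the finite-state-machine view of nonlinear stories concrete, here is a minimal sketch in which each passage is a state and each hyperlink a transition. The story content and function names are invented for illustration; they are not from the thesis.

```python
# Illustrative sketch: a nonlinear story modelled as a finite state machine.
# Each passage is a state; each hyperlink is a labelled transition to another state.
story = {
    "start":        {"text": "You wake up in a forest.",   "links": {"go north": "river", "go south": "cave"}},
    "river":        {"text": "A river blocks your path.",  "links": {"swim": "end_wet", "go back": "start"}},
    "cave":         {"text": "The cave is dark and quiet.","links": {"light a torch": "end_treasure"}},
    "end_wet":      {"text": "You are soaked but alive.",  "links": {}},
    "end_treasure": {"text": "You find a hidden treasure.","links": {}},
}

def play(story, state="start"):
    """Run the story: print the current passage and follow the reader's choices."""
    while True:
        node = story[state]
        print(node["text"])
        if not node["links"]:          # terminal state: the story ends here
            break
        choice = input(f"Choose {list(node['links'])}: ")
        state = node["links"].get(choice, state)  # invalid input keeps the current state

if __name__ == "__main__":
    play(story)
```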

    Grounding the Interaction: Knowledge Management for Interactive Robots

    With the rise of so-called cognitive robotics, the need for advanced tools to store, manipulate, and reason about the knowledge acquired by a robot has become clear. But storing and manipulating knowledge first requires understanding what knowledge means to the robot and how to represent it in a machine-processable way.
    This work first provides a systematic study of the knowledge requirements of modern robotic applications in the context of service robotics and human-robot interaction. What are the expressiveness requirements for a robot? What are its needs in terms of reasoning techniques? What requirements do other cognitive functions, such as perception or decision making, place on the robot's knowledge-processing structure? We propose a novel typology of desirable features for knowledge representation systems, supported by an extensive review of existing tools in our community.
    In a second part, the thesis presents in depth ORO, a particular instantiation of a knowledge representation and manipulation system designed and implemented during the preparation of this thesis. We elaborate on the inner workings of this system, as well as its integration into several complete robot control stacks. A particular focus is given to the modelling of agent-dependent symbolic perspectives and their relation to theories of mind.
    The third part of the study presents one important application of knowledge representation systems in the human-robot interaction context: situated dialogue. Our approach and the associated algorithms leading to the interactive grounding of unconstrained verbal communication are presented, followed by several experiments that took place at the Laboratoire d'Analyse et d'Architecture des Systèmes at CNRS, Toulouse, and at the Intelligent Autonomous Systems group at Munich Technical University.
    The thesis concludes with considerations regarding the viability and importance of an explicit management of the agent's knowledge, along with a reflection on the missing bricks in our research community on the way towards "human level robots".
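    The following toy sketch illustrates the idea of agent-dependent symbolic perspectives: one knowledge model per agent, so the robot can keep its own beliefs separate from what it believes the human perceives. This is only a minimal illustration; the actual ORO system is an ontology server with RDF/OWL-based reasoning, and the class and predicate names below are assumptions.

```python
# Minimal sketch of per-agent knowledge models (illustrative, not the ORO implementation).
from collections import defaultdict

class KnowledgeBase:
    """One triple store per agent, so the robot can maintain separate belief models."""
    def __init__(self):
        self.models = defaultdict(set)   # agent name -> set of (subject, predicate, object)

    def add(self, agent, triple):
        self.models[agent].add(triple)

    def ask(self, agent, pattern):
        """Return the agent's triples matching a pattern, with None acting as a wildcard."""
        s, p, o = pattern
        return [t for t in self.models[agent]
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

kb = KnowledgeBase()
kb.add("robot", ("cup", "isOn", "table"))       # what the robot itself perceives
kb.add("human", ("cup", "isVisible", "false"))  # what the robot believes the human can see

print(kb.ask("robot", ("cup", None, None)))
```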

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition; they must be equipped with corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from start to end. It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes, which are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
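    As a rough sketch of how reinforcement learning can adapt a non-functional behavior parameter from user feedback, the example below uses a simple epsilon-greedy bandit to choose a humor style and updates it from a scalar reward. This is an assumption-laden illustration of the general technique, not the framework or algorithms developed in the thesis.

```python
# Illustrative sketch: epsilon-greedy bandit adapting a robot's humour style
# from scalar user feedback (e.g. smiles, laughter, or explicit ratings).
import random

class BehaviorBandit:
    def __init__(self, options, epsilon=0.1):
        self.options = options                   # e.g. ["no_jokes", "light_humour", "frequent_jokes"]
        self.epsilon = epsilon
        self.counts = {o: 0 for o in options}
        self.values = {o: 0.0 for o in options}  # running mean of observed feedback per option

    def choose(self):
        if random.random() < self.epsilon:       # explore occasionally
            return random.choice(self.options)
        return max(self.options, key=lambda o: self.values[o])

    def update(self, option, reward):
        """Reward in [-1, 1]; incrementally update the option's mean value."""
        self.counts[option] += 1
        n = self.counts[option]
        self.values[option] += (reward - self.values[option]) / n

bandit = BehaviorBandit(["no_jokes", "light_humour", "frequent_jokes"])
style = bandit.choose()           # robot picks a humour style for the next utterance
bandit.update(style, reward=0.7)  # user smiled -> positive feedback
```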

    Approaching human-like spatial awareness in social robotics: an investigation of spatial interaction strategies with a receptionist robot

    Holthaus P. Approaching human-like spatial awareness in social robotics: an investigation of spatial interaction strategies with a receptionist robot. Bielefeld: Universität Bielefeld.; 2014.This doctoral thesis investigates the influence of social signals in the spatial domain that aim to raise a robot’s awareness towards its human interlocutor. A concept of spatial awareness thereby extends the robot’s possibilities for expressing its knowledge about the situation as well as its own capabilities. As a result, especially untrained users can build up more appropriate expectations about the current situation which supposedly leads to a minimization of misunderstandings and thereby an enhancement of user experience. On the background of research that investigates communication among humans, relations are drawn in order to utilize gained insights for developing a robot that is capable of acting socially intelligent with regard to human-like treatment of spatial configurations and signals. In a study-driven approach, an integrated concept of spatial awareness is therefore proposed. An important aspect of that concept, which is founded in its spatial extent, lies in its aspiration to cover a holistic encounter between human and robot with the goal to improve user experience from the first sight until the end of reciprocal awareness. It describes how spatial configurations and signals can be perceived and interpreted in a social robot. Furthermore, it also presents signals and behavioral properties for such a robot that target at influencing said configurations and enhancing robot verbosity. In order to approve the concept’s validity in realistic settings, an interactive scenario is presented in the form of a receptionist robot to which it is applied. In the context of this setup, a comprehensive user study is conducted that verifies the implementation of spatial awareness to be beneficial for an interaction with humans that are naive to the subject. Furthermore, the importance of addressing an entire encounter in human-robot interaction is confirmed as well as a strong interdependency of a robot’s social signals among each other
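    One common building block of spatial awareness is classifying how far an interlocutor stands from the robot. The sketch below uses Hall's classic proxemic zones as an assumed model; the zone boundaries and function names are illustrative and are not claimed to be the thesis's actual spatial model.

```python
# Illustrative sketch (assumed model): classify interlocutor distance into
# proxemic zones so the robot can choose an appropriate greeting behaviour.
import math

ZONES = [                 # (upper bound in metres, zone name), roughly after E. T. Hall
    (0.45, "intimate"),
    (1.20, "personal"),
    (3.60, "social"),
    (float("inf"), "public"),
]

def proxemic_zone(robot_xy, person_xy):
    """Return the proxemic zone of a person relative to the robot (2D positions)."""
    distance = math.dist(robot_xy, person_xy)
    for bound, name in ZONES:
        if distance <= bound:
            return name

print(proxemic_zone((0.0, 0.0), (2.1, 0.5)))  # "social" -> e.g. greet verbally and turn towards the person
```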

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to have deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware rear-projected robotic agent (called ExpressionBot), designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool, mechanically simple, and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and obtained results significantly better than, or comparable to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends heavily on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords in different search engines. AffectNet contains more than 1M images with faces, 440,000 of which are manually annotated with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.
    We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot in which the user's affect is recognized by a human (WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
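    For context, a facial-expression classifier of the kind described above can be sketched as a small convolutional network. The architecture below is a deliberately tiny illustration, not the DNN proposed in the dissertation or the AffectNet training pipeline; layer sizes and the expression label set are assumptions.

```python
# Minimal sketch of a CNN facial-expression classifier (illustrative only).
import torch
import torch.nn as nn

class TinyFERNet(nn.Module):
    """Classify a 64x64 grayscale face crop into 7 basic expressions."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyFERNet()
face = torch.randn(1, 1, 64, 64)          # one preprocessed face crop (batch of 1)
logits = model(face)
expression = logits.argmax(dim=1).item()  # index into e.g. ["neutral", "happy", ...]
print(expression)
```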

    Trust in Robots

    Robots are increasingly becoming prevalent in our daily lives within our living and working spaces. We hope that robots will take over tedious, mundane, or dirty chores and make our lives more comfortable, easy, and enjoyable by providing companionship and care. However, robots may pose a threat to human privacy, safety, and autonomy; therefore, it is necessary to maintain constant control over the developing technology to ensure the benevolent intentions and safety of autonomous systems. Building trust in (autonomous) robotic systems is thus necessary. The title of this book highlights this challenge: "Trust in robots—Trusting robots". Herein, various notions and research areas associated with robots are unified. The theme "Trust in robots" addresses the development of technology that is trustworthy for users; "Trusting robots" focuses on building a trusting relationship with robots, furthering previous research. These themes and topics are at the core of the PhD program "Trust Robots" at TU Wien, Austria.