
    Do (and say) as I say: Linguistic adaptation in human-computer dialogs

    © Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund.
    There is strong research evidence showing that people naturally align to each other's vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human–computer dialogs, based on empirical data collected in a simulated human–computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human–computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human–computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system's grammar and lexicon.
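    As an illustration of the in-use adaptation the article outlines, a dialog manager could track the user's vocabulary and prefer the user's own terms when generating output. The sketch below is hypothetical: the class and method names are ours, not the authors', and it only captures the idea of aligning to the vocabulary-in-use.

    from collections import Counter

    class AlignmentTracker:
        def __init__(self):
            self.user_vocab = Counter()   # words the user has produced so far
            self.novel_per_turn = []      # new word types introduced each turn

        def observe_user_turn(self, tokens):
            novel = [t for t in tokens if t not in self.user_vocab]
            self.novel_per_turn.append(len(novel))  # spikes may signal disrupted alignment
            self.user_vocab.update(tokens)

        def choose_term(self, synonyms):
            # Prefer the synonym the user has used most often (alignment);
            # fall back to the system default (first item) otherwise.
            aligned = max(synonyms, key=lambda w: self.user_vocab[w])
            return aligned if self.user_vocab[aligned] > 0 else synonyms[0]

    tracker = AlignmentTracker()
    tracker.observe_user_turn("move the box to the left".split())
    print(tracker.choose_term(["crate", "box"]))  # -> "box"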

    An emotion and memory model for social robots : a long-term interaction

    In this thesis, we investigate the role of emotions and memory in social robotic companions. In particular, our aim is to study the effect of an emotion and memory model on sustaining engagement and promoting learning in long-term interaction. Our emotion and memory model was based on how humans create memories under various emotional events/states. The model enabled the robot to build a memory account of the user's emotional events during a long-term child-robot interaction, and the robot later adapted its behaviour in subsequent interactions by drawing on this memory. The model also had an autonomous decision-making mechanism, based on reinforcement learning, that selected behaviour according to user preference, measured through the user's engagement and learning during the task. The model was implemented on the NAO robot in two different educational setups: first, to promote vocabulary learning, and second, to teach how to calculate the area and perimeter of regular and irregular shapes. We also conducted multiple long-term evaluations of our model with children at primary schools to verify its impact on their social engagement and learning. Our results showed that the behaviour generated by our model was able to sustain social engagement, and it also helped children to improve their learning. Overall, the results highlight the benefits of incorporating memory in child-robot interaction over extended periods of time: it promoted personalisation and contributed to creating a child-robot social relationship in long-term interaction.
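    The abstract does not give implementation details, but the reinforcement-learning selection step can be pictured as a simple bandit over candidate behaviours, with reward derived from measured engagement and learning. Everything below (class names, the reward weighting) is an illustrative assumption, not the thesis's code.

    import random

    class BehaviourSelector:
        def __init__(self, behaviours, epsilon=0.1):
            self.q = {b: 0.0 for b in behaviours}  # value estimate per behaviour
            self.n = {b: 0 for b in behaviours}    # times each behaviour was tried
            self.epsilon = epsilon

        def select(self):
            if random.random() < self.epsilon:     # explore occasionally
                return random.choice(list(self.q))
            return max(self.q, key=self.q.get)     # otherwise exploit the best so far

        def update(self, behaviour, reward):
            self.n[behaviour] += 1                 # incremental running-mean update
            self.q[behaviour] += (reward - self.q[behaviour]) / self.n[behaviour]

    selector = BehaviourSelector(["encourage", "hint", "quiz"])
    b = selector.select()
    engagement, learning_gain = 0.7, 0.4           # hypothetical per-task measurements
    selector.update(b, reward=0.5 * engagement + 0.5 * learning_gain)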

    Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task

    Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capabilities of the Jacobian null space of a humanoid robot to convey emotions. A task priority formulation has been implemented in a Pepper robot which allows the specification of a primary task (waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A mapping from these features to the joints of the robot is presented. A user study has been conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are very well conveyed to the user, calm is moderately well conveyed, and fear is not well conveyed. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null space approach can be regarded as a promising means to convey emotions as a lower priority task.
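    As a concrete picture of the underlying mechanism, the standard task-priority redundancy resolution projects the secondary (expressive) joint velocity into the null space of the primary task's Jacobian, so it cannot disturb the primary task. The sketch below shows the general technique, not the paper's exact controller; the dimensions and the expressive velocity profile are assumptions.

    import numpy as np

    def redundant_velocities(J, x_dot, q_dot_emotion):
        J_pinv = np.linalg.pinv(J)                 # Moore-Penrose pseudoinverse
        N = np.eye(J.shape[1]) - J_pinv @ J        # null-space projector of the primary task
        return J_pinv @ x_dot + N @ q_dot_emotion  # primary task + lower-priority motion

    J = np.random.randn(6, 10)                     # 6-DoF task, 10 joints (redundant)
    x_dot = np.array([0.1, 0, 0, 0, 0, 0])         # primary task: end-effector moves along x
    q_dot_emotion = 0.2 * np.sin(np.arange(10))    # hypothetical expressive joint motion
    q_dot = redundant_velocities(J, x_dot, q_dot_emotion)
    print(np.allclose(J @ q_dot, x_dot))           # True: the primary task is unaffected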

    Modeling and Design Analysis of Facial Expressions of Humanoid Social Robots Using Deep Learning Techniques

    A great deal of research in the field of social robotics concentrates on various aspects of social robots, including the design of mechanical parts and their movement, and cognitive speech and face recognition capabilities. Several robots have been developed with the intention of being social, like humans, without much emphasis on how human-like they actually look in terms of expressions and behavior. Furthermore, a substantial disparity can be seen between the success of research into "humanizing" robots' behavior, or making them behave in a more human-like way, and research into biped movement, the movement of individual body parts such as arms, fingers and eyeballs, or human-like appearance itself. The research in this paper involves understanding why research on the facial expressions of social humanoid robots fails to gain full acceptance in current society, owing to the uncanny valley theory. This paper frames the problem with current facial expression research as an information retrieval problem. It identifies the current research method in the design of facial expressions of social robots, then uses deep learning as a similarity evaluation technique to measure the humanness of the facial expressions developed with the current method, and further suggests a novel solution to the facial expression design of humanoids using deep learning.
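    One way to picture the similarity-evaluation idea is to embed a robot's facial expression and a bank of reference human expressions with a pretrained vision model, then score "humanness" as average cosine similarity. This is a hypothetical sketch of that idea, not the thesis's pipeline; the random vectors stand in for real CNN embeddings.

    import numpy as np

    def cosine_similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def humanness_score(robot_embedding, human_embeddings):
        # Mean similarity of one robot expression to a set of human examples.
        sims = [cosine_similarity(robot_embedding, h) for h in human_embeddings]
        return sum(sims) / len(sims)

    rng = np.random.default_rng(0)
    robot = rng.normal(size=512)                       # stand-in for a CNN embedding
    humans = [rng.normal(size=512) for _ in range(5)]  # stand-ins for human examples
    print(f"humanness: {humanness_score(robot, humans):.3f}")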

    End-user programming of a social robot by dialog

    One of the main challenges faced by social robots is how to provide intuitive, natural and enjoyable usability for the end-user. In our ordinary environment, social robots could be important tools for education and entertainment (edutainment) in a variety of ways. This paper presents a Natural Programming System (NPS) geared to non-expert users. The main goal of such a system is to provide an enjoyable interactive platform on which users can build different programs within their social robot. The end-user can build a complex net of actions and conditions (a sequence) in a social robot via mixed-initiative dialogs and multimodal interaction. The system has been implemented and tested in Maggie, a real social robot with multiple skills, conceived as a general HRI research platform. The robot's internal features (skills) have been implemented to be verbally accessible to the end-user, who can combine them into more complex ones following a bottom-up model. The built sequence is internally implemented as a Sequence Function Chart (SFC), which allows parallel execution, modularity and re-use. A multimodal Dialog Manager System (DMS) maintains the coherence of the interaction. This work aims to bring social robots closer to non-expert users, who can play the game of "teaching how to do things" with the robot.
    The research leading to these results has received funding from the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU. The authors also gratefully acknowledge the funds provided by the Spanish Ministry of Science and Innovation (MICINN) through the project named "A New Approach to Social Robots" (AROS), DPI2008-01109.
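    The bottom-up composition described here can be pictured as primitive skills assembled, turn by turn, into a guarded sequence of steps, SFC-style. The toy sketch below uses hypothetical names and is not the NPS implementation; it only illustrates the data structure such a dialog could build.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        skill: str                    # name of a primitive or composed skill
        condition: str = "always"     # guard that must hold for the step to fire

    @dataclass
    class Sequence:
        name: str
        steps: list = field(default_factory=list)

        def add(self, skill, condition="always"):
            self.steps.append(Step(skill, condition))  # grown turn by turn in dialog

        def run(self, skills, holds):
            for step in self.steps:
                if holds(step.condition):              # SFC-like guarded transition
                    skills[step.skill]()

    greet = Sequence("greet_visitor")
    greet.add("say_hello")
    greet.add("wave_arm", condition="person_detected")
    greet.run({"say_hello": lambda: print("hello!"),
               "wave_arm": lambda: print("waving")},
              holds=lambda c: True)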