    Modulating the Non-Verbal Social Signals of a Humanoid Robot

    In this demonstration we present a repertoire of social signals generated by the humanoid robot Pepper in the context of the EU-funded project MuMMER. The aim of this research is to provide the robot with the expressive capabilities required to interact with people in real-world public spaces such as shopping malls, and being able to control the non-verbal behaviour of such a robot is key to engaging with humans effectively. We propose an approach to modulating the robot's non-verbal social signals by systematically varying the amplitude and speed of its joint motions and gathering user evaluations of the resulting gestures. We anticipate that humans' perception of the robot's behaviour will be influenced by these modulations.
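
    The modulation approach described above lends itself to a simple illustration. The sketch below is a hypothetical stand-in for the MuMMER pipeline rather than its actual code: it scales a recorded joint trajectory around a rest posture and retimes it. The function name and parameters are assumptions.

    import numpy as np

    def modulate_gesture(trajectory, amplitude_scale=1.0, speed_scale=1.0, dt=0.01):
        """Scale a joint-space gesture around its rest posture and retime it.

        trajectory: (T, J) array of joint angles (radians) sampled every dt seconds.
        amplitude_scale: >1 exaggerates the motion, <1 attenuates it.
        speed_scale: >1 plays the gesture faster, <1 slower.
        """
        rest = trajectory[0]  # treat the first frame as the neutral posture
        scaled = rest + amplitude_scale * (trajectory - rest)

        # Retiming: resample the gesture on a compressed or stretched time axis.
        t_old = np.arange(len(scaled)) * dt
        duration = t_old[-1] / speed_scale
        t_new = np.arange(0.0, duration, dt)
        return np.stack(
            [np.interp(t_new * speed_scale, t_old, scaled[:, j])
             for j in range(scaled.shape[1])],
            axis=1,
        )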

    An android architecture for bio-inspired honest signalling in Human-Humanoid Interaction

    This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction, centred on the ability to emit, display and detect honest signals. First, we review the biological theory in which the concept of honest signals was put forward, in order to assess its explanatory power. We reconstruct how honest signalling has been applied to account for interaction in strategic contexts and to lay the foundation for automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling, refined by delivering a sense of co-presence in a shared environment. Finally, we provide an analysis of honest signals, in terms of body postures, exhibited by participants during the preliminary experiment with the Geminoid HI-1.
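
    As a rough illustration of the emit/detect pipeline described above, the toy sketch below thresholds hypothetical posture features into candidate signals and picks a reciprocating display behaviour. All names and thresholds are invented for illustration and are not taken from the paper's architecture.

    from dataclasses import dataclass

    @dataclass
    class HonestSignal:
        """A candidate honest signal extracted from observed behaviour."""
        name: str         # e.g. "engagement", "open_posture"
        intensity: float  # normalised to [0, 1]

    def detect_signals(posture_features: dict) -> list[HonestSignal]:
        """Toy detector: threshold simple posture features into signals."""
        signals = []
        if posture_features.get("torso_lean_forward", 0.0) > 0.3:
            signals.append(HonestSignal("engagement", posture_features["torso_lean_forward"]))
        if posture_features.get("arm_openness", 0.0) > 0.5:
            signals.append(HonestSignal("open_posture", posture_features["arm_openness"]))
        return signals

    def display_response(signals: list[HonestSignal]) -> str:
        """Select a robot display behaviour that reciprocates detected signals."""
        if any(s.name == "engagement" for s in signals):
            return "lean_in_and_nod"
        return "neutral_idle"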

    Expressing Robot Personality through Talking Body Language

    Social robots must master the nuances of human communication as a means to convey an effective message and generate trust. It is well known that non-verbal cues are very important in human interactions, and a social robot should therefore produce body language coherent with its discourse. In this work, we report on a system that endows a humanoid robot with the ability to adapt its body language to the sentiment of its speech. Combining talking beat gestures with emotional cues such as eye lighting, body posture, and voice intonation and volume permits a rich variety of behaviors. The developed approach is not purely reactive, and it makes it easy to assign a kind of personality to the robot. We present several videos of the robot in two different scenarios, showing discreet and histrionic personalities.

    This work has been partially supported by the Basque Government (IT900-16 and Elkartek 2018/00114) and the Spanish Ministry of Economy and Competitiveness (RTI 2018-093337-B-100, MINECO/FEDER, EU).
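
    A minimal sketch of how such a sentiment-to-body-language mapping might look in code follows. The cue names, thresholds, and personality gain are illustrative assumptions, not the authors' implementation.

    def select_behaviour(sentiment: float, extroversion: float) -> dict:
        """Map speech sentiment (-1..1) and a personality gain to expressive cues.

        Returns beat-gesture amplitude, eye LED colour, and voice parameters.
        All values and names are invented for illustration.
        """
        gain = 0.5 + extroversion  # a histrionic personality amplifies everything
        amplitude = gain * (0.4 + 0.6 * abs(sentiment))
        if sentiment > 0.2:
            eyes, pitch_shift = "warm_yellow", +0.1
        elif sentiment < -0.2:
            eyes, pitch_shift = "dim_blue", -0.1
        else:
            eyes, pitch_shift = "neutral_white", 0.0
        volume = min(1.0, 0.6 + 0.4 * gain * abs(sentiment))
        return {"beat_amplitude": amplitude, "eye_colour": eyes,
                "pitch_shift": pitch_shift, "volume": volume}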

    Towards a synthetic tutor assistant: The EASEL project and its architecture

    Robots are gradually but steadily being introduced into our daily lives. A paramount application is education, where robots can assume the role of a tutor, a peer, or simply a tool to help learners in a specific knowledge domain. Such an endeavor poses specific challenges: affective social behavior, proper modelling of the learner's progress, and discrimination of the learner's utterances, expressions, and mental states, which in turn require an integrated architecture combining perception, cognition, and action. In this paper we present an attempt to improve the current state of robots in the educational domain by introducing the EASEL EU project. Specifically, we introduce EASEL's unified robot architecture, an innovative Synthetic Tutor Assistant (STA) whose goal is to interactively guide learners in a science-based learning paradigm, allowing us to achieve rich multimodal interactions.
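
    The integrated perception-cognition-action loop mentioned above can be caricatured as a simple control cycle. The skeleton below is a hypothetical sketch with module names invented for illustration; the actual EASEL architecture is considerably richer.

    class SyntheticTutorAssistant:
        """Skeleton of a perception-cognition-action loop for a tutoring robot.

        The three injected modules are placeholders, not EASEL components.
        """
        def __init__(self, perception, learner_model, action):
            self.perception = perception        # speech/expression recognition
            self.learner_model = learner_model  # tracks the learner's progress
            self.action = action                # speech and gesture generation

        def step(self, sensor_data):
            percept = self.perception.process(sensor_data)     # utterance, affect
            self.learner_model.update(percept)                 # revise progress estimate
            plan = self.learner_model.next_pedagogical_move()  # e.g. hint, praise
            return self.action.execute(plan)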

    Motion for cooperation and vitality in Human-robot interaction

    In social interactions, human movement is a rich source of information for all those who take part in the collaboration. A variety of intuitive messages are communicated through motion, continuously informing the partners about the future unfolding of the actions. A similar exchange of implicit information could support movement coordination in the context of Human-Robot Interaction. The style of an action, i.e. the way it is performed, also has a strong influence on interaction between humans. The same gesture has different consequences when it is performed aggressively or kindly, and humans are very sensitive to these subtle differences in others' behaviors. During the three years of my PhD, I focused on these two aspects of human motion. In a first study, we investigated how implicit signaling in an interaction with a humanoid robot can lead to emergent coordination in the form of automatic speed adaptation. In particular, we assessed whether different cultures, specifically Japanese and Italian, have a different impact on motor resonance and synchronization in HRI. Japanese people show a higher general acceptance of robots than Western cultures do. Since acceptance, or rather affiliation, is tightly connected to imitation and mimicry, we hypothesized a higher degree of speed imitation for Japanese participants than for Italians. In the experimental studies undertaken both in Japan and Italy, we observed that cultural differences do not impact the natural predisposition of subjects to adapt to the robot. In a second study, we investigated how to endow a humanoid robot with behaviors expressing different vitality forms by modulating the robot's action kinematics and voice. Drawing inspiration from humans, we modified actions and voice commands performed by the robot to convey an aggressive or kind attitude. In a series of experiments we demonstrated that the humanoid was consistently perceived as aggressive or kind. Human behavior changed in response to the different robot attitudes and matched the behavior of iCub: participants were faster when the robot was aggressive and slower when it was gentle. The ability of the humanoid to express vitality through its behavior enriches the array of nonverbal communication that robots can exploit to foster seamless interaction. Such behavior might be crucial in emergencies and in authoritative situations in which the robot should instinctively be perceived as assertive and in charge, as in the case of police robots or teachers.
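
    One concrete way to modulate action kinematics for different vitality forms, loosely in the spirit of the second study, is to reshape a movement's velocity profile. The sketch below generates a one-dimensional minimum-jerk profile whose duration depends on the chosen form; the duration values are illustrative assumptions, not the study's parameters.

    import numpy as np

    def vitality_profile(distance, form="kind", dt=0.01):
        """Generate a 1-D minimum-jerk motion profile shaped by vitality form.

        'aggressive' shortens the movement and raises peak velocity;
        'kind' stretches it out. Durations are invented for illustration.
        """
        duration = {"aggressive": 0.5, "kind": 1.2}[form]
        t = np.arange(0.0, duration, dt) / duration  # normalised time in [0, 1)
        # Minimum-jerk position profile: x(t) = d * (10 t^3 - 15 t^4 + 6 t^5)
        pos = distance * (10 * t**3 - 15 * t**4 + 6 * t**5)
        vel = np.gradient(pos, dt)  # numerical velocity for inspection
        return pos, vel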

    Multisensory 3D saliency for artificial attention systems

    In this paper we present a proof of concept for a novel solution consisting of a short-term 3D memory for artificial attention systems, loosely inspired by perceptual processes believed to be implemented in the human brain. Our solution supports the implementation of multisensory perception and stimulus-driven attention processes. For this purpose, it provides (1) knowledge persistence with temporal coherence, tackling potentially salient regions outside the field of view via a panoramic, log-spherical inference grid; (2) prediction, by using estimates of local 3D velocity to anticipate the effect of scene dynamics; and (3) spatial correspondence between volumetric cells potentially occupied by proto-objects and their corresponding multisensory saliency scores. Visual and auditory signals are processed to extract features that are then filtered by a proto-object segmentation module that employs colour and depth as discriminatory traits. As features we consider, apart from the commonly used colour and intensity contrast, colour bias, the presence of faces, scene dynamics, and loud auditory sources. Combining conspicuity maps derived from these features, we obtain a 2D saliency map, which is then processed using the probability of occupancy in the scene to construct the final 3D saliency map as an additional layer of the Bayesian Volumetric Map (BVM) inference grid.
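
    A toy stand-in for the final fusion step, combining normalised conspicuity maps into a 2D saliency map and projecting it through occupancy probabilities, might look as follows. The linear weighting scheme is an assumption, and the real BVM update is more involved.

    import numpy as np

    def saliency_3d(conspicuity_maps, weights, occupancy):
        """Fuse 2-D conspicuity maps and project by occupancy probability.

        conspicuity_maps: list of (H, W) arrays (colour contrast, faces, motion,
        audio, ...). weights: per-feature weights. occupancy: (D, H, W) probability
        that each volumetric cell is occupied. Returns a (D, H, W) saliency volume,
        a toy stand-in for the additional BVM layer described above.
        """
        s2d = sum(w * (m - m.min()) / (m.max() - m.min() + 1e-9)
                  for w, m in zip(weights, conspicuity_maps))  # normalise, combine
        return occupancy * s2d[None, :, :]  # broadcast saliency along depth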

    Learning, Generating and Adapting Wave Gestures for Expressive Human-Robot Interaction

    This study proposes a novel imitation learning approach for the stochastic generation of human-like rhythmic wave gestures and their modulation for effective non-verbal communication, through a probabilistic formulation using joint angle data from human demonstrations. This is achieved by learning and modulating the overall expression characteristics of the gesture (e.g., arm posture, waving frequency, and amplitude) in the frequency domain. The method was evaluated in simulated robot experiments involving a robot with a 6-degree-of-freedom manipulator. The results show that the method provides efficient encoding and modulation of rhythmic movements and ensures variability in their execution.
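
    A crude frequency-domain analogue of this idea can be sketched in a few lines: extract the dominant frequency, amplitude, and phase of a demonstrated joint oscillation with an FFT, then resynthesise it with scaled frequency and amplitude. This is a deterministic simplification of the paper's probabilistic formulation, with all function names assumed.

    import numpy as np

    def learn_wave(joint_angles, dt):
        """Extract mean posture and dominant frequency/amplitude/phase of a
        rhythmic gesture from one joint's demonstration (radians)."""
        centred = joint_angles - joint_angles.mean()
        spectrum = np.fft.rfft(centred)
        freqs = np.fft.rfftfreq(len(centred), dt)
        k = np.argmax(np.abs(spectrum[1:])) + 1        # skip the DC bin
        amp = 2 * np.abs(spectrum[k]) / len(centred)   # cosine amplitude
        phase = np.angle(spectrum[k])
        return joint_angles.mean(), freqs[k], amp, phase

    def generate_wave(mean, freq, amp, phase, duration, dt,
                      freq_scale=1.0, amp_scale=1.0):
        """Resynthesise the gesture with modulated frequency and amplitude."""
        t = np.arange(0.0, duration, dt)
        return mean + amp_scale * amp * np.cos(2 * np.pi * freq_scale * freq * t + phase)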

    Investigation of Methods to Create Future Multimodal Emotional Data for Robot Interactions in Patients with Schizophrenia: A Case Study

    Rapid progress in humanoid robot research offers possibilities for improving the competencies of people with social disorders, although this potential remains unexplored for people with schizophrenia. Methods for creating future multimodal emotional data for robot interactions were studied in this case study of a 40-year-old male patient with disorganized schizophrenia without comorbidities. The data included heart rate variability (HRV), video-audio recordings, and field notes. HRV, a Haar cascade classifier (HCC), and the Empath API© were evaluated during conversations between the patient and the robot. Two expert nurses and one psychiatrist evaluated facial expressions. The research hypothesis asked whether HRV, HCC, and the Empath API© are useful for creating future multimodal emotional data about robot–patient interactions. The HRV analysis showed persistent sympathetic dominance, consistent with the human–robot conversational situation. The HCC results agreed with human observation in cases where the experts reached rough consensus; where the experts' observations disagreed, the HCC result also diverged. Emotional assessments by the experts and the Empath API© were likewise found to be inconsistent. We believe that with further investigation, a clearer identification of methods for multimodal emotional data for robot interactions can be achieved for patients with schizophrenia.
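
    For readers unfamiliar with HRV, the two standard time-domain measures behind statements like "sympathetic dominance" are straightforward to compute from RR intervals. The sketch below is generic, not the study's analysis pipeline.

    import numpy as np

    def hrv_metrics(rr_intervals_ms):
        """Compute standard time-domain HRV measures from RR intervals (ms).

        SDNN reflects overall variability; RMSSD reflects short-term
        (parasympathetically mediated) variability, so low RMSSD alongside
        an elevated heart rate is consistent with sympathetic dominance.
        """
        rr = np.asarray(rr_intervals_ms, dtype=float)
        sdnn = rr.std(ddof=1)                 # SD of all RR intervals
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMS of successive differences
        return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd,
                "mean_HR_bpm": 60000.0 / rr.mean()}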

    Cognitive, neural, and social mechanisms of rhythmic interpersonal coordination

    Humans possess the exceptional capacity to temporally coordinate their movements with one another with a high degree of accuracy, precision, and flexibility. Musical ensemble performance is a refined example of this, where a range of cognitive and sensory-motor processes work together to support rhythmic interpersonal coordination. However, the influence of social factors on the underlying cognitive-motor and neural mechanisms that facilitate rhythmic interpersonal coordination has yet to be established. This thesis draws on theoretical perspectives related to joint action, including co-representation, self-other integration and segregation, and theoretical models of sensorimotor synchronisation, to consider this topic. Three experiments were conducted to investigate how social factors influence rhythmic interpersonal coordination. This broad empirical question was broken down by considering both extrinsic factors, such as the social context and the perceived characteristics of an interaction partner (e.g. the degree of partner intentionality and responsiveness), and intrinsic social factors, such as individual differences in attitudes and social preferences. This thesis concludes that extrinsic and intrinsic social factors affect rhythmic interpersonal coordination at multiple levels. A key aspect of this influence relates to how people regulate the integration and segregation of their representations of self and others. Importantly, however, these effects are mediated by individual differences in intrinsic social factors such as personal preferences and biases. Top-down processes related to beliefs thus influence bottom-up sensorimotor processes during joint action, but the nature of this influence appears to differ between people. This outcome highlights the necessity of taking individual differences into account, particularly when investigating the nuances of social processing during dynamic social interactions. Furthermore, the current findings suggest that beliefs about a partner during social interaction may be just as influential on performance as the actual characteristics of the partner, or even more so. Recognising the potency of social beliefs has implications not only for research into the basic psychological mechanisms underpinning rhythmic interpersonal coordination, but also for understanding the broader social dynamics of real-life situations involving cooperative joint action.
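
    The sensorimotor synchronisation models the thesis draws on can be made concrete with a small simulation. The sketch below implements a simplified linear phase-correction model, in the spirit of Vorberg and Wing's framework, where each produced interval corrects a fraction alpha of the previous asynchrony with a metronome; all parameter values are illustrative.

    import numpy as np

    def simulate_phase_correction(n_taps=100, period=500.0, alpha=0.5,
                                  timekeeper_sd=10.0, motor_sd=5.0, seed=0):
        """Simulate tap-metronome asynchronies under linear phase correction.

        Each inter-tap interval is an internal timekeeper sample minus a
        correction proportional (alpha) to the previous asynchrony, plus
        motor noise. Returns the asynchrony series in milliseconds.
        """
        rng = np.random.default_rng(seed)
        asyn = np.zeros(n_taps)
        for n in range(n_taps - 1):
            timekeeper = period + rng.normal(0.0, timekeeper_sd)
            iti = timekeeper - alpha * asyn[n] + rng.normal(0.0, motor_sd)
            # Next asynchrony = current asynchrony + produced interval - metronome period.
            asyn[n + 1] = asyn[n] + iti - period
        return asyn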