
    The color of smiling: computational synaesthesia of facial expressions

    This note gives a preliminary account of the transcoding, or rechanneling, problem between different stimuli as it arises in the natural interaction and affective computing fields. Through a simple example, namely the color response of an affective lamp to a sensed facial expression, we frame the problem within an information-theoretic perspective. A full justification in terms of the Information Bottleneck principle promotes a latent affective space, hitherto surmised as an appealing and intuitive solution, as a suitable mediator between the different stimuli.

    Comment: Submitted to: 18th International Conference on Image Analysis and Processing (ICIAP 2015), 7-11 September 2015, Genova, Italy
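    The information-theoretic framing can be made concrete with a toy calculation (not from the paper itself): the Information Bottleneck principle trades off compression of the input against the relevant information it preserves. A minimal sketch, assuming a small, purely illustrative discrete joint distribution over sensed expressions and lamp colors:

```python
from math import log2

# Hypothetical joint distribution p(expression, color); the values are
# illustrative, not taken from the paper.
joint = {
    ("smile", "warm"): 0.4, ("smile", "cool"): 0.1,
    ("frown", "warm"): 0.1, ("frown", "cool"): 0.4,
}

def mutual_information(joint):
    """I(X;Y) in bits for a discrete joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A latent affective space T would then be chosen to minimize the IB
# Lagrangian I(X;T) - beta * I(T;Y): compress the sensed expression X
# while keeping what matters for the color response Y.
print(round(mutual_information(joint), 3))  # shared bits between X and Y
```

The latent space acts as the bottleneck variable T sitting between expression and color; here only the relevance term I(X;Y) is computed, as an upper bound on what any T can preserve.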

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. As a social robot, Probo is classified as a social interface supporting non-verbal communication; its social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure that incorporates all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, which simulates all motions of the robot and provides visual feedback to the operator. Additionally, the model allows us to advance user testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis, and object identification. The stimuli influence the attention and homeostatic systems, which are used to define the robot's point of attention, current emotional state, and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and a corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator.
    All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.
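    The blend-then-smooth step described above can be sketched as follows. The weights, joint names, and the exponential filter are illustrative assumptions for the sake of the sketch, not Probo's actual implementation:

```python
def blend(operator_pose, reactive_pose, w_operator=0.7):
    """Weighted combination of operator-triggered and autonomous reactive
    joint targets. Poses are dicts mapping joint name -> angle (degrees)."""
    return {j: w_operator * operator_pose[j] + (1 - w_operator) * reactive_pose[j]
            for j in operator_pose}

class ExponentialSmoother:
    """Simple low-pass filter standing in for the smoothing stage
    applied before targets are sent to the actuation systems."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # higher alpha = less smoothing, faster response
        self.state = None
    def step(self, pose):
        if self.state is None:
            self.state = dict(pose)          # first frame passes through
        else:
            self.state = {j: (1 - self.alpha) * self.state[j] + self.alpha * pose[j]
                          for j in pose}
        return self.state

# Example: the operator raises an eyebrow while the reactive system lowers it;
# the blended target (about 11 degrees) is then smoothed over successive frames.
smoother = ExponentialSmoother(alpha=0.5)
target = blend({"eyebrow": 20.0}, {"eyebrow": -10.0})
print(smoother.step(target))
```

The single control center in the architecture would run this per control cycle, so operator commands and autonomous reactions never fight over the actuators directly.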

    Human-robot communication through life-sustaining physiological phenomena and the design of a body-emotion model

    Kansai University. In this dissertation, we focus on the physiological phenomena of robots as an expressive modality of their inner states and discuss the effectiveness of a robot expressing physiological phenomena, which are indispensable for living. We designed a body-emotion model showing the relationship between a) emotion as the inner state of the robot and b) physiological phenomena as physical changes, and we discuss communication between humans and robots through involuntary physiological expression based on the model. In recent years, various robots for mental health care and for communication support in medical and nursing care have been developed. The purpose of these systems is to enable communication between a robot and patients through an active approach by the robot using sound and body movement. In contrast to conventional approaches, our research is based on involuntary emotional expression through the physiological phenomena of the robot. Physiological phenomena, including breathing, heartbeat, and body temperature, are essential functions for life activities, and they are closely related to the inner state of humans because physiological phenomena are caused by emotional reactions of the limbic system transmitted via the autonomic nervous system. In human-robot communication through physical contact, we consider physiological phenomena to be one of the most important nonverbal modalities of the inner state as involuntary expressions. First, we focused on robots' expression of physiological phenomena and proposed the body-emotion model (BEM), which concerns the relationship between the inner state of robots and their involuntary physical reactions. We propose a stuffed-toy robot system, BREAR, which has a mechanical structure to express breathing, heartbeat, temperature, and bodily movement.
    The experimental results showed that a heartbeat, breathing, and body temperature can express the robot's living state, and that breathing speed is highly related to the robot's emotional arousal. We reviewed the experimental results and emotion-generation mechanisms and discussed the design of the robot based on the BEM. Based on our verification results, we determined that the BEM design, which involves perception of the external situation, matching with memory, changes in autonomic nervous parameters, and representation of physiological phenomena, grounded in the relationship between the autonomic nervous system and emotional arousal, is effective. Second, we discussed indirect communication between humans and robots through physiological phenomena (breathing, heartbeat, and body temperature) that express robots' emotions. We set up a situation with joint attention by the robot and user on emotional content and evaluated whether both the user's emotional response to the content and the user's impression of the relationship between the user and the robot were changed by the physiological expressions of the robot. The results suggest that the physiological expression of the robot makes the user's own emotions during the experience more excited or more suppressed, and that the robot's expression increases impressions of closeness and sensitivity. Last, we discussed future perspectives on human-robot communication through physiological phenomena. Regarding the representation of a robot's sense of life, the user's recognition that the robot is alive is thought to improve not only the moral effect of understanding the finiteness of life but also attachment to the robot in long-term communication.
    Regarding an emotional expression mechanism based on life, the robot is expected to be able to display a complicated internal state close to that of humans by combining intentionally expressed emotions with involuntary emotional expressions. If a robot can combine realistic voluntary expressions, such as facial expressions and body movements, with genuine involuntary expressions, conveying both its real intentions and deception, it can be said that the robot has a more complicated internal state than that of a pet. By using a robot that expresses a living state through physiological phenomena, the effect of mental care can be expected to exceed that of animal therapy, and we expect such robots to provide care and welfare support in place of human beings.
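    The BEM pipeline summarized in this abstract (perceive the external situation, match against memory, update autonomic parameters, express physiological phenomena) can be sketched as a simple mapping from arousal to expression rates. The baseline and maximum values below are hypothetical, chosen for illustration, and are not taken from the dissertation:

```python
def autonomic_parameters(arousal):
    """Map an arousal level in [0, 1] to physiological expression rates.
    Ranges are hypothetical stand-ins for the BEM's autonomic parameters."""
    arousal = max(0.0, min(1.0, arousal))      # clamp to valid range
    breathing_rpm = 12 + arousal * (30 - 12)   # breaths per minute
    heart_bpm = 60 + arousal * (120 - 60)      # heartbeats per minute
    body_temp_c = 36.5 + arousal * 1.0         # surface temperature, Celsius
    return {"breathing_rpm": breathing_rpm,
            "heart_bpm": heart_bpm,
            "body_temp_c": body_temp_c}

# Calm state versus highly aroused state: all three involuntary channels
# rise together, consistent with breathing speed tracking arousal.
print(autonomic_parameters(0.0))  # baseline: 12 rpm, 60 bpm, 36.5 C
print(autonomic_parameters(1.0))  # maximum: 30 rpm, 120 bpm, 37.5 C
```

A robot like BREAR would feed such parameters to its breathing, heartbeat, and heating actuators each cycle, so emotional arousal surfaces only through these involuntary channels.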

    Bridging the gap between emotion and joint action

    Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and conversely, joint action research has not yet found a way to include emotion as one of the key parameters for modeling socio-motor interaction. In this review, we first identify this gap and then stockpile evidence of the strong entanglement between emotion and acting together from various branches of science. We propose an integrative approach to bridge the gap, highlight five research avenues for doing so in behavioral neuroscience and the digital sciences, and address some of the key challenges in the area faced by modern societies.