
    Human-robot communication through life-sustaining physiological phenomena and the design of a body-emotion model

    Kansai University
    In this dissertation, we focus on physiological phenomena of robots as the expressive modality of their inner states and discuss the effectiveness of a robot expressing physiological phenomena, which are indispensable for living. We designed a body-emotion model showing the relationship between a) emotion as the inner state of the robot and b) physiological phenomena as physical changes, and we discuss communication between humans and robots through involuntary physiological expression based on the model. In recent years, various robots for use in mental health care and communication support in medical and nursing care have been developed. The purpose of these systems is to enable communication between a robot and patients through an active approach by the robot using sound and body movement. In contrast to these conventional approaches, our research is based on involuntary emotional expression through the physiological phenomena of the robot. Physiological phenomena, including breathing, heartbeat, and body temperature, are essential functions for life activities, and they are closely related to the inner state of humans because physiological phenomena are caused by emotional reactions of the limbic system transmitted via the autonomic nervous system. In human-robot communication through physical contact, we consider physiological phenomena to be one of the most important nonverbal modalities of the inner state, as they are involuntary expressions. First, focusing on robots' expression of physiological phenomena, we proposed the body-emotion model (BEM), which describes the relationship between the inner state of a robot and its involuntary physical reactions. We then developed a stuffed-toy robot system, BREAR, which has a mechanical structure to express breathing, heartbeat, temperature and bodily movement.
The experimental results showed that a heartbeat, breathing and body temperature can express the robot's living state and that breathing speed is highly related to the robot's arousal. We reviewed the experimental results and emotional generation mechanisms and discussed the design of the robot based on BEM. Based on our verification results, we determined that the design of the BEM, which involves perceiving the external situation, matching it against memory, changing the autonomic nervous parameters and representing the physiological phenomena, and which is grounded in the relationship between the autonomic nervous system and emotional arousal, is effective. Second, we discussed indirect communication between humans and robots through physiological phenomena, consisting of breathing, heartbeat and body temperature, that express the robot's emotions. We set up a situation with joint attention by the robot and the user on emotional content and evaluated whether both the user's emotional response to the content and the user's impression of the relationship between the user and the robot were changed by the physiological expressions of the robot. The results suggest that the physiological expression of the robot makes the user's own emotions during the experience more excited or suppressed, and that the robot's expression increases impressions of closeness and sensitivity. Last, we discussed the future perspective of human-robot communication through physiological phenomena. Regarding the representation of a robot's sense of life, the user's recognition that the robot is alive is thought to improve not only the moral effect on understanding the finiteness of life but also attachment to the robot in long-term communication.
Regarding the emotional expression mechanism based on life, it is expected that a robot can display a complicated internal state close to that of humans by combining intentionally expressed emotions and involuntary emotional expressions. If a robot can combine realistic voluntary expressions, such as facial expressions and body movements, with genuine involuntary expressions, enabling behaviors such as concealing its real intentions or lying, it can be said that the robot has a more complicated internal state than that of a pet. By using a robot that expresses a living state through physiological phenomena, the effect on mental care can be expected to exceed that of animal therapy, and we expect such robots to provide care and welfare support in place of human beings.
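The BEM processing loop summarized in the abstract (perceive the external situation, match it against memory, update autonomic nervous parameters, render physiological expression) can be sketched roughly as follows. All class names, stimulus labels and numeric mappings here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the body-emotion model (BEM) pipeline; the numeric
# mappings and stimulus memory are invented for illustration only.
from dataclasses import dataclass


@dataclass
class AutonomicState:
    arousal: float  # 0.0 (calm) .. 1.0 (excited)


def perceive(stimulus: str) -> float:
    """Map an external stimulus to an arousal delta (stand-in for memory matching)."""
    memory = {"petting": -0.2, "loud_noise": 0.4, "idle": -0.05}
    return memory.get(stimulus, 0.0)


def update(state: AutonomicState, stimulus: str) -> AutonomicState:
    """Change the autonomic nervous parameter in response to a stimulus."""
    a = min(1.0, max(0.0, state.arousal + perceive(stimulus)))
    return AutonomicState(arousal=a)


def express(state: AutonomicState) -> dict:
    """Render the autonomic parameter as involuntary physiological expression."""
    return {
        "breaths_per_min": 10 + 20 * state.arousal,   # breathing tracks arousal most strongly
        "heartbeats_per_min": 60 + 60 * state.arousal,
        "body_temp_c": 36.5 + 1.5 * state.arousal,
    }


state = AutonomicState(arousal=0.3)
state = update(state, "loud_noise")
print(express(state))  # faster breathing and heartbeat after an arousing stimulus
```

The point of the sketch is the causal chain: the user never commands the expression directly; it falls out of the internal autonomic parameter, which is what makes the display read as involuntary.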

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    This paper aims to give a brief overview of the current state of the art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) what are the affect dimensions used and the affect signals measured? and iii) how has the current automatic measurement technology been developed, and how can we advance the field?
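The dimensional view the paper surveys treats each affective state as a point on continuous axes in [-1, +1] (e.g., valence and arousal), with discrete labels becoming regions of that space rather than mutually exclusive classes. A minimal sketch of this representation follows; the category coordinates are rough illustrative placements, not values taken from the paper.

```python
# Illustrative sketch of a continuous valence-arousal space; the category
# center coordinates below are invented for demonstration.
def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))


class AffectPoint:
    def __init__(self, valence: float, arousal: float):
        self.valence = clamp(valence)  # keep measurements on the [-1, +1] scale
        self.arousal = clamp(arousal)


# Discrete emotion labels become regions of the continuous space.
ROUGH_CATEGORY_CENTERS = {
    "happiness": AffectPoint(+0.8, +0.5),
    "sadness":   AffectPoint(-0.6, -0.4),
    "anger":     AffectPoint(-0.7, +0.7),
    "calm":      AffectPoint(+0.3, -0.6),
}


def nearest_label(p: AffectPoint) -> str:
    """Map a continuous measurement back to the closest discrete label."""
    def dist2(c: AffectPoint) -> float:
        return (p.valence - c.valence) ** 2 + (p.arousal - c.arousal) ** 2
    return min(ROUGH_CATEGORY_CENTERS, key=lambda k: dist2(ROUGH_CATEGORY_CENTERS[k]))


print(nearest_label(AffectPoint(0.9, 0.4)))  # -> happiness
```

Subtle, mixed or weak displays that a discrete classifier would force into one bin keep their graded position in this representation, which is the main argument for the dimensional shift.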

    Humanoid Robots

    For many years, human beings have tried in every way to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with increasing technological advances based on theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research not only aims to create humanoid robots, a great part of them constituting autonomous systems, but also, in some way, to offer deeper knowledge of the systems that form the human body, with a view to possible applications in rehabilitation technology for human beings, gathering together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by various researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision and locomotion.

    On the Possibility of Robots Having Emotions

    I argue against the commonly held intuition that robots and virtual agents will never have emotions by contending that robots can have emotions in a sense that is functionally similar to humans, even if the robots' emotions are not exactly equivalent to those of humans. To establish a foundation for assessing robots' emotional capacities, I first define what emotions are by characterizing the components of emotion consistent across emotion theories. Second, I dissect the affective-cognitive architecture of MIT's Kismet and Leonardo, two robots explicitly designed to express emotions and to interact with humans, in order to explore whether they have emotions. I argue that, although Kismet and Leonardo lack the subjective-feeling component of emotion, they are capable of having emotions.

    Different impressions of other agents obtained through social interaction uniquely modulate dorsal and ventral pathway activities in the social human brain

    Internal (neuronal) representations in the brain are modified by our experiences, and this phenomenon is not unique to sensory and motor systems. Here, we show that different impressions obtained through social interaction with a variety of agents uniquely modulate activity of dorsal and ventral pathways of the brain network that mediates human social behavior. We scanned brain activity with functional magnetic resonance imaging (fMRI) in 16 healthy volunteers while they played a simple matching-pennies game with a human, a human-like android, a mechanical robot, an interactive robot, and a computer. Before playing this game in the scanner, participants experienced social interactions with each opponent separately and scored their initial impressions using two questionnaires. We found that the participants perceived opponents along two mental dimensions: one represented “mind-holderness,” in which participants attributed anthropomorphic impressions to opponents that seemed to have mental functions, while the other represented “mind-readerness,” in which participants characterized opponents as intelligent. Interestingly, this “mind-readerness” dimension correlated with how frequently participants changed their game tactics to prevent opponents from predicting their strategy, and this was corroborated by increased entropy during the game. We also found that the two factors separately modulated activity in distinct social brain regions. Specifically, mind-holderness modulated activity in the dorsal aspect of the temporoparietal junction (TPJ) and the medial prefrontal and posterior paracingulate cortices, while mind-readerness modulated activity in the ventral aspect of the TPJ and the temporal pole. These results clearly demonstrate that activity in social brain networks is modulated by pre-scanning experiences of social interaction with a variety of agents. Furthermore, our findings elucidate the existence of two distinct functional networks in the social human brain.
Social interaction with anthropomorphic or intelligent-looking agents may distinctly shape the internal representation of our social brain, which may in turn determine how we behave toward the various agents that we encounter in our society.

    From multicultural agents to culture‐aware robots


    Bridging the gap between emotion and joint action

    Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and conversely, joint action research has not yet found a way to include emotion as one of the key parameters in models of socio-motor interaction. In this review, we first identify the gap and then compile evidence from various branches of science showing strong entanglement between emotion and acting together. We propose an integrative approach to bridge the gap, highlight five research avenues for doing so in behavioral neuroscience and the digital sciences, and address some of the key challenges in the area faced by modern societies.

    Continuous Analysis of Affect from Voice and Face

    Human affective behavior is multimodal, continuous and complex. Despite major advances within the affective computing research field, modeling, analyzing, interpreting and responding to human affective behavior still remains a challenge for automated systems, as affect and emotions are complex constructs with fuzzy boundaries and substantial individual differences in expression and experience [7]. Therefore, affective and behavioral computing researchers have recently invested increased effort in exploring how best to model, analyze and interpret the subtlety, complexity and continuity (represented along a continuum, e.g., from −1 to +1) of affective behavior in terms of latent dimensions (e.g., arousal, power and valence) and appraisals, rather than in terms of a small number of discrete emotion categories (e.g., happiness and sadness). This chapter aims to (i) give a brief overview of the existing efforts and the major accomplishments in modeling and analysis of emotional expressions in dimensional and continuous space, focusing on open issues and new challenges in the field, and (ii) introduce a representative approach for multimodal continuous analysis of affect from voice and face, providing experimental results on the audiovisual Sensitive Artificial Listener (SAL) Database of natural interactions. The chapter concludes by posing a number of questions that highlight the significant issues in the field, and by extracting potential answers to these questions from the relevant literature. The chapter is organized as follows. Section 10.2 describes theories of emotion, and Section 10.3 provides details on the affect dimensions employed in the literature as well as how emotions are perceived from visual, audio and physiological modalities.
Section 10.4 summarizes how current technology has been developed, in terms of data acquisition and annotation, and automatic analysis of affect in continuous space, bringing forth a number of issues that need to be taken into account when applying a dimensional approach to emotion recognition: determining the duration of emotions for automatic analysis, modeling the intensity of emotions, determining the baseline, dealing with high inter-subject expression variation, defining optimal strategies for fusion of multiple cues and modalities, and identifying appropriate machine learning techniques and evaluation measures. Section 10.5 presents our representative system, which fuses vocal and facial expression cues for dimensional and continuous prediction of emotions in valence and arousal space by employing bidirectional Long Short-Term Memory neural networks (BLSTM-NN), and introduces an output-associative fusion framework that incorporates correlations between the emotion dimensions to further improve continuous affect prediction. Section 10.6 concludes the chapter.
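The core idea of output-associative fusion is that the second-stage predictor for one dimension (say, valence) takes the first-stage predictions of both dimensions as input, exploiting their correlation. The sketch below illustrates only that idea: the first-stage predictors are trivial stand-ins (the chapter's system uses BLSTM networks), and the fusion weights are fit with plain online gradient descent; all names and numbers are assumptions for illustration.

```python
# Sketch of output-associative fusion: valence is re-predicted from BOTH
# first-stage outputs (valence and arousal estimates), not valence alone.
def first_stage(frame):
    """Stand-in unimodal predictors: (valence_hat, arousal_hat) per frame."""
    face, voice = frame
    return 0.6 * face, 0.6 * voice


def fit_output_associative(frames, targets, lr=0.05, epochs=2000):
    """Learn w so that valence ~= w0*valence_hat + w1*arousal_hat + w2."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for frame, v_true in zip(frames, targets):
            v_hat, a_hat = first_stage(frame)
            err = w[0] * v_hat + w[1] * a_hat + w[2] - v_true
            w[0] -= lr * err * v_hat
            w[1] -= lr * err * a_hat
            w[2] -= lr * err
    return w


# Toy data in which true valence genuinely depends on both first-stage outputs,
# so the arousal estimate carries information the valence estimate alone lacks.
frames = [(0.2, 0.1), (0.8, -0.3), (-0.5, 0.4), (0.0, 0.9), (-0.9, -0.2)]
targets = [1.5 * (0.6 * f) + 0.5 * (0.6 * v) for f, v in frames]

w = fit_output_associative(frames, targets)
v_hat, a_hat = first_stage((0.5, 0.5))
print(w[0] * v_hat + w[1] * a_hat + w[2])  # fused valence for an unseen frame
```

When the emotion dimensions are correlated, this second stage recovers weight on the cross-dimension term (w1 here), which is exactly the coupling a per-dimension predictor would discard.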