225 research outputs found

    Intelligent Object Exploration

    Get PDF

    Human-Robot Communication through Life-Sustaining Physiological Phenomena and the Design of a Body-Emotion Model

    Get PDF
    In this dissertation, we focus on physiological phenomena of robots as an expressive modality of their inner states and discuss the effectiveness of a robot expressing physiological phenomena, which are indispensable for living. We designed a body-emotion model showing the relationship between a) emotion as the inner state of the robot and b) physiological phenomena as physical changes, and we discuss communication between humans and robots through involuntary physiological expression based on the model. In recent years, various robots for use in mental health care and communication support in medical and nursing care have been developed. The purpose of these systems is to enable communication between a robot and patients through an active approach by the robot using sound and body movement. In contrast to conventional approaches, our research is based on involuntary emotional expression through the robot's physiological phenomena. Physiological phenomena, including breathing, heartbeat, and body temperature, are essential functions for life activities, and they are closely related to the inner state of humans because physiological phenomena are caused by the emotional reaction of the limbic system transmitted via the autonomic nervous system. In human-robot communication through physical contact, we consider physiological phenomena to be one of the most important nonverbal, involuntary modalities of the inner state. First, focusing on robots' expression of physiological phenomena, we proposed the body-emotion model (BEM), which concerns the relationship between the inner state of robots and their involuntary physical reactions. We also proposed a stuffed-toy robot system, BREAR, which has a mechanical structure for expressing breathing, heartbeat, temperature, and bodily movement. The experimental results showed that heartbeat, breathing, and body temperature can express the robot's living state and that breathing speed is strongly related to the robot's arousal. We reviewed the experimental results and emotion-generation mechanisms and discussed the design of the robot based on the BEM. Based on our verification results, we determined that the BEM design, which involves perceiving the external situation, matching it against memory, changing the autonomic nervous parameter, and representing the physiological phenomena, and which is grounded in the relationship between the autonomic nervous system and emotional arousal, is effective. Second, we discussed indirect communication between humans and robots through physiological phenomena (breathing, heartbeat, and body temperature) that express robots' emotions. We set up a situation in which the robot and the user jointly attended to emotional content and evaluated whether the user's emotional response to the content and the user's impression of the relationship with the robot were changed by the robot's physiological expressions. The results suggest that the robot's physiological expression amplifies or suppresses the user's own emotions during the experience and increases impressions of closeness and sensitivity. Last, we discussed future perspectives on human-robot communication through physiological phenomena.
Regarding the representation of the robot's sense of life, the user's recognition that the robot is alive is thought to improve not only the moral effect of understanding the finiteness of life but also attachment to the robot in long-term communication. Regarding the emotional expression mechanism based on life, the robot is expected to display a complex internal state close to that of humans by combining intentionally expressed emotions with involuntary emotional expressions. If a robot can combine realistic voluntary expressions, such as facial expressions and body movements, with genuine involuntary expressions, allowing for both real intentions and deception, it can be said to have a more complex internal state than that of a pet. By using a robot that expresses a living state through physiological phenomena, the effect on mental care can be expected to exceed that of animal therapy, and such robots may provide care and welfare support in place of human beings.
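
    As a rough illustration of the BEM pipeline summarised above (perceive the external situation, match it against memory, update an autonomic parameter, express it physiologically), the sketch below maps an assumed arousal level onto breathing, heartbeat, and temperature. The class and function names, value ranges, and linear mappings are illustrative assumptions, not the dissertation's actual model.

```python
# Minimal sketch of a BEM-style pipeline; all names and numeric ranges are assumptions.
from dataclasses import dataclass

@dataclass
class PhysiologicalState:
    breathing_rate_bpm: float  # breaths per minute
    heart_rate_bpm: float      # beats per minute
    body_temp_c: float         # surface temperature in degrees Celsius

def autonomic_parameter(stimulus_intensity: float, memory_match: float) -> float:
    """Map a perceived stimulus (-1 calming .. +1 exciting) and its match against memory to arousal in [0, 1]."""
    arousal = 0.5 + 0.5 * stimulus_intensity * memory_match
    return min(max(arousal, 0.0), 1.0)

def express(arousal: float) -> PhysiologicalState:
    """Turn arousal into involuntary physiological expression; breathing speed scales with arousal."""
    return PhysiologicalState(
        breathing_rate_bpm=10 + 20 * arousal,  # calm ~10, highly aroused ~30
        heart_rate_bpm=60 + 60 * arousal,
        body_temp_c=36.0 + 1.5 * arousal,
    )

# Example: a strong stimulus that closely matches a stored emotional memory
print(express(autonomic_parameter(stimulus_intensity=0.8, memory_match=0.9)))
```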

    Developmental learning of internal models for robotics

    No full text
    Abstract: Robots that operate in human environments can learn motor skills asocially, from self-exploration, or socially, from imitating their peers. A robot capable of doing both can be more adaptive and autonomous. Learning by imitation, however, requires the ability to understand the actions of others in terms of your own motor system: this information can come from a robot's own exploration. This thesis investigates the minimal requirements for a robotic system that learns from both self-exploration and imitation of others. Through self-exploration and computer vision techniques, a robot can develop forward models: internal models of its own motor system that enable it to predict the consequences of its actions. Multiple forward models are learnt that give the robot a distributed, causal representation of its motor system. It is demonstrated how a controlled increase in the complexity of these forward models speeds up the robot's learning. The robot can determine the uncertainty of its forward models, enabling it to explore so as to improve the accuracy of its predictions. Paying attention to the forward models according to how their uncertainty is changing leads to a development in the robot's exploration: its interventions focus on increasingly difficult situations, adapting to the complexity of its motor system. A robot can invert forward models, creating inverse models, in order to estimate the actions that will achieve a desired goal. Switching to social learning, the robot uses these inverse models to imitate both a demonstrator's gestures and the underlying goals of their movement.
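
    The loop described above, learning forward models from self-exploration and steering exploration by how prediction uncertainty is changing, can be sketched roughly as follows. This is a toy illustration with an assumed linear model and a simple learning-progress heuristic, not the thesis's actual implementation.

```python
import numpy as np

class ForwardModel:
    """Toy linear forward model: predicts the next state from (state, action)."""
    def __init__(self, lr=0.1):
        self.w = np.zeros(2)        # next_state is approximated by w . [state, action]
        self.lr = lr
        self.errors = [1.0]         # running prediction-error history

    def predict(self, state, action):
        return float(self.w @ np.array([state, action]))

    def update(self, state, action, next_state):
        err = next_state - self.predict(state, action)
        self.w += self.lr * err * np.array([state, action])
        self.errors.append(abs(err))

    def learning_progress(self, window=5):
        """Recent drop in prediction error; exploration focuses where this change is largest."""
        recent = self.errors[-window:]
        return recent[0] - recent[-1]

# Exploration heuristic (assumed): among several candidate models/contexts, practise
# the one whose prediction error is currently improving the most.
models = {"reach": ForwardModel(), "push": ForwardModel(), "grasp": ForwardModel()}
# ... after updating each model with the robot's own exploration data:
focus = max(models, key=lambda k: models[k].learning_progress())
print("explore next:", focus)
```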

    Shared perception is different from individual perception: a new look on context dependency

    Full text link
    Human perception is based on unconscious inference, where sensory input is integrated with prior information. This phenomenon, known as context dependency, helps in facing the uncertainty of the external world with predictions built upon previous experience. On the other hand, human perceptual processes are inherently shaped by social interactions. However, how the mechanisms of context dependency are affected is to date unknown. If using previous experience (priors) is beneficial in individual settings, it could represent a problem in social scenarios where other agents might not share the same priors, causing a perceptual misalignment on the shared environment. The present study addresses this question. We studied context dependency in an interactive setting with the humanoid robot iCub, which acted as a stimulus demonstrator. Participants reproduced the lengths shown by the robot in two conditions: one with iCub behaving socially and another with iCub acting as a mechanical arm. The different behavior of the robot significantly affected the use of the prior in perception. Moreover, the social robot positively impacted perceptual performance by enhancing accuracy and reducing participants' overall perceptual errors. Finally, the observed phenomenon has been modelled following a Bayesian approach to deepen and explore a new concept of shared perception. Comment: 14 pages, 9 figures, 1 table. IEEE Transactions on Cognitive and Developmental Systems, 202
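
    The Bayesian account of context dependency referred to above is commonly formalised as precision-weighted averaging of a prior built from previously seen stimuli and the current noisy measurement. The sketch below illustrates that standard formulation with assumed noise parameters; it is not the paper's fitted model.

```python
import numpy as np

def reproduce_length(true_len, prior_mean, sensory_sd, prior_sd, rng):
    """Posterior-mean length estimate: a precision-weighted mix of the prior and a noisy measurement."""
    measurement = rng.normal(true_len, sensory_sd)
    w_prior = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / sensory_sd**2)
    return w_prior * prior_mean + (1 - w_prior) * measurement

rng = np.random.default_rng(0)
lengths_cm = [6, 8, 10, 12, 14]          # demonstrated stimuli (assumed values)
prior_mean = np.mean(lengths_cm)
# A strong prior (small prior_sd) pulls reproductions toward the mean of past stimuli,
# i.e. short lengths are overestimated and long lengths underestimated (central tendency).
for L in lengths_cm:
    est = reproduce_length(L, prior_mean, sensory_sd=1.0, prior_sd=2.0, rng=rng)
    print(f"shown {L} cm -> reproduced {est:.2f} cm")
```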

    Gaze-based interaction for effective tutoring with social robots

    Get PDF

    Drama, a connectionist model for robot learning: experiments on grounding communication through imitation in autonomous robots

    Get PDF
    The present dissertation addresses problems related to robot learning from demonstration. It presents the building of a connectionist architecture, which provides the robot with the necessary cognitive and behavioural mechanisms for learning a synthetic language taught by an external teacher agent. This thesis considers three main issues: 1) learning of spatio-temporal invariance in a dynamic noisy environment, 2) symbol grounding of a robot's actions and perceptions, 3) development of a common symbolic representation of the world by heterogeneous agents. We build our approach on the assumption that grounding of symbolic communication creates constraints not only on the cognitive capabilities of the agent but also, and especially, on its behavioural capacities. Behavioural skills, such as imitation, which allow the agent to co-ordinate its actions with those of the teacher agent, are required alongside general cognitive abilities of associativity in order to constrain the agent's attention to making relevant perceptions, onto which it grounds the teacher agent's symbolic expressions. In addition, the agent should be provided with the cognitive capacity for extracting spatial and temporal invariance in the continuous flow of its perceptions. Based on this requirement, we develop a connectionist architecture for learning time series. The model is a Dynamical Recurrent Associative Memory Architecture, called DRAMA. It is a fully connected recurrent neural network using Hebbian update rules. Learning is dynamic and unsupervised. The performance of the architecture is analysed theoretically, through numerical simulations, and through physical and simulated robotic experiments. Training of the network is computationally fast and inexpensive, which allows its implementation for real-time computation and on-line learning in an inexpensive hardware system. Robotic experiments are carried out with different learning tasks involving recognition of spatial and temporal invariance, namely landmark recognition and prediction of perception-action sequences in maze travelling. The architecture is applied to experiments on robot learning by imitation. A learner robot is taught a vocabulary to describe its perceptions and actions by a teacher agent, either a human instructor or another robot. The experiments are based on an imitative strategy, whereby the learner robot reproduces the teacher's actions. While imitating the teacher's movements, the learner robot makes proprio- and exteroceptions similar to those of the teacher. The learner robot grounds the teacher's words onto the set of common perceptions they share. We carry out experiments in simulated and physical environments, using different robotic set-ups and gradually increasing the complexity of the task. In a first set of experiments, we study the transmission of a vocabulary designating a robot's actions and perceptions. Further, we carry out simulation studies in which we investigate the transmission and use of the vocabulary among a group of robotic agents. In a third set of experiments, we investigate learning sequences of the robot's perceptions while it wanders in a physically constrained environment. Finally, we present the implementation of DRAMA in Robota, a doll-like robot which can imitate the arm and head movements of a human instructor. Through this imitative game, Robota is taught to perform and label dance patterns.
Further, Robota is taught a basic language, including a lexicon and syntactic rules for combining words of the lexicon, to describe its actions and its perception of touch on its body.
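
    DRAMA is described above as a fully connected recurrent associative memory trained with unsupervised Hebbian updates on time series. The sketch below shows a generic Hebbian rule that associates successive patterns and recalls a sequence continuation; DRAMA's actual update rule and parameters differ, so this is only a simplified stand-in.

```python
import numpy as np

def hebbian_step(W, x_prev, x_curr, eta=0.5, decay=0.001):
    """Strengthen connections between units active on successive time steps (unsupervised)."""
    W += eta * np.outer(x_curr, x_prev)   # associate the current pattern with the previous one
    W *= (1.0 - decay)                    # slow decay keeps weights bounded
    return W

def recall_next(W, x, threshold=0.25):
    """Recall the learned continuation of a cue pattern by thresholding the recurrent activation."""
    return (W @ x > threshold).astype(float)

n = 8
W = np.zeros((n, n))
sequence = [np.eye(n)[i] for i in range(4)]   # a toy perception-action sequence of one-hot patterns
for prev, curr in zip(sequence, sequence[1:]):
    W = hebbian_step(W, prev, curr)

print(recall_next(W, sequence[0]))            # cueing with the first pattern recalls the second
```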

    AI Governance Through a Transparency Lens

    Get PDF

    Advances in Human-Robot Interaction

    Get PDF
    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.