96,038 research outputs found

    Social cognition and robotics

    In our social world we continuously display nonverbal behavior during interaction. In particular, when meeting someone for the first time we use these implicit signals to form judgments about each other, which is a cornerstone of cooperation and societal cohesion. The aim of the studies presented here was to examine which gaze patterns and other nonverbal signals, such as facial expressions, gestures, and kinesics, are displayed during interaction, which signals are preferred, and which signals we base our social judgments on. Furthermore, it was investigated whether cultural context (German or Japanese) influences these interaction and decision-making patterns. One part of the dissertation focused mainly on gaze behavior, as it is one of the most important tools humans use to function in the natural world: it allows monitoring the environment as well as signaling toward others. One goal of this dissertation was therefore to measure whether attentional resources are captured by pointing gestures and gaze shifts of an interaction partner, as reflected in gaze following. Intercultural differences in gaze reactions to direct gaze during various types of interaction were also examined. For that purpose, a real-world dyadic interaction scenario was used in combination with a mobile eye tracker. The observed gaze patterns suggested that, independent of culture, interactants mostly ignore irrelevant directional cues and instead remain focused on the face of a conversation partner, at least while listening to that partner. The same pattern appeared when no directional signals were displayed. While speaking, on the other hand, interactants from Japan, in contrast to interactants from Germany, averted their gaze from the face, which may be attributable to cultural norms. As correctly assessing another person is a critical human skill, the second part of the dissertation investigated the basis on which humans make these social decisions. Specifically, nonverbal signals of trustworthiness and potential cooperativeness in Germany and Japan were of interest. In one study, a mobile eye tracker was used to investigate intercultural differences in gaze patterns during the social judgment of a small number of sequentially presented potential cooperation partners. In another study, participants viewed video stimuli of faces, bodies, and faces + bodies of potential cooperation partners to examine the basis of social decision making in more detail and to explore a wider variety of nonverbal behaviors in a more controlled manner. Results indicated that while judging presenters' trustworthiness based on displayed nonverbal cues, German participants partly looked away from the face and examined the body, in contrast to Japanese participants, who remained fixated mostly on the face. Furthermore, body motion appeared to be of particular importance for social judgment, and body motion from one's own culture seemed to be preferred over that from a different culture. Lastly, nonverbal signals as a basis of decision making were explored in more detail by examining which interaction partner's behavior, presented as video stimuli, was preferred. In recent years, and presumably also in the future, the human social environment has been growing to include new types of interactants, such as robots. To ensure smooth interaction, robots therefore need to be adjusted to human social expectations, including in their nonverbal behavior. That is one reason why all results presented here were not only placed in the context of human interaction and judgment, but also viewed in the context of human-robot interaction.

    Eye movements for learned faces

    Humans demonstrate an astonishing perceptual specialization for faces. This project attempts to determine whether, and where within the perceptual process, face perception and face recognition diverge at the level of eye movement behaviors. Participants were exposed to a series of 36 faces, of which six were randomly selected to be learned over five subsequent exposures; thus the same face identities served as both the novel faces (block 1) and the learned faces (block 5), allowing measurement of eye gaze patterns during initial face perception (novel) and face recognition (learned). These six faces were randomly assigned to different orders within five presentation blocks, along with 30 interspersed novel distractor faces (six novel faces per block). Eye movement patterns were recorded using the Gazepoint eye tracker and measured as fixation duration and number of fixations for a set of regions of interest (ROIs). A linear mixed effects model was run for both fixation duration and number of fixations, accounting for the potential main effects and interaction of ROI and familiarity (i.e., face perception vs. face recognition). Participants spent the most time on, and looked most often at, the eyes of the faces they viewed (more than any other ROI), regardless of their familiarity with the face. This suggests that while novel and familiar faces may be processed in overlapping but distinct manners, the way people visually scan a face may not differ between face perception and face recognition.
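
    The modeling step above (a linear mixed effects model of fixation measures with ROI and familiarity as crossed fixed effects and participant as a grouping factor) can be sketched as follows. This is a minimal illustration assuming pandas/statsmodels; the data are synthetic and the column names are hypothetical, not taken from the study.

```python
# Minimal LMM sketch, assuming statsmodels; synthetic data, illustrative names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
data = pd.DataFrame({
    "participant": rng.integers(0, 20, n).astype(str),
    "roi": rng.choice(["eyes", "nose", "mouth"], n),
    "familiarity": rng.choice(["novel", "learned"], n),
})
# Give the eyes ROI longer fixations, mirroring the pattern the abstract reports.
data["fixation_duration"] = 300.0 + 120.0 * (data["roi"] == "eyes") + rng.normal(0.0, 50.0, n)

# Fixed effects: ROI, familiarity, and their interaction;
# random intercept per participant.
model = smf.mixedlm("fixation_duration ~ roi * familiarity", data, groups=data["participant"])
print(model.fit().summary())
```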

    Maternal oxytocin response predicts mother-to-infant gaze

    The neuropeptide oxytocin is centrally implicated in the emergence and maintenance of the maternal behavior that forms the basis of the mother-infant bond. However, no research has yet examined the specific association between maternal oxytocin and maternal gaze, a key modality through which the mother makes social contact and engages with her infant. Furthermore, prior oxytocin studies have assessed maternal engagement primarily during episodes free of infant distress, while maternal engagement during infant distress is considered uniquely relevant to the formation of secure mother-infant attachment. Two patterns of maternal gaze, gaze toward and gaze shifts away from the infant, were micro-coded while 50 mothers interacted with their 7-month-old infants during a modified still-face procedure. Maternal oxytocin response was defined as the change in the mother's plasma oxytocin level following interaction with her infant, relative to baseline. The mother's oxytocin response was positively associated with the duration of time her gaze was directed toward her infant, and negatively associated with the frequency with which her gaze shifted away from her infant. Importantly, mothers with a low/average oxytocin response showed a significant decrease in gaze toward their infants during periods of infant distress, whereas no such change was observed in mothers with a high oxytocin response. The findings underscore the involvement of oxytocin in regulating the mother's responsive engagement with her infant, particularly at times when the infant's need for access to the mother is greatest.
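
    As a concrete reading of the definition above, the sketch below computes the oxytocin response as the post-interaction change from baseline and correlates it with the two gaze measures. It assumes pandas/scipy; all numbers and column names are invented purely for illustration.

```python
# Minimal sketch of the stated definition; values and names are illustrative only.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "oxytocin_baseline": [38.0, 41.5, 35.2, 47.8],     # plasma level before interaction
    "oxytocin_post":     [45.1, 40.9, 44.0, 49.3],     # plasma level after interaction
    "gaze_toward_s":     [120.0, 95.0, 140.0, 131.0],  # duration of gaze toward infant
    "gaze_shifts_away":  [14, 22, 9, 11],              # frequency of shifts away
})
# Oxytocin response = change from baseline following mother-infant interaction.
df["oxytocin_response"] = df["oxytocin_post"] - df["oxytocin_baseline"]

print(pearsonr(df["oxytocin_response"], df["gaze_toward_s"]))     # reported: positive
print(pearsonr(df["oxytocin_response"], df["gaze_shifts_away"]))  # reported: negative
```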

    Effects of participant’s role and narrative topic on visual attention in adults with autism during a structured interaction

    Individuals with autism spectrum disorders (ASD) have well-documented difficulties on face perception tasks. Although visual attention has been examined to clarify the nature of face processing in ASD, there is no consensus in the research concerning how visual attention differs in individuals with ASD, and less is known about how individuals with ASD attend to faces during interactions. The current study used a novel method to simulate a video-mediated interaction and thereby examine the effects of group (autism, control), participant’s role (listening, responding), and topic demands (cognitive and social) on visual attention in an interactive context. Nineteen male adults with ASD and 19 male typically developing (TD) adults, matched on age and measures of IQ, completed a task that involved alternating between responding to narrative topics and listening to their partner (a confederate) respond to the same topics. Unbeknownst to participants, prerecorded videos were shown instead of a live video feed, and eye movements were recorded. Additional analyses examined the effects of stimulus type, individual differences, and temporally specific differences in group viewing proportions. Overall, patterns of visual attention were similar for participants with and without ASD, indicating that top-down factors moderate gaze in ASD. When between-group differences were identified, the majority revealed reduced attention to facial regions or attenuated shifts in gaze in the autism group, rather than atypicalities in the overall patterns of gaze. However, results indicated that both the distribution of attention to facial features and the extent of between-group differences in gaze depended on whether static or dynamic faces were viewed. In addition, reduced gaze to the face during the listening condition and reduced overall gaze to the nose distinguished the autism group from the control group. Participant characteristics (i.e., social anxiety, social skills participation) and contextual factors (i.e., emotional, dense, or disfluent speech) associated with within-group and between-group variability were also identified. Findings highlight the importance of examining visual attention using ecologically valid designs in order to conceptualize face processing in ASD. Possible explanations for group differences in gaze to the nose, rather than the eyes or the mouth, are discussed.

    Graphical models for social behavior modeling in face-to-face interaction

    The goal of this paper is to model the coverbal behavior of a subject involved in face-to-face social interactions. To this end, we present a multimodal behavioral model based on a Dynamic Bayesian Network (DBN). The model was inferred from multimodal data of interacting dyads in a specific scenario designed to foster mutual attention and multimodal deixis of objects and places in a collaborative task. The challenge for this behavioral model is to generate coverbal actions (gaze, hand gestures) for the subject given the subject's verbal productions, the current phase of the interaction, and the perceived actions of the partner. In our work, the structure of the DBN was learned from data, which revealed an interesting causality graph describing precisely how verbal and coverbal human behaviors are coordinated during the studied interactions. Using this structure, the DBN exhibits better performance than classical baseline models such as Hidden Markov Models (HMMs) and Hidden Semi-Markov Models (HSMMs): it outperforms the baselines on both performance measures, i.e. interaction-unit recognition and behavior generation. The DBN also reproduces the coordination patterns between modalities observed in the ground truth more faithfully than the baseline models do.
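
    To make the model family concrete, here is a minimal two-slice DBN structure of the kind described, sketched with the pgmpy library. The variable names (Speech, Phase, Gaze, Gesture) and the edges are illustrative assumptions; in the paper the structure was learned from data rather than specified by hand.

```python
# Minimal two-slice DBN sketch, assuming pgmpy; nodes are (variable, time_slice).
from pgmpy.models import DynamicBayesianNetwork as DBN

dbn = DBN()
# Intra-slice edges: verbal activity and interaction phase drive coverbal action
# (pgmpy mirrors intra-slice edges into slice 1 automatically).
dbn.add_edges_from([
    (("Speech", 0), ("Gaze", 0)),
    (("Phase", 0), ("Gaze", 0)),
    (("Phase", 0), ("Gesture", 0)),
])
# Inter-slice edges: coverbal behavior persists across time steps.
dbn.add_edges_from([
    (("Gaze", 0), ("Gaze", 1)),
    (("Gesture", 0), ("Gesture", 1)),
])
print(sorted(dbn.edges()))
```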

    Infant Gaze During Mother-Infant Face-to-Face Interaction

    Numerous research studies have examined mother-infant face-to-face interactions. A common goal of these studies has been to identify the characteristics of both the mother and the infant that affect social interaction. One area of theory and research has focused specifically on the communication patterns that develop in the mother-infant dyad; both the verbal and nonverbal aspects of this relationship have been investigated. Past research has also demonstrated that nearly one third of early parent-infant interactions can be considered play, which is defined as social interaction that occurs when the infant is alert and the caregiver's needs have been met (Stern, 1974; Field, 1979). Based on this definition, play represents a unique set of social interactions that may vary among individuals and, as a result, contribute to individual differences in social development. Roggman and Peery (1989) suggest that patterns of parent-infant play may develop very early and are specific to the gender of both the infant and the parent. If this is true, then some gender differences that occur later in a child's development may begin in infancy during these early parent-infant interactions. As a result, gender-related variation in parent-infant interaction may contribute to the beginning of differential socialization for males and females (Roggman & Peery, 1989). The main purpose of the current research was to improve understanding of early social interactions, specifically mother-infant play interactions during the first six months of life. Past research has suggested that gender affects infant completion of studies involving changes in a mother's pace of interaction, but has failed to examine the relationship of age in conjunction with gender (Burlie, 1992). In addition, infant gender has been shown to alter the gaze behaviors of mothers (Roggman & Peery, 1989). Stern (1974) stated that gaze behavior is important in maternal satisfaction with play interactions. Therefore, the specific purpose of this research is to analyze infant gaze behaviors during mother-infant face-to-face interaction, looking specifically at the effects of infant gender, infant age, and change in maternal pacing on infant gaze behaviors. Infant gaze differs for boys and girls. In normal play situations, three- and four-month-old girls gazed longer at their mothers than boys did (Fogel, Toda & Kawai, 1988; Roggman & Peery, 1989). This research indicates that females are more attentive during normal play than males. However, a study by Tronick and Cohn (1989) found that sons are more likely than daughters to match behavior states with their mothers. They also reported that sons are more synchronized with their mothers than daughters at six and nine months, although the data also suggest that daughters are synchronized at three months (Tronick & Cohn, 1989). These results are supported by Burlie's 1992 study, in which three-month-old females were unable to return to normal play behavior once the mother slowed her play behavior and thereby broke the synchrony of the interaction. The present study investigated how gaze behaviors vary across gender, and how these differences vary with the age of the infant. Research shows that the amount of time infants spend gazing at their mothers changes with age. Gaze behaviors usually increase from birth until three or four months of age, when the behavior begins to decrease. At six months the infant's focal environment expands to include objects, which decreases the amount of time the infant spends gazing at the mother (Cohn & Tronick, 1987; Stack & Muir, 1990). As a result, six-month-olds may not be as sensitive to changes in their mother's behavior. The present study will examine gaze behaviors in infants who are one to six months of age. Previous studies of infant gaze behaviors have not focused on the 1- to 2-month age range. As a result, this study will provide insight into some of the earliest gaze behaviors as well as the more complex gaze behaviors that develop along with the infant's visual-motor system.

    Controlling the Gaze of Conversational Agents

    We report on a pilot experiment that investigated the effects of different eye gaze behaviors of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like behavior with two other versions: in one, the shifts in gaze were kept minimal, and in the other, the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the agent that behaved according to the human-like patterns appreciated the agent more than participants who conversed with the other agents. Conversations with the human-like version also proceeded more efficiently: participants needed less time to complete their task.
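
    A toy sketch of the three gaze-control conditions compared in the pilot (human-like, minimal, random) might look like the following. The timing constants and the speaking/listening asymmetry are illustrative assumptions, not parameters from the paper.

```python
# Toy gaze-shift scheduler for the three conditions; constants are assumptions.
import random

def next_gaze_shift(mode: str, speaking: bool) -> float:
    """Return seconds until the agent next shifts its gaze away from the user."""
    if mode == "minimal":
        return float("inf")  # gaze shifts kept minimal: effectively never shift
    if mode == "random":
        return random.uniform(0.5, 8.0)  # shifts occur at random moments
    if mode == "human-like":
        # Rough human-like pattern: look away sooner while speaking,
        # hold the user's face longer while listening.
        mean, sd = (3.0, 1.0) if speaking else (6.0, 1.5)
        return max(0.5, random.gauss(mean, sd))
    raise ValueError(f"unknown mode: {mode!r}")

for mode in ("minimal", "random", "human-like"):
    print(mode, next_gaze_shift(mode, speaking=True))
```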

    Social interactions through the eyes of macaques and humans

    Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention with humans. Here we compared the gaze behavior of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics at different phylogenetic distances from the observer, i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons, or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals varied systematically according to the viewed species and the nature of the interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys’ gaze behavior was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than at the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of social attention between rhesus macaques and humans.

    Human spontaneous gaze patterns in viewing of faces of different species

    Human studies have reported clear differences in the perceptual and neural processing of faces of different species, implying a contribution of visual experience to face perception. Can these differences be manifested in our eye scanning patterns while extracting salient facial information? Here we systematically compared non-pet owners’ gaze patterns while exploring human, monkey, dog, and cat faces in a passive viewing task. Our analysis revealed that the faces of different species induced similar patterns of fixation distribution between the left and right hemi-face, and among key local facial features, with the eyes attracting the highest proportion of fixations and viewing time, followed by the nose and then the mouth. Only the proportion of fixations directed at the mouth region was species-dependent and could be differentiated at the earliest stage of face viewing. It seems that our spontaneous eye scanning patterns during face exploration are mainly constrained by general facial configurations; the species affiliation of the inspected faces had limited impact on gaze allocation, at least under free viewing conditions.