The effect of non-communicative eye movements on joint attention.
Eye movements provide important signals for joint attention. However, the eye movements that indicate bids for joint attention often occur among non-communicative eye movements. This study investigated the influence of these non-communicative eye movements on subsequent joint attention responsivity. Participants played an interactive game with an avatar that required both players to search for a visual target on a screen. The player who discovered the target used their eyes to initiate joint attention. We compared participants' saccadic reaction times (SRTs) to the avatar's joint attention bids when those bids were preceded by non-communicative eye movements that predicted the location of the target (Predictive Search), by non-communicative eye movements that did not predict the location of the target (Random Search), or by no non-communicative eye movements (No Search). We also included a control condition in which participants completed the same task but responded to a dynamic arrow stimulus instead of the avatar's eye movements. For both eye and arrow conditions, participants had slower SRTs in Random Search trials than in No Search and Predictive Search trials. However, these effects were smaller for eyes than for arrows. These data suggest that joint attention responsivity for eyes is relatively robust to the presence and predictability of spatial information conveyed by non-communicative gaze. By contrast, random sequences of dynamic arrows had a much more disruptive impact on subsequent responsivity than predictive arrow sequences. This may reflect specialised social mechanisms and expertise for selectively responding to communicative eye gaze cues during dynamic interactions, which is likely facilitated by the integration of ostensive eye contact cues.
Evidence for the adaptive parsing of non-communicative eye movements during joint attention interactions
During social interactions, the ability to detect and respond to gaze-based joint attention bids often involves the evaluation of non-communicative eye movements. However, very little is known about how well humans can track and parse spatial information from these non-communicative eye movements over time, and the extent to which this influences joint attention outcomes. This was investigated in the current study using an interactive computer-based joint attention game. Using a fully within-subjects design, we specifically examined whether participants were quicker to respond to communicative joint attention bids that followed predictive, as opposed to random or no, non-communicative gaze behaviour. Our results suggest that in complex, dynamic tasks, people adaptively use and dismiss non-communicative gaze information depending on whether it informs the locus of an upcoming joint attention bid. We then examined the extent to which this ability to track dynamic spatial information was specific to processing gaze information. This was achieved by comparing performance to a closely matched non-social task in which eye gaze cues were replaced with dynamic arrow stimuli. Whilst we found that people are also able to track and use dynamic non-social information from arrows, there was clear evidence of a relative advantage for tracking gaze cues during social interactions. The implications of these findings for social neuroscience and autism research are discussed.
Predicting intentions: How do we predict others' action intentions?
Intention prediction often plays a crucial role in successful social interaction. Previous studies have attempted to understand this skill by focusing on the role of movement kinematics in isolation. However, this approach is limited because the same kinematics typically map onto multiple action possibilities (affordances); as a result, individuals also employ contextual information to predict others' intentions. In this study we present preliminary findings from a qualitative study investigating intention prediction in naturalistic contexts. Participants viewed an individual reaching for a cup with one of two object-directed intentions: to drink or to clear the table. A third, non-object-directed intention was also included, in which the observed individual placed their hand on the table next to the cup. For each intention, the contextual information was varied by changing the environmental scene between (1) cups full of juice, (2) almost empty cups, and (3) half-empty cups. The findings reveal that participants perceived the cup's functional (most salient) affordance, drinking, as the intention so long as the movement kinematics specified an object-directed intention (drink or clear) and the context clearly afforded it (full and half-empty cups). However, participants were also sensitive to the kinematic differences between the object-directed intentions when the context made the functional affordance seem improbable (almost empty cups).