
    Coordinating Cognition: The Costs and Benefits of Shared Gaze During Collaborative Search

    Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared-gaze (with each searcher seeing a gaze-cursor indicating where the other was looking), shared-voice (by speaking to each other), and shared-gaze-plus-voice (by using both gaze-cursors and speech). Although collaborating pairs performed better than solitary searchers, search in the shared-gaze condition was best of all: twice as fast and efficient as solitary search. People can successfully communicate and coordinate their searching labor using shared gaze alone. Strikingly, shared-gaze search was even faster than shared-gaze-plus-voice search; speaking incurred substantial coordination costs. We conclude that shared gaze affords a highly efficient method of coordinating parallel activity in a time-critical spatial task.

    Advancing Knowledge on Situation Comprehension in Dynamic Traffic Situations by Studying Eye Movements to Empty Spatial Locations

    Objective: This study used the looking-at-nothing phenomenon to explore situation awareness (SA) and the effects of working memory (WM) load in driving situations. Background: While driving, people develop a mental representation of the environment. Since errors in retrieving information from this representation can have fatal consequences, it is essential for road safety to investigate this process. During retrieval, people tend to fixate spatial positions of visually encoded information, even if it is no longer available at that location. Previous research has shown that this "looking-at-nothing" behavior can be used to trace retrieval processes. Method: In a video-based laboratory experiment with a 2 (WM) × 3 (SA level) within-subjects design, participants (N = 33) viewed a reduced screen and evaluated auditory statements relating to different SA levels on previously seen dynamic traffic scenarios while their eye movements were recorded. Results: When retrieving information, subjects more frequently fixated emptied spatial locations associated with the information relevant to the probed SA level. The retrieval of anticipations (SA level 3), in contrast to the other SA-level information, resulted in more frequent gaze transitions that corresponded to the spatial dynamics of future driving behavior. Conclusion: The results support the idea that people build a visual-spatial mental image of a driving situation. Different gaze patterns when retrieving level-specific information indicate divergent retrieval processes. Application: Potential applications include developing new methodologies to objectively assess drivers' mental representations and SA.

    The effect of non-communicative eye movements on joint attention.

    Eye movements provide important signals for joint attention. However, the eye movements that indicate bids for joint attention often occur among non-communicative eye movements. This study investigated the influence of these non-communicative eye movements on subsequent joint attention responsivity. Participants played an interactive game with an avatar in which both players searched for a visual target on a screen. The player who discovered the target used their eyes to initiate joint attention. We compared participants' saccadic reaction times (SRTs) to the avatar's joint attention bids when they were preceded by non-communicative eye movements that predicted the location of the target (Predictive Search), non-communicative eye movements that did not predict the location of the target (Random Search), or no non-communicative eye movements prior to joint attention (No Search). We also included a control condition in which participants completed the same task but responded to a dynamic arrow stimulus instead of the avatar's eye movements. For both eye and arrow conditions, participants had slower SRTs in Random Search trials than in No Search and Predictive Search trials. However, these effects were smaller for eyes than for arrows. These data suggest that joint attention responsivity for eyes is relatively robust to the presence and predictability of spatial information conveyed by non-communicative gaze. By contrast, random sequences of dynamic arrows had a much more disruptive impact on subsequent responsivity than predictive arrow sequences. This may reflect specialised social mechanisms and expertise for selectively responding to communicative eye gaze cues during dynamic interactions, likely facilitated by the integration of ostensive eye contact cues.

    The state of the art of diagnostic multiparty eye tracking in synchronous computer-mediated collaboration

    In recent years, innovative multiparty eye tracking setups have been introduced to synchronously capture the eye movements of multiple individuals engaged in computer-mediated collaboration. Despite its great potential for studying cognitive processes within groups, the method has primarily been used as an interactive tool to enable and evaluate shared gaze visualizations in remote interaction. We conducted a systematic literature review to provide a comprehensive overview of what to consider when using multiparty eye tracking as a diagnostic method in experiments, and of how to process the collected data to compute and analyze group-level metrics. By synthesizing our findings in an integrative conceptual framework, we identified fundamental requirements for a meaningful implementation. In addition, we derived several implications for future research, as multiparty eye tracking has mainly been used to study the correlation between joint attention and task performance in dyadic interaction. We found multidimensional recurrence quantification analysis, a novel method to quantify group-level dynamics in physiological data, to be a promising procedure for addressing some of the highlighted research gaps. In particular, the computation method enables scholars to investigate more complex cognitive processes within larger groups, as it scales up to multiple data streams.
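The scaling property mentioned above comes from how multidimensional recurrence quantification analysis (MdRQA) treats each time point of the combined group signals as one point in a shared state space, so adding group members only adds dimensions. A minimal sketch of the core recurrence-rate computation, assuming z-scored input streams; the function name, `radius` default, and Euclidean distance choice are illustrative, not taken from the paper:

```python
import numpy as np

def mdrqa_recurrence_rate(signals, radius=0.2):
    """Sketch of the MdRQA recurrence rate for a group-level signal.

    `signals` is an (n_samples, n_streams) array, e.g. one column of
    pupil-size or gaze data per group member. Each time point is one
    point in n_streams-dimensional space; two time points "recur" when
    their Euclidean distance falls below `radius`.
    """
    signals = np.asarray(signals, dtype=float)
    # z-score each stream so the radius threshold is scale-free
    signals = (signals - signals.mean(axis=0)) / signals.std(axis=0)
    # pairwise Euclidean distances between all time points
    diffs = signals[:, None, :] - signals[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    rec = dist < radius  # boolean recurrence matrix
    n = len(signals)
    # recurrence rate: fraction of recurrent off-diagonal pairs
    return (rec.sum() - n) / (n * (n - 1))
```

Because the group enters only through the number of columns, the same call works for dyads, triads, or larger groups, which is the scalability the abstract highlights; full MdRQA additionally derives line-based measures (determinism, laminarity) from the recurrence matrix.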

    Prediction of intent in robotics and multi-agent systems.

    Moving beyond the stimulus contained in observable agent behaviour to understand the underlying intent of the observed agent is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support, and intelligent tutoring. This review paper examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single-robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.

    Annotated Bibliography: Anticipation
