
    Dynamic, Task-Related and Demand-Driven Scene Representation

    Humans selectively process and store details about their surroundings based on their knowledge about the scene, the world and their current task. In doing so, only those pieces of information are extracted from the visual scene that are required for solving the given task. In this paper, we present a flexible system architecture along with a control mechanism that allows for a task-dependent representation of a visual scene. In contrast to existing approaches, our system is able to acquire information selectively according to the demands of the given task and based on the system’s knowledge. The proposed control mechanism decides which properties need to be extracted and how the independent processing modules should be combined, based on the knowledge stored in the system’s long-term memory. Additionally, it ensures that algorithmic dependencies between processing modules are resolved automatically, utilizing procedural knowledge that is also stored in the long-term memory. By evaluating a proof-of-concept implementation on a real-world table scene, we show that, while solving the given task, the amount of data processed and stored by the system is considerably lower than in the processing regimes used by state-of-the-art systems. Furthermore, our system acquires and stores only the minimal set of information that is relevant for solving the given task.
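    The demand-driven module selection and automatic dependency resolution described in this abstract can be sketched as a transitive-closure walk over a module dependency table followed by a topological sort. The module names and the dependency table below are illustrative assumptions for the sketch, not details taken from the paper:

    ```python
    from graphlib import TopologicalSorter

    # Hypothetical dependency table (illustrative only): each visual-processing
    # module lists the modules whose output it consumes.
    deps = {
        "segment_regions":  set(),
        "estimate_depth":   set(),
        "classify_objects": {"segment_regions"},
        "locate_object":    {"classify_objects", "estimate_depth"},
    }

    def modules_for(goal: str) -> list[str]:
        """Return an execution order containing only the modules the goal needs,
        with every module scheduled after its prerequisites."""
        needed: set[str] = set()

        def visit(m: str) -> None:
            # collect the goal module and its transitive prerequisites
            if m in needed:
                return
            for d in deps[m]:
                visit(d)
            needed.add(m)

        visit(goal)
        # topologically sort the restricted sub-graph
        ts = TopologicalSorter({m: deps[m] & needed for m in needed})
        return list(ts.static_order())

    # Demand-driven: classifying objects never triggers depth estimation.
    print(modules_for("classify_objects"))  # → ['segment_regions', 'classify_objects']
    ```

    The point of the sketch is the selectivity: only the modules reachable from the requested property are scheduled, mirroring the abstract's claim that the system extracts no more than the task demands.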

    Age-related decline of peripheral visual processing: the role of eye movements

    Earlier work suggests that the area of space from which useful visual information can be extracted (useful field of view, UFoV) shrinks in old age. We investigated whether this shrinkage, documented previously with a visual search task, extends to a bimanual tracking task. Young and elderly subjects executed two concurrent tracking tasks with their right and left arms. The separation between tracking displays varied from 3 to 35 cm. Subjects were asked to fixate straight ahead (condition FIX) or were free to move their eyes (condition FREE). Eye position was registered. In FREE, young subjects tracked equally well at all display separations. Elderly subjects produced higher tracking errors, and the difference between age groups increased with display separation. Eye movements were comparable across age groups. In FIX, elderly and young subjects tracked less well at large display separations. Seniors again produced higher tracking errors in FIX, but the difference between age groups did not increase reliably with display separation. However, older subjects produced a substantial number of illicit saccades, and when the effect of those saccades was factored out, the difference between young and older subjects’ tracking did increase significantly with display separation in FIX. We conclude that the age-related shrinkage of UFoV, previously documented with a visual search task, is observable with a manual tracking task as well. Older subjects seem to partly compensate for their deficit with illicit saccades. Since the deficit is similar in both conditions, it may be located downstream from the convergence of retinal and oculomotor signals.

    Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets

    Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information-gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades when planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively.

    Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts

    Background: Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. Methods: A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. Results: The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level among the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. Conclusion: The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed. © 2010 The Author(s). Springer Open Choice, 01 Dec 201

    Looking to Score: The Dissociation of Goal Influence on Eye Movement and Meta-Attentional Allocation in a Complex Dynamic Natural Scene

    Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers’ beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of the subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g., court lines) when they performed the goal-specific task. However, we did not find an effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers’ beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior.

    Mind your step: the effects of mobile phone use on gaze behavior in stair climbing

    Stair walking is a hazardous activity and a common cause of fatal and non-fatal falls. Previous studies have assessed the role of eye movements in stair walking by asking people to repeatedly go up and down stairs in quiet and controlled conditions, while the role of peripheral vision was examined by giving participants specific fixation instructions or working memory tasks. Here we extend this research to stair walking in a natural environment, with other people present on the stairs and a now-common secondary task: using one's mobile phone. Results show that using a mobile phone strongly draws attention away from the stairs, but that the distribution of gaze locations away from the phone is little influenced by phone use. Phone use also increased the time needed to walk the stairs, but handrail use remained low. These results indicate that limited foveal vision suffices for adequate stair walking in normal environments, but that mobile phone use has a strong influence on attention, which may pose problems when unexpected obstacles are encountered.

    Force-Field Compensation in a Manual Tracking Task

    This study addresses force/movement control in a dynamic “hybrid” task: the master sub-task is continuous manual tracking of a target moving along an eight-shaped Lissajous figure, with tracking error as the primary performance index; the slave sub-task is compensation of a disturbing curl viscous field, compatible with the primary performance index. The two sub-tasks are correlated because the lateral force the subject must exert on the eight-shape must be proportional to the longitudinal movement speed in order to track accurately. The results confirm that visuo-manual tracking is characterized by an intermittent control mechanism, in agreement with previous work; the novel finding is that the overall control patterns are not altered by the presence of a large deviating force field, compared with the undisturbed condition. It is also found that the control of interaction forces is achieved by a combination of arm stiffness properties and direct force control, as suggested by the systematic lateral deviation of the trajectories from the nominal path and by the comparison between perturbed trials and catch trials. The coordination of the two sub-tasks is learnt quickly after activation of the deviating force field and is achieved by a combination of force and stiffness components (about 80% vs. 20%), which is a function of the implicit accuracy of the tracking task.
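    The geometry behind the correlation of the two sub-tasks can be sketched numerically: a standard curl viscous field applies a force F = B·v that is always perpendicular (lateral) to the hand velocity, with magnitude proportional to speed. The field gain b, amplitude A, and frequency omega below are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    # Hypothetical field gain (N·s/m) — illustrative, not from the paper.
    b = 15.0
    B = np.array([[0.0, b],
                  [-b, 0.0]])  # curl viscous field matrix

    def lissajous_eight(t, A=0.1, omega=1.0):
        """Target position on an eight-shaped (2:1) Lissajous figure."""
        return np.array([A * np.sin(omega * t),
                         0.5 * A * np.sin(2.0 * omega * t)])

    def target_velocity(t, A=0.1, omega=1.0, dt=1e-6):
        """Finite-difference velocity along the figure."""
        return (lissajous_eight(t + dt, A, omega) - lissajous_eight(t, A, omega)) / dt

    t = 0.7
    v = target_velocity(t)
    F = B @ v  # field force: perpendicular to v, magnitude b * |v|

    print(np.dot(F, v))  # lateral force does no work along the path
    ```

    Because F is orthogonal to v with |F| = b·|v|, the compensating lateral force a subject must produce scales with longitudinal speed, which is exactly the coupling between the tracking and compensation sub-tasks described in the abstract.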

    Gaze following is modulated by expectations regarding others’ action goals

    Humans attend to social cues in order to understand and predict others' behavior. Facial expressions and gaze direction provide valuable information for inferring others' mental states and intentions. The present study examined the mechanism of gaze following in the context of participants' expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether or not the gaze behavior of the observed actor was in line with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent.