
    What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film

    Film is ubiquitous, but the processes that guide viewers' attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the relationship between eye movements and comprehension similarly held in a classic film example, the famous opening scene of Orson Welles' Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers' comprehension by starting participants at different points in the film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension but only modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This manipulation created large differences in eye movements compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence from this experimental case study confirms filmmakers' belief in their ability to create systematic gaze behavior across viewers, but shows that such systematic gaze does not imply universally similar comprehension of the film narrative

    Eye–hand coupling is not the cause of manual return movements when searching

    When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between the return movements and movement speed when comparing the two conditions was the same as the relationship between these two when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control

    Visual Fixation Durations and Saccade Amplitudes: Shifting Relationship in a Variety of Conditions

    Is there any relationship between visual fixation durations and saccade amplitudes in free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3) and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in fixation durations and a decrease in saccade amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance between the two modes of visual information processing

    Estimation of Confidence in the Dialogue based on Eye Gaze and Head Movement Information

    In human-robot interaction, human mental states during dialogue have attracted attention for human-friendly robots that support educational use. Although mental states have been estimated from speech and visual information, estimating them precisely in educational settings remains challenging. In this paper, we propose a method to estimate a human mental state, namely participants’ confidence in their answers to miscellaneous knowledge questions, from their eye gaze and head movement information. Participants’ non-verbal information, such as eye gaze and head movements during dialogue with a robot, was collected in our experiment using an eye-tracking device. We then collected participants’ confidence levels and analyzed the relationship between this mental state and the non-verbal information. Furthermore, we applied a machine learning technique to estimate participants’ confidence levels from features extracted from the gaze and head movement information. As a result, the machine learning approach using gaze and head movement information achieved over 80% accuracy in estimating confidence levels. Our research provides insight into developing human-friendly robots that consider human mental states in dialogue
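    The pipeline described above (summarize raw gaze and head signals into per-trial features, then classify confidence) can be sketched minimally. This is a hypothetical illustration only: the feature names (`gaze_dispersion`, `head_motion`), the thresholds, and the simple rule-based classifier are assumptions for clarity, not the study's actual machine-learning model or features.

    ```python
    # Hypothetical sketch of a confidence-estimation pipeline from gaze and
    # head-movement signals. Features and thresholds are illustrative, not
    # taken from the study (which trained a machine learning classifier).

    from statistics import mean, pstdev

    def extract_features(gaze_x, head_pitch):
        """Summarize raw per-trial signals into two scalar features."""
        return {
            # How much the horizontal gaze position wanders during the answer.
            "gaze_dispersion": pstdev(gaze_x),
            # Average frame-to-frame head-pitch change (how much the head moves).
            "head_motion": mean(abs(a - b) for a, b in zip(head_pitch, head_pitch[1:])),
        }

    def classify_confidence(features, dispersion_thresh=5.0, motion_thresh=0.5):
        """Toy rule standing in for a trained classifier:
        steady gaze and a still head -> 'confident', otherwise 'unsure'."""
        steady = features["gaze_dispersion"] < dispersion_thresh
        still = features["head_motion"] < motion_thresh
        return "confident" if steady and still else "unsure"
    ```

    In the study itself these kinds of features would be fed to a trained classifier rather than fixed thresholds; the sketch only shows the feature-extraction-then-classification structure.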

    Eye movements and scanpaths in the perception of real-world scenes

    The way we move our eyes when viewing a scene is not random, but is influenced by both bottom-up (low-level) and top-down (cognitive) factors. This thesis investigates not only what these influences are and how they affect eye movements, but more importantly how they interact with each other to guide visual perception of real-world scenes. Experiments 1 and 2 show that the sequences of fixations and saccades - ‘scanpaths’ - generated when encoding a picture are replicated both during imagery and at recognition. Higher scanpath similarities at recognition suggest that low-level visual information plays an important role in guiding eye movements, yet the above-chance similarities at imagery argue against a purely bottom-up explanation and imply a link between eye movements and visual memory. This conclusion is supported by increased scanpath similarities when previously seen pictures are described from memory (experiment 3). When visual information is available, areas of high visual saliency attract attention and are fixated sooner than less salient regions. This effect, however, is reliably reduced when viewers possess top-down knowledge about the scene in the form of domain proficiency (experiments 4-6). Enhanced memory, as well as higher scanpath similarity, for domain-specific pictures exists at recognition, and in the absence of visual information when previously seen pictures are described from memory, but not when simply imagined (experiment 6). As well as the cognitive override of bottom-up saliency, domain knowledge also moderates the influence of top-down incongruence during scene perception (experiment 7). Object-intrinsic oddities are less likely to be fixated when participants view pictures containing other domain-relevant semantic information. 
The finding that viewers fixate the most informative parts of a scene was extended to investigate the presence of social (people) and emotional information, both of which were found to enhance recognition memory (experiments 8 and 9). However, the lack of relationship between string similarity and accuracy when viewing ‘people’ pictures challenges the idea that the reproduction of eye movements alone is enough to create this memory advantage (experiment 8). It is therefore likely that the semantically informative parts of a scene play a large role in guiding eye movements and enhancing memory for a scene. The processing of emotional features occurs at a very early stage of perception (even while they are still in the parafovea), but once fixated, only emotionally negative (not positive) features hold attention (experiment 9). The presence of these emotionally negative features also reliably decreases the influence of saliency on eye movements. Lastly, experiment 10 illustrates that although the fixation sequence is important for recognition memory, the influence of visually salient and semantically relevant parafoveal cues in real-world scenes decreases the necessity to fixate in the same order. These experiments combine to conclude that eye movements are influenced neither by purely top-down nor purely bottom-up factors, but instead by a combination of both, which interact to guide attention to the most relevant parts of the picture

    Autistic traits mediate reductions in social attention in adults with anorexia nervosa

    Anorexia nervosa (AN) is associated with difficulties in social and emotional functioning. A significant proportion of individuals with AN show autistic traits, which may influence social attention. This study examined attention to faces and facial features in AN, recovered AN (REC), and healthy controls, as well as relationships with comorbid psychopathology. One hundred and forty-eight participants’ eye movements were tracked while watching a naturalistic social scene. Anxiety, depression, alexithymia, and autistic traits were assessed via self-report questionnaires. Participants with AN spent significantly less time looking at faces compared to REC and controls; patterns of attention to individual facial features did not differ across groups. Autistic traits mediated the relationship between group and time spent looking at faces
