
    Semantic content outweighs low-level saliency in determining children's and adults' fixation of movies

    To make sense of the visual world, we need to move our eyes to bring regions of interest onto the high-resolution fovea. Eye movements, therefore, give us a way to infer mechanisms of visual processing and attention allocation. Here, we examined age-related differences in visual processing by recording eye movements from 37 children (aged 6–14 years) and 10 adults while they viewed three 5-min dynamic video clips taken from child-friendly movies. The data were analyzed in two complementary ways: (a) gaze based and (b) content based. First, similarity of scanpaths within and across age groups was examined using three different measures of variance (dispersion, clusters, and distance from center). Second, content-based models of fixation were compared to determine which provided the best account of our dynamic data. We found that the variance in eye movements decreased as a function of age, suggesting increasingly common attentional orienting. Comparison of the different models revealed that a model that relies on faces generally performed better than the other models tested, even for the youngest age group (<10 years). However, the best predictor of a given participant's eye movements was the average of all other participants' eye movements, both within the same age group and in different age groups. These findings have implications for understanding how children attend to visual information and highlight similarities in viewing strategies across development.
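
    The gaze-based analysis outlined in the abstract (fixation dispersion around a group centroid, plus a leave-one-out comparison in which each participant's gaze is scored against the average gaze of all other participants) can be illustrated with a short Python sketch. This is an illustrative toy, not the authors' code: the function names, the assumed data layout (participants x frames x screen coordinates), and the synthetic data are all assumptions.

        import numpy as np

        def dispersion(gaze):
            """gaze: (n_participants, n_frames, 2) screen coordinates in pixels.
            Mean distance of each gaze sample from the per-frame group centroid."""
            centroid = np.nanmean(gaze, axis=0)               # (n_frames, 2)
            dists = np.linalg.norm(gaze - centroid, axis=-1)  # (n_participants, n_frames)
            return np.nanmean(dists)

        def leave_one_out_error(gaze):
            """Distance between each participant's gaze and the average gaze of
            all other participants (lower = viewing more similar to the group)."""
            errs = []
            for i in range(gaze.shape[0]):
                others = np.nanmean(np.delete(gaze, i, axis=0), axis=0)
                errs.append(np.nanmean(np.linalg.norm(gaze[i] - others, axis=-1)))
            return np.array(errs)

        # Synthetic example: children's gaze assumed more variable than adults'
        rng = np.random.default_rng(0)
        adults = rng.normal(loc=[640, 360], scale=40, size=(10, 300, 2))
        children = rng.normal(loc=[640, 360], scale=90, size=(37, 300, 2))
        print("adult dispersion:", dispersion(adults))
        print("child dispersion:", dispersion(children))
        print("adult leave-one-out error:", leave_one_out_error(adults).mean())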

    Picture this: A review of research relating to narrative processing by moving image versus language

    Reading fiction for pleasure is robustly correlated with improved cognitive attainment and other benefits. It is also in decline among young people in developed nations, in part because of competition from moving image fiction. We review existing research on the differences between reading or hearing verbal fiction and watching moving image fiction, and look more broadly at research on image-text interactions and visual versus verbal processing. We conclude that verbal narrative generates more diverse responses than moving image narrative. We note that reading and viewing narrative are different tasks, with different cognitive loads. Viewing moving image narrative mostly involves visual processing with some working memory engagement, whereas reading narrative involves verbal processing, visual imagery, and personal memory (Xu et al., 2005). Attempts to compare the two by creating equivalent stimuli and task demands face a number of challenges, and we discuss the difficulties of such comparative approaches. We then investigate the possibility of identifying lower-level processing mechanisms that might distinguish cognition of the two media and propose internal scene construction and working memory as foci for future research. Although many of the sources we draw on concentrate on English-speaking participants in European or North American settings, we also cover material relating to speakers of Dutch, German, Hebrew, and Japanese in their respective countries, and studies of a remote Turkish mountain community.

    A catch-up illusion arising from a distance-dependent perception bias in judging relative movement

    The perception of relative target movement from a dynamic observer is an unexamined psychological three-body problem. To test the applicability of explanations developed for two moving bodies, participants repeatedly judged the relative movements of two runners chasing each other in video clips displayed on a stationary screen. The chased person always ran at 3 m/s, with an observer camera following or leading at 4.5, 3, 1.5, or 0 m/s. We adjusted the chaser's speed in an adaptive staircase to determine the point of subjective equal movement speed between the runners and observed (i) an underestimation of chaser speed if the runners moved towards the viewer, and (ii) an overestimation of chaser speed if the runners moved away from the viewer, leading to a catch-up illusion in the case of equidistant runners. The bias was independent of the richness of available self-movement cues. The results are inconsistent with computing individual speeds or relying on constant visual angles, expansion rates, occlusions, or relative distances, but are consistent with inducing the impression of relative movement by perceptually compressing and enlarging the inter-runner distance. This mechanism should be considered when predicting human behavior in complex situations with multiple objects moving in depth, such as driving or team sports.
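
    The adaptive staircase mentioned above can be sketched in a few lines of Python. The following is a minimal 1-up/1-down staircase of the general kind described, converging on the point of subjective equal movement speed; the simulated observer, its bias and noise parameters, and the step size are illustrative assumptions, not the authors' procedure.

        import random

        def simulated_response(chaser_speed, chased_speed=3.0, bias=0.3, noise=0.2):
            """True if the simulated observer reports 'the chaser is faster'.
            'bias' stands in for the perceptual under-/overestimation of chaser speed."""
            perceived = chaser_speed - bias + random.gauss(0, noise)
            return perceived > chased_speed

        def staircase(start=4.0, step=0.2, n_reversals=10):
            speed, last, reversals = start, None, []
            while len(reversals) < n_reversals:
                faster = simulated_response(speed)
                if last is not None and faster != last:
                    reversals.append(speed)       # response flipped: record a reversal
                speed += -step if faster else step  # 1-up/1-down rule
                last = faster
            return sum(reversals[2:]) / len(reversals[2:])  # average the late reversals

        random.seed(1)
        print("estimated point of subjective equal speed (m/s):", round(staircase(), 2))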

    The perception of relative speed of two bodies as a function of independent observer movement

    Various studies have examined the perception of two moving objects from a static viewpoint, or observer movement relative to a reference. However, the influence of observer movement on the perception of relative movement between two other bodies has not yet been thoroughly examined. Participants watched, from behind, two virtual characters running after each other and judged whether the chaser was catching up or falling behind. We adapted the chaser's speed within three staircases to fit a psychometric function, targeting the point of subjective equality of the characters' speeds (PSE) and the just noticeable difference of speeds (JND). This procedure was repeated for an observer who was static or moving at 50%, 100%, or 150% of the speed of the chased person, which itself was constant. JNDs were comparable for all observer speeds. However, PSEs increased with the observer's speed, showing that observer movement influenced the perception of the relative speed of two bodies. The slope of the increase is consistent with a strategy of keeping the partial occlusions of the two characters constant, as well as with a strategy of keeping the distance proportion (the chaser-chased distance relative to the overall observer-chased distance) constant.
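
    Extracting a PSE and JND from such judgments typically means fitting a psychometric function to the proportion of "catching up" responses across chaser speeds. The sketch below fits a cumulative Gaussian with SciPy; the trial data, the use of scipy.optimize.curve_fit, and the JND convention (half the 25%-75% spread) are illustrative assumptions, not the study's analysis pipeline.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(speed, pse, sigma):
            """Probability of responding 'the chaser is catching up' at a given chaser speed."""
            return norm.cdf(speed, loc=pse, scale=sigma)

        # Illustrative data: chaser speeds (m/s) and proportion of "catching up" responses
        speeds = np.array([2.4, 2.7, 3.0, 3.3, 3.6, 3.9])
        p_catch = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])

        (pse, sigma), _ = curve_fit(psychometric, speeds, p_catch, p0=[3.0, 0.3])
        jnd = sigma * norm.ppf(0.75)   # one common convention for the JND
        print(f"PSE = {pse:.2f} m/s, JND = {jnd:.2f} m/s")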

    Viewpoint dependency in the recognition of dynamic scenes

    In 3 experiments, the question of viewpoint dependency in mental representations of dynamic scenes was addressed. Participants viewed film clips of soccer episodes from 1 or 2 viewpoints; they were then required to discriminate between video stills of the original episode and distractors. Recognition performance was measured in terms of accuracy and speed. The degree of viewpoint deviation between the initial presentation and the test stimuli was varied, as were both the point in time depicted by the video stills and participants' soccer expertise. Findings suggest that viewers develop a viewpoint-dependent mental representation similar to the spatial characteristics of the original episode presentation, even if the presentation was spatially inhomogeneous. This article examines one aspect of visual scene perception, namely, whether the specific viewpoint from which a dynamic scene is observed is part of its cognitive representation. For example, consider a televised soccer match. Does the cognitive representation of a pass that leads to a decisive goal strictly depend on the camera viewpoint from which it was shown on television, or would one be able to recognize it with the same speed an…