
    On Real-Time Synthetic Primate Vision

    The primate vision system exhibits numerous capabilities. Some important basic visual competencies include: 1) a consistent representation of visual space across eye movements; 2) egocentric spatial perception; 3) coordinated stereo fixation upon, and pursuit of, dynamic objects; and 4) attentional gaze deployment. We present a synthetic vision system that incorporates these competencies. We hypothesize that similarities between the underlying synthetic system model and that of the primate vision system elicit correspondingly similar gaze behaviors. Psychophysical trials were conducted to record human gaze behavior during free viewing of a reproducible, dynamic, 3D scene. Identical trials were conducted with the synthetic system. A statistical comparison of synthetic and human gaze behavior showed that the two are remarkably similar.
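    The abstract does not name the statistic used for the gaze comparison. As a purely illustrative sketch, the Python below compares two sets of (x, y) fixations by binning them into a coarse spatial grid and running a chi-square test on the two histograms; the screen extent, bin count, and choice of test are assumptions, not the authors' method.

```python
# Hedged sketch: compare human vs. synthetic fixation distributions.
# The binning, screen extent, and chi-square test are illustrative
# assumptions; the paper does not specify its statistic.
import numpy as np
from scipy.stats import chi2_contingency

def fixation_histogram(fixations, bins=8, extent=(0, 1920, 0, 1080)):
    """Bin (x, y) fixation points into a coarse spatial grid."""
    x, y = fixations[:, 0], fixations[:, 1]
    hist, _, _ = np.histogram2d(
        x, y, bins=bins,
        range=[[extent[0], extent[1]], [extent[2], extent[3]]])
    return hist

def compare_gaze(human_fix, synthetic_fix, bins=8):
    """Chi-square test on the two binned fixation distributions."""
    h = fixation_histogram(human_fix, bins).ravel()
    s = fixation_histogram(synthetic_fix, bins).ravel()
    keep = (h + s) > 0                      # drop cells no one fixated
    stat, p, _, _ = chi2_contingency(np.stack([h[keep], s[keep]]))
    return stat, p

# Stand-in data; real input would be recorded fixation coordinates.
rng = np.random.default_rng(0)
human = rng.uniform([0, 0], [1920, 1080], size=(500, 2))
synth = rng.uniform([0, 0], [1920, 1080], size=(500, 2))
print(compare_gaze(human, synth))  # a large p suggests similar distributions
```

    On this reading, "remarkably similar" would correspond to a test that fails to distinguish the two fixation distributions.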

    Why do we look at people's eyes?

    We have previously shown that when observers are presented with complex natural scenes that contain a number of objects and people, observers look mostly at the eyes of the people. Why is this? It cannot be because eyes are merely the most salient area in a scene, as relative to other objects they are fairly inconspicuous. We hypothesized that people look at the eyes because they consider the eyes to be a rich source of information. To test this idea, we tested two groups of participants. One group, the Told Group, was informed that there would be a recognition test after they were shown the natural scenes. The second group, the Not Told Group, was not informed that there would be a subsequent recognition test. Our data showed that during the initial and test viewings, the Told Group fixated the eyes more frequently than the Not Told Group, supporting the idea that the eyes are considered an informative region in social scenes. Converging evidence for this interpretation is that the Not Told Group fixated the eyes more frequently in the test session than in the study session.

    Orienting in virtual environments: how are surface features and environmental geometry weighted in an orientation task?

    We investigated how human adults orient in enclosed virtual environments when discrete landmark information is not available and participants have to rely on geometric and featural information on the environmental surfaces. In contrast to earlier studies, where, for women, the featural information from discrete landmarks overshadowed the encoding of the geometric information, Experiment 1 showed that when featural information is conjoined with the environmental surfaces, men and women encoded both types of information. Experiment 2 showed that, although both types of information are encoded, performance in locating a goal position is better if it is close to a geometrically or featurally distinct location. Furthermore, although features are relied upon more strongly than geometry, initial experience with an environment influences the relative weighting of featural and geometric cues. Taken together, these results show that human adults use a flexible strategy for encoding spatial information.

    A Cognitive Ethology Study of First- and Third-Person Perspectives

    The present investigation was funded by a grant awarded to AK by the Natural Sciences and Engineering Research Council of Canada. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

    Gaze Restriction and Reactivation of Place-bound Content Drive Eye Movements During Mental Imagery

    When we imagine a picture, we move our eyes even though the picture is not physically present. These eye movements provide information about the ongoing process of mental imagery. Eye movements unfold over time, and previous research has shown that the temporal gaze dynamics of eye movements in mental imagery have unique properties, which are unrelated to those in perception. In mental imagery, refixations of previously fixated locations happen more often and in a more systematic manner than in perception. The origin of these unique properties remains unclear. We tested how the temporal structure of eye movements is influenced by the complexity of the mental image. Participants briefly saw and then maintained a pattern stimulus, consisting of between one (easy condition) and four (most difficult condition) black segments. When maintaining a simple pattern in imagery, participants restricted their gaze to a narrow area; for more complex stimuli, eye movements were more spread out to distant areas. At the same time, fewer refixations were made in imagery when the stimuli were complex. The results show that refixations depend on the imagined content. While fixations of stimulus-related areas reflect the so-called ‘looking at nothing’ effect, gaze restriction emphasizes differences between mental imagery and perception.
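    The two measures at the center of this abstract, gaze spread and refixations, can be made concrete. The Python sketch below computes an RMS dispersion and a refixation rate from a list of fixation coordinates; the dispersion definition and the 2-degree revisit radius are assumptions, not the paper's parameters.

```python
# Hedged sketch of two gaze measures: spatial spread of fixations and
# the rate of refixations (revisits of previously fixated locations).
# The RMS definition and 2-degree radius are illustrative assumptions.
import numpy as np

def gaze_dispersion(fixations):
    """RMS distance of fixations from their centroid (spatial spread)."""
    pts = np.asarray(fixations, dtype=float)
    return float(np.sqrt(((pts - pts.mean(axis=0)) ** 2).sum(axis=1).mean()))

def refixation_rate(fixations, radius=2.0):
    """Fraction of fixations landing within `radius` of any earlier one."""
    pts = np.asarray(fixations, dtype=float)
    revisits = sum(
        np.linalg.norm(pts[:i] - pts[i], axis=1).min() <= radius
        for i in range(1, len(pts)))
    return revisits / max(len(pts) - 1, 1)

scanpath = [(0, 0), (5, 1), (0.5, 0.4), (9, 9)]   # degrees of visual angle
print(gaze_dispersion(scanpath), refixation_rate(scanpath))
```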

    Perspective taking and theory of mind in hide and seek

    Does theory of mind play a significant role in where people choose to hide an item or where they search for an item that has been hidden? Adapting the "Hide-Find Paradigm" of Anderson et al. (2014), participants viewed homogeneous or popout visual arrays on a touchscreen table. Their task was to indicate where in the array they would hide an item, or to search for an item that had been hidden, by either a friend or a foe. Critically, participants believed that their sitting location at the table was the same as - or opposite to - their partner's location. Replicating Anderson et al., participants tended to (1) select items nearer to themselves on homogeneous displays, with a stronger bias for a friend than a foe; and (2) select popout items, again more for a friend than a foe. These biases were observed only when participants believed that they shared the same physical perspective as their partner. Collectively, the data indicate that theory of mind plays a significant role in hiding and finding, and they demonstrate that the hide-find paradigm is a powerful tool for investigating theory of mind in adults.

    Turning the (virtual) world around: Patterns in saccade direction vary with picture orientation and shape in virtual reality

    Research investigating gaze in natural scenes has identified a number of spatial biases in where people look, but it is unclear whether these are partly due to constrained testing environments (e.g., a participant with their head restrained, looking at a landscape image framed within a computer monitor). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influence eye and head movements in virtual reality (VR). Both the eyes and head were tracked while observers looked at natural scenes in a virtual environment. In line with previous work, we found a bias for saccade directions parallel to the image horizon, regardless of image shape or content. We found that, when allowed to do so, observers move both their eyes and head to explore images. Head rotation, however, was idiosyncratic; some observers rotated a lot, whereas others did not. Interestingly, the head rotated in line with the rotation of landscape images but not fractal images. That head rotation and gaze direction respond differently to image content suggests that they may be under different control systems. We discuss our findings in relation to current theories of head and eye movement control and how insights from VR might inform more traditional eye-tracking studies.
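    The saccade-direction bias reported here can be quantified from consecutive fixations. The Python below is a hypothetical version of that analysis: it derives saccade angles, undoes the image rotation, and counts the fraction of saccades roughly parallel to the horizon; the 15-degree tolerance and the input format are illustrative assumptions.

```python
# Hedged sketch: fraction of saccades parallel to a (possibly rotated)
# image horizon. Tolerance and input format are illustrative assumptions.
import numpy as np

def saccade_angles(fixations):
    """Direction of each saccade in degrees, from consecutive fixations."""
    d = np.diff(np.asarray(fixations, dtype=float), axis=0)
    return np.degrees(np.arctan2(d[:, 1], d[:, 0]))

def horizon_parallel_fraction(fixations, image_rotation=0.0, tol=15.0):
    """Share of saccades within `tol` degrees of the rotated horizon."""
    angles = saccade_angles(fixations) - image_rotation
    folded = np.mod(angles, 180.0)   # treat opposite directions alike
    return float(np.mean((folded <= tol) | (folded >= 180.0 - tol)))

path = [(0, 0), (10, 1), (22, -1), (20, 15)]   # screen coordinates
print(horizon_parallel_fraction(path, image_rotation=0.0))  # ~0.67 here
```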

    A prototypical non-malignant epithelial model to study genome dynamics and concurrently monitor micro-RNAs and proteins in situ during oncogene-induced senescence


    Interactive Image Interpretation

    We consider the problem of spatio-temporal modeling of interactive image interpretation. The interactive process is composed of a sequential prediction step and a change detection step. Combining the two steps yields a semi-automatic predictor that can be applied to a time-series, produces good predictions, and requests new human input when a change point is detected. The model can effectively capture changes in image features and gradually adapt to them. We propose an online framework that naturally addresses these problems in a unified manner. Our empirical study with a synthetic dataset and a road-tracking dataset demonstrates the efficiency of the proposed approach.
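    The loop this abstract describes (predict, detect a change, ask for human input) can be sketched compactly. The Python below is a minimal stand-in on a scalar time-series, assuming a running-mean predictor, a fixed error threshold as the change detector, and a hypothetical ask_human stub; the authors' actual model and detector are not specified here.

```python
# Hedged sketch of a semi-automatic predictor: sequential prediction plus
# change detection, requesting human input at detected change points.
# The running-mean model, threshold, and ask_human stub are assumptions.
import numpy as np

def ask_human(observation):
    """Stand-in for a human annotation request (hypothetical)."""
    return observation           # pretend the human supplies the true value

def interactive_track(observations, threshold=3.0, alpha=0.9):
    """Predict each step, adapt gradually, re-query a human on big errors."""
    model = ask_human(observations[0])   # initialize from human input
    outputs, requests = [model], 1
    for obs in observations[1:]:
        error = abs(obs - model)                       # prediction error
        if error > threshold:                          # change point detected
            model = ask_human(obs)                     # request new human input
            requests += 1
        else:
            model = alpha * model + (1 - alpha) * obs  # gradual adaptation
        outputs.append(model)
    return outputs, requests

signal = np.concatenate([np.ones(20), 10 * np.ones(20)])  # abrupt change
_, n_requests = interactive_track(signal)
print(n_requests)  # 2: the initial label plus one at the change point
```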