The new man and the new world: the influence of Renaissance humanism on the explorers of the Italian era of discovery
In contemporary research, microsaccade detection is typically performed on the calibrated gaze-velocity signal acquired from a video-based eye tracker. To generate this signal, the pupil and corneal reflection (CR) signals are subtracted from each other and a differentiation filter is applied, both of which may prevent small microsaccades from being detected due to signal distortion and noise amplification. We propose a new algorithm in which microsaccades are detected directly from the uncalibrated pupil and CR signals. It is based on detrending followed by windowed correlation between the pupil and CR signals. The proposed algorithm outperforms the most commonly used algorithm in the field (Engbert & Kliegl, 2003), in particular for small-amplitude microsaccades that are difficult to see in the velocity signal even with the naked eye. We argue that it is advantageous to consider the most basic output of the eye tracker, i.e., the pupil and CR signals, when detecting small microsaccades.
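The core of the proposed pipeline, detrending followed by windowed pupil-CR correlation, can be sketched as follows. The window length, correlation threshold, and the use of one-dimensional signals are illustrative assumptions, not the published parameters:

```python
import numpy as np
from scipy.signal import detrend

def detect_microsaccade_samples(pupil, cr, win=20, corr_thresh=0.8):
    """Sketch: flag samples where the detrended pupil and CR signals
    co-vary strongly within a sliding window."""
    # Remove slow drift so only fast, saccade-like excursions remain.
    p = detrend(np.asarray(pupil, dtype=float))
    c = detrend(np.asarray(cr, dtype=float))
    n = len(p)
    corr = np.zeros(n)
    half = win // 2
    for i in range(half, n - half):
        pw = p[i - half:i + half]
        cw = c[i - half:i + half]
        # Windowed Pearson correlation: during a saccade the pupil and
        # CR move together, driving the local correlation up.
        if pw.std() > 0 and cw.std() > 0:
            corr[i] = np.corrcoef(pw, cw)[0, 1]
    return corr > corr_thresh  # boolean mask of candidate samples
```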
Rashba polarization of bulk continuum states
5 pages, 6 figures. PACS number(s): 71.70.Ej, 73.20.−r, 71.15.Ap. Spin-orbit coupling is shown to lead to a Rashba-type spin polarization of bulk continuum states at the surface of a nonmagnetic system. A qualitative analysis for a model one-dimensional system is presented, as well as ab initio calculations for (111) surfaces of a number of fcc metals. The effect is interpreted in terms of the reflection of the relativistic Bloch waves from the surface barrier, which leads to a beating of the spin density. The authors acknowledge partial support from the University of the Basque Country (Grant No. GIC07IT36607) and the Spanish Ministerio de Ciencia e Innovación (Grant No. FIS2010-19609-C02-00). Peer reviewed
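For orientation, the Rashba-type polarization referred to here is conventionally modeled by the standard surface Hamiltonian below; this textbook form is background context, not an equation taken from the paper:

```latex
% Standard Rashba spin-orbit term for a surface with normal \hat{z};
% \alpha_R is the Rashba parameter, \boldsymbol{\sigma} the Pauli matrices.
H_R = \alpha_R\, (\boldsymbol{\sigma} \times \mathbf{k}) \cdot \hat{z}
    = \alpha_R\, (\sigma_x k_y - \sigma_y k_x)
% which splits a free-electron-like band into two spin branches:
E_\pm(\mathbf{k}) = \frac{\hbar^2 k^2}{2 m^*} \pm \alpha_R\, |\mathbf{k}|
```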
Optic flow information influencing heading perception during rotation
Poster Session - Perception and Action: abstract no. 22.34. We investigated what roles global spatial frequency, surface structure, and foreground motion play in heading perception during simulated rotation from optic flow. The display …
Disentangling the effects of object position and motion on heading judgments in the presence of a moving object
Tuesday Morning Posters - Motion Perception: Optic flow and heading: no. 53.4026. Previous research has found that moving objects bias heading perception only when they occlude the focus of expansion (FOE) in the background optic flow, with the direction of the bias depending on whether the moving object was approaching or at a fixed distance from the moving observer. However, in previous studies the effect of object motion on heading perception was confounded with object position. Here, we disentangled the contributions of object motion and position to heading bias. In each 1 s trial, the display simulated forward observer motion at 1 m/s through a …
Searching with and against each other: Spatiotemporal coordination of visual search behavior in collaborative and competitive settings
Although in real life people frequently perform visual search together, in lab experiments this social dimension is typically left out. Here, we investigate individual, collaborative, and competitive visual search with visualization of search partners' gaze. Participants were instructed to search a grid of Gabor patches while being eye tracked. For collaboration and competition, searchers were shown in real time at which element the paired searcher was looking. To promote collaboration or competition, points were awarded or deducted for correct or incorrect answers. Early in collaboration trials, searchers rarely fixated the same elements. Reaction times of couples were roughly halved compared with individual search, while error rates did not increase, indicating that searchers formed an efficient collaboration strategy. Overlap, the proportion of dwells that landed on hexagons the other searcher had already looked at, was lower than expected from simulated overlap of two searchers blind to the behavior of their partner. The proportion of overlapping dwells correlated positively with ratings of the quality of collaboration. During competition, overlap increased earlier in time, indicating that competitors divided space less efficiently. Analysis of the entropy of dwell locations and scan paths revealed that searchers in the competition condition exhibited a less fixed looking pattern than in the collaboration and individual search conditions. We conclude that participants can search together efficiently when provided only with information about their partner's gaze position, by dividing up the search space. Competitive search exhibited more random gaze patterns, potentially reflecting increased interaction between searchers in this condition.
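The two summary measures named above, overlap and dwell entropy, might be computed along these lines; the data layout (timestamped element indices) is an illustrative assumption, not the authors' analysis code:

```python
import numpy as np
from collections import Counter

def overlap_proportion(dwells_a, dwells_b):
    """dwells_* : list of (timestamp, element_id) tuples.
    Fraction of A's dwells that land on elements B had already visited."""
    hits = 0
    for t, elem in dwells_a:
        if any(tb < t and eb == elem for tb, eb in dwells_b):
            hits += 1
    return hits / len(dwells_a)

def dwell_entropy(dwells):
    """Shannon entropy (bits) of the distribution of dwell locations;
    higher values indicate a less fixed, more spread-out looking pattern."""
    counts = np.array(list(Counter(e for _, e in dwells).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))
```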
Influence of Hemianopic Visual Field Loss on Visual Motor Control
Background: Homonymous hemianopia (HH) is an anisotropic visual impairment characterized by the binocular inability to see one side of the visual field. Patients with HH often misperceive visual space. Here we investigated how HH affects visual motor control. Methods and Findings: Seven patients with complete HH and no neglect or cognitive decline, and seven gender- and age-matched controls, viewed displays in which a target moved randomly along the horizontal or the vertical axis. They used a joystick to control the target movement to keep it at the center of the screen. We found that the mean deviation of the target position from the center of the screen along the horizontal axis was biased toward the blind side for five of the seven HH patients. More importantly, while the normal-vision controls showed more precise control and larger response amplitudes when the target moved along the horizontal rather than the vertical axis, the control performance of the HH patients did not differ between these two target-motion conditions. Conclusions: Compared with normal-vision controls, HH affected patients' control performance when the target moved horizontally (i.e., along the axis of their visual impairment) rather than vertically. We conclude that hemianopia affects the use of visual information for online control of a moving target specifically along the axis of visual impairment. The implications of the findings for driving in hemianopic patients are discussed.
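The two performance measures reported, horizontal bias and control precision, could be computed from a trial's target-position trace roughly as follows; the sign convention and the RMS definition of precision are assumptions for illustration:

```python
import numpy as np

def control_metrics(x_pos):
    """x_pos: the target's horizontal position over a trial, in degrees,
    with 0 at screen center (positive = toward the blind side).
    Returns (bias, rms_error): signed mean deviation and overall error."""
    x = np.asarray(x_pos, dtype=float)
    bias = x.mean()                      # systematic drift toward one side
    rms_error = np.sqrt((x ** 2).mean()) # overall control error (RMS)
    return bias, rms_error
```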
Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM)
The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and the effectiveness of prompts like bounding boxes or point clicks. Our results are consistent with studies in other domains, demonstrating that SAM's segmentation effectiveness can be on par with specialized models depending on the feature, with prompts improving its performance, evidenced by an IoU of 93.34% for pupil segmentation in one dataset. Foundation models like SAM could revolutionize gaze estimation by enabling quick and easy image segmentation, reducing reliance on specialized models and extensive manual annotation.
Comment: 14 pages, 8 figures, 1 table, submitted to ETRA 2024: ACM Symposium on Eye Tracking Research & Applications
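A zero-shot point-prompt evaluation of the kind described can be sketched with the public segment_anything package; the checkpoint path, click coordinates, and file names below are assumptions for illustration:

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (variant and path are assumptions).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 image.
image = cv2.cvtColor(cv2.imread("eye_frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt SAM with a single foreground click on the pupil
# (coordinates are illustrative).
point = np.array([[320, 240]])
label = np.array([1])  # 1 = foreground click
masks, scores, _ = predictor.predict(
    point_coords=point, point_labels=label, multimask_output=False)

def iou(pred, gt):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# gt_mask would come from a hand-annotated pupil mask:
# print(iou(masks[0], gt_mask))
```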
Precise localization of corneal reflections in eye images using deep learning trained on synthetic data
We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely on simulated data. Using only simulated data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with simulated CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, with a 35% reduction in spatial precision error, and performed on par with the state of the art on simulated images in terms of spatial accuracy. We conclude that our method provides precise CR center localization and offers a solution to the data-availability problem, one of the important common roadblocks in the development of deep learning models for gaze estimation. Due to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
Comment: Published in Behavior Research Methods
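The synthetic-data idea can be illustrated with a minimal sample generator; the Gaussian-spot CR model, image size, and noise ranges are assumptions for illustration, not the authors' rendering pipeline:

```python
import numpy as np

def synth_cr_image(size=64, noise_sd=0.05, rng=None):
    """Render one training sample: a small Gaussian bright spot (the CR)
    at a random sub-pixel location on a noisy background.
    Returns (image, (cx, cy)), where (cx, cy) is the regression target."""
    rng = rng or np.random.default_rng()
    cx, cy = rng.uniform(10, size - 10, size=2)  # sub-pixel CR center
    sigma = rng.uniform(1.0, 3.0)                # CR spot width
    y, x = np.mgrid[0:size, 0:size]
    spot = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    background = rng.uniform(0.1, 0.5)           # varying base intensity
    img = background + spot + rng.normal(0, noise_sd, (size, size))
    return np.clip(img, 0, 1).astype(np.float32), (cx, cy)
```

Because the CR center is known exactly by construction, every sample comes with a perfect label, which is what lets the CNN be supervised without any manual annotation.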