Exploring Eye Tracking Data on Source Code via Dual Space Analysis
Eye tracking is a frequently used technique to collect data capturing users' strategies and behaviors in processing information. Understanding how programmers navigate through a large number of classes and methods to find bugs is important to educators and practitioners in software engineering. However, the eye tracking data collected on realistic codebases is massive compared to traditional eye tracking data on one static page. The same content may appear in different areas on the screen as users scroll in an Integrated Development Environment (IDE). Hierarchically structured content and fluid method positions pose the two major challenges for visualization. We present a dual-space analysis approach to explore eye tracking data by leveraging existing software visualizations and a new graph embedding visualization. We use the graph embedding technique to quantify the distance between two arbitrary methods, which offers a more accurate visualization of distance with respect to the inherent relations, compared with the direct software structure and the call graph. The visualization offers both naturalness and readability, showing time-varying eye movement data in both the content space and the embedded space, and provides new discoveries in developers' eye tracking behaviors.
Adviser: Hongfeng Y
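The abstract does not specify which graph embedding the authors use. As a hedged illustration of the general idea only, the sketch below embeds a toy call graph with a spectral (Laplacian eigenvector) embedding and measures method-to-method distance in the embedded space; the graph, method names, and choice of embedding are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical toy call graph over five methods (symmetric adjacency).
# An edge means one method calls the other; names are illustrative only.
methods = ["main", "parse", "lex", "eval", "report"]
A = np.array([
    [0, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

def spectral_embedding(adj, dim=2):
    """Embed graph nodes using eigenvectors of the graph Laplacian."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)
    # Skip the trivial constant eigenvector; keep the next `dim` ones.
    return vecs[:, 1:dim + 1]

emb = spectral_embedding(A)

def method_distance(i, j):
    """Euclidean distance between two methods in the embedded space."""
    return float(np.linalg.norm(emb[i] - emb[j]))

# Methods that share structure in the call graph end up nearby.
print(method_distance(1, 2))  # parse vs. lex
print(method_distance(2, 4))  # lex vs. report
```

The point of any such embedding is that distance reflects the graph's inherent relations rather than on-screen layout, which is what makes it usable as a second "space" alongside the source-code view.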
Immaturities in Reward Processing and Its Influence on Inhibitory Control in Adolescence
The nature of immature reward processing and the influence of rewards on basic elements of cognitive control during adolescence are currently not well understood. Here, during functional magnetic resonance imaging, healthy adolescents and adults performed a modified antisaccade task in which trial-by-trial reward contingencies were manipulated. The use of a novel fast, event-related design enabled developmental differences in brain function underlying temporally distinct stages of reward processing and response inhibition to be assessed. Reward trials compared with neutral trials resulted in faster correct inhibitory responses across ages and in fewer inhibitory errors in adolescents. During reward trials, the blood oxygen level–dependent signal was attenuated in the ventral striatum in adolescents during cue assessment, then overactive during response preparation, suggesting limitations during adolescence in reward assessment and heightened reactivity in anticipation of reward compared with adults. Importantly, heightened activity in the frontal cortex along the precentral sulcus was also observed in adolescents during reward-trial response preparation, suggesting reward modulation of oculomotor control regions supporting correct inhibitory responding. Collectively, this work characterizes specific immaturities in adolescent brain systems that support reward processing and describes the influence of reward on inhibitory control. In sum, our findings suggest mechanisms that may underlie adolescents’ vulnerability to poor decision-making and risk-taking behavior
Saccade Landing Point Prediction Based on Fine-Grained Learning Method
The landing point of a saccade defines the new fixation region, the new region of interest. We asked whether it was possible to predict the saccade landing point early in this very fast eye movement. This work proposes a new algorithm based on LSTM networks and a fine-grained loss function for saccade landing point prediction in real-world scenarios. Predicting the landing point is a critical milestone toward reducing the problems caused by display-update latency in gaze-contingent systems that make real-time changes in the display based on eye tracking. Saccadic eye movements are some of the fastest human neuro-motor activities, with angular velocities of up to 1,000°/s. We present a comprehensive analysis of the performance of our method using a database with almost 220,000 saccades from 75 participants captured during natural viewing of videos. We include a comparison with state-of-the-art saccade landing point prediction algorithms. Our proposed method outperformed existing approaches, with error reductions of up to 50%. Finally, we analyzed some factors that affected prediction errors, including duration, length, age, and user intrinsic characteristics. This work was supported in part by the Project BIBECA through MINECO/FEDER under Grant RTI2018-101248-B-100, in part by the Jose Castillejo Program through MINECO under Grant CAS17/00117, and in part by the National Institutes of Health (NIH) under Grant P30EY003790 and Grant R21EY023724.
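The paper's LSTM and fine-grained loss are not reproduced here. As a rough, hedged illustration of the prediction problem itself, the sketch below extrapolates a landing point from only the first few samples of a saccade with a least-squares line; all numbers are toy values. Real saccades decelerate toward the end, so a linear extrapolation systematically overshoots, which is precisely the kind of nonlinearity a learned model can capture.

```python
import numpy as np

def predict_landing_point(samples, total_duration_ms, dt_ms=1.0):
    """Naive baseline: fit a line to the early gaze samples of a
    saccade and extrapolate to the expected end time.
    `samples` is an (n, 2) array of (x, y) gaze positions in degrees,
    one sample per `dt_ms` milliseconds from saccade onset."""
    samples = np.asarray(samples, dtype=float)
    t = np.arange(len(samples)) * dt_ms
    # Least-squares line per coordinate: position ~ intercept + slope * t.
    coeff_x = np.polyfit(t, samples[:, 0], 1)
    coeff_y = np.polyfit(t, samples[:, 1], 1)
    return np.array([np.polyval(coeff_x, total_duration_ms),
                     np.polyval(coeff_y, total_duration_ms)])

# Toy saccade moving at a constant 0.25 deg/ms toward (10, 0);
# only the first 10 ms of samples are observed.
early = np.column_stack([np.arange(10) * 0.25, np.zeros(10)])
pred = predict_landing_point(early, total_duration_ms=40.0)
print(pred)
```

On this artificial constant-velocity trace the baseline is exact; on real decelerating saccades its overshoot is what motivates sequence models such as LSTMs for this task.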
Developing an oculomotor brain-computer interface and characterizing its dynamic functional network
To date, invasive brain-computer interface (BCI) research has largely focused on replacing lost limb functions using signals from hand/arm areas of motor cortex. However, the oculomotor system may be better suited to BCI applications involving rapid serial selection from spatial targets, such as choosing from a set of possible words displayed on a computer screen in an augmentative and alternative communication application.
First, we develop an intracortical oculomotor BCI based on the delayed saccade paradigm and demonstrate its feasibility to decode intended saccadic eye movement direction in primates. Using activity from three frontal cortical areas implicated in oculomotor production – dorsolateral prefrontal cortex, supplementary eye field, and frontal eye field – we could decode intended saccade direction in real time with high accuracy, particularly at contralateral locations. In a number of analyses in the decoding context, we investigated the amount of saccade-related information contained in different implant regions and in different neural measures. A novel neural measure using power in the 80-500 Hz band is proposed as the optimal signal for this BCI purpose.
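The thesis proposes power in the 80-500 Hz band as a decoding feature. As a hedged sketch of how such a band-power feature can be computed (a simple FFT periodogram, not the thesis's actual pipeline; the sampling rate and signal are synthetic assumptions chosen so 500 Hz sits below the Nyquist frequency):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Total power of `signal` in the [f_lo, f_hi] Hz band,
    estimated from a simple one-sided FFT periodogram.
    Suitable for relative comparisons between bands."""
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

# Synthetic 1 s recording: a strong 200 Hz component inside the
# 80-500 Hz band plus a weaker 10 Hz component outside it.
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
print(band_power(sig, fs, 80.0, 500.0))
print(band_power(sig, fs, 0.0, 50.0))
```

In practice such features would be computed per electrode in sliding windows and fed to a classifier; that machinery is omitted here.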
In the second part of this thesis, we characterize the interactions between the neural signals recorded from electrodes in these three implant areas. We employ a number of techniques to quantify the spectrotemporal dynamics in this complex network, and we describe the resulting functional connectivity patterns between the three implant regions in the context of eye-movement production. In addition, we compare and contrast the amount of saccade-related information present in the coupling strengths in the network, on both an electrode-to-electrode scale and an area-to-area scale. Different frequency bands stand out during different epochs of the task, and their information contents are distinct between implant regions. For example, the 13-30 Hz band stands out during the delay epoch, and the 8-12 Hz band is relevant during target and response epochs.
This work extends the boundary of BCI research into the oculomotor domain, and invites potential applications by showing its feasibility. Furthermore, it elucidates the complex dynamics of the functional coupling underlying oculomotor production across multiple areas of frontal cortex
Finding any Waldo: zero-shot invariant and efficient visual search
Searching for a target object in a cluttered scene constitutes a fundamental challenge in daily vision. Visual search must be selective enough to discriminate the target from distractors, invariant to changes in the appearance of the target, efficient enough to avoid exhaustive exploration of the image, and able to generalize to locate novel target objects with zero-shot training. Previous work has focused on searching for perfect matches of a target after extensive category-specific training. Here we show for the first time that humans can efficiently and invariantly search for natural objects in complex scenes. To gain insight into the mechanisms that guide visual search, we propose a biologically inspired computational model that can locate targets without exhaustive sampling and generalize to novel objects. The model provides an approximation to the mechanisms integrating bottom-up and top-down signals during search in natural scenes.
Setting things straight: a comparison of measures of saccade trajectory deviation
In eye movements, saccade trajectory deviation has often been used as a physiological operationalization of visual attention, distraction, or the visual system’s prioritization of different sources of information. However, there are many ways to measure saccade trajectories and to quantify their deviation. This may lead to noncomparable results and poses the problem of choosing a method that will maximize statistical power. Using data from existing studies and from our own experiments, we used principal components analysis to carry out a systematic quantification of the relationships among eight different measures of saccade trajectory deviation and their power to detect the effects of experimental manipulations, as measured by standardized effect size. We concluded that (1) the saccade deviation measure is a good default measure of saccade trajectory deviation, because it is somewhat correlated with all other measures and shows relatively high effect sizes for two well-known experimental effects; (2) more generally, measures made relative to the position of the saccade target are more powerful; and (3) measures of deviation based on the early part of the saccade are made more stable when they are based on data from an eyetracker with a high sampling rate. Our recommendations may be of use to future eye movement researchers seeking to optimize the designs of their studies
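The paper compares eight deviation measures; none of their exact definitions are reproduced here. As a hedged illustration of the family of measures involved, the sketch below computes one widely used variant: the maximum signed perpendicular deviation of saccade samples from the straight line joining the saccade's start and end points. The toy trajectory is an assumption for demonstration.

```python
import numpy as np

def max_perpendicular_deviation(samples):
    """Maximum signed perpendicular distance of saccade samples from
    the straight line joining the saccade's start and end points.
    `samples` is an (n, 2) array of gaze positions; the sign encodes
    which side of the line the trajectory curved toward."""
    samples = np.asarray(samples, dtype=float)
    start, end = samples[0], samples[-1]
    direction = end - start
    length = np.linalg.norm(direction)
    rel = samples - start
    # 2-D cross product gives the signed distance from the line.
    signed = (direction[0] * rel[:, 1] - direction[1] * rel[:, 0]) / length
    return signed[np.argmax(np.abs(signed))]

# Toy curved saccade: a straight path with a sinusoidal bulge mid-flight.
t = np.linspace(0, 1, 50)
path = np.column_stack([10 * t, np.sin(np.pi * t)])
dev = max_perpendicular_deviation(path)
print(dev)
```

Other measures in this family (e.g. average deviation, or deviation relative to the target rather than the saccade endpoint) differ mainly in the reference line and the aggregation step, which is what makes a systematic comparison like the paper's necessary.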
Does pictorial composition guide the eye? Investigating four centuries of last supper pictures
Within art literature, there is a centuries-old assumption that the eye follows the lines set out by the composition of a painting. However, recent empirical findings suggest that this may not be true. This study investigates beholders’ saccadic eye movements while looking at fourteen paintings representing the scene of the Last Supper, and their perception of the compositions of those paintings. The experiment included three parts: 1) recording the eye movements of the participants looking at the paintings; 2) asking participants to draw the composition of the paintings; and 3) asking them to rate the amount of depth in the paintings. We developed a novel coefficient of similarity in order to quantify 1) the similarity between the saccades of different observers; 2) the similarity between the compositional drawings of different observers; and 3) the similarity between saccades and compositional drawings. For all of the tested paintings, we found a high, above-chance similarity between the saccades and between the compositional drawings. Additionally, for most of the paintings, we also found a high, above-chance similarity between compositional lines and saccades, both on a collective and on an individual level. Ultimately, our findings suggest that composition does influence visual perception. 
GraFIX: a semiautomatic approach for parsing low- and high-quality eye-tracking data
Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of the fixation detection methods and, thus, affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data is initially parsed by using velocity-based algorithms whose input parameters are adapted by the user and then manipulated using the graphical interface, allowing accurate and rapid adjustments of the algorithms’ outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can be easily manually adapted to fit each participant. Furthermore, the present application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data
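GraFIX itself pairs a velocity-based first pass with interactive manual adjustment; only the automatic first pass lends itself to a short sketch. The following is a minimal I-VT-style parser under stated assumptions (light moving-average smoothing, a fixed velocity threshold in deg/s, a minimum fixation duration); the thresholds, sampling rate, and toy gaze trace are illustrative, not GraFIX's defaults, and the interpolation and artifact-removal steps are omitted.

```python
import numpy as np

def detect_fixations(x, y, fs, velocity_threshold=30.0, min_duration=0.1):
    """Minimal velocity-threshold fixation parser: smooth the gaze
    trace, label samples below `velocity_threshold` (deg/s) as
    fixation samples, and keep runs of at least `min_duration` s.
    Returns a list of (start_index, end_index) sample pairs."""
    x = np.convolve(x, np.ones(3) / 3, mode="same")  # light smoothing
    y = np.convolve(y, np.ones(3) / 3, mode="same")
    vel = np.hypot(np.diff(x), np.diff(y)) * fs      # deg/s per sample
    is_fix = np.concatenate([[False], vel < velocity_threshold])
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if (i - start) / fs >= min_duration:
                fixations.append((start, i - 1))
            start = None
    if start is not None and (len(is_fix) - start) / fs >= min_duration:
        fixations.append((start, len(is_fix) - 1))
    return fixations

# Toy trace at 100 Hz: fixate at (0, 0), saccade, then fixate at (5, 0).
fs = 100.0
trace_x = np.concatenate([np.zeros(50), np.linspace(0, 5, 5), np.full(50, 5.0)])
trace_y = np.zeros_like(trace_x)
print(detect_fixations(trace_x, trace_y, fs))
```

The appeal of a two-step design like GraFIX's is visible even in this sketch: every hard-coded number here is a judgment call that varies with data quality, which is why exposing those parameters to a human coder through a graphical interface improves reliability on noisy infant data.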
Eye tracking and visual arts. Introduction to the special thematic issue
There is no visual art without the eye, just as there is no music without the ear. Visual art does not happen in the eye, but it has to go through the eye. Even for artworks with little visual focus, as in Conceptual Art, we need eyes to create and receive them. In order to see, we need to move our eyes. It is therefore not surprising that, for centuries, the eye and its movements have been a major topic of the literature on art. It is equally unsurprising that, alongside recent technological improvements in eye tracking, this technology has become a prolific tool for studying the visual arts. This special issue of the Journal of Eye Movement Research is the first platform that provides a broad picture of recent developments in this area. In this introduction we present a history of eye movement in art literature, followed by a sketch of some of the oculometric parameters used for studies of visual art. In the third section we showcase each contribution to this special issue.