
    Eye tracking and visualization. Introduction to the Special Thematic Issue

    There is growing interest in eye tracking technologies applied to support traditional visualization techniques such as diagrams, charts, maps, or plots, whether static, animated, or interactive. More complex data analyses are required to derive knowledge and meaning from the data. Eye tracking systems serve that purpose in combination with biological and computer vision, cognition, perception, visualization, human-computer interaction, as well as usability and user experience research. The 10 articles collected in this thematic special issue provide interesting examples of how sophisticated methods of data analysis and representation enable researchers to discover and describe fundamental spatio-temporal regularities in the data. The human visual system, supported by appropriate visualization tools, enables the human operator to solve complex tasks, such as understanding and interpreting three-dimensional medical images, controlling air traffic on radar displays, supporting instrument flight tasks, or interacting with virtual realities. The development and application of new visualization techniques is of major importance for future technological progress.

    Trends and Techniques in Visual Gaze Analysis

    Visualizing gaze data is an effective way to quickly interpret eye tracking results. This paper presents a study investigating the benefits and limitations of visual gaze analysis among eye tracking professionals and researchers. The results were used to create a tool for visual gaze analysis within a Master's project. Comment: pages 89-93, The 5th Conference on Communication by Gaze Interaction - COGAIN 2009: Gaze Interaction For Those Who Want It Most, ISBN: 978-87-643-0475-

    Are all the frames equally important?

    In this work, we address the problem of measuring and predicting temporal video saliency, a metric which defines the importance of a video frame for human attention. Unlike conventional spatial saliency, which defines the location of the salient regions within a frame (as is done for still images), temporal saliency considers the importance of a frame as a whole and may not exist apart from context. The proposed interface is an interactive cursor-based algorithm for collecting experimental data about temporal saliency. We collect the first human responses and perform their analysis. As a result, we show that qualitatively, the produced scores carry a very explicit meaning tied to semantic changes in a frame, while quantitatively they are highly correlated across all observers. Apart from that, we show that the proposed tool can simultaneously collect fixations similar to those produced by an eye tracker in a more affordable way. Further, this approach may be used to create the first temporal saliency datasets, which will allow training computational predictive algorithms. The proposed interface does not rely on any special equipment, which allows it to be run remotely and to cover a wide audience. Comment: CHI'20 Late Breaking Work
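    As a rough, hypothetical illustration of the kind of analysis the abstract describes (none of the names or data below come from the paper), per-frame cursor responses from several observers can be normalized and averaged into a temporal saliency curve, and inter-observer agreement checked with pairwise correlation:

```python
# Minimal sketch with made-up data, not the authors' code: aggregate per-frame
# cursor responses from several observers into a temporal saliency curve and
# measure how strongly observers agree.
import numpy as np

def temporal_saliency(responses: np.ndarray) -> np.ndarray:
    """responses: (n_observers, n_frames) array of raw cursor-based scores."""
    # Normalize each observer's trace to [0, 1] so scales are comparable.
    lo = responses.min(axis=1, keepdims=True)
    hi = responses.max(axis=1, keepdims=True)
    norm = (responses - lo) / np.maximum(hi - lo, 1e-9)
    # Per-frame saliency is the mean normalized response across observers.
    return norm.mean(axis=0)

def inter_observer_correlation(responses: np.ndarray) -> float:
    """Mean pairwise Pearson correlation between observers' traces."""
    corr = np.corrcoef(responses)
    n = corr.shape[0]
    # Average the off-diagonal entries only.
    return (corr.sum() - n) / (n * (n - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.sin(np.linspace(0, 6, 300)) ** 2        # shared "semantic change" signal
    obs = base + 0.1 * rng.standard_normal((5, 300))  # five hypothetical noisy observers
    print(temporal_saliency(obs)[:5])
    print(f"mean inter-observer r = {inter_observer_correlation(obs):.2f}")
```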

    Design Guidelines for Agent Based Model Visualization

    In the field of agent-based modeling (ABM), visualizations play an important role in identifying, communicating, and understanding important behavior of the modeled phenomenon. However, many modelers tend to create ineffective visualizations of agent-based models due to a lack of experience with visual design. This paper provides ABM visualization design guidelines in order to improve visual design with ABM toolkits. These guidelines will assist the modeler in creating clear and understandable ABM visualizations. We begin by introducing a non-hierarchical categorization of ABM visualizations. This categorization serves as a starting point in the creation of an ABM visualization. We go on to present well-known design techniques in the context of ABM visualization. These techniques are based on Gestalt psychology, the semiology of graphics, and scientific visualization. They improve the visualization design by facilitating specific tasks and providing a common language to critique visualizations through the use of visual variables. Subsequently, we discuss the application of these design techniques to simplify, emphasize, and explain an ABM visualization. Finally, we illustrate these guidelines using a simple redesign of a NetLogo ABM visualization. These guidelines can be used to inform the development of design tools that assist users in the creation of ABM visualizations. Keywords: Visualization, Design, Graphics, Guidelines, Communication, Agent-Based Modeling
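    As a purely illustrative sketch with assumed data (not taken from the paper or from NetLogo), the guideline of emphasizing the task-relevant attribute with a strong visual variable while de-emphasizing secondary attributes might look like this in a simple agent scatter plot:

```python
# Hypothetical example of mapping agent attributes to visual variables
# (position, colour, size) so that the attribute the task cares about stands out.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
n_agents = 200
x, y = rng.uniform(0, 100, (2, n_agents))      # spatial position of each agent
energy = rng.uniform(0, 1, n_agents)           # secondary, made-up agent state
infected = rng.random(n_agents) < 0.2          # primary, made-up boolean state

fig, ax = plt.subplots(figsize=(5, 5))
# Emphasise the primary variable (infection) with a strong hue,
# and encode the secondary variable (energy) more subtly as marker size.
ax.scatter(x[~infected], y[~infected], s=20 + 80 * energy[~infected],
           c="lightgray", label="susceptible")
ax.scatter(x[infected], y[infected], s=20 + 80 * energy[infected],
           c="crimson", label="infected")
ax.set_aspect("equal")
ax.legend(loc="upper right")
ax.set_title("Agents: colour = status, size = energy")
plt.show()
```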

    Effects of Intraframe Distortion on Measures of Cone Mosaic Geometry from Adaptive Optics Scanning Light Ophthalmoscopy

    Purpose: To characterize the effects of intraframe distortion due to involuntary eye motion on measures of cone mosaic geometry derived from adaptive optics scanning light ophthalmoscope (AOSLO) images. Methods: We acquired AOSLO image sequences from 20 subjects at 1.0, 2.0, and 5.0 degrees temporal from fixation. An expert grader manually selected 10 minimally distorted reference frames from each 150-frame sequence for subsequent registration. Cone mosaic geometry was measured in all registered images (n = 600) using multiple metrics, and the repeatability of these metrics was used to assess the impact of the distortions from each reference frame. In nine additional subjects, we compared AOSLO-derived measurements to those from adaptive optics (AO) fundus images, which do not contain system-imposed intraframe distortions. Results: We observed substantial variation across subjects in the repeatability of density (1.2%–8.7%), inter-cell distance (0.8%–4.6%), percentage of six-sided Voronoi cells (0.8%–10.6%), and Voronoi cell area regularity (VCAR) (1.2%–13.2%). The average of all metrics extracted from AOSLO images (with the exception of VCAR) was not significantly different from those derived from AO fundus images, though there was variability between individual images. Conclusions: Our data demonstrate that the intraframe distortion found in AOSLO images can affect the accuracy and repeatability of cone mosaic metrics. It may be possible to use multiple images from the same retinal area to approximate a "distortionless" image, though more work is needed to evaluate the feasibility of this approach. Translational Relevance: Even in subjects with good fixation, images from AOSLOs contain intraframe distortions due to eye motion during scanning. The existence of these artifacts emphasizes the need for caution when interpreting results derived from scanning instruments.
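    A minimal sketch, under assumptions, of how the mosaic metrics named above could be computed from a hypothetical set of cone coordinates: density, mean inter-cell (nearest-neighbour) distance, percentage of six-sided Voronoi cells, and VCAR taken here as mean bounded Voronoi cell area divided by its standard deviation. This is not the study's analysis code.

```python
# Hypothetical mosaic-metric calculation from simulated cone centre positions.
import numpy as np
from scipy.spatial import Voronoi, cKDTree, ConvexHull

def mosaic_metrics(coords: np.ndarray) -> dict:
    """coords: (n, 2) cone centre positions in micrometres (simulated here)."""
    area = ConvexHull(coords).volume            # for 2-D points, .volume is the hull area
    density = len(coords) / area                # cones per square micrometre

    # Nearest-neighbour (inter-cell) distance; column 0 is each point's distance to itself.
    dists, _ = cKDTree(coords).query(coords, k=2)
    mean_icd = dists[:, 1].mean()

    vor = Voronoi(coords)
    six_sided, cell_areas = [], []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:    # skip unbounded border cells
            continue
        six_sided.append(len(region) == 6)
        poly = vor.vertices[region]
        # Shoelace formula for the polygon area.
        px, py = poly[:, 0], poly[:, 1]
        cell_areas.append(0.5 * abs(np.dot(px, np.roll(py, 1)) - np.dot(py, np.roll(px, 1))))

    cell_areas = np.array(cell_areas)
    return {
        "density": density,
        "mean_inter_cell_distance": mean_icd,
        "pct_six_sided": 100.0 * np.mean(six_sided),
        "vcar": cell_areas.mean() / cell_areas.std(),   # one common regularity definition
    }

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cones = rng.uniform(0, 100, (300, 2))       # hypothetical cone positions
    print(mosaic_metrics(cones))
```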

    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    © 2016 IEEE. Latency, the delay between a user's action and the response to this action, is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer, and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
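    A back-of-the-envelope sketch, with assumed numbers rather than the paper's measurements, of why latency grows across scan-out for a frame-based renderer but stays roughly constant for a frameless renderer that races the scan-out beam:

```python
# Simplified latency model; all constants are assumptions for illustration.
REFRESH_HZ = 75.0                 # assumed DK2-class refresh rate
FRAME_MS = 1000.0 / REFRESH_HZ    # time to scan out one full frame
RENDER_MS = FRAME_MS              # frame-based: pose sampled roughly a frame before scan-out
FRAMELESS_MS = 1.0                # frameless path: ~1 ms tracker-to-pixel, as the abstract reports

def frame_based_latency(scanline_fraction: float) -> float:
    """Latency grows as scan-out proceeds: pose age = render time + time into scan-out."""
    return RENDER_MS + scanline_fraction * FRAME_MS

def frameless_latency(scanline_fraction: float) -> float:
    """Latency stays roughly constant: each scanline uses a freshly sampled pose."""
    return FRAMELESS_MS

for frac in (0.0, 0.5, 1.0):
    print(f"scan-out {frac:>4.0%}: frame-based {frame_based_latency(frac):5.1f} ms, "
          f"frameless {frameless_latency(frac):.1f} ms")
```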