6 research outputs found

    Visual Multi-Metric Grouping of Eye-Tracking Data

    Get PDF
    We present an algorithmic and visual grouping of participants and eye-tracking metrics derived from recorded eye-tracking data. Our method utilizes two well-established visualization concepts. First, parallel coordinates provide an overview of the chosen metrics, their interactions, and their similarities, which helps select suitable metrics that describe characteristics of the eye-tracking data. Parallel coordinates plots also let an analyst test the effect of combining a subset of metrics into a newly derived eye-tracking metric. Second, a similarity matrix visualization represents the affine combination of metrics, using an algorithmic grouping of subjects that leads to distinct visual groups of similar behavior. To keep the diagrams of the matrix visualization simple and understandable, we visually encode the eye-tracking data into the cells of a participant similarity matrix. The algorithmic grouping is performed by clustering on the affine combination of metrics, which is also the basis for computing the similarity values of the matrix. To illustrate the usefulness of our visualization, we applied it to an eye-tracking data set on the reading behavior of metro maps by up to 40 participants. Finally, we discuss limitations and scalability issues of the approach, focusing on visual and perceptual issues.
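    The abstract does not give formulas, but the two core computations it names — an affine combination of metrics and a participant similarity matrix — can be sketched. The data, weights, and the choice of similarity function below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical data: one row per participant, one column per eye-tracking
# metric (e.g. fixation duration, saccade length), min-max normalized.
metrics = np.array([
    [0.2, 0.8, 0.5],
    [0.3, 0.7, 0.6],
    [0.9, 0.1, 0.4],
])

# Affine combination: analyst-chosen weights that sum to 1.
weights = np.array([0.5, 0.3, 0.2])
assert np.isclose(weights.sum(), 1.0)

# One derived metric value per participant.
scores = metrics @ weights

# Pairwise participant similarity: 1 minus the absolute difference of the
# derived scores (an assumed, simple choice of similarity function).
similarity = 1.0 - np.abs(scores[:, None] - scores[None, :])
```

    A clustering algorithm applied to `scores` would then yield the row/column ordering that makes groups of similar participants appear as blocks in the matrix.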

    Eye tracking and visualization. Introduction to the Special Thematic Issue

    Get PDF
    There is a growing interest in eye tracking technologies applied to support traditional visualization techniques such as diagrams, charts, maps, or plots, whether static, animated, or interactive. More complex data analyses are required to derive knowledge and meaning from the data. Eye tracking systems serve that purpose in combination with biological and computer vision, cognition, perception, visualization, human-computer interaction, and usability and user-experience research. The 10 articles collected in this thematic special issue provide interesting examples of how sophisticated methods of data analysis and representation enable researchers to discover and describe fundamental spatio-temporal regularities in the data. The human visual system, supported by appropriate visualization tools, enables the human operator to solve complex tasks such as understanding and interpreting three-dimensional medical images, controlling air traffic with radar displays, supporting instrument flight tasks, or interacting with virtual realities. The development and application of new visualization techniques is of major importance for future technological progress.

    The role of format familiarity and word frequency in Chinese reading

    Get PDF
    For Chinese readers, reading from left to right is the norm, while reading from right to left is unfamiliar. This study comprises two experiments investigating how format familiarity and word frequency affect Chinese reading. Experiment 1 examines the roles of format familiarity (reading from left to right is the familiar Chinese format; reading from right to left is the unfamiliar format) and word frequency in vocabulary recognition. Forty students read the same Chinese sentences from left to right and from right to left. Target words were divided into high- and low-frequency words. In Experiment 2, participants engaged in right-to-left reading training for 10 days to test whether their right-to-left reading performance could be improved. The study yields several main findings. First, format familiarity affects vocabulary recognition: participants reading from left to right had shorter fixation times, higher skipping rates, and viewing positions closer to the word center. Second, word frequency affects vocabulary recognition in Chinese reading. Third, right-to-left reading training improved reading performance. In the early eye-movement measures, the interaction effect of format familiarity and word frequency was significant, and there was a significant word-frequency effect in left-to-right but not in right-to-left reading. Therefore, word segmentation and vocabulary recognition may be sequential in Chinese reading.

    An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair Control

    Full text link
    In the 34 developed and 156 developing countries, there are about 132 million disabled people who need a wheelchair, constituting 1.86% of the world population. Moreover, millions of people suffer from diseases related to motor disabilities that cause an inability to produce controlled movement in any of the limbs or even the head. The paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, using an eye-controlled electric wheelchair. The system input was images of the user's eye, which were processed to estimate the gaze direction, and the wheelchair was moved accordingly. To accomplish this, four user-specific methods were developed, implemented, and tested, all based on a benchmark database created by the authors. The first three techniques were automatic, employed correlation, and were variants of template matching, while the last one used convolutional neural networks (CNNs). Different metrics were computed to quantitatively evaluate the performance of each algorithm in terms of accuracy and latency, and an overall comparison is presented. The CNN exhibited the best performance (99.3% classification accuracy) and thus was the model of choice for the gaze estimator, which commands the wheelchair motion. The system was evaluated carefully on 8 subjects, achieving 99% accuracy under changing illumination conditions, outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. With the help of an array of proximity sensors, the wheelchair controller can bypass any decision made by the gaze estimator and immediately halt motion if the measured distance goes below a well-defined safety margin. Comment: accepted for publication in Sensors; 19 figures, 3 tables.
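    The abstract describes the first three methods only as correlation-based variants of template matching. A minimal sketch of that idea — classifying gaze direction by normalized cross-correlation against per-direction templates — is shown below; the tiny templates and image are hypothetical stand-ins for the authors' benchmark database, not their actual method:

```python
import numpy as np

def classify_gaze(eye_image, templates):
    """Return the direction whose template best matches the eye image.

    eye_image: 2D grayscale array; templates: dict mapping a direction
    name to a same-sized 2D template. Matching uses normalized
    cross-correlation (NCC), a standard correlation-based similarity.
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom else 0.0

    return max(templates, key=lambda d: ncc(eye_image, templates[d]))

# Toy 2x2 "templates": bright pixels on the side the pupil occupies.
templates = {
    "left": np.array([[1.0, 0.0], [1.0, 0.0]]),
    "right": np.array([[0.0, 1.0], [0.0, 1.0]]),
}
direction = classify_gaze(np.array([[0.9, 0.1], [0.8, 0.2]]), templates)
```

    A real implementation would slide the template over a larger eye image; here templates and image share a size, so the correlation is computed once per direction.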

    Evaluation of 2D combination of eye-tracking metrics for task distinction

    Get PDF
    Eye-tracking techniques enable researchers to observe human behavior through eye-tracking metrics. Machine learning is one technique used for task inference; in our research, however, to reduce the analysis effort, we consider combinations of two different metrics on a two-dimensional scatter plot. We also analyze the data with k-means clustering and correlation analysis to determine the task inference. A two-dimensional scatter plot lets the analyst interact with the data more effectively. In this thesis, we reduced the metric dimensions, for example by calculating the mean of the fixation durations to obtain a single value. We examined metrics such as saccade crossings, first fixation duration after the onset of a stimulus, mean fixation duration, and median fixation duration. Furthermore, we created some custom metrics specifically for this research to better analyze the participants' tasks. Next, we developed a simple game with three game modes designed to elicit distinctive gaze behavior: changes in color tint, size changes of the stimulus, and, as a control mode, a text-only representation containing no color or size differences. Finally, we conducted a study with six participants, who played our game to produce a dataset we could analyze with k-means clustering. The results were promising and helpful in distinguishing human behavior across different tasks. However, this research alone is not sufficient for task inference, and further improvements could achieve better results than the current state.
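    The pipeline the abstract describes — reduce each trial to two scalar metrics and cluster the resulting 2D points with k-means — can be sketched as follows. The point values, the hypothetical metric pair, and the minimal Lloyd's-algorithm implementation are illustrative assumptions, not the thesis code:

```python
import numpy as np

def kmeans_2d(points, k, iters=20, seed=0):
    """Minimal k-means (Lloyd's algorithm) for 2D metric combinations."""
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct data points.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (Euclidean distance).
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical 2D points (normalized mean fixation duration vs. saccade
# crossings), one row per trial; two well-separated task clusters.
pts = np.array([[0.2, 0.1], [0.25, 0.15], [0.8, 0.9], [0.85, 0.95]])
labels, centers = kmeans_2d(pts, k=2)
```

    Plotting `pts` colored by `labels` gives exactly the kind of two-metric scatter plot the analyst inspects to judge whether the metric pair separates the tasks.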
