11 research outputs found

    Analysing eye-tracking data: From scanpaths and heatmaps to the dynamic visualisation of areas of interest

    To understand the visual behaviors of people searching for information on Web pages, heatmaps and Areas Of Interest (AOI) are generally used. These two techniques provide interesting information on how Web pages are scanned by several users. However, two remarks can be made. The first relates to the fact that heatmaps are usually used to represent fixation areas for a given task only after it has been completed; they therefore do not show how fixation areas evolve over time. The second relates to the use of AOI, which must be defined by the analyst. We present a method that addresses these two points. This bottom-up approach is based on a mean-shift clustering procedure for the identification of areas of interest that takes the temporal dimension into account, so the identification of AOI is data driven. This approach allows us to show the evolution of a posteriori AOI in both space and time. The limitations and implications of this new approach are discussed.
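    A minimal sketch of this idea, not the authors' implementation: fixations are clustered with mean-shift in a joint (x, y, time) space so that AOIs emerge from the data rather than being drawn by hand. The fixation array, the time-scaling factor, and the bandwidth quantile below are illustrative assumptions.

    # Sketch: data-driven AOI detection by mean-shift clustering of fixations
    # in space and time (illustrative, not the paper's implementation).
    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    # Hypothetical fixation data: columns are x (px), y (px), onset time (s).
    fixations = np.array([
        [412, 305, 0.2], [430, 298, 0.5], [418, 310, 0.9],   # early cluster
        [810, 120, 1.4], [798, 133, 1.8], [805, 127, 2.1],   # later cluster
        [415, 640, 2.6], [402, 655, 3.0],
    ])

    # Weight the temporal axis so a 1 s gap counts roughly like a 100 px gap
    # (an assumed scaling; in practice it would be tuned per study).
    time_scale = 100.0
    features = fixations * np.array([1.0, 1.0, time_scale])

    bandwidth = estimate_bandwidth(features, quantile=0.3)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(features)

    # Each label is an a-posteriori AOI; its time span shows when it was active.
    for aoi in np.unique(labels):
        pts = fixations[labels == aoi]
        print(f"AOI {aoi}: centre=({pts[:, 0].mean():.0f}, {pts[:, 1].mean():.0f}) "
              f"active {pts[:, 2].min():.1f}-{pts[:, 2].max():.1f} s")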

    Scanpath modeling and classification with Hidden Markov Models

    How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. Firstly, we use fixations recorded while viewing 800 static natural scene images and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%), and we show that correct classification rates correlate positively with the number of salient regions present in the stimuli. Secondly, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for the simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
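    As a rough illustration of the HMM half of such a pipeline: the paper's SMAC toolbox uses variational HMMs with discriminant analysis, whereas the sketch below substitutes hmmlearn's EM-trained Gaussian HMMs and a simple maximum-likelihood decision rule; the training data, labels, and shapes are invented.

    # Sketch: per-class Gaussian HMMs over fixation (x, y) sequences, with
    # classification by highest log-likelihood. Simplified stand-in for the
    # paper's variational HMM + discriminant analysis pipeline.
    import numpy as np
    from hmmlearn import hmm

    def fit_class_hmm(scanpaths, n_states=3):
        """Fit one HMM to a list of scanpaths, each an (n_fix, 2) array."""
        X = np.concatenate(scanpaths)
        lengths = [len(s) for s in scanpaths]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="full",
                                n_iter=100, random_state=0)
        model.fit(X, lengths)
        return model

    def classify(scanpath, models):
        """Return the label of the model giving the highest log-likelihood."""
        scores = {label: m.score(scanpath) for label, m in models.items()}
        return max(scores, key=scores.get)

    # Hypothetical training data: scanpaths grouped by task label.
    rng = np.random.default_rng(0)
    train = {
        "search":   [rng.normal([400, 300], 80, (12, 2)) for _ in range(20)],
        "memorise": [rng.normal([600, 450], 40, (12, 2)) for _ in range(20)],
    }
    models = {label: fit_class_hmm(paths) for label, paths in train.items()}

    test_path = rng.normal([600, 450], 40, (12, 2))
    print("predicted task:", classify(test_path, models))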

    Event-driven Similarity and Classification of Scanpaths

    Eye tracking experiments often involve recording the pattern of deployment of visual attention over the stimulus as viewers perform a given task (e.g., visual search). It is useful in training applications, for example, to make an expert's sequence of eye movements, or scanpath, available to novices for their inspection and subsequent learning. It may also be useful to assess the conformance of the novice's scanpath to that of the expert. A computational tool is proposed that provides a framework for performing such classification, based on a probabilistic machine learning algorithm. The approach was influenced by the need to compute the similarity of eye fixations at single points in time, such as would be required for video stimuli. The method is also useful for eye movement analysis over static images and some interactive tasks. The algorithm employs a common qualitative comparison method, the heatmap, in a quantitative way to measure deviation from group aggregate behavior. This quantitative comparison is performed at individual events defined by the stimulus, such as frame timestamps of a video or mouse clicks in interactive tasks. The algorithm is evaluated and found to be more accurate and discriminative than existing comparison algorithms for the stimuli used in the examined experiments.
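    A hedged sketch of the core idea as described above (the paper's exact scoring may differ): at a stimulus-defined event, build a Gaussian-smoothed heatmap from the group's fixations and score an individual's fixation by the normalised heatmap value at that location, so low values flag deviation from aggregate behavior. The screen size, smoothing sigma, and fixation lists are assumptions.

    # Sketch: quantitative heatmap comparison at a single event timestamp.
    # Low scores indicate agreement with the group's aggregate fixation map;
    # high scores indicate deviation from it.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    WIDTH, HEIGHT = 1280, 720   # assumed screen resolution in pixels
    SIGMA = 40                  # assumed Gaussian spread in pixels

    def group_heatmap(fixations):
        """Accumulate group fixations at one event into a smoothed, normalised map."""
        grid = np.zeros((HEIGHT, WIDTH))
        for x, y in fixations:
            grid[int(y), int(x)] += 1.0
        heat = gaussian_filter(grid, sigma=SIGMA)
        return heat / heat.max()

    def deviation_score(heatmap, fixation):
        """1 - heatmap value at the observer's fixation: 0 = on-group, 1 = far off."""
        x, y = fixation
        return 1.0 - heatmap[int(y), int(x)]

    # Hypothetical data for one video frame / mouse-click event.
    group_fix = [(640, 360), (655, 350), (630, 372), (648, 366)]
    heat = group_heatmap(group_fix)
    print("novice deviation:", round(deviation_score(heat, (1100, 150)), 3))
    print("expert deviation:", round(deviation_score(heat, (645, 358)), 3))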

    The influence of training level on manual flight in connection to performance, scan pattern, and task load.

    This work focuses on the analysis of pilots' performance during manual flight operations at different stages of training and in different situations, and also examines the influence of training on gaze strategy. The secure and safe operation of air traffic is highly dependent on the individual performance of pilots. Before becoming a pilot, a trainee has to acquire a broad set of skills through training in order to pass all the necessary qualification and licensing standards. A basic skill for every pilot is manual control operation, a closed-loop control process with several cross-coupled variables. Even with increased automation in the cockpit, manual control operations remain essential for every pilot as a last resort in the event of automation failure. A key element in the analysis of manual flight operations is the development over time in relation to performance and visual perception. An experiment with 28 participants (including 11 certified pilots) was conducted in a high-fidelity Boeing 737 simulator. For defined flight phases, the dynamic time warping method was applied to evaluate performance on selected criteria, and eye-tracking methodology was used to analyze the development of gaze patterns. The manipulation of experience and workload influences performance and gaze patterns at the same time. Findings suggest that increased workload affects pilots differently depending on the flight phase. Gaze patterns from experienced pilots provide insights into the training requirements of both novices and experts. The connection between workload, performance, and gaze pattern is complex and needs to be analyzed under as many differing conditions as possible. The results imply the necessity of evaluating manual flight operations with respect to more flight phases and a more detailed selection of performance indicators.
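    For context on the analysis method named above, here is a minimal dynamic time warping sketch, not the study's actual evaluation code: it aligns a flown profile with a reference profile, and a lower accumulated cost indicates performance closer to the reference. The two altitude profiles are made-up examples.

    # Sketch: dynamic time warping distance between a flown profile and a
    # reference profile (illustrative; not the study's evaluation code).
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # Hypothetical altitude profiles (ft) sampled along one flight phase.
    reference = np.array([3000, 3100, 3250, 3400, 3500, 3500, 3500])
    flown     = np.array([3000, 3080, 3180, 3320, 3460, 3510, 3490, 3500])

    print("DTW deviation from reference:", dtw_distance(flown, reference))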

    Detecting expert’s eye using a multiple-kernel Relevance Vector Machine

    Decoding mental states from patterns of neural activity or overt behavior is an intensely pursued goal. Here we applied machine learning to detect expertise from the oculomotor behavior of novice and expert billiard players during free viewing of a filmed billiard match with no specific task, and in a dynamic trajectory prediction task involving ad-hoc, occluded billiard shots. We adopted a ground framework for feature space fusion and a Bayesian sparse classifier, namely a Relevance Vector Machine. By testing different combinations of simple oculomotor features (gaze shift amplitude and direction, and fixation duration), we could classify on an individual basis which group, novice or expert, the observers belonged to, with an accuracy of 82% and 87% for the match and the shots, respectively. These results provide evidence that, at least in the particular domain of billiards, a signature of expertise is hidden in very basic aspects of oculomotor behavior, and that expertise can be detected at the individual level, both under ad-hoc testing conditions and under naturalistic conditions, given suitable data mining. Our procedure paves the way for the development of a test for the “expert’s eye” and promotes the use of eye movements as an additional signal source in Brain-Computer Interface (BCI) systems.
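    A hedged sketch of the feature-fusion idea only: the paper uses a multiple-kernel Relevance Vector Machine, but since scikit-learn has no RVM, the sketch below combines one RBF kernel per oculomotor feature and feeds the summed kernel to an SVM with a precomputed kernel as a stand-in. The feature values, kernel widths, and group sizes are assumptions.

    # Sketch: multiple-kernel fusion over simple oculomotor features.
    # An SVM on a precomputed summed kernel stands in for the paper's RVM.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 40  # hypothetical observers: 20 novices (0) and 20 experts (1)
    y = np.array([0] * 20 + [1] * 20)

    # Assumed per-observer summary features: mean saccade amplitude (deg),
    # circular spread of saccade direction, mean fixation duration (ms).
    amplitude = rng.normal(np.where(y == 1, 6.0, 4.0), 1.0).reshape(-1, 1)
    direction = rng.normal(np.where(y == 1, 0.8, 1.2), 0.2).reshape(-1, 1)
    duration  = rng.normal(np.where(y == 1, 250, 320), 40).reshape(-1, 1)

    def summed_kernel(blocks, gammas):
        """Sum one RBF kernel per feature block (equal weights assumed)."""
        return sum(rbf_kernel(b, b, gamma=g) for b, g in zip(blocks, gammas))

    K = summed_kernel([amplitude, direction, duration], gammas=[0.5, 5.0, 0.001])
    clf = SVC(kernel="precomputed").fit(K, y)
    # Training accuracy only (optimistic); a held-out split would be used in practice.
    print("training accuracy:", clf.score(K, y))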

    Graph Format Effects in Processing Health Outcome Information

    Thesis (M.A.), Department of Psychology, University of Missouri--Kansas City, 2017. Thesis advisor: Joan M. McDowd. Includes bibliographical references (pages 94-102).
    Decision support tools that incorporate predictive risk estimates can be used to assist patients and their families in making better-informed choices about treatment options. The format used to present predictive risk estimates can influence risk perception and treatment decisions. The study reported here investigated the influence of graph format on information processing and decision-making in relation to rt-PA therapy for stroke. Forty-five older adults were asked to make a hypothetical decision about rt-PA while viewing rt-PA risk information presented in one of three graph formats. Eye tracking, scan path, and transition analyses were used to investigate differences in information processing by graph format. Graph format did not affect whether or not study participants said yes to rt-PA treatment, but there was an effect of graph format on decisional uncertainty, study time, and memory accuracy. Mean fixation densities and common transitions differed significantly by information area, graph format, and time epoch. Whether graph format alone can influence decision strategies enough to affect choice remains an open question. However, using fixation density and transition probabilities together appears to be a viable means of inferring information processing and discerning information-processing differences.
    Contents: Journal article -- Literature review -- Original proposal summary and methods -- Analyses -- Appendix A. Literature Review Figures -- Appendix B. Informed Consent -- Appendix C. Cognitive Aging Conference Poster -- Appendix D. iHIITS Demographics -- Appendix E. Working with Numbers -- Appendix F. Working with Graphs -- Appendix G. Decisional Conflict Scale -- Appendix H. Comprehension Quiz
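    A small sketch of the kind of transition analysis referred to above (illustrative only, not the thesis code): map each fixation to a graph AOI and estimate the AOI-to-AOI transition probability matrix from the sequence. The AOI names and the fixation sequence are assumptions.

    # Sketch: AOI transition-probability matrix from a labelled fixation sequence
    # (illustrative of the transition analysis, not the thesis code).
    import numpy as np

    aois = ["title", "risk_axis", "benefit_bars", "legend"]
    index = {name: i for i, name in enumerate(aois)}

    # Hypothetical sequence of AOIs hit by successive fixations on one graph.
    sequence = ["title", "risk_axis", "benefit_bars", "risk_axis",
                "benefit_bars", "legend", "benefit_bars", "risk_axis"]

    counts = np.zeros((len(aois), len(aois)))
    for src, dst in zip(sequence, sequence[1:]):
        counts[index[src], index[dst]] += 1

    # Row-normalise to transition probabilities (rows with no outgoing moves stay 0).
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    for i, src in enumerate(aois):
        print(src, "->", {dst: round(probs[i, j], 2)
                          for j, dst in enumerate(aois) if probs[i, j] > 0})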