3 research outputs found

    Scanpath modeling and classification with Hidden Markov Models

    How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies have shown that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural scene images and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%) and show that correct classification rates correlate positively with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
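The classification idea in this abstract can be sketched with a toy discrete HMM: model each class of observers with its own HMM over AOI (area-of-interest) sequences, then assign a new scanpath to the class whose model gives it the highest forward-algorithm likelihood. Everything below (the two AOIs, the hand-specified transition matrices, the class names "dwelling" and "scanning") is a hypothetical illustration, not the paper's variational HMM + discriminant analysis implementation.

```python
import math

def _logsumexp(xs):
    # Numerically stable log(sum(exp(x))) over a list of log-values.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_loglik(seq, start, trans, emit):
    """Log-likelihood of an AOI index sequence under a discrete HMM,
    computed with the forward algorithm in log space."""
    n = len(start)
    # Initialise with the first observation.
    alpha = [math.log(start[s]) + math.log(emit[s][seq[0]]) for s in range(n)]
    for obs in seq[1:]:
        alpha = [math.log(emit[s][obs]) +
                 _logsumexp([alpha[p] + math.log(trans[p][s]) for p in range(n)])
                 for s in range(n)]
    return _logsumexp(alpha)

def classify(seq, models):
    """Assign a scanpath to the class whose HMM gives the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(seq, *models[name]))

# Hypothetical two-state, two-AOI class models: "dwelling" viewers tend to
# stay on one AOI (sticky transitions), "scanning" viewers tend to alternate.
# Each entry is (start probabilities, transition matrix, emission matrix).
models = {
    "dwelling": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.2, 0.8]]),
    "scanning": ([0.5, 0.5], [[0.2, 0.8], [0.8, 0.2]], [[0.8, 0.2], [0.2, 0.8]]),
}
```

In the full method the per-class HMMs would be learned from training scanpaths (the paper uses variational estimation) rather than hand-specified, and discriminant analysis would operate on the fitted HMM parameters instead of this raw likelihood comparison.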

    Aging and eye tracking: in the quest for objective biomarkers


    Analysis of Eye Tracking Data to Measure Situational Awareness in Offshore Drilling Operations

    In complex, high-stakes tasks such as offshore oil and gas drilling, where a substantial number of monitoring parameters are involved in the operation, analysis of the human operator’s situational awareness (or situation awareness, SA) becomes critical for avoiding severe incidents initiated by poor cognitive performance. Numerous SA measurement practices have been proposed in previous research; however, most employ verbal and behavioral response analyses that are subject to the researchers’ interpretation. In this study, an integrated approach combining subjective measures (e.g., verbal responses) with physiological metrics (e.g., eye fixation data) was investigated to assess its benefits for SA analysis in the field of offshore oil and gas drilling. A pre-existing, incident-based experimental test in a high-fidelity simulator facility was designed around real-time log-monitoring tasks, and verbal responses and oculomotor information were collected simultaneously with a set of eye tracking devices during the tasks. To quantify the verbal responses, scoring metrics were newly developed for this study. The metrics assigned points to participants’ verbal responses based on the keywords they uttered (“abnormal,” “kick,” or “blow-out”) in reaction to the situation. Quantitative statistical analyses were applied to the ocular observations and verbal response scores collected from predesigned Areas of Interest (AOIs) on the monitoring screen, using a one-way ANOVA and a Friedman test, respectively. The analyses provided unique and complementary insights that mapped the measures from both metrics to the level of situation awareness and helped explain the cognitive process in time-critical decision-making tasks in the offshore oil and gas field.
In addition to the statistical investigation, a data mining approach using a time-series clustering technique was introduced to group the participants’ scanning patterns with respect to their temporal sequences and to find the correlation between scanning pattern and quantified situation awareness. According to the analysis results, the expertise of the participants affected, to some extent, their cognitive mechanisms for identifying and responding to the situations. The content and timing of the situation also served as important factors in determining the level of situation awareness. The participants’ scan patterns were clustered into four groups, suggesting a potential correlation between visual scanning pattern and quantified situation awareness (i.e., verbal response scores). It was found that vertical attending tendencies within individual logs might lead to higher comprehension of the situation than horizontal, transitional attending tendencies between different logs.
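The time-series clustering of scanning patterns described above can be illustrated with a minimal dynamic-time-warping (DTW) distance, a common choice for comparing temporal gaze sequences of unequal length; the prototype sequences and the "vertical"/"horizontal" labels below are invented for illustration and are not the study's actual clustering procedure.

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    computed by O(len(a) * len(b)) dynamic programming."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def assign(seq, prototypes):
    """Assign an AOI sequence to the nearest prototype scanning pattern."""
    return min(prototypes, key=lambda name: dtw(seq, prototypes[name]))

# Hypothetical prototypes: a "vertical" attender dwells within one log
# (a flat AOI sequence), while a "horizontal" attender transitions between
# logs (an oscillating AOI sequence).
prototypes = {"vertical": [0, 0, 0, 0], "horizontal": [0, 1, 0, 1]}
```

A full clustering pipeline would compute pairwise DTW distances over all participants' sequences and feed them to a clustering algorithm (e.g., hierarchical or k-medoids) rather than matching against fixed prototypes.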