6 research outputs found

    Prospects and pitfalls in combining eye tracking data and verbal reports

    It is intuitively appealing to try to combine eye-tracking data and verbal reports when investigating medical image interpretation. However, before collecting such data, important decisions have to be made, including exactly when and how to collect the verbal reports. The purpose of this methodological article is to reflect upon the pros and cons of different solutions and to offer some guidelines to investigators. We start by exploring the ontology of vision and speech production and the epistemology of eye movements to grasp what fixations and verbal reports actually reflect. We are also interested in the major constraints of the two systems. Second, we elaborate on two dominant investigational approaches to verbal accounts, namely concurrent think-aloud and Chi’s explanations, and move on to other approaches. Third, we present and critically evaluate studies from the literature on medical image interpretation that have sought to contrast or integrate eye movement data and verbal reports. Fourth, we conclude with some practical guidelines and suggestions for further research.

    Supporting Adaptive User Interfaces with Eye Tracking to Detect Expertise or Comprehension

    Studies show that completing tasks on a computer depends on the user's perceptual abilities. Communication between the user and a computer system places high demands on the user interface, which is responsible for the interaction between user and software. An adaptive user interface simplifies and improves this interaction and automatically adapts to the needs and abilities of the user. An important step towards realizing such adaptive systems is the automatic recognition of user abilities, so that the user interface can be adapted to the individual user. The aim of this bachelor's thesis is to determine whether and how the analysis of eye movements (eye tracking) can be used to recognize a user's comprehension and expertise from their gaze behavior, so that this information can be used for an adaptive user interface. In this work, experiments on detecting user abilities from gaze data are analyzed and findings relevant to an adaptive user interface are derived. The results of the studies show that no differences between users can be detected in the eye movement data.

    Combining Quantitative Eye-Tracking and GIS Techniques With Qualitative Research Methods to Evaluate the Effectiveness of 2D and Static, 3D Karst Visualizations: Seeing Through the Complexities of Karst Environments

    Karst environments are interconnected landscapes vulnerable to degradation. Many instances of anthropogenic karst disturbance are unintentional and occur because of the public's lack of understanding of, or exposure to, karst knowledge. When attempts are made to educate the general public about these landscapes, the concepts taught are often too abstract to be fully understood. Thus, karst educational pursuits must use only the most efficient and effective learning materials. A technique useful for assessing the educational effectiveness of learning materials is eye tracking, which allows scientists to quantitatively measure an individual's points of interest and eye movements when viewing a 2D or 3D visualization. Visualization developers use eye-tracking data to create graphics that hold the observer's attention and thereby enhance learning about a particular concept. This study aimed to assess and improve the educational effectiveness of 2D karst visualizations by combining eye-tracking techniques with Geographic Information Systems, knowledge assessments, and semi-structured interviews. In the first phase of this study, groups of 10 participants viewed 2D karst visualizations in which one category of visual stimuli had been manipulated. In the second phase, groups of 10-15 participants viewed 2D karst visualizations created based on the results of the first phase. The results of this study highlighted both stimuli that make karst visualizations effective and stimuli that hinder their educational effectiveness.

    Assessment of Visual Literacy – Contributions of Eye Tracking

    Visual Literacy (VL) is defined as a set of competencies for understanding and expressing oneself through visual imagery. An expansive model, the Common European Framework of Reference for Visual Literacy (CEFR-VL) (Wagner & Schönau, 2016), comprises 16 sub-competencies, including abilities such as analyzing, judging, experimenting with, or aesthetically experiencing images. To empirically assess VL sub-competencies, different visual tasks were presented to VL experts and novices. Problem-solving behavior and the cognitive strategies involved in visual logical reasoning (Paper 1), visual search (Paper 2), and judgments of visual abstraction (Paper 3) were investigated. Eye tracking, combined with innovative statistical methods, was used to uncover latent variables during task performance and to assess possible effects of differences in expertise level. Furthermore, the relationship between students' self-reported visual abilities and their performance on VL assessment tasks was systematically explored. The results show that the perceptual advantages of VL experts are less pronounced and more nuanced than VL models imply. The comprehension of visual logical models does not seem to depend much on VL, as experts and novices did not differ in their solution strategies or eye movement indicators (Paper 1). In contrast, the visual search task on artworks showed that experts detected target regions more efficiently than novices, reflected in a higher precision of fixations on target regions; furthermore, experts detected latent image features with greater certainty (Paper 2). The assessment of perceived visual abstraction revealed that, contrary to our expectations, experts did not outperform novices overall but nevertheless judged levels of abstraction in a more nuanced way than the student groups; the distribution of fixations indicates that attention is directed towards more ambiguous images (Paper 3). Students can be classified based on their level of visual logical comprehension (Paper 1), their self-reported visual skills, and the time spent on the tasks (Papers 2 and 3). Students' self-reported visual art abilities (e.g., imagination) influence visual search and the judgment of visual abstraction. Taken together, the results show that VL skills are reflected not solely in the number of correct responses, but rather in how visual tasks are solved and deconstructed; for example, experts are able to focus on less salient image regions during visual search and demonstrate a more nuanced interpretation of visual abstraction. The low-level perceptual abilities of experts and novices differ only marginally, which is consistent with research on art expertise. Assessment of VL remains challenging, but new empirical methods are proposed to uncover its underlying components.