    Pupil responses during discrete goal-directed movements

    Gaze Assisted Prediction of Task Difficulty Level and User Activities in an Intelligent Tutoring System (ITS)

    Efforts toward modernizing education are emphasizing the adoption of Intelligent Tutoring Systems (ITS) to complement conventional teaching methodologies. Intelligent tutoring systems empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and assess students’ progress. Despite these advantages, existing intelligent tutoring systems do not automatically assess how students engage in problem solving, how they perceive the various activities involved in solving a problem, or how much time they spend on each discrete activity leading to the solution. In this research, we present an eye-tracking framework that assesses how eye movements manifest students’ perceived activities and overall engagement in a sketch-based intelligent tutoring system, “Mechanix.” Mechanix guides students in solving truss problems by supporting user-initiated feedback. Through an evaluation involving 21 participants, we show the potential of leveraging eye movement data to recognize students’ perceived activities (reading, gazing at an image, and problem solving) with an accuracy of 97.12%. We are also able to leverage the gaze data to classify the problems being solved as easy, medium, or hard with an accuracy of more than 80%. In this process, we also identify the key features of eye movement data and discuss how and why these features vary across the different activities.
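
    The abstract does not detail the classification pipeline, but the core idea (window-level eye-movement features feeding a supervised classifier that labels the activity) can be sketched minimally as below. The window length, feature set, synthetic feature values, and the choice of a Random Forest are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: per-window gaze features -> activity label
# ("reading", "gazing at an image", "problem solving").
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features per gaze window:
# [mean fixation duration (ms), mean saccade amplitude (deg),
#  fixation count, mean pupil diameter (mm)]
n = 100
reading = rng.normal([220, 2.0, 18, 3.2], [30, 0.4, 3, 0.2], size=(n, 4))
image   = rng.normal([350, 4.5, 10, 3.3], [50, 0.8, 2, 0.2], size=(n, 4))
solving = rng.normal([280, 3.0, 14, 3.6], [40, 0.6, 3, 0.2], size=(n, 4))

X = np.vstack([reading, image, solving])
y = np.array(["reading"] * n + ["image"] * n + ["solving"] * n)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.3f}")
```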

    Toward a Real-Time Index of Pupillary Activity as an Indicator of Cognitive Load

    The Low/High Index of Pupillary Activity (LHIPA), an eye-tracked measure of pupil diameter oscillation, is redesigned and implemented to function in real time. The novel Real-time IPA (RIPA) is shown to discriminate cognitive load in re-streamed data from earlier experiments. The rationale for the RIPA is tied to the functioning of the human autonomic nervous system, yielding a hybrid measure based on the ratio of low to high frequencies of pupil oscillation. The paper’s contribution is the documentation of the calculation of the RIPA. As with the LHIPA, researchers can apply this metric to their own experiments where a measure of cognitive load is of interest.
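
    The published LHIPA/RIPA is computed via wavelet analysis of the pupil signal over a moving window; the sketch below only illustrates the underlying low-to-high frequency ratio idea using a simple FFT band-power computation. The band edges, sampling rate, and synthetic signal are assumptions, and this is a simplified stand-in rather than the published algorithm.

```python
# Rough illustration of a low/high frequency power ratio of pupil oscillation.
import numpy as np

def low_high_ratio(pupil_mm, fs, low_band=(0.0, 0.5), high_band=(0.5, 2.0)):
    """Ratio of spectral power in a low vs. a high frequency band (assumed bands)."""
    x = np.asarray(pupil_mm, dtype=float)
    x = x - x.mean()                               # remove DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    low = power[(freqs >= low_band[0]) & (freqs < low_band[1])].sum()
    high = power[(freqs >= high_band[0]) & (freqs < high_band[1])].sum()
    return low / high if high > 0 else np.inf

# Example: 60 s of synthetic pupil diameter samples at 60 Hz.
fs = 60
t = np.arange(0, 60, 1.0 / fs)
pupil = 3.5 + 0.2 * np.sin(2 * np.pi * 0.2 * t) \
        + 0.02 * np.random.default_rng(0).normal(size=t.size)
print(f"low/high power ratio: {low_high_ratio(pupil, fs):.2f}")
```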

    Users’ Cognitive Load: A Key Aspect to Successfully Communicate Visual Climate Information

    The visual communication of climate information is one of the cornerstones of climate services. It often requires the translation of multidimensional data to visual channels by combining colors, distances, angles, and glyph sizes. However, visualizations including too many layers of complexity can hinder decision-making processes by limiting the cognitive capacity of users, thereby affecting their attention, recognition, and working memory. Methodologies grounded in the fields of user-centered design, user interaction, and cognitive psychology, which are based on the needs of the users, have much to contribute to the climate data visualization field. Here, we apply these methodologies to the redesign of an existing climate service tool tailored to the wind energy sector. We quantify the effect of the redesign on users’ experience performing typical daily tasks, using both quantitative and qualitative indicators that include response time, success ratios, eye-tracking measures, user-perceived effort, and comments, among others. Changes in the visual encoding of uncertainty and the use of interactive elements in the redesigned tool reduced users’ response time by half, significantly improved success ratios, and eased decision-making by filtering out nonrelevant information. Our results show that applying user-centered design, interaction, and cognitive aspects to the design of climate information visualizations reduces the cognitive load of users during task performance, thus improving user experience. These aspects are key to successfully communicating climate information in a clearer and more accessible way, making it more understandable for both technical and nontechnical audiences.

    The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreements 776787 (S2S4E), 776613 (EUCP), and (ClimatEurope). This work was also supported by the MEDSCOPE project. MEDSCOPE is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by AEMET (ES), ANR (FR), BSC (ES), CMCC (IT), CNR (IT), IMR (BE), and Météo-France (FR), with co-funding by the European Union (Grant 690462). The research team would like to thank the participants of the test who generously shared their time and opinions for the purposes of this research. This study is part of the PhD of the corresponding author, Luz Calvo.

    Peer Reviewed. Postprint (published version).
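
    As a rough illustration of the kind of quantitative comparison described above (response times and success ratios for the original versus redesigned tool), the sketch below uses made-up data; the specific tests (Mann-Whitney U for response time, a chi-squared test for success ratios) are assumptions, not necessarily the analyses used in the paper.

```python
# Hypothetical comparison of task performance before/after a redesign.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt_original = rng.lognormal(mean=np.log(60), sigma=0.4, size=30)  # seconds per task
rt_redesign = rng.lognormal(mean=np.log(30), sigma=0.4, size=30)

u, p_rt = stats.mannwhitneyu(rt_original, rt_redesign, alternative="two-sided")
print(f"median RT: {np.median(rt_original):.1f}s -> {np.median(rt_redesign):.1f}s "
      f"(p={p_rt:.4f})")

# Success ratios: successes out of 30 tasks per condition (made-up counts).
success = np.array([18, 27])
n_tasks = np.array([30, 30])
table = np.array([success, n_tasks - success]).T   # rows: conditions, cols: success/failure
chi2, p_sr, *_ = stats.chi2_contingency(table)
print(f"success ratio: {success[0]/30:.0%} -> {success[1]/30:.0%} (p={p_sr:.4f})")
```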

    RCEA: Real-time, Continuous Emotion Annotation for collecting precise mobile video ground truth labels

    Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching and validated its usability and reliability in a controlled indoor study (N=12) and a later outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived as usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation fusion method suitable for collecting fine-grained emotion annotations while users watch mobile videos.
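
    The abstract mentions an annotation fusion method without describing it; the sketch below shows one simple possibility, resampling each participant's continuous valence trace onto a common time base and averaging across participants. The 10 Hz target rate, trace duration, and plain averaging are assumptions for illustration only, not the paper's method.

```python
# Simple time-aligned fusion of continuous emotion annotations.
import numpy as np

def fuse_annotations(traces, target_fs=10.0, duration_s=30.0):
    """traces: list of (timestamps_s, valence) arrays sampled at arbitrary rates."""
    t_common = np.arange(0.0, duration_s, 1.0 / target_fs)
    resampled = [np.interp(t_common, t, v) for t, v in traces]
    return t_common, np.mean(resampled, axis=0)    # fused (mean) valence over time

# Example with two synthetic annotators rating valence in [-1, 1].
rng = np.random.default_rng(2)
t1 = np.sort(rng.uniform(0, 30, 200))
v1 = np.tanh(np.sin(t1 / 5) + 0.1 * rng.normal(size=t1.size))
t2 = np.sort(rng.uniform(0, 30, 150))
v2 = np.tanh(np.sin(t2 / 5) + 0.1 * rng.normal(size=t2.size))

t, fused = fuse_annotations([(t1, v1), (t2, v2)])
print(f"fused valence at t=10s: {fused[np.searchsorted(t, 10.0)]:+.2f}")
```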