
    A study on eye fixation patterns of students in higher education using an online learning system

    We study how the use of online learning systems stimulates cognitive activities by conducting an experiment that used eye-tracking technology to monitor the eye fixations of 60 final-year students engaging in online interactive tutorials at the start of their Final Year Project module. Our findings show that the students' visual scanning behaviours fall into three distinct types of eye fixation patterns, and that the data corresponding to these types relate to the students' performance in other related academic modules. We conclude that studying eye fixation patterns in this way can identify different types of learners with respect to cognitive activities and academic potential, allowing educators to understand how their instructional design for online learning environments can stimulate higher-order cognitive activities.
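
    The abstract does not state the analysis method used to separate the three fixation-pattern types. As a purely illustrative sketch, one plausible way to recover a small number of scanning-behaviour types is to cluster simple per-student fixation features; the feature set, k = 3, and the use of k-means below are assumptions, not the authors' method.

        # Hypothetical sketch: grouping students by eye fixation pattern.
        import numpy as np
        from sklearn.cluster import KMeans

        def fixation_features(fixations):
            """fixations: array of (duration_ms, x, y), one row per fixation for one student."""
            durations = fixations[:, 0]
            # Saccade amplitude approximated as distance between consecutive fixations.
            amplitudes = np.linalg.norm(np.diff(fixations[:, 1:], axis=0), axis=1)
            return [durations.mean(), durations.std(), len(durations), amplitudes.mean()]

        def cluster_students(per_student_fixations, k=3):
            X = np.array([fixation_features(f) for f in per_student_fixations])
            # One pattern label (0..k-1) per student.
            return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)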

    Efficient Ultrasound Image Analysis Models with Sonographer Gaze Assisted Distillation.

    Recent automated medical image analysis methods have attained state-of-the-art performance but rely on memory- and compute-intensive deep learning models. Reducing model size without a significant loss in performance metrics is crucial for time- and memory-efficient automated image-based decision-making. Traditional deep-learning-based image analysis uses expert knowledge only in the form of manual annotations. Recently, there has been interest in introducing other forms of expert knowledge into deep learning architecture design. This is the approach taken in this paper: we propose to combine ultrasound video with the point of gaze of expert sonographers, tracked as they scan, to train memory-efficient ultrasound image analysis models. Specifically, we develop teacher-student knowledge transfer models for the exemplar task of frame classification for the fetal abdomen, head, and femur. The best-performing memory-efficient models attain performance within 5% of conventional models that are 1000× larger in size.
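
    The abstract only names the teacher-student framing, so the loss below is a minimal, generic sketch of gaze-assisted distillation, assuming a PyTorch setup: the student is trained on ground-truth frame labels, on softened teacher logits, and on agreement between a student saliency map and the tracked sonographer gaze map. The saliency head and the weights T, alpha, and beta are assumptions, not the paper's formulation.

        # Illustrative gaze-assisted teacher-student loss (not the paper's exact method).
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels,
                              student_saliency, gaze_map,
                              T=4.0, alpha=0.5, beta=0.1):
            # Supervised loss on fetal abdomen / head / femur frame labels.
            ce = F.cross_entropy(student_logits, labels)
            # Soft-label distillation from the larger teacher model.
            kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                          F.softmax(teacher_logits / T, dim=1),
                          reduction="batchmean") * (T * T)
            # Encourage the student's saliency map to match where the sonographer looked.
            gaze = F.mse_loss(student_saliency, gaze_map)
            return (1 - alpha) * ce + alpha * kd + beta * gaze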

    Pervasive Displays Research: What's Next?

    Reports on the 7th ACM International Symposium on Pervasive Displays, which took place June 6-8 in Munich, Germany.

    Regulating distance to the screen while engaging in difficult tasks

    Regulation of distance to the screen (i.e., head-to-screen distance and fluctuation of head-to-screen distance) has been shown to reflect the cognitive engagement of the reader. However, it is still not clear (a) whether regulation of distance to the screen can serve as a parameter for inferring high cognitive load and (b) whether it can predict upcoming answer accuracy. Configuring tablets or other learning devices so that distance to the screen can be analyzed by the learning software is within close reach; the software might use the measure as a person-specific indicator of the need for extra scaffolding. To better gauge this potential, we analyzed eye-tracking data of children (N = 144, mean age = 13 years, SD = 3.2 years) engaging in multimedia learning, as distance to the screen is estimated as a by-product of eye tracking. Children were told to maintain a still seated posture while reading and answering questions at three difficulty levels (easy vs. medium vs. difficult). Results showed that task difficulty influences how well distance to the screen can be regulated, supporting regulation of distance to the screen as a promising measure: closer head-to-screen distance and larger fluctuation of head-to-screen distance reflect engagement with a challenging task, whereas only large fluctuation of head-to-screen distance predicted upcoming incorrect answers. The link between distance to the screen and the processing of a cognitive task can unobtrusively reveal the reader's cognitive state during system usage, which can support adaptive learning and testing.
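
    As a rough illustration of how the two distance-to-screen measures in this abstract could be computed and related to answer accuracy, the sketch below derives per-trial mean head-to-screen distance and its fluctuation (standard deviation) from eye-tracker samples and fits a simple classifier. The column names and the logistic-regression choice are assumptions, not the authors' analysis.

        # Illustrative sketch: distance-to-screen features as predictors of answer accuracy.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        def trial_features(samples: pd.DataFrame) -> pd.DataFrame:
            """samples: one eye-tracker sample per row, with columns
            ['trial_id', 'distance_mm', 'correct']."""
            return samples.groupby("trial_id").agg(
                mean_distance=("distance_mm", "mean"),
                fluctuation=("distance_mm", "std"),  # larger std = stronger fluctuation
                correct=("correct", "first"),
            )

        def fit_accuracy_model(samples: pd.DataFrame) -> LogisticRegression:
            trials = trial_features(samples)
            X = trials[["mean_distance", "fluctuation"]]
            y = trials["correct"].astype(int)
            return LogisticRegression().fit(X, y)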