
    Application of mathematical and machine learning techniques to analyse eye tracking data enabling better understanding of children’s visual cognitive behaviours

    In this research, we aimed to investigate the visual-cognitive behaviours of a sample of 106 children in Year 3 (8.8 ± 0.3 years) while completing a mathematics bar-graph task. Eye movements were recorded while children completed the task, and the patterns of eye movements were explored using machine learning approaches. Two different machine-learning techniques (Bayesian and K-Means) were used to obtain separate model sequences, or average scanpaths, for children who responded either correctly or incorrectly to the graph task. Application of these machine-learning approaches indicated distinct differences in the resulting scanpaths of children who completed the graph task correctly or incorrectly: children who responded correctly accessed information that was mostly categorised as critical, whereas children responding incorrectly did not. There was also evidence that the children who were correct accessed the graph information in a different, more logical order compared to the children who were incorrect. The visual behaviours aligned with different aspects of graph comprehension, such as initial understanding and orienting to the graph, and later interpretation and use of relevant information on the graph. The findings are discussed in terms of the implications for early mathematics teaching and learning, particularly in the development of graph comprehension, as well as the application of machine learning techniques to investigations of other visual-cognitive behaviours.
    Peer reviewed
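    The scanpath clustering described above can be illustrated with a minimal sketch (not the authors' code), assuming each child's eye movements are reduced to dwell-time proportions over hypothetical areas of interest (AOIs) on the bar graph:

```python
# Minimal sketch of K-Means clustering over scanpath summaries.
# AOI labels and the dwell-time encoding are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

AOIS = ["title", "x_axis", "y_axis", "bars", "legend"]  # hypothetical AOI labels

def dwell_vector(fixations):
    """fixations: list of (aoi_index, duration_ms) -> proportion of time per AOI."""
    totals = np.zeros(len(AOIS))
    for aoi, dur in fixations:
        totals[aoi] += dur
    return totals / totals.sum()

# X: one dwell-time vector per child (synthetic here; 106 children as in the study).
rng = np.random.default_rng(0)
X = np.array([dwell_vector([(rng.integers(len(AOIS)), rng.integers(100, 800))
                            for _ in range(30)]) for _ in range(106)])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Comparing cluster membership with task correctness would then show whether
# the two clusters recover the correct/incorrect split reported in the study.
print(np.bincount(kmeans.labels_))
```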

    Predicting ADHD Using Eye Gaze Metrics Indexing Working Memory Capacity

    ADHD is increasingly recognized as a diagnosis that persists into adulthood, impacting educational and economic outcomes. There is an increased need to accurately diagnose this population through the development of reliable and valid outcome measures reflecting core diagnostic criteria. For example, adults with ADHD have reduced working memory capacity (WMC) compared to their peers. A reduction in WMC indicates attention control deficits, which align with many symptoms outlined on behavioral checklists used to diagnose ADHD. Using computational methods, such as machine learning, to characterize the relationship between ADHD and measures of WMC would be useful for advancing our understanding and treatment of ADHD in adults. This chapter outlines a feasibility study in which eye tracking was used to measure eye gaze metrics during a WMC task for adults with and without ADHD, and machine learning algorithms were applied to generate a feature set unique to the ADHD diagnosis. The chapter summarizes the purpose, methods, results, and impact of this study.
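    As a rough illustration of the chapter's approach, the sketch below trains a classifier on synthetic eye-gaze metrics from a working-memory task; the feature names and data are placeholders, not the study's actual feature set:

```python
# Hedged sketch: classify ADHD vs. control from eye-gaze metrics.
# Features and sample sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 60  # hypothetical participants, half with an ADHD diagnosis
# columns: mean fixation duration, fixation count, mean saccade amplitude, pupil size
X = rng.normal(size=(n, 4))
y = np.repeat([0, 1], n // 2)  # 0 = control, 1 = ADHD

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # feasibility-style cross-validation
print("cross-validated accuracy:", scores.mean())
```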

    Social Visual Behavior Analytics for Autism Therapy of Children Based on Automated Mutual Gaze Detection

    Social visual behavior, a type of non-verbal communication, plays a central role in studying social cognitive processes in the interactive and complex settings of autism therapy interventions. However, for social visual behavior analytics in children with autism, gaze data are challenging to collect and evaluate manually because doing so costs human coders considerable time and effort. In this paper, we introduce a social visual behavior analytics approach that quantifies the mutual gaze performance of children receiving play-based autism interventions, using an automated mutual gaze detection framework. Our analysis is based on a video dataset that captures social interactions between children with autism and their therapy trainers (N=28 observations, 84 video clips, 21 hours total duration). The effectiveness of our framework was evaluated by comparing the mutual gaze ratio derived from the detection framework with human-coded ratio values. We analyzed mutual gaze frequency and duration across different therapy settings, activities, and sessions, and created mutual gaze-related measures for social visual behavior score prediction using multiple machine learning-based regression models. The results show that our method provides mutual gaze measures that reliably represent (or even replace) the human coders' hand-coded social gaze measures, and that it effectively evaluates and predicts ASD children's social visual performance during the intervention. Our findings have implications for social interaction analysis in small-group behavior assessments in numerous co-located settings in (special) education and in the workplace.
    Comment: Accepted to the IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE) 202
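    The regression step can be sketched as follows (assumed, not the paper's code), with hypothetical mutual-gaze measures as predictors of a synthetic social visual behavior score:

```python
# Illustrative sketch: predict a behavior score from mutual-gaze measures.
# Feature definitions and target values are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 28  # matches the paper's N=28 observations; values here are synthetic
# hypothetical measures: mutual gaze ratio, gaze frequency (events/min), mean duration (s)
X = np.column_stack([rng.uniform(0, 1, n), rng.uniform(0, 10, n), rng.uniform(0, 5, n)])
y = 10 * X[:, 0] + rng.normal(scale=0.5, size=n)  # synthetic behavior score

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean())
```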

    A Framework for Students Profile Detection

    Two of the biggest problems facing Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, socio-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. The problem is worsened by the shortage of educational resources that could bridge the communication gap between faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data on an array of emotions, affects, and behaviours, acquired either through human observation, such as by a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye tracking devices, facial expression recognition software for emotion detection, automatic gait and posture detection, and others. The framework establishes guidance for compiling the gathered data into an ontology, enabling the extraction of patterns and outliers via machine learning, which assists in profiling students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, real-time alerts can be set when these profile conditions are detected, so that appropriate experts can verify the situation and employ effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students' problems, a faster personalized response to help the student is enabled, allowing academic performance improvements.
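    A minimal sketch of the alerting idea, assuming the framework's ontology is flattened into numeric affect/behaviour features per student and an outlier detector flags profiles for expert review (all names are illustrative, not the dissertation's schema):

```python
# Hedged sketch: flag outlier student profiles for expert review.
# Feature meanings and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# hypothetical features per student: attention score, negative-affect score, absence rate
students = rng.normal(loc=[0.7, 0.2, 0.1], scale=0.1, size=(200, 3))
students[0] = [0.1, 0.9, 0.8]  # one clearly disengaged profile

detector = IsolationForest(contamination=0.05, random_state=0).fit(students)
flags = detector.predict(students)  # -1 marks an outlier profile

for idx in np.where(flags == -1)[0]:
    print(f"alert: student {idx} flagged for expert review")
```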

    Eye Movement Patterns in Solving Science Ordering Problems

    Dynamic biological processes, such as intracellular signaling pathways, are commonly taught in science courses using static representations of individual steps in the pathway. As a result, students often memorize these steps for examination purposes but fail to appreciate the cascade nature of the pathway. In this study, we compared the eye movement patterns of students who correctly ordered the components of an important pathway responsible for vasoconstriction against those who did not. Similarly, we compared the patterns of students who learned the material using three-dimensional (3-D) animations previously associated with improved student understanding of this pathway against those who learned the material using static images extracted from those animations. For two of the three ordering problems, students with higher scores had shorter total fixation duration when ordering the components and spent less time fixating in the planning and solving phases of the problem-solving process. This finding was supported by the scanpath patterns, which demonstrated that students who correctly solved the problems used more efficient problem-solving strategies.
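    The kind of comparison reported here can be sketched with an assumed data layout: total fixation duration per problem-solving phase, split by correctness:

```python
# Minimal sketch: aggregate fixation durations by phase and correctness.
# The record layout and phase labels are assumptions for illustration.

# fixations: (participant_id, phase, duration_ms)
fixations = [
    (1, "planning", 320), (1, "solving", 540),
    (2, "planning", 810), (2, "solving", 1200),
]
correct = {1}  # participants who ordered the pathway correctly

totals = {"correct": {}, "incorrect": {}}
for pid, phase, dur in fixations:
    group = "correct" if pid in correct else "incorrect"
    totals[group][phase] = totals[group].get(phase, 0) + dur

# Shorter totals in the "correct" group would mirror the paper's finding of
# more efficient problem-solving strategies.
print(totals)
```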

    MIDAS: Deep learning human action intention prediction from natural eye movement patterns

    Eye movements have long been studied as a window into the attentional mechanisms of the human brain and have been made accessible as novel human-machine interfaces. However, not everything that we gaze upon is something we want to interact with; this is known as the Midas Touch problem for gaze interfaces. To overcome the Midas Touch problem, present interfaces tend not to rely on natural gaze cues, but rather use dwell time or gaze gestures. Here we present an entirely data-driven approach to decoding human intention for object manipulation tasks based solely on natural gaze cues. We ran data collection experiments in which 16 participants were given manipulation and inspection tasks to be performed on various objects on a table in front of them. The subjects' eye movements were recorded using wearable eye-trackers, allowing the participants to freely move their head and gaze upon the scene. We use our Semantic Fovea, a convolutional neural network model, to obtain the objects in the scene and their relation to gaze traces at every frame. We then evaluate the data and examine several ways to model the classification task for intention prediction. Our evaluation shows that intention prediction is not a naive result of the data, but rather relies on non-linear temporal processing of gaze cues. We model the task as a time-series classification problem and design a bidirectional long short-term memory (LSTM) network architecture to decode intentions. Our results show that we can decode human intention of motion purely from natural gaze cues and object relative position, with 91.9% accuracy. Our work demonstrates the feasibility of natural gaze as a Zero-UI interface for human-machine interaction: users need only act naturally, and do not need to interact with the interface itself or deviate from their natural eye movement patterns.
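    A hedged sketch of a bidirectional LSTM time-series classifier in the spirit of the paper's architecture; the layer sizes and the per-frame gaze feature encoding are assumptions:

```python
# Sketch of a bidirectional LSTM for gaze-sequence intention classification.
# Input encoding (object identity + gaze-relative position per frame) is assumed.
import torch
import torch.nn as nn

class IntentionLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x for both directions

    def forward(self, x):               # x: (batch, frames, n_features)
        out, _ = self.lstm(x)           # (batch, frames, 2*hidden)
        return self.head(out[:, -1])    # classify from the final time step

# Toy forward pass: 4 gaze sequences, 120 frames each, 8 features per frame.
model = IntentionLSTM()
logits = model(torch.randn(4, 120, 8))
print(logits.shape)  # torch.Size([4, 2])
```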