
    Gaze Assisted Prediction of Task Difficulty Level and User Activities in an Intelligent Tutoring System (ITS)

    Efforts toward modernizing education are emphasizing the adoption of Intelligent Tutoring Systems (ITS) to complement conventional teaching methodologies. Intelligent tutoring systems empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and assess students’ progress. Despite these advantages, existing intelligent tutoring systems do not automatically assess how students engage in problem solving, how they perceive the various activities involved in solving a problem, or how much time they spend on each discrete activity leading to the solution. In this research, we present an eye-tracking framework that assesses how eye movements manifest students’ perceived activities and overall engagement in a sketch-based intelligent tutoring system, “Mechanix.” Mechanix guides students in solving truss problems by supporting user-initiated feedback. Through an evaluation involving 21 participants, we show the potential of leveraging eye-movement data to recognize students’ perceived activities (reading, gazing at an image, and problem solving) with an accuracy of 97.12%. We are also able to leverage the gaze data to classify the problems being solved by students as easy, medium, or hard with an accuracy of more than 80%. In this process, we also identify the key features of eye-movement data and discuss how and why these features vary across the different activities.
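The activity-recognition idea in this abstract can be illustrated with a minimal, hypothetical sketch: summarize a window of gaze data into two common eye-movement features (mean fixation duration, mean saccade amplitude) and assign the activity with the nearest feature centroid. The feature choice, centroid values, and sample numbers below are illustrative assumptions, not the paper's trained model.

```python
# Hypothetical sketch of gaze-based activity recognition (not the paper's
# implementation): nearest-centroid classification over two eye-movement
# features. All numeric values are illustrative.
import math

# Illustrative per-activity centroids: (mean fixation ms, mean saccade deg).
CENTROIDS = {
    "reading": (220.0, 2.0),
    "gazing_at_image": (350.0, 4.5),
    "problem_solving": (280.0, 6.0),
}

def features(fixations_ms, saccades_deg):
    """Summarize one gaze window: mean fixation duration, mean saccade amplitude."""
    return (sum(fixations_ms) / len(fixations_ms),
            sum(saccades_deg) / len(saccades_deg))

def classify(fixations_ms, saccades_deg):
    """Assign the activity whose centroid is closest in feature space."""
    f = features(fixations_ms, saccades_deg)
    return min(CENTROIDS, key=lambda a: math.dist(f, CENTROIDS[a]))
```

In practice a trained classifier would replace the hand-set centroids, but the pipeline shape (window, feature vector, label) is the same.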


    iMind: An Intelligent Tool to Support Content Comprehension

    Comprehension difficulty while reading often affects individual performance: it can lead to a slower learning process, lower work quality, and inefficient decision-making. This thesis introduces an intelligent tool called “iMind,” which uses wearable devices (e.g., smartwatches) to evaluate users’ comprehension difficulty and engagement levels while reading digital content. Comprehension difficulty can occur when not enough mental resources are available for mental processing; the resource consumed by mental processing is cognitive load (CL). Fluctuations in CL lead to physiological manifestations of the autonomic nervous system (ANS), such as an increase in heart rate, which can be measured by wearables like smartwatches. With low-cost eye trackers, it is possible to correlate content regions with measurements of these ANS manifestations. In this sense, iMind uses a smartwatch and an eye tracker to identify comprehension difficulty at the level of content regions (where the user is looking). The tool uses machine learning techniques to classify content regions as difficult or non-difficult based on biometric and non-biometric features, classifying regions with 75% accuracy and an 80% F-score using linear regression (LR). With the classified regions, it will be possible in the future to create real-time contextual support for the reader, e.g., by translating the sentences that induced comprehension difficulty.
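As a rough illustration of region-level classification from combined wearable and gaze signals, the sketch below scores a content region with a logistic function over two hypothetical features: heart-rate increase (smartwatch) and fixation count (eye tracker). The weights are invented stand-ins for a trained model; the thesis's actual feature set and classifier are not reproduced here.

```python
# Hedged sketch of region-level difficulty classification. The weights are
# illustrative placeholders, not the model trained in the thesis.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative weights: bias, heart-rate delta (bpm), fixation count.
WEIGHTS = (-4.0, 0.5, 0.2)

def region_difficulty(hr_delta_bpm, fixation_count):
    """Label one content region from wearable + gaze features."""
    b, w_hr, w_fix = WEIGHTS
    p = sigmoid(b + w_hr * hr_delta_bpm + w_fix * fixation_count)
    return "difficult" if p >= 0.5 else "non-difficult"
```

A region the reader dwells on while heart rate rises scores high; a briefly skimmed region with no physiological change scores low.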

    Deaf and Hard of Hearing Readers and Science Comics: A Mixed Methods Investigation on Process

    Deaf and hard of hearing (DHH) students bring diverse language and literacy backgrounds to the task of academic reading, which becomes increasingly complex and abstract in the upper grades. Teachers often differentiate their instruction by providing multimedia resources, with which students interact through verbal and pictorial information. A growing body of research supports multimedia learning; however, most studies have focused exclusively on learning outcomes, leaving teachers in the dark about the cognitive processes underlying these effects. This mixed-methods study addresses that gap by using a nonfiction comic to investigate the reading processes of DHH 7th-12th grade students. Eye tracking and cued retrospective protocol were employed in a concurrent nested design to answer the question: how do DHH students read and learn from multimedia science texts? The study was guided by the cognitive theory of multimedia, which states that reading comprehension is better supported when learning from words and pictures rather than words alone, especially when readers cognitively integrate the two representations to form a coherent mental model. Temporal and sequential eye-tracking results revealed that readers’ transitions between related words and pictures were a statistically significant variable in explaining factual-knowledge learning outcomes. These strategic shifts in attention were further explained by readers’ retrospective verbal reports of their thinking. Students’ descriptions of their reading processes were interpreted into the following themes: repairing, connecting representations, passive transitions, and connecting to self. The integration of quantitative and qualitative methods at the interpretation stage revealed that although the theme of repairing was equally distributed across all student reports, the theme of connecting representations was largely present in the reports from students who made high counts of integrative transitions.
    The major findings of this study align with the cognitive theory of multimedia: students’ learning outcomes were significantly predicted by deliberate strategies to cognitively integrate words and pictures to form and maintain a coherent mental model. The discussion includes ways in which teachers can capitalize on explicit modeling of these behaviors and employ students’ “think alouds” to better understand and support the development of effective multimedia reading processes.

    The impact of emotional tone during shared reading experiences

    Shared book reading is a pleasurable and educational activity between an adult and a child. The Ministry of Education of Ontario encourages parents to read aloud to their children in order to foster a love of reading and aid in the development of literacy skills. One Ministry recommendation is for the adult to “make it exciting: put some drama into your voice” (Ontario Ministry of Education, n.d.). The current study examined the effect of different emotional tones of voice on children’s eye movements during shared book reading and on their comprehension of the material. Four storybooks were each recorded in four emotional tones of voice (neutral, angry, happy, and character) and presented on a computer screen. Each child was presented with each of the storybooks in one of the four conditions, with the combination of book and emotion randomized across participants. Eye-tracking technology followed each child’s eye movements during the presentation. The children were asked three comprehension questions related to each story immediately after it, and two further questions at the end of all four presentations. Standardized tests of working memory capacity and basic academic skills were also administered. Analyses showed that while the proportion of time the children spent on the text did not vary as a function of emotional tone, they made the most fixations and spent the most time on the text and images in the angry condition. Conversely, the children made the fewest fixations and spent the least time looking at the text and images in the happy condition. The children also scored highest on the comprehension questions in the angry condition. Finally, there were positive relationships between the children’s early reading skills and auditory working memory, as measured by the standardized tests, and their performance during the shared book reading and comprehension tasks. Master’s Thesis.

    Investigating Visual Perception Impairments through Serious Games and Eye Tracking to Anticipate Handwriting Difficulties

    Dysgraphia is a learning disability that causes handwriting production below expectations. Its diagnosis is delayed until handwriting development is complete. To enable a preventive training program, abilities not directly related to handwriting should be evaluated; one of them is visual perception. To investigate the role of visual perception in handwriting skills, we gamified standard clinical visual perception tests to be played at three difficulty levels while wearing an eye tracker. We then identified children at risk of dysgraphia by means of a handwriting speed test. Five machine learning models were constructed to predict whether a child was at risk, using the CatBoost algorithm with nested cross-validation and combinations of game performance, eye-tracking, and drawing data as predictors. A total of 53 children participated in the study. The machine learning models obtained good results, particularly with game performance as the predictor (F1 score: 0.77 train, 0.71 test). The SHAP explainer was used to identify the most impactful features, and the game reached an excellent usability score (89.4 ± 9.6). These results suggest a promising new tool for early dysgraphia screening based on visual perception skills.
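The evaluation scheme named in this abstract, nested cross-validation, can be sketched without the CatBoost dependency: a toy one-feature threshold classifier stands in for the gradient-boosted model, the inner folds select a hyperparameter (here, the threshold), and the outer folds estimate performance on data never used for that selection. All data values and candidate thresholds below are illustrative, not the study's.

```python
# Sketch of nested cross-validation. A hypothetical threshold classifier
# ("at risk" when handwriting speed < threshold) replaces CatBoost so the
# example stays dependency-free; numbers are illustrative.
def k_folds(n, k):
    """Yield (train_indices, test_indices) for k contiguous folds of n items."""
    size = n // k
    for i in range(k):
        end = (i + 1) * size if i < k - 1 else n
        test = list(range(i * size, end))
        train = [j for j in range(n) if j not in test]
        yield train, test

def accuracy(threshold, xs, ys):
    """Fraction of children whose predicted risk (speed < threshold) matches ys."""
    return sum((x < threshold) == y for x, y in zip(xs, ys)) / len(xs)

def nested_cv(xs, ys, thresholds, outer_k=3, inner_k=2):
    """Outer folds estimate performance; inner folds pick the threshold."""
    scores = []
    for tr, te in k_folds(len(xs), outer_k):
        xtr, ytr = [xs[i] for i in tr], [ys[i] for i in tr]
        # Inner loop: score each candidate threshold on held-out inner folds.
        best = max(thresholds, key=lambda t: sum(
            accuracy(t, [xtr[i] for i in ite], [ytr[i] for i in ite])
            for _, ite in k_folds(len(xtr), inner_k)))
        scores.append(accuracy(best, [xs[i] for i in te], [ys[i] for i in te]))
    return sum(scores) / len(scores)
```

The point of the nesting is that the hyperparameter chosen inside each outer training split never sees that split's outer test fold, so the averaged outer score is an unbiased estimate of the whole model-selection procedure.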

    Investigating priming, inhibition, and individual differences in visual attention

    While much has been explored within the attentional control literature, questions remain as to how attentional processing is modulated and how different types of visual search paradigms can elucidate the mechanisms underlying successful visual search. Throughout this dissertation, I focus on the multifaceted aspects of the study of visual attention. After discussing visual attention, I explore priming of pop-out along two different dimensions. Specifically, using a rapid serial visual presentation design, I demonstrate that temporal and spatial priming interact through a similar mechanism. This result adds to the priming literature by demonstrating simultaneous multidimensional priming in our ability to efficiently process our visual environment. Next, I explore attentional distraction and psychophysical thresholds to examine whether an individual's sensitivity to a visual feature can predict that individual's magnitude of distraction by the feature. Results reveal that psychophysical thresholds are not sensitive enough to establish a definite relationship between an individual's baseline stimulus-driven sensitivity to visual features and the magnitude of distraction by those features. Finally, I explore the role of inhibition (using a stop-signal paradigm) in individual differences in the ability to avoid distraction, and examine how working memory capacity influences target selection. Results failed to elucidate this relationship, and further research is needed to uncover whether individual differences in avoiding distraction are subserved by inhibitory processing or by working memory capacity. In conclusion, this dissertation uses various visual search paradigms to explore the interactions of stimulus-driven and goal-driven effects, to illuminate how individual differences inform models of attentional distraction, and to investigate how inhibiting a distractor modulates attentional processing.

    Video Collaboration: Copresence and Performance

    The purpose of this qualitative narrative theory study of video collaboration platform use is to explain how an individual's on-screen performance and their interpersonal verbal and nonverbal communication contribute to engagement and copresence with their audience. The literature review analyzes critical interpersonal communication theories to explain how these factors affect engagement and copresence levels in mediated virtual environments. The research was conducted through interviews with thirty professional businesspeople about their video collaboration experiences during the 2020 COVID-19 shutdown. The respondents told stories of business communication successes and failures that correspond to the scholarly theories in the literature review. They discussed how verbal and nonverbal communication was used successfully and unsuccessfully, and why their companies found it challenging to communicate virtually with video collaboration during the shutdown. A final discussion analyzes how communication theory and practical experience combine to explain how verbal and nonverbal communication affect mediated virtual communication when using video collaboration. This study offers a model showing how interpersonal communication, engagement, and copresence exist in a cyclical motion; the model can help businesspeople and scholars communicate in mediated virtual environments using video collaboration platforms.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149-164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.