
    Eye Tracking to Support eLearning

    Online eLearning environments that support student learning are of growing importance. Students are increasingly turning to online resources for education, sometimes in place of face-to-face tuition. Online eLearning extends teaching and learning from the classroom to a wider audience with different needs, backgrounds, and motivations. The one-size-fits-all approach predominantly used is not effective for catering to the needs of all students. One area of increasing diversity is the linguistic background of readers: more students are reading in their non-native language. It has previously been established that first-language English (L1) students read differently from second-language English (L2) students. One way of analysing this difference is by tracking the eyes of readers, which is an effective way of investigating the reading process. In this thesis we investigate whether eye tracking can be used to make learning via reading more effective in eLearning environments. This question is approached from two directions: first, by investigating how eye tracking can be used to adapt to an individual student's understanding and perceptions of text; second, by analysing a cohort's reading behaviour to provide the author of the text and any related comprehension questions with information about their suitability and difficulty. To investigate these questions, two user studies were carried out to collect eye gaze data from both L1 and L2 readers. The first user study focussed on how different presentation methods for text and related questions affected not only comprehension performance but also reading behaviour and students' perceptions of their performance. The data from this study were used to make predictions of reading comprehension that can make eLearning environments adaptive, in addition to providing implicit feedback about the difficulty of texts and questions. In the second study we investigated the effects of text readability and conceptual difficulty on eye gaze, prediction of reading comprehension, and perceptions. This study showed that readability affected the eye gaze of L1 readers and conceptual difficulty affected the eye gaze of L2 readers. Prediction accuracy for comprehension consequently increased for the L1 group with increased readability difficulty, whereas increased conceptual difficulty corresponded to increased accuracy for the L2 group. Analysis of participants' perceptions of complexity revealed that readability and conceptual difficulty interact, making the two variables hard for the reader to disentangle. Further analysis of participants' eye gaze revealed that both the predefined and the perceived text complexity affected eye gaze. We therefore propose using eye gaze measures to provide feedback about the implicit reading difficulty of the texts read. The results from both studies indicate that there is enormous potential in using eye tracking to make learning via reading more effective in eLearning environments. We conclude with a discussion of how these findings can be applied to improve reading within eLearning environments, and propose an adaptive eLearning architecture that dynamically presents text to students and provides information to authors to improve the quality of texts and questions.
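
    As a rough illustration of how eye gaze measures might feed a comprehension predictor (a minimal sketch, not the thesis's actual pipeline; the feature names and the random placeholder data are assumptions), a simple classifier over standard eye-tracking measures could look like this:

    # Hypothetical sketch: predicting reading comprehension from eye-gaze
    # features. Mean fixation duration, fixations per word, and regression
    # rate are standard eye-tracking measures, not the thesis's feature set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # One row per participant-passage pair: placeholder random data standing
    # in for [mean_fixation_ms, fixations_per_word, regression_rate,
    # total_reading_time_s].
    X = rng.normal(size=(200, 4))
    y = rng.integers(0, 2, size=200)  # 1 = comprehension questions answered correctly

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")

    In an adaptive eLearning environment, such a per-passage predictor could trigger a simpler text variant or extra scaffolding when predicted comprehension is low.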

    Can humans help BERT gain "confidence"?

    The advancements in artificial intelligence over the last decade have opened a multitude of avenues for interdisciplinary research. Since the idea of artificial intelligence was inspired by the working of neurons in the brain, it seems natural to combine the two fields and use cognitive data to train AI models. Not only will this help us gain a deeper understanding of the technology, but of the brain as well. In this thesis, I conduct novel experiments to integrate cognitive features from the Zurich Cognitive Corpus (ZuCo) (Hollenstein et al., 2018) with a transformer-based encoder model called BERT. I show how EEG and eye-tracking features from ZuCo can help to increase the performance of the NLP model. I confirm the performance increase with the help of a robustness-checking pipeline and derive a word-EEG lexicon for benchmarking on an external dataset that does not have any cognitive features associated with it. Further, I analyze the internal working mechanism of BERT and explore a potential method for model explainability by correlating it with a popular model-agnostic explainability framework called LIME (Ribeiro et al., 2016). Finally, I discuss possible directions to take this research forward. (Master's thesis.)
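
    To make the fusion idea concrete, here is a minimal sketch (an assumption, not the thesis code) of concatenating word-level cognitive features with BERT's token representations before a classification head; the class name and the alignment of word-level EEG/gaze values to subword tokens are assumed:

    # Minimal sketch: fusing word-level cognitive features (e.g., ZuCo EEG
    # power bands, gaze durations) with BERT token embeddings.
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class CognitiveBert(nn.Module):  # hypothetical name
        def __init__(self, n_cog_features: int, n_labels: int):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert.config.hidden_size
            # Classify from concatenated [BERT hidden state; cognitive features].
            self.classifier = nn.Linear(hidden + n_cog_features, n_labels)

        def forward(self, input_ids, attention_mask, cog_features):
            # cog_features: (batch, seq_len, n_cog_features), assumed to be
            # already aligned to the wordpiece tokens.
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            fused = torch.cat([out.last_hidden_state, cog_features], dim=-1)
            return self.classifier(fused.mean(dim=1))  # mean-pool, then classify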

    Near real-time comprehension classification with artificial neural networks: decoding e-Learner non-verbal behaviour

    Comprehension is an important cognitive state for learning. Human tutors recognise comprehension and non-comprehension states by interpreting learner non-verbal behaviour (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffolding in the context of perceived learner comprehension. Near real-time assessment of e-learner comprehension of on-screen information could provide a powerful tool both for adaptation within intelligent e-learning platforms and for appraisal of tutorial content in learning analytics. However, the literature suggests that no existing method for automatic classification of learner comprehension by analysis of NVB provides a practical solution in an e-learning, on-screen context. This paper presents the design, development, and evaluation of COMPASS, a novel near real-time comprehension classification system for detecting learner comprehension of on-screen information during e-learning activities. COMPASS uses a novel descriptive analysis of learner behaviour, image processing techniques, and artificial neural networks to model and classify authentic comprehension-indicative non-verbal behaviour. This paper presents a study in which 44 undergraduate students answered on-screen multiple-choice questions relating to computer programming. Using a front-facing USB web camera, the behaviour of each learner was recorded during reading and appraisal of on-screen information. The resultant dataset of non-verbal behaviour and question-answer scores was used to train an artificial neural network (ANN) to classify comprehension and non-comprehension states in near real-time. The trained comprehension classifier achieved a normalised classification accuracy of 75.8%.
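
    As an illustrative sketch only (the behavioural feature names and data below are placeholders, not the COMPASS feature set), the classification stage amounts to a small feedforward network over per-question NVB feature vectors:

    # Illustrative sketch: an MLP over per-question non-verbal-behaviour
    # features, in the spirit of COMPASS; all features are hypothetical.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    # e.g., [head_pitch_var, gaze_off_screen_ratio, blink_rate, brow_motion]
    X = rng.normal(size=(440, 4))     # placeholder: 44 learners x 10 questions
    y = rng.integers(0, 2, size=440)  # 1 = comprehension, 0 = non-comprehension

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    ann.fit(X_tr, y_tr)
    print(f"held-out accuracy: {accuracy_score(y_te, ann.predict(X_te)):.2f}")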

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods for real-time, unobtrusive monitoring of human ambulation, activity, and wellness, together with data analysis using various algorithmic techniques, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These studies have produced a large body of literature. This paper presents a holistic articulation of the research and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device frameworks and sensor types; data collection, processing, and analysis; and applications, limitations, and challenges. The aim is to present a systematic and comprehensive study of the literature in the area in order to identify research gaps and prioritize future research directions.

    How does rumination impact cognition? A first mechanistic model.

    Rumination is a process of uncontrolled, narrowly-focused negative thinking that is often self-referential, and that is a hallmark of depression. Despite its importance, little is known about its cognitive mechanisms. Rumination can be thought of as a specific, constrained form of mind-wandering. Here, we introduce a cognitive model of rumination that we developed on the basis of our existing model of mind-wandering. The rumination model implements the hypothesis that rumination is caused by maladaptive habits of thought. These habits of thought are modelled by adjusting the number of memory chunks and their associative structure, which changes the sequence of memories that are retrieved during mind-wandering, such that during rumination the same set of negative memories is retrieved repeatedly. The implementation of habits of thought was guided by empirical data from an experience sampling study in healthy and depressed participants. On the basis of this empirically-derived memory structure, our model naturally predicts the declines in cognitive task performance that are typically observed in depressed patients. This study demonstrates how we can use cognitive models to better understand the cognitive mechanisms underlying rumination and depression.
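
    The core mechanism can be caricatured in a few lines: if negative memory chunks are much more strongly associated with each other than with neutral ones, a retrieval process that follows associative links keeps revisiting the same negative set. The toy simulation below (a deliberately simplified stand-in with an assumed association matrix, not the authors' model, which builds on an existing mind-wandering model) makes that point with a random walk:

    # Toy illustration: memory retrieval as a walk over associative links.
    # Negative chunks link strongly to each other, so the walk gets trapped
    # among them -- the "maladaptive habits of thought" idea in miniature.
    import numpy as np

    rng = np.random.default_rng(2)
    chunks = ["neg_A", "neg_B", "neg_C", "neutral_D", "neutral_E"]

    # Row-stochastic association matrix; negative chunks are tightly interlinked.
    P = np.array([
        [0.05, 0.45, 0.45, 0.025, 0.025],
        [0.45, 0.05, 0.45, 0.025, 0.025],
        [0.45, 0.45, 0.05, 0.025, 0.025],
        [0.20, 0.20, 0.20, 0.10,  0.30],
        [0.20, 0.20, 0.20, 0.30,  0.10],
    ])

    state, visits = 3, np.zeros(len(chunks))  # start from a neutral memory
    for _ in range(1000):
        state = rng.choice(len(chunks), p=P[state])
        visits[state] += 1

    for name, v in zip(chunks, visits):
        print(f"{name}: retrieved {int(v)} times")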

    Multi-modal post-editing of machine translation

    As MT quality continues to improve, more and more translators are switching from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome, while hard-to-spot errors make PE a cognitively demanding activity. Our contribution is three-fold. First, we explore whether interaction modalities other than mouse and keyboard can effectively support PE by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when high cognitive load (CL) is detected during PE. We therefore developed a sensor framework using a wide range of physiological and behavioural data to estimate perceived CL and tested it in three studies, showing that multi-modal eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors.
    Deutsche Forschungsgemeinschaft (DFG), project MMP
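
    For the third contribution, a multi-encoder APE model can be sketched roughly as follows (an assumed simplification, not the paper's exact architecture; the class name is hypothetical): one Transformer encoder reads the source sentence, a second reads the MT output, and the decoder attends over both encoded sequences.

    # Hedged sketch of a dual-encoder Transformer for APE. The decoder sees
    # the concatenation of both encoder memories; causal masking and the
    # training loop are omitted for brevity.
    import torch
    import torch.nn as nn

    class DualEncoderAPE(nn.Module):  # hypothetical name
        def __init__(self, vocab_size: int, d_model: int = 256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)  # shared embedding
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            # nn.TransformerEncoder deep-copies the layer, so the two encoders
            # below have independent parameters.
            self.src_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.mt_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, src_ids, mt_ids, pe_ids):
            src_mem = self.src_encoder(self.embed(src_ids))
            mt_mem = self.mt_encoder(self.embed(mt_ids))
            memory = torch.cat([src_mem, mt_mem], dim=1)  # joint context
            dec = self.decoder(self.embed(pe_ids), memory)
            return self.out(dec)  # logits over post-edited target tokens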