5 research outputs found

    Displaying Teacher's Gaze in a MOOC: Effects on Students' Video Navigation Patterns

    We present an eye-tracking study in which we augment a Massive Open Online Course (MOOC) video with the gaze information of the teacher. We tracked the gaze of a teacher while he was recording the content for a MOOC lecture. Our working hypothesis is that displaying the teacher's gaze will act as a cue at crucial moments of dyadic conversation (here, the teacher-student dyad), such as reference disambiguation. We collected data about students' video interaction behaviour within a MOOC. The results show that displaying the teacher's gaze made the content easier for students to follow, even when complex visual stimuli were present in the video lecture.

    THE USE OF CONTEXTUAL CLUES IN REDUCING FALSE POSITIVES IN AN EFFICIENT VISION-BASED HEAD GESTURE RECOGNITION SYSTEM

    This thesis explores head gesture recognition as an intuitive interface for computer interaction. It presents a novel vision-based head gesture recognition system that uses contextual clues to reduce false positives, applied as a computer interface for answering dialog boxes. This work seeks to validate similar research, but focuses on more efficient techniques running on everyday hardware. A survey of image processing techniques for recognizing and tracking facial features is presented, along with a comparison of several methods for tracking and identifying gestures over time. The design describes an efficient, reusable head gesture recognition system built on lightweight algorithms to minimize resource utilization. The research conducted consists of a comparison between the base gesture recognition system and an optimized system that uses contextual clues to reduce false positives. The results confirm that simple contextual clues can lead to a significant reduction of false positives: the system achieves an overall accuracy of 96% when contextual clues are used. In addition, the results of a usability study show that head gesture recognition is considered an intuitive interface and is preferred over conventional input for answering dialog boxes. By providing the detailed design and architecture of a head gesture recognition system using efficient techniques and simple hardware, this thesis demonstrates the feasibility of implementing head gesture recognition as an intuitive form of interaction using preexisting infrastructure, and provides evidence that such a system is desirable.
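
    The core idea of the abstract above, gating gesture detections on context so that gestures are only acted upon when they are plausible answers, can be sketched as follows. This is an illustrative sketch, not the thesis's actual implementation; the names, the confidence threshold, and the dialog-box flag are assumptions for illustration.

    ```python
    # Illustrative sketch (not the thesis's implementation): use a contextual
    # clue -- whether a dialog box is currently awaiting input -- to gate
    # head-gesture detections and suppress false positives.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        gesture: str       # "nod" or "shake", as reported by the vision front end
        confidence: float  # detector confidence in [0, 1]

    def accept(detection: Detection, dialog_open: bool,
               threshold: float = 0.6) -> bool:
        """Accept a gesture only when the context makes it plausible.

        Outside of an open dialog box there is no question to answer, so any
        detected nod/shake is treated as a false positive and dropped.
        """
        if not dialog_open:
            return False
        return detection.confidence >= threshold

    # A confident nod seen while no dialog is open is rejected...
    print(accept(Detection("nod", 0.9), dialog_open=False))  # False
    # ...while the same nod during an open dialog is accepted.
    print(accept(Detection("nod", 0.9), dialog_open=True))   # True
    ```

    The gating step is deliberately cheap (a boolean check before any further processing), which fits the abstract's emphasis on lightweight algorithms and everyday hardware.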

    The Distributed Driver Interaction Space (Der verteilte Fahrerinteraktionsraum)

    Historically, driving-related and entertainment-related information has been spatially separated in the vehicle interior: displays necessary for the driving task sit directly in front of the driver (instrument cluster and head-up display), while the content of the driver information system sits in the center console (central information display). This strict separation is currently dissolving; for example, subsets of infotainment content can now be retrieved and operated from the instrument cluster. To allow the driver to handle the growing amount of infotainment content safely, to reduce the complexity of the driver interaction space, and to increase customer value, this thesis considers the currently isolated displays holistically and re-examines the limits of today's strict information distribution. It lays foundations for the traffic-appropriate operation and presentation of distributed information depending on the display surface used, develops concepts for user-initiated individualization, and evaluates the interplay of different display surfaces. The studies conducted in this work show that a spatially distributed driver interaction space makes operating the driver information system safer and more attractive for the user.

    Gaze Analysis methods for Learning Analytics

    Eye-tracking has been shown to be predictive of expertise, task-based success, task difficulty, and the strategies involved in problem solving, in both individual and collaborative settings. In learning analytics, eye-tracking can therefore be a powerful tool, not only to differentiate between levels of expertise and task outcome, but also to give constructive feedback to users. In this dissertation, we show how eye-tracking can help us understand the cognitive processes underlying dyadic interaction in two contexts: pair program comprehension and learning with a Massive Open Online Course (MOOC). The first context is a typical collaborative work scenario, while the second is a special case of dyadic interaction, namely the teacher-student pair. Using one example experiment, we demonstrate how findings about the relation between the learning outcome in MOOCs and the students' gaze patterns can be leveraged to design a feedback tool that improves the students' learning outcome and their attention levels while learning through a MOOC video. We also show that gaze can be used as a cue to resolve the teacher's verbal references in a MOOC video, and in this way improve the learning experience of MOOC students. This thesis comprises five studies. The first study is contextualised within a collaborative setting in which the collaborating partners tried to understand a given program; we examine the relationship among the partners' gaze patterns, their dialogues, and the level of understanding the pair attained at the end of the task. The next four studies are contextualised within the MOOC environment. The first MOOC study explores the relationship between the students' performance and their attention level. The second MOOC study, a dual eye-tracking study, examines the relation between individual and collaborative gaze patterns and the learning outcome. 
    This study also explores the idea of activating students' knowledge prior to receiving any learning material, and the effect of different ways of activating that knowledge on the students' gaze patterns and learning outcomes. In the third MOOC study, we designed a feedback tool based on the results of the first two MOOC studies and demonstrate that the variables we proposed for measuring the students' attention can be leveraged to provide feedback about their gaze patterns; using this feedback tool improves the students' learning outcome and their attention levels. The fourth and final MOOC study shows that augmenting a MOOC video with the teacher's gaze information helps improve the students' learning experience: when the teacher's gaze is displayed, the perceived difficulty of the content decreases significantly compared to moments without gaze augmentation. In a nutshell, this dissertation shows that gaze can be used to understand, support, and improve dyadic interaction, in order to increase the chances of achieving a higher level of task-based success.
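
    The kind of gaze-based attention measure a feedback tool like the one described above could build on can be sketched as the fraction of gaze samples landing inside the lecture video's area of interest (AOI). The function names, the rectangular AOI, and the 0.7 threshold below are illustrative assumptions, not the study's actual variables or values.

    ```python
    # Hypothetical sketch of a gaze-based attention measure: the proportion
    # of gaze samples that fall inside a rectangular area of interest (AOI),
    # with feedback triggered when the measure drops below a threshold.
    # All names and the 0.7 threshold are assumptions for illustration.

    def attention_level(gaze_points, aoi):
        """Proportion of gaze samples falling inside a rectangular AOI.

        gaze_points: iterable of (x, y) screen coordinates
        aoi: (x_min, y_min, x_max, y_max)
        """
        x0, y0, x1, y1 = aoi
        points = list(gaze_points)
        if not points:
            return 0.0
        hits = sum(1 for x, y in points if x0 <= x <= x1 and y0 <= y <= y1)
        return hits / len(points)

    def needs_feedback(gaze_points, aoi, threshold=0.7):
        """Flag the student for an attention prompt when the measure drops."""
        return attention_level(gaze_points, aoi) < threshold

    video_aoi = (0, 0, 1280, 720)                         # video occupies this region
    samples = [(100, 100), (600, 300), (1500, 900), (640, 360)]
    print(attention_level(samples, video_aoi))  # 0.75 (3 of 4 samples in the AOI)
    print(needs_feedback(samples, video_aoi))   # False (0.75 >= 0.7)
    ```

    In practice such a measure would be computed over a sliding time window of eye-tracker samples, so that feedback reflects the student's recent rather than cumulative attention.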

    Gaze-based infotainment agents
