
    Learning Opportunities and Challenges of Sensor-enabled Intelligent Tutoring Systems on Mobile Platforms: Benchmarking the Reliability of Mobile Sensors to Track Human Physiological Signals and Behaviors to Enhance Tablet-Based Intelligent Tutoring Systems

    Desktop-based intelligent tutoring systems have existed for decades, but advances in mobile computing have sparked interest in developing mobile intelligent tutoring systems (mITS). Personalized mITS apply not only to stand-alone and client-server systems but also to cloud systems, possibly leveraging big data. Device-based sensors enable even greater personalization through the capture of physiological signals while a student studies. However, personalizing mITS to individual students faces challenges: the Achilles' heel of personalization is whether these sensors can feasibly and reliably capture physiological signals and behavioral measures. This research reviews the feasibility and benchmarks the reliability of basic mobile-platform sensors in various student postures. The research software and methodology generalize to a range of platforms and sensors. Incorporating the tile-based puzzle game 2048 as a substitute for a knowledge domain also enables a broad spectrum of test populations. Baseline sensors include the on-board camera, used to detect eyes and faces, and the Bluetooth Empatica E4 wristband, used to capture heart rate, electrodermal activity (EDA), and skin temperature. The test population comprised 100 collegiate students randomly assigned to one of three ergonomic positions in a classroom: sitting at a table, standing at a counter, or reclining on a sofa. Well received by the students, EDA proved more reliable than heart rate or face detection across the three ergonomic positions. Additional insights are provided on advancing learning personalization through future sensor feasibility and reliability studies.
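    The kind of per-posture sensor benchmarking this abstract describes can be illustrated with a minimal sketch (hypothetical function and data names, not the authors' actual software): given a validity flag for each sensor reading in each sampling window, compute the fraction of valid windows per posture and sensor.

```python
# Illustrative sketch (not the study's software): score sensor reliability
# as the fraction of sampling windows with a valid reading, per posture.
from collections import defaultdict

def sensor_reliability(samples):
    """samples: iterable of (posture, sensor, valid) tuples, one per window.
    Returns {posture: {sensor: fraction_of_valid_windows}}."""
    counts = defaultdict(lambda: [0, 0])  # (posture, sensor) -> [valid, total]
    for posture, sensor, valid in samples:
        counts[(posture, sensor)][1] += 1
        if valid:
            counts[(posture, sensor)][0] += 1
    out = defaultdict(dict)
    for (posture, sensor), (good, total) in counts.items():
        out[posture][sensor] = good / total
    return dict(out)

# Hypothetical data: EDA valid in all 3 windows while sitting,
# face detection valid in only 1 of 3.
demo = [
    ("sitting", "EDA", True), ("sitting", "EDA", True), ("sitting", "EDA", True),
    ("sitting", "face", True), ("sitting", "face", False), ("sitting", "face", False),
]
print(sensor_reliability(demo))
```

    A real study would feed this from timestamped sensor logs; the aggregation itself is the same regardless of which sensors are compared.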

    The Language of Learning in Family and Consumer Sciences: English Language Learners in Career Technical Education

    Family and Consumer Sciences (FCS) content and English as a Second Language (ESL) strategies can be organically combined to create a successful education for an English Language Learner (ELL). The first objective of this research project is to discover how prepared Family and Consumer Sciences teachers feel to work with English Language Learners in the classroom. The second objective is to identify practical and effective methods and strategies that are useful for Family and Consumer Sciences teachers instructing English Language Learners. The rationale for this project is that by identifying the challenges English Language Learners face in education, teachers in this field can better address the needs of these students with proven methods. A three-part approach was taken to gather insight from all relevant stakeholders. This paper examines research gained from a forced-choice survey of Family and Consumer Sciences teachers across the country, along with field observations of both high school and college English Language Learners in the Midwest. The purpose of the survey is to gather the perspective of the educator, specifically within Family and Consumer Sciences. The first level of field observation studies the struggles of English Language Learners in the high school setting over a semester of classes. The second level is a one-day lesson taught to college-age English Language Learners, as a way of understanding the difficulties shared by instructor and student. This study concludes that current research at the intersection of Family and Consumer Sciences and English as a Second Language is scarce, that teachers need better support in their endeavor to instruct English Language Learners in life skills, and that educators should embrace students' cultural diversity and use multiple teaching styles.

    E-Learning

    Technology development, mainly in telecommunications and computer systems, was a key factor in interactivity and, thus, in the expansion of e-learning. This book is divided into two parts, presenting proposals for dealing with e-learning challenges, opening up a way of learning about and discussing new methodologies to increase the interaction level of classes, and implementing technical tools that help students make better use of e-learning resources. In the first part, the reader will find chapters on the infrastructure required for e-learning models and processes, organizational practices, suggestions, implementations of methods for assessing results, and case studies focused on pedagogical aspects that can be applied generically in different environments. The second part covers tools that users can adopt, such as graphical tools for engineering, mobile phone networks, and techniques for building robots, among others. Moreover, part two includes chapters dedicated specifically to e-learning areas such as engineering and architecture.

    I Probe, Therefore I Am: Designing a Virtual Journalist with Human Emotions

    By utilizing different communication channels, such as verbal language, gestures, or facial expressions, virtually embodied interactive humans hold a unique potential to bridge the gap between human-computer interaction and actual interhuman communication. The use of virtual humans is consequently becoming increasingly popular in a wide range of areas where such natural communication might be beneficial, including entertainment, education, mental health research, and beyond. Behind this development lies a series of technological advances in a multitude of disciplines, most notably natural language processing, computer vision, and speech synthesis. In this paper we discuss the Virtual Human Journalist, a project employing a number of novel solutions from these disciplines with the goal of demonstrating their viability by producing a humanoid conversational agent capable of naturally eliciting and reacting to information from a human user. A set of qualitative and quantitative evaluation sessions demonstrated the technical feasibility of the system while uncovering a number of deficits in its capacity to engage users in a way that would be perceived as natural and emotionally engaging. We argue that naturalness should not always be seen as a desirable goal and suggest that deliberately suppressing the naturalness of virtual human interactions, such as by altering the agent's personality cues, might in some cases yield more desirable results.
    Comment: eNTERFACE16 proceedings

    THE HELPING TEACHER/CRISIS TEACHER CONCEPT


    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study expression generation per se, but to employ it in an application or research project. What is then needed is an easy-to-use, flexible, and validated mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of virtual character (VC) distance (z-coordinate), the VC's face morphology (male vs. female), lateral versus frontal presentation of the expression, and the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and the confusion details of the basic emotions are compatible with findings in psychology.
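    The Facial Action Coding idea behind this mechanism can be sketched briefly: an expression is a set of Action Unit (AU) intensities, and a blend emotion such as "enthusiastic" can be modeled as a weighted mix of two basic expressions. The AU sets and weights below are illustrative assumptions, not the report's validated parameters.

```python
# Illustrative FACS-style blending: expressions are dicts mapping Action
# Unit numbers to intensities in [0, 1]; a blend is a linear mix of two
# basic expressions. AU choices here are simplified examples.
JOY = {6: 1.0, 12: 1.0}               # AU6 cheek raiser, AU12 lip corner puller
SURPRISE = {1: 1.0, 2: 1.0, 26: 0.8}  # AU1/AU2 brow raisers, AU26 jaw drop

def blend(expr_a, expr_b, w=0.5):
    """Linear blend of two AU-intensity dicts; w weights expr_a."""
    aus = set(expr_a) | set(expr_b)
    return {au: w * expr_a.get(au, 0.0) + (1 - w) * expr_b.get(au, 0.0)
            for au in aus}

# A hypothetical joy-surprise mix standing in for "enthusiastic".
enthusiastic = blend(JOY, SURPRISE, w=0.6)
print(enthusiastic)
```

    A rendering layer would then drive the character's facial rig from these AU intensities, animating them over time for dynamic expressions.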

    iFocus: A Framework for Non-intrusive Assessment of Student Attention Level in Classrooms

    The process of learning is determined not merely by what the instructor teaches, but also by how the student receives that information. An attentive student will naturally be more open to obtaining knowledge than a bored or frustrated student. In recent years, tools such as skin temperature measurements and body posture calculations have been developed for determining a student's affect, or emotional state of mind. Measuring eye-gaze data is particularly noteworthy, however, in that the measurements can be collected non-intrusively with equipment that is relatively simple to set up and use. This paper details how data obtained from such an eye tracker can be used to predict a student's attention, as a measure of affect, over the course of a class. From this research, an accuracy of 77% was achieved using the Extreme Gradient Boosting machine learning technique. The outcome indicates that eye gaze can indeed be used as a basis for constructing a predictive model.
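    A pipeline like the one described typically extracts summary features from raw gaze samples before classification. The sketch below shows two such features (gaze dispersion and off-screen ratio) with a simple threshold rule standing in for the paper's trained XGBoost model; the feature names, thresholds, and screen size are illustrative assumptions.

```python
# Illustrative gaze-feature extraction; the paper trains an Extreme
# Gradient Boosting classifier on features like these, replaced here
# by a hand-set threshold rule purely for demonstration.
import math

def gaze_features(points, screen=(1920, 1080)):
    """points: list of (x, y) gaze coordinates from one time window."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    dispersion = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    off_screen = sum(
        1 for x, y in points
        if not (0 <= x <= screen[0] and 0 <= y <= screen[1])
    ) / n
    return {"dispersion": dispersion, "off_screen_ratio": off_screen}

def is_attentive(points, max_dispersion=200.0, max_off=0.2):
    f = gaze_features(points)
    return f["dispersion"] <= max_dispersion and f["off_screen_ratio"] <= max_off

# A tight on-screen cluster of gaze points, suggesting focused attention.
focused = [(960 + i % 5, 540 + i % 3) for i in range(50)]
print(is_attentive(focused))
```

    In the actual system, windows of such feature vectors, labeled with observed attention levels, would form the training set for the gradient-boosted classifier.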

    Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems

    Learning involves a substantial range of cognitive, social, and emotional states. Recognizing and understanding these states in the context of learning is therefore key to designing informed interventions and addressing the needs of the individual student through personalized education. In this paper, we explore the automatic detection of a learner's nonverbal behaviors, involving hand-over-face gestures, head and eye movements, and emotions expressed via facial expressions, during learning. The proposed computer-vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over time in a 40-minute classroom session involving reading and problem-solving exercises. The exercises are divided into three categories, an easy, a medium, and a difficult topic, within the context of undergraduate computer science. We found a significant increase in head and eye movements as time progresses, as well as with increasing difficulty level. We demonstrated a considerable occurrence of hand-over-face gestures (on average 21.35%) during the 40-minute session, a behavior so far unexplored in the education domain. We propose a novel deep learning approach for automatic detection of hand-over-face gestures in images, with a classification accuracy of 86.87%. Hand-over-face gestures increase prominently as the difficulty level of the given exercise increases, and they occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading (easy 16.20%, medium 20.06%, difficult 20.18%).
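    The per-frame decision underlying such occurrence statistics can be illustrated geometrically: flag a frame as hand-over-face when a detected hand bounding box covers enough of the face bounding box. This is a hypothetical stand-in for the paper's deep learning detector, with illustrative box formats and thresholds.

```python
# Illustrative geometric check (a stand-in for the paper's deep-learning
# detector): a frame counts as hand-over-face when the hand bounding box
# covers at least a minimum fraction of the face box. Boxes are
# (x, y, width, height) in pixels.
def overlap_area(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0, w) * max(0, h)

def hand_over_face(face_box, hand_box, min_fraction=0.1):
    """True if the hand covers at least min_fraction of the face box."""
    face_area = face_box[2] * face_box[3]
    return overlap_area(face_box, hand_box) / face_area >= min_fraction

face = (100, 100, 80, 80)
print(hand_over_face(face, (150, 150, 60, 60)))  # hand partially over face
```

    Averaging this per-frame flag over all frames of an exercise yields occurrence percentages of the kind reported above, broken down by exercise type and difficulty.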