318 research outputs found
Continuous Stress Monitoring under Varied Demands Using Unobtrusive Devices
This research aims to identify a feasible model to predict a learner's stress in an online learning platform. It is desirable to produce a cost-effective, unobtrusive and objective method to measure a learner's emotions. The few signals produced by the mouse and keyboard could enable such a solution for measuring real-world individuals' affective states. It is also important to ensure that the measurement can be applied regardless of the type of task carried out by the user. This preliminary research proposes a stress classification method using mouse and keystroke dynamics to classify the stress levels of 190 university students performing three different e-learning activities. The results show that the stress measurement based on mouse and keystroke dynamics is consistent with the stress measurement based on changes in the time spent between two consecutive questions. A feedforward back-propagation neural network achieves the best classification performance.
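The classifier described above can be sketched as a small feedforward back-propagation network over two assumed keystroke features (mean key dwell time and key-to-key flight time). The synthetic data, the assumed effect of stress on the features, and the network shape are illustrative only, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic keystroke features: [mean dwell time, mean flight time] in
# seconds. The assumption that stress shortens dwell and lengthens flight
# is for illustration only.
calm = rng.normal([0.12, 0.25], 0.02, size=(100, 2))
stressed = rng.normal([0.08, 0.40], 0.02, size=(100, 2))
X = np.vstack([calm, stressed])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise features
y = np.array([0] * 100 + [1] * 100)        # 0 = calm, 1 = stressed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain back-propagation.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    p = sigmoid(h @ W2 + b2).ravel()       # predicted stress probability
    g_out = ((p - y) / len(X))[:, None]    # dLoss/dlogit (cross-entropy)
    gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= 0.5 * grad                # gradient-descent update

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = float((pred == y).mean())
```

On this well-separated toy data the network converges easily; the point of the sketch is the training loop, not the accuracy figure.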
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
Automatic Sensor-free Affect Detection: A Systematic Literature Review
Emotions and other affective states play a pivotal role in cognition and,
consequently, the learning process. It is well-established that computer-based
learning environments (CBLEs) that can detect and adapt to students' affective
states can enhance learning outcomes. However, practical constraints often pose
challenges to the deployment of sensor-based affect detection in CBLEs,
particularly for large-scale or long-term applications. As a result,
sensor-free affect detection, which exclusively relies on logs of students'
interactions with CBLEs, emerges as a compelling alternative. This paper
provides a comprehensive literature review on sensor-free affect detection. It
delves into the most frequently identified affective states, the methodologies
and techniques employed for sensor development, the defining attributes of
CBLEs and data samples, as well as key research trends. Despite the field's
evident maturity, demonstrated by the consistent performance of the models and
the application of advanced machine learning techniques, there is ample scope
for future research. Potential areas for further exploration include enhancing
the performance of sensor-free detection models, amassing more samples of
underrepresented emotions, and identifying additional emotions. There is also a
need to refine model development practices and methods. This could involve
comparing the accuracy of various data collection techniques, determining the
optimal granularity of duration, establishing a shared database of action logs
and emotion labels, and making the source code of these models publicly
accessible. Future research should also prioritize the integration of models
into CBLEs for real-time detection, the provision of meaningful interventions
based on detected emotions, and a deeper understanding of the impact of
emotions on learning.
Coding vs presenting: a multicultural study on emotions
Purpose: The purpose of this paper is to explore and compare the emotions that software students perceive while coding and presenting, comparing three different countries and also performing a gender analysis. Design/methodology/approach: Empirical data are gathered by means of the discrete emotions questionnaire, which was distributed to a group of students (n = 174) in three different countries: Norway, Spain and Turkey. All emotions are self-assessed by means of a Likert scale. Findings: The results show that the two tasks are emotionally different for the subjects in all countries: presentation is described as a task that produces mainly fear and anxiety, whereas coding tasks produce anger and rage, but also happiness and satisfaction. With regard to gender differences, men feel less scared in presentation tasks, whereas women report more desire in coding activities. It is concluded that it is important to be aware of, and take into account, the different emotions perceived by students in their activities. It is also important to note the different intensities of these emotions across cultures and genders. Originality/value: This study is among the few to study emotions perceived in software work by means of a multicultural approach using quantitative research methods. The research results enrich computing literacy theory in human factors.
NASA/ASEE Summer Faculty Fellowship Program, 1990, Volume 1
The 1990 Johnson Space Center (JSC) NASA/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program was conducted by the University of Houston-University Park and JSC. A compilation of the final reports on the research projects is presented. The topics covered include: the Space Station; the Space Shuttle; exobiology; cell biology; culture techniques; control systems design; laser induced fluorescence; spacecraft reliability analysis; reduced gravity; biotechnology; microgravity applications; regenerative life support systems; imaging techniques; cardiovascular system; physiological effects; extravehicular mobility units; mathematical models; bioreactors; computerized simulation; microgravity simulation; and dynamic structural analysis.
Detecting students-at-risk in computer programming classes with learning analytics from students' digital footprints
Different sources of data about students, ranging from static demographics to dynamic behaviour logs, can be harnessed from a variety of sources at Higher Education Institutions. Combining these assembles a rich digital footprint for each student, which can enable institutions to better understand student behaviour and to better prepare for guiding students towards reaching their academic potential. This paper presents a new research methodology to automatically detect students "at-risk" of failing an assignment in computer programming modules (courses) and to simultaneously support adaptive feedback. By leveraging historical student data, we built predictive models using students' offline (static) information, including student characteristics and demographics, and online (dynamic) resources, using programming and behaviour activity logs. Predictions are generated weekly during the semester. Overall, the predictive and personalised feedback helped to reduce the gap between the lower- and higher-performing students. Furthermore, students praised the prediction and the personalised feedback, conveying strong recommendations for future students to use the system. We also found that students who followed their personalised guidance and recommendations performed better in examinations.
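As a minimal sketch of the weekly prediction step, the logistic scoring below combines one static feature with two weekly activity features. The feature names, weights, and threshold are hand-set assumptions for illustration; the paper's models are learned from historical data.

```python
import math

# Hypothetical weekly at-risk scoring: one static feature (prior GPA) and
# two dynamic features (labs submitted, logins this week). Weights and
# intercept are illustrative hand-set values, not learned coefficients.
WEIGHTS = {"prior_gpa": -1.2, "labs_submitted": -0.8, "logins": -0.3}
BIAS = 3.0  # assumed intercept

def risk_probability(features: dict) -> float:
    """Logistic model: higher score -> higher predicted risk of failing."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def weekly_report(students: dict, threshold: float = 0.5) -> list:
    """Return the names of students flagged as at-risk this week."""
    return [name for name, feats in students.items()
            if risk_probability(feats) >= threshold]

students = {
    "alice": {"prior_gpa": 3.5, "labs_submitted": 3, "logins": 5},
    "bob":   {"prior_gpa": 2.0, "labs_submitted": 0, "logins": 1},
}
flagged = weekly_report(students)  # only "bob" crosses the threshold
```

Re-running this each week with fresh activity logs mirrors the paper's weekly prediction cycle; the flagged list is what would drive the personalised feedback.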
MODEL-BASED ASSESSMENT OF ADAPTIVE AUTOMATION'S UNINTENDED CONSEQUENCES
Recent technological advances require the development of human-centered principles for their inclusion in complex systems. While such programs incorporate revolutionary hardware and software advances, there is a necessary space for including human operator design considerations, such as cognitive workload. As technologies mature, it is essential to understand the impacts that these emerging systems will have on cognitive workload. Adaptive automation is a solution that seeks to keep cognitive workload at optimal levels. Human performance modeling shows potential for modeling the effects of adaptive automation on cognitive workload. However, the introduction of adaptive automation into a system can also present unintended negative consequences to an operator. This dissertation investigated potential negative unintended consequences of adaptive automation through the development of human performance models of a multi-tasking simulation. One hundred twenty participants were enrolled in three human-in-the-loop experimental studies (forty participants each) that collected objective and subjective surrogate measures of cognitive workload to validate the models. Results from this research indicate that there are residual increases in operator workload after transitions between manual and automatic control of a task, which need to be included in human performance models and in system design considerations.
Approved for public release. Distribution is unlimited.
Lieutenant Colonel, United States Army
Commanding Officer, U.S. Army Combat Capabilities Development Command, Aviation and Missile Center Agency, Redstone Arsenal, Alabama 35898-500
Automatically Detecting Confusion and Conflict During Collaborative Learning Using Linguistic, Prosodic, and Facial Cues
During collaborative learning, confusion and conflict emerge naturally.
However, persistent confusion or conflict has the potential to generate
frustration and significantly impede learners' performance. Early automatic
detection of confusion and conflict would allow us to support early
interventions which can in turn improve students' experience with and outcomes
from collaborative learning. Despite the extensive studies modeling confusion
during solo learning, there is a need for further work in collaborative
learning. This paper presents a multimodal machine-learning framework that
automatically detects confusion and conflict during collaborative learning. We
used data from 38 elementary school learners who collaborated on a series of
programming tasks in classrooms. We trained deep multimodal learning models to
detect confusion and conflict using features that were automatically extracted
from learners' collaborative dialogues, including (1) language-derived features
including TF-IDF, lexical semantics, and sentiment, (2) audio-derived features
including acoustic-prosodic features, and (3) video-derived features including
eye gaze, head pose, and facial expressions. Our results show that multimodal
models that combine semantics, pitch, and facial expressions detected confusion
and conflict with the highest accuracy, outperforming all unimodal models. We
also found that prosodic cues are more predictive of conflict, and facial cues
are more predictive of confusion. This study contributes to the automated
modeling of collaborative learning processes and the development of real-time
adaptive support to enhance learners' collaborative learning experience in
classroom contexts.
Comment: 27 pages, 7 figures, 7 tables
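The modality combination described above can be illustrated with a simple late-fusion rule. The paper trains deep multimodal models, so the fixed weights and per-modality probabilities below are purely hypothetical stand-ins for the language, prosody, and facial sub-models.

```python
def fuse(p_text: float, p_audio: float, p_video: float,
         weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Late fusion: weighted average of per-modality probabilities
    (e.g. confusion scores from language, prosody, and facial models).
    The weights are illustrative, not learned."""
    return weights[0] * p_text + weights[1] * p_audio + weights[2] * p_video

def flag_segment(p_text: float, p_audio: float, p_video: float,
                 threshold: float = 0.5) -> bool:
    """Flag a dialogue segment when the fused score crosses the threshold."""
    return fuse(p_text, p_audio, p_video) >= threshold

# A segment with high text and moderate audio/video evidence is flagged.
result = flag_segment(0.9, 0.6, 0.7)
```

In this toy scheme, the paper's finding that prosodic cues are more predictive of conflict would correspond to giving the audio modality a larger weight for that label.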
- …