4,743 research outputs found

    Unobtrusive Assessment Of Student Engagement Levels In Online Classroom Environment Using Emotion Analysis

    Get PDF
    Measuring student engagement has emerged as a significant factor in the learning process and a good indicator of a student's capacity for knowledge retention. As synchronous online classes have become more prevalent in recent years, gauging a student's attention level is increasingly critical for validating the progress of every student in an online classroom environment. This paper details a study on profiling student attentiveness across different gradients of engagement level using multiple machine learning models. Results from the high-accuracy model and the confidence scores obtained from the cloud-based computer vision platform Amazon Rekognition were then used to statistically test for correlation between student attentiveness and emotions. This statistical analysis helps identify the emotions that are significant in gauging various engagement levels. The study identified emotions such as calm, happiness, surprise, and fear as critical in gauging a student's attention level. These findings support earlier detection of students with lower attention levels, helping instructors focus their support and guidance on the students in need and leading to a better online learning environment.
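    A minimal sketch (not the authors' pipeline) of how per-frame emotion confidence scores can be pulled from Amazon Rekognition, the service named above; the frame path and any downstream correlation analysis are assumptions:

```python
# Sketch: query Amazon Rekognition for per-face emotion confidences on one frame.
# Assumes boto3 credentials are configured and "frame.jpg" is a captured webcam frame.
import boto3

rekognition = boto3.client("rekognition")

with open("frame.jpg", "rb") as f:
    response = rekognition.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],   # request emotions, pose, eye state, etc.
    )

for face in response["FaceDetails"]:
    # Each detected face carries a list of emotions with confidence scores (0-100),
    # including CALM, HAPPY, SURPRISED, and FEAR as highlighted in the abstract.
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
```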

    A review on data fusion in multimodal learning analytics and educational data mining

    Get PDF
    New educational models such as smart learning environments make use of digital and context-aware devices to facilitate the learning process. In this new educational scenario, a huge quantity of multimodal student data from a variety of different sources can be captured, fused, and analyzed. This offers researchers and educators a unique opportunity to discover new knowledge, better understand the learning process, and intervene if necessary. However, data fusion approaches and techniques must be applied correctly in order to combine the various sources of multimodal learning analytics (MLA). These sources or modalities in MLA include audio, video, electrodermal activity data, eye tracking, user logs, and click-stream data, but also learning artifacts and more natural human signals such as gestures, gaze, speech, or writing. This survey introduces data fusion in learning analytics (LA) and educational data mining (EDM) and how these data fusion techniques have been applied in smart learning. It shows the current state of the art by reviewing the main publications, the main types of fused educational data, and the data fusion approaches and techniques used in EDM/LA, as well as the main open problems, trends, and challenges in this specific research area.
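    As an illustration of one family of techniques covered by such surveys, a minimal sketch of feature-level (early) fusion, where per-student vectors from several modalities are concatenated before training a single model; all arrays and labels below are placeholders:

```python
# Feature-level (early) fusion sketch: concatenate per-student feature vectors from
# several modalities and train one classifier on the fused representation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_students = 40
video_features = rng.random((n_students, 16))   # e.g., gaze/pose statistics
audio_features = rng.random((n_students, 8))    # e.g., speaking-time features
log_features = rng.random((n_students, 5))      # e.g., click-stream counts
labels = rng.integers(0, 2, n_students)         # e.g., engaged vs. disengaged

fused = np.hstack([video_features, audio_features, log_features])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
print(clf.score(fused, labels))
```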

    Affect-driven Engagement Measurement from Videos

    Full text link
    In education and intervention programs, a person's engagement has been identified as a major factor in successful program completion. Automatic measurement of engagement provides useful information for instructors to meet program objectives and individualize program delivery. In this paper, we present a novel approach for video-based engagement measurement in virtual learning programs. We propose to use affect states, continuous values of valence and arousal extracted from consecutive video frames, along with a new latent affective feature vector and behavioral features, for engagement measurement. Deep-learning-based temporal models and traditional machine-learning-based non-temporal models are trained and validated on frame-level and video-level features, respectively. In addition to conventional centralized learning, we also implement the proposed method in a decentralized federated learning setting and study the effect of model personalization on engagement measurement. We evaluated the performance of the proposed method on the only two publicly available video engagement measurement datasets, DAiSEE and EmotiW, containing videos of students in online learning programs. Our experiments show a state-of-the-art engagement-level classification accuracy of 63.3%, with disengagement videos correctly classified, on the DAiSEE dataset, and a regression mean squared error of 0.0673 on the EmotiW dataset. Our ablation study shows the effectiveness of incorporating affect states in engagement measurement. We interpret the findings from the experimental results based on psychology concepts in the field of engagement. (Comment: 13 pages, 8 figures, 7 tables)
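    A minimal sketch of the kind of temporal model described, a recurrent classifier over frame-level affect features (valence, arousal) mapped to engagement levels; the feature extractor, dimensions, and architecture details are assumptions rather than the authors' exact design:

```python
# Sketch: classify a sequence of frame-level affect features into engagement levels.
import torch
import torch.nn as nn

class AffectEngagementLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=64, n_levels=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_levels)    # e.g., 4 engagement levels as in DAiSEE

    def forward(self, x):                          # x: (batch, frames, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                    # logits over engagement levels

model = AffectEngagementLSTM()
dummy_clips = torch.randn(8, 150, 2)               # 8 clips, 150 frames, valence + arousal
print(model(dummy_clips).shape)                    # torch.Size([8, 4])
```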

    Biometric features modeling to measure students engagement.

    Get PDF
    The ability to measure students' engagement in an educational setting may improve student retention and academic success by revealing which students are disinterested, or which segments of a lesson are causing difficulties. This ability would facilitate timely intervention in both the learning and the teaching process in a variety of classroom settings. In this dissertation, an automatic student engagement measure is proposed by investigating three main components of engagement: behavioural engagement, emotional engagement, and cognitive engagement. The main goal of the proposed technology is to provide instructors with a tool that can help them estimate both the average class engagement level and individual engagement levels in real time while they deliver a lecture. Such a system could help instructors take actions to improve students' engagement. It can also be used by the instructor to tailor the presentation of material in class, identify course material that engages or disengages students, and identify students who are engaged or disengaged and at risk of failure. A biometric sensor network (BSN), consisting of individual facial-capture cameras, wall-mounted cameras, and a high-performance computing machine, is designed to capture students' head pose, eye gaze, body pose, body movements, and facial expressions. These low-level features are used to train a machine-learning model to estimate behavioural and emotional engagement in either an e-learning or an in-class environment. A set of experiments is conducted to compare the proposed technology with state-of-the-art frameworks in terms of performance. The proposed framework shows better accuracy in estimating both behavioural and emotional engagement and offers superior flexibility to work in any educational environment. Further, this approach allows quantitative comparison of teaching methods, such as lectures, flipped classrooms, and classroom response systems, so that an objective metric can be used for teaching evaluation with immediate closed-loop feedback to the instructor.
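    A minimal sketch of the real-time aggregation idea described above: per-student engagement probabilities over a sliding window are averaged into a class-level score for the instructor. The feature extraction (head pose, gaze, expressions) and the trained classifier are stand-ins, not the dissertation's actual models:

```python
# Sketch: roll per-student window features up into individual and class-average
# engagement estimates. The classifier and features here are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
model = LogisticRegression().fit(rng.random((50, 6)), rng.integers(0, 2, 50))

def class_engagement(window_features, model):
    """window_features: dict of student_id -> (frames, 6) low-level feature array."""
    per_student = {}
    for student_id, feats in window_features.items():
        summary = feats.mean(axis=0, keepdims=True)            # summarize the window
        per_student[student_id] = float(model.predict_proba(summary)[0, 1])
    class_average = float(np.mean(list(per_student.values())))
    return per_student, class_average

windows = {"s1": rng.random((30, 6)), "s2": rng.random((30, 6))}
print(class_engagement(windows, model))
```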

    Facial Emotion Recognition with Sparse Coding Descriptor

    Get PDF
    With the Corona Virus Disease 2019 (COVID-19) global pandemic ravaging the world, all sectors of life were affected, including education. This led many schools to take up distance learning through the use of computers as a safer option. Facial emotion matters greatly to a teacher's assessment of their own performance and their relationship with students. Researchers have been working on improving face monitoring and the human-machine interface. In this paper we present different face recognition methods, including Principal Component Analysis (PCA), Speeded Up Robust Features (SURF), Local Binary Patterns (LBP), the Gray-Level Co-occurrence Matrix (GLCM), and Group Sparse Coding (GSC), and propose a fusion of LBP, PCA, SURF, and GLCM with GSC. A linear-kernel Support Vector Machine (LSVM) classifier outperformed polynomial-, RBF-, and sigmoid-kernel SVMs in the emotion classification. Experimental results indicate that the new fusion method is capable of differentiating different facial emotions with higher accuracy compared with the state-of-the-art methods currently available.
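    A minimal sketch of one branch of the described pipeline, LBP histograms fed to a linear-kernel SVM; the face crops and labels are placeholders, and the paper's full fusion with PCA, SURF, GLCM, and group sparse coding is not reproduced:

```python
# Sketch: uniform LBP histogram per face crop, classified with a linear-kernel SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
faces = (rng.random((60, 48, 48)) * 255).astype(np.uint8)   # placeholder face crops
labels = rng.integers(0, 7, 60)                             # e.g., 7 basic emotion classes
X = np.array([lbp_histogram(face) for face in faces])

clf = SVC(kernel="linear").fit(X, labels)                   # linear kernel, as favored in the paper
print(clf.score(X, labels))
```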

    A Speaker Diarization System for Studying Peer-Led Team Learning Groups

    Full text link
    Peer-led team learning (PLTL) is a model for teaching STEM courses in which small student groups meet periodically to collaboratively discuss coursework. Automatic analysis of PLTL sessions would help education researchers gain insight into how learning outcomes are affected by individual participation, group behavior, team dynamics, etc. Towards this, speech and language technology can help, and speaker diarization technology lays the foundation for analysis. In this study, a new corpus called CRSS-PLTL is established, containing speech data from 5 PLTL teams over a semester (10 sessions per team, with 5 to 8 participants in each team). In CRSS-PLTL, every participant wears a LENA device (a portable audio recorder) that provides multiple audio recordings of the event. Our proposed solution is unsupervised and contains a new online speaker change detection algorithm, termed the G3 algorithm, used in conjunction with Hausdorff-distance-based clustering to provide improved detection accuracy. Additionally, we exploit cross-channel information to refine our diarization hypothesis. The proposed system provides good improvements in diarization error rate (DER) over the baseline LIUM system. We also present higher-level analyses, such as the number of conversational turns taken in a session and the speaking-time duration (participation) for each speaker. (Comment: 5 pages, 2 figures, 2 tables, Proceedings of INTERSPEECH 2016, San Francisco, US)
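    For intuition only, a heavily simplified diarization sketch: fixed-length segments, MFCC means, and off-the-shelf agglomerative clustering. It does not reproduce the paper's online G3 change detection or Hausdorff-distance clustering, and the file path is a placeholder:

```python
# Naive diarization sketch: cluster per-segment MFCC means into speaker labels.
import numpy as np
import librosa
from sklearn.cluster import AgglomerativeClustering

def naive_diarization(wav_path, n_speakers=6, seg_sec=1.0):
    y, sr = librosa.load(wav_path, sr=16000)
    seg_len = int(seg_sec * sr)
    segments = [y[i:i + seg_len] for i in range(0, len(y) - seg_len, seg_len)]
    # one mean MFCC vector per fixed-length segment
    feats = np.array([librosa.feature.mfcc(y=s, sr=sr, n_mfcc=20).mean(axis=1)
                      for s in segments])
    return AgglomerativeClustering(n_clusters=n_speakers).fit_predict(feats)

# Example (placeholder path): one speaker label per one-second segment.
# print(naive_diarization("pltl_session.wav"))
```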

    Machine Learning Models for Educational Platforms

    Get PDF
    Scaling up education online and onlife presents numerous key challenges, such as hardly manageable classes, overwhelming content alternatives, and academic dishonesty during remote interaction. However, thanks to the wider availability of learning-related data and increasingly high-performance computing, Artificial Intelligence has the potential to turn such challenges into an unparalleled opportunity. One of its sub-fields, Machine Learning, enables machines to receive data and learn for themselves, without being programmed with rules. Bringing this intelligent support to education at large scale has a number of advantages, such as avoiding manual error-prone tasks and reducing the chance of learner misconduct. Planning, collecting, developing, and predicting become essential steps to make it concrete in real-world education. This thesis deals with the design, implementation, and evaluation of Machine Learning models in the context of online educational platforms deployed at large scale. Constructing and assessing the performance of intelligent models is a crucial step towards increasing the reliability and convenience of such an educational medium. The contributions result in large data sets and high-performing models that capitalize on Natural Language Processing, Human Behavior Mining, and Machine Perception. The model decisions aim to support stakeholders across the instructional pipeline, specifically in content categorization, content recommendation, learners' identity verification, and learners' sentiment analysis. Past research in this field often relied on statistical processes hardly applicable at large scale. Through our studies, we explore opportunities and challenges introduced by Machine Learning for the above goals, a relevant and timely topic in the literature. Supported by extensive experiments, our work reveals a clear opportunity in combining human and machine sensing for researchers interested in online education. Our findings illustrate the feasibility of designing and assessing Machine Learning models for categorization, recommendation, authentication, and sentiment prediction in this research area. Our results provide guidelines on model motivation, data collection, model design, and analysis techniques for the above application scenarios. Researchers can use our findings to improve data collection on educational platforms, reduce bias in data and models, increase model effectiveness, and increase the reliability of their models, among other benefits. We expect this thesis to further support the adoption of Machine Learning models in educational platforms, strengthening the role of data as a precious asset. The thesis outputs are publicly available at https://www.mirkomarras.com
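    A minimal sketch of one of the listed tasks, learners' sentiment prediction on short course reviews, using a standard text pipeline; the example reviews and labels are invented, not the thesis data sets:

```python
# Sketch: TF-IDF features plus logistic regression for learner sentiment prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["Great course, very clear lectures", "Too fast and poorly organized",
           "Loved the practical assignments", "The quizzes were confusing"]
sentiment = [1, 0, 1, 0]                      # 1 = positive, 0 = negative (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(reviews, sentiment)
print(clf.predict(["Clear explanations and useful examples"]))
```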

    Depression Estimation Using Audiovisual Features and Fisher Vector Encoding

    Get PDF
    We investigate the use of two visual descriptors, Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) and Dense Trajectories, for depression assessment on the AVEC 2014 challenge dataset. We encode the visual information generated by the two descriptors using Fisher Vector encoding, which has been shown to be one of the best-performing methods for encoding visual data for image classification. We also incorporate audio features in the final system to introduce multiple input modalities. The results produced using linear Support Vector Regression outperform the baseline method [16].
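    A minimal sketch of first-order Fisher Vector encoding over local visual descriptors (such as LBP-TOP or Dense Trajectory features) with a diagonal-covariance GMM, followed by the usual power and L2 normalisation; the descriptors are random placeholders and the second-order terms are omitted:

```python
# Sketch: first-order Fisher Vector encoding of local descriptors with a diagonal GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    q = gmm.predict_proba(descriptors)                          # (N, K) posteriors
    diff = descriptors[:, None, :] - gmm.means_[None, :, :]     # (N, K, D)
    diff /= np.sqrt(gmm.covariances_)[None, :, :]               # diagonal covariances
    fv = (q[:, :, None] * diff).sum(axis=0)                     # gradients wrt the means
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                      # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                    # L2 normalisation

rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(rng.random((500, 32)))
video_descriptors = rng.random((120, 32))                       # one video's descriptors
print(fisher_vector(video_descriptors, gmm).shape)              # (8 * 32,) = (256,)
```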