2 research outputs found

    Internet of Things for Education: Facilitating Personalised Education from a University's Perspective

    Personalised education has been a developmental goal across all levels of the UK education sector for many years. The Higher Education sector in particular has struggled due to a lack of personalisation, as student numbers in lecture theatres have grown significantly, occasionally exceeding three hundred. As a consequence, educators are constantly challenged to gather and understand individual student needs, let alone address them. At the same time, technology has advanced in recent years, particularly in the areas of the Internet of Things (IoT) and big data. IoT technology has emerged as an effective means of collecting data from lecture theatres and labs, while big data technologies enable the processing of these data. Consequently, IoT offers potential solutions to some of the key issues facing the future of personalised education. This paper proposes an IoT system that would enable the personalisation of education for large groups of students in lecture theatres and labs. The proposal is derived from a case study based on work carried out at a mid-sized UK university.
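    The abstract does not include an implementation, but a minimal sketch of the per-seat telemetry collection such a system implies might look as follows; the ingestion endpoint, room/seat fields, and units are hypothetical, not from the paper.

```python
import time

import requests  # pip install requests

# Hypothetical ingestion endpoint for the university's big-data pipeline.
INGEST_URL = "https://iot.example.edu/api/readings"

def publish_reading(seat_id: int, occupied: bool, noise_db: float) -> None:
    """Send one per-seat reading from a lecture-theatre sensor node."""
    payload = {
        "room": "theatre-a",   # hypothetical room identifier
        "seat": seat_id,
        "occupied": occupied,
        "noise_db": noise_db,
        "ts": time.time(),     # Unix timestamp of the reading
    }
    resp = requests.post(INGEST_URL, json=payload, timeout=5)
    resp.raise_for_status()

publish_reading(seat_id=42, occupied=True, noise_db=38.5)
```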

    Multimodal emotion recognition based on the fusion of vision, EEG, ECG, and EMG signals

    This paper presents a novel approach for emotion recognition (ER) based on Electroencephalogram (EEG), Electromyogram (EMG), Electrocardiogram (ECG), and computer vision. The proposed system includes two different models, one for physiological signals and one for facial expressions, deployed in a real-time embedded system. A custom dataset of EEG, ECG, EMG, and facial expression recordings was collected from 10 participants using an Affective Video Response System. Time-, frequency-, and wavelet-domain features were extracted and optimized based on visualizations from Exploratory Data Analysis (EDA) and Principal Component Analysis (PCA). Local Binary Patterns (LBP), Local Ternary Patterns (LTP), Histogram of Oriented Gradients (HOG), and Gabor descriptors were used for differentiating facial emotions. Classification models, namely decision tree, random forest, and optimized variants thereof, were trained on these features. For the physiological-signal model, the optimized Random Forest achieved an accuracy of 84%, while the optimized Decision Tree achieved 76%. The facial emotion recognition (FER) model attained accuracies of 84.6%, 74.3%, 67%, and 64.5% using K-Nearest Neighbors (KNN), Random Forest, Decision Tree, and XGBoost, respectively. Performance metrics, including Area Under the Curve (AUC), F1 score, and the Receiver Operating Characteristic (ROC) curve, were computed to evaluate the models. The outputs of both models, i.e., the bio-signal and facial emotion analyses, are fed to a voting classifier to obtain the final emotion. A comprehensive report is generated from the resultant emotion using a Generative Pretrained Transformer (GPT) language model, achieving an accuracy of 87.5%. The model was implemented and deployed on a Jetson Nano, and the results show its relevance to ER. It has applications in enhancing prosthetic systems and other medical fields such as psychological therapy, rehabilitation, assisting individuals with neurological disorders, mental health monitoring, and biometric security.
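    As a minimal sketch of the late-fusion step described above, assuming each model emits a per-class probability vector, a soft-voting rule could look as below; the label set, weights, and probability values are illustrative, not the paper's exact implementation.

```python
import numpy as np

# Illustrative label set; the paper's exact emotion classes are not listed here.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse_predictions(p_bio: np.ndarray, p_face: np.ndarray,
                     w_bio: float = 0.5) -> str:
    """Soft voting: weighted average of the two models' class-probability
    vectors, followed by an argmax over the emotion labels."""
    p_final = w_bio * p_bio + (1.0 - w_bio) * p_face
    return EMOTIONS[int(np.argmax(p_final))]

# Example: the physiological model leans 'sad', the vision model leans 'happy'.
p_bio = np.array([0.20, 0.55, 0.15, 0.10])
p_face = np.array([0.60, 0.20, 0.10, 0.10])
print(fuse_predictions(p_bio, p_face))  # equal weights -> 'happy' (0.400 vs 0.375)
```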