3 research outputs found

    Smart classroom monitoring using novel real-time facial expression recognition system

    Featured Application: The proposed automatic emotion recognition system has been deployed in the classroom environment (education), but it can be used anywhere human emotions need to be monitored, e.g., in health care, banking, industry, and social welfare.

    Abstract: Emotions play a vital role in education. Technological advances in computer vision using deep learning models have improved automatic emotion recognition. In this study, a real-time automatic emotion recognition system incorporating novel salient facial features is developed for classroom assessment using a deep learning model. The proposed novel facial features for each emotion are initially detected using HOG for face recognition, and automatic emotion recognition is then performed by a trained convolutional neural network (CNN) that takes real-time input from a camera deployed in the classroom. The proposed system analyzes the facial expressions of each student during learning. The selected emotional states are happiness, sadness, and fear, along with the cognitive–emotional states of satisfaction, dissatisfaction, and concentration. These states are tested against the variables of gender, department, lecture time, seating position, and subject difficulty. The proposed system contributes to improving classroom learning.
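    A minimal sketch of the pipeline described above, under stated assumptions: dlib's HOG-based frontal-face detector stands in for the paper's HOG step, and a small untrained Keras CNN stands in for the authors' trained network. The six labels follow the states named in the abstract; the camera index, 48x48 input size, and layer choices are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: real-time HOG face detection + CNN emotion classification.
import cv2
import dlib
import numpy as np
from tensorflow.keras import layers, models

# The six emotional / cognitive-emotional states named in the abstract.
LABELS = ["happiness", "sadness", "fear",
          "satisfaction", "dissatisfaction", "concentration"]

def build_cnn(input_shape=(48, 48, 1), n_classes=len(LABELS)):
    # Illustrative architecture; the paper does not specify its CNN layout here.
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

detector = dlib.get_frontal_face_detector()  # HOG-based face detector
model = build_cnn()  # would need training on labeled classroom data

cap = cv2.VideoCapture(0)  # the camera deployed in the classroom
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray):
        x0, y0 = max(rect.left(), 0), max(rect.top(), 0)
        face = gray[y0:rect.bottom(), x0:rect.right()]
        if face.size == 0:
            continue
        face = cv2.resize(face, (48, 48)).astype("float32") / 255.0
        probs = model.predict(face[None, ..., None], verbose=0)[0]
        print(LABELS[int(np.argmax(probs))])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```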

    Extraction of informative regions of a face for facial expression recognition

    The aim of facial expression recognition (FER) algorithms is to extract discriminative features of a face. However, discriminative features for FER can be obtained only from the informative regions of a face, and each facial subregion has a different impact on different facial expressions. Local binary pattern (LBP)-based FER techniques extract texture features from all regions of a face and stack the features sequentially. This process generates correlated features among different expressions and hence affects accuracy. This research addresses these issues by extracting discriminative features from the informative regions of a face. To this end, the authors propose an informative region extraction model that models the importance of facial regions based on the projection of expressive face images onto neutral face images. However, in practical scenarios neutral images may not be available, so the authors propose estimating a common reference image using Procrustes analysis. Subsequently, a weighted-projection-based LBP feature is derived from the informative regions of the face and their associated weights. This feature extraction method reduces misclassification among different classes of expressions. Experimental results on standard datasets show the efficacy of the proposed method.
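    A minimal sketch of the two ingredients described above, under stated assumptions: a uniform grid stands in for the authors' informative-region model, `region_weights` is a hypothetical input that would come from their projection-based importance estimation, and the common reference shape is estimated with a simplified generalized Procrustes loop via SciPy (the abstract does not specify the exact alignment procedure).

```python
# Hedged sketch: Procrustes reference estimation + region-weighted LBP features.
import numpy as np
from scipy.spatial import procrustes
from skimage.feature import local_binary_pattern

def common_reference(landmark_sets, n_iter=3):
    """Estimate a common reference shape by repeatedly aligning all landmark
    sets to a running mean (a simplified generalized Procrustes analysis)."""
    ref = landmark_sets[0]
    for _ in range(n_iter):
        # procrustes() returns (standardized ref, aligned shape, disparity).
        aligned = [procrustes(ref, s)[1] for s in landmark_sets]
        ref = np.mean(aligned, axis=0)
    return ref

def weighted_lbp_feature(face, region_weights, grid=(7, 7), P=8, R=1):
    """Concatenate per-region uniform-LBP histograms, each scaled by that
    region's importance weight."""
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2  # 'uniform' LBP yields P + 2 distinct codes
    rows, cols = grid
    h, w = face.shape[0] // rows, face.shape[1] // cols
    feats = []
    for i in range(rows):
        for j in range(cols):
            patch = lbp[i*h:(i+1)*h, j*w:(j+1)*w]
            hist, _ = np.histogram(patch, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(region_weights[i, j] * hist)
    return np.concatenate(feats)

# Usage with synthetic data; real use would pass aligned face crops and
# weights derived from projecting expressive faces onto the reference.
landmarks = [np.random.rand(68, 2) for _ in range(5)]  # hypothetical 68-point sets
ref_shape = common_reference(landmarks)
face = np.random.randint(0, 256, (112, 112)).astype(np.uint8)
weights = np.ones((7, 7))                          # placeholder: uniform importance
print(weighted_lbp_feature(face, weights).shape)   # (490,) = 7 * 7 * 10
```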