13 research outputs found

    Kernelized dense layers for facial expression recognition

    The fully connected layer is an essential component of Convolutional Neural Networks (CNNs) and has demonstrated its efficiency in computer vision tasks. The CNN pipeline usually starts with convolution and pooling layers that first break down the input images into features and then analyze them independently. The result of this process feeds into a fully connected neural network structure that drives the final classification decision. In this paper, we propose a Kernelized Dense Layer (KDL) that captures higher-order feature interactions instead of conventional linear relations. We apply this method to Facial Expression Recognition (FER) and evaluate its performance on the RAF, FER2013 and ExpW datasets. The experimental results demonstrate the benefits of such a layer and show that our model achieves competitive results with respect to state-of-the-art approaches.
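    To make the idea concrete, here is a minimal PyTorch sketch of a dense layer whose unit responses are computed by a kernel function rather than a plain linear map. The second-order polynomial kernel, the layer name, and the initialization are illustrative assumptions and do not reproduce the authors' exact KDL formulation.

```python
# Minimal sketch of a kernelized dense layer in PyTorch.
# Assumption (not from the paper): a second-order polynomial kernel
# k(x, w) = (x . w + c)^2 between the input and each unit's weight
# vector replaces the usual linear activation x . w + b.
import torch
import torch.nn as nn

class KernelizedDense(nn.Module):
    def __init__(self, in_features: int, out_features: int, c: float = 1.0):
        super().__init__()
        # One weight vector per output unit, as in an ordinary dense layer.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.c = c

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Linear response of each unit, shape (batch, out_features).
        linear = x @ self.weight.t()
        # The polynomial kernel captures pairwise (higher-order) feature
        # interactions instead of a purely linear relation.
        return (linear + self.c) ** 2 + self.bias

# Usage: replace the final dense layer of a CNN classifier head.
head = nn.Sequential(nn.Flatten(), KernelizedDense(512, 7))  # e.g. 7 FER classes
logits = head(torch.randn(4, 512))   # -> shape (4, 7)
print(logits.shape)
```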

    AUTO3D: Novel view synthesis through unsupervisely learned variational viewpoint and global 3D representation

    This paper targets learning-based novel view synthesis from a single or a limited number of 2D images without pose supervision. In viewer-centered coordinates, we construct an end-to-end trainable conditional variational framework to disentangle the relative pose/rotation, learned without supervision, and an implicit global 3D representation (shape, texture, the origin of the viewer-centered coordinates, etc.). The global appearance of the 3D object is given by several appearance-describing images taken from any number of viewpoints. Our spatial correlation module extracts a global 3D representation from the appearance-describing images in a permutation-invariant manner. Our system achieves implicit 3D understanding without explicit 3D reconstruction. With a viewer-centered relative pose/rotation code learned without supervision, the decoder can hallucinate novel views continuously by sampling the relative pose from a prior distribution. In various applications, we demonstrate that our model can achieve comparable or even better results than pose/3D-model-supervised learning-based novel view synthesis (NVS) methods with any number of input views. Comment: ECCV 2020
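    As a concrete illustration of aggregating an arbitrary number of appearance-describing views in a permutation-invariant way, below is a minimal sketch. The shared per-view encoder, feature size, and mean-pooling choice are assumptions for illustration, not the paper's actual spatial correlation module.

```python
# Minimal sketch of permutation-invariant aggregation over a variable
# number of appearance-describing views. The encoder and the mean-pooling
# aggregation are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalRepresentation(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared per-view encoder (the same weights are reused for every view).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, 3, H, W); n_views may vary between calls.
        b, n, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * n, c, h, w)).reshape(b, n, -1)
        # Mean pooling over the view axis is invariant to view ordering,
        # so shuffling the input views leaves the output unchanged.
        return feats.mean(dim=1)

model = GlobalRepresentation()
x = torch.randn(2, 5, 3, 64, 64)              # five views per object
assert torch.allclose(model(x), model(x[:, torch.randperm(5)]), atol=1e-5)
```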

    Smart classroom monitoring using novel real-time facial expression recognition system

    Featured Application: The proposed automatic emotion recognition system has been deployed in the classroom environment (education), but it can be used anywhere that human emotions need to be monitored, e.g., health, banking, industry, and social welfare.
    Abstract: Emotions play a vital role in education. Technological advances in computer vision using deep learning models have improved automatic emotion recognition. In this study, a real-time automatic emotion recognition system incorporating novel salient facial features is developed for classroom assessment using a deep learning model. The proposed novel facial features for each emotion are initially detected using HOG for face recognition, and automatic emotion recognition is then performed by training a convolutional neural network (CNN) that takes real-time input from a camera deployed in the classroom. The proposed emotion recognition system analyzes the facial expressions of each student during learning. The selected emotional states are happiness, sadness, and fear, along with the cognitive-emotional states of satisfaction, dissatisfaction, and concentration. The selected emotional states are tested against the variables gender, department, lecture time, seating position, and subject difficulty. The proposed system contributes to improving classroom learning.
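    The two-stage pipeline described above (HOG-based face detection followed by a CNN emotion classifier on each face crop) can be sketched as follows. dlib's HOG frontal face detector is a real API; the CNN architecture, the 48x48 input size, and the six-class label set are illustrative assumptions mirroring the states listed in the abstract, not the authors' trained model.

```python
# Minimal sketch: HOG face detection + CNN emotion classification per frame.
import cv2
import dlib
import torch
import torch.nn as nn

LABELS = ["happiness", "sadness", "fear",
          "satisfaction", "dissatisfaction", "concentration"]

detector = dlib.get_frontal_face_detector()          # HOG + linear SVM detector
emotion_cnn = nn.Sequential(                          # illustrative CNN head
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 12 * 12, len(LABELS)),
)

def classify_frame(frame_bgr):
    """Detect faces in one camera frame and predict an emotion per face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for rect in detector(gray):                       # HOG face detection
        x1, y1 = max(rect.left(), 0), max(rect.top(), 0)
        crop = gray[y1:rect.bottom(), x1:rect.right()]
        crop = cv2.resize(crop, (48, 48)).astype("float32") / 255.0
        inp = torch.from_numpy(crop)[None, None]      # shape (1, 1, 48, 48)
        with torch.no_grad():
            pred = emotion_cnn(inp).argmax(1).item()
        results.append((rect, LABELS[pred]))
    return results
```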