How Facial Features Convey Attention in Stationary Environments

Abstract

Awareness detection technologies have been gaining traction across a variety of industries; while most often used for driver fatigue detection, recent research has shifted toward using computer vision to analyze user attention in environments such as online classrooms. This paper extends previous research on distraction detection by analyzing which visual features contribute most to predicting awareness and fatigue. We used the open-source facial analysis toolkit OpenFace to analyze visual data of subjects at varying levels of attentiveness. Then, using a Support-Vector Machine (SVM), we created several prediction models for user attention and identified Histogram of Oriented Gradients (HOG) features and Action Units as the strongest predictors among the features we tested. We also compared the performance of this SVM to deep learning approaches based on Convolutional and Convolutional-Recurrent neural networks (CNNs and CRNNs). Interestingly, CRNNs did not perform significantly better than their CNN counterparts. While the deep learning methods achieved greater prediction accuracy, the SVMs required fewer computational resources and, with appropriate parameters, approached the performance of the deep learning methods.