
    Review of automatic recognition methods of human emotional state using image

    The problem of recognizing a person's emotional state from an image is considered. A review of the main ways of describing human emotions is given: division into a finite number of classes and the use of a vector representation. Existing developments in image-based emotion recognition are presented, along with a general algorithm for the operation of such systems. The main steps in solving the emotion recognition problem are locating a face in the image and classifying the emotion. The information technology for emotion recognition is presented in graphic notation. The principles of the Viola-Jones algorithm, which is used to detect a person's face in an image, are described. The approaches used for the classification task are presented: the Viola-Jones algorithm, the reference points method, and various neural network architectures designed for image classification. The advantages and disadvantages of the reference points method, which is based on the Facial Action Coding System, are analyzed, as is the way the Viola-Jones algorithm can be applied to emotion classification. A method for recognizing a person's emotional state from visual information using convolutional neural networks is considered, and the operating principles of the convolutional, subsampling (pooling), and fully connected layers of such a network are described. Based on an analysis of published works, recognition accuracy results under various conditions are given. Works are also presented in which a combination of convolutional and recurrent neural networks is used to analyze emotional state; in addition to visual information, these use an audio stream as a supplementary source, which allows emotions in a video stream to be classified more effectively. The most popular training datasets for the considered problem are presented.
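    As an illustration of the two-stage pipeline the review describes, the following minimal Python sketch chains OpenCV's Haar-cascade implementation of the Viola-Jones detector with a CNN classifier. The model file emotion_cnn.h5, its 48x48 grayscale input, and the seven-class label set are illustrative assumptions, not details from the paper.

```python
# Sketch of the two-stage pipeline: Viola-Jones face detection
# followed by CNN emotion classification. Model path, input size,
# and label set are assumptions for illustration.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# OpenCV ships Haar cascades implementing the Viola-Jones detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
classifier = load_model("emotion_cnn.h5")  # hypothetical pre-trained CNN

def recognize_emotions(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # Crop and normalize the face region to the CNN's assumed input size.
        face = cv2.resize(gray[y:y+h, x:x+w], (48, 48)).astype("float32") / 255.0
        probs = classifier.predict(face[np.newaxis, ..., np.newaxis], verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results
```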

    Movements Recognition in the Human Body Based on Deep Learning Strategies

    The study of human body movements for the purpose of emotion identification is an essential component of research on social communication. Many contexts rely on non-verbal communication such as gestures, eye movements, facial expressions, and body language; among these, emotion detection based on body movements has the advantage that it can identify a person's emotions even when they are too far from the camera for facial analysis. Other studies have shown that body language can express emotional states more effectively than words. In this research study, an emotional state is determined from the motion of the entire human body. A deep convolutional neural network architecture is used, and multiple parameter settings are considered. The proposed system is assessed on both the University of York emotion dataset, which includes 15 kinds of emotions, and the GEMEP corpus dataset, which includes five emotions. The experimental results demonstrate that the proposed system achieves a higher degree of recognition accuracy.
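    A minimal sketch of the kind of deep convolutional architecture the abstract describes, assuming body-movement input has been rendered to fixed-size image-like tensors; the input shape, layer sizes, and the five-class head (one per GEMEP emotion used in the paper) are assumptions for illustration.

```python
# Minimal deep CNN sketch for body-movement emotion classification,
# under the assumptions stated above.
from tensorflow.keras import layers, models

def build_body_emotion_cnn(input_shape=(128, 128, 3), num_classes=5):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Stacked convolution + pooling blocks extract posture/motion features.
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        # Fully connected head maps extracted features to emotion classes.
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```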

    InSocialNet: Interactive visual analytics for role-event videos

    Role–event videos are rich in information but challenging to understand at the story level. The social roles and behavior patterns of characters largely depend on the interactions among characters and the background events. Understanding them requires analyzing video content over long durations, which is beyond the ability of current algorithms designed for analyzing short-time dynamics. In this paper, we propose InSocialNet, an interactive video analytics tool for analyzing the contents of role–event videos. It automatically and dynamically constructs social networks from role–event videos using face and expression recognition, and provides a visual interface for interactive analysis of video contents. Together with social network analysis at the back end, InSocialNet enables users to investigate characters, their relationships, social roles, factions, and events in the input video. We conduct case studies that demonstrate the effectiveness of InSocialNet in helping users harvest rich information from role–event videos. We believe the current prototype implementation can be extended to applications beyond movie analysis, e.g., social psychology experiments to help understand crowd social behaviors.
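    The following sketch illustrates one plausible way to derive a social network from per-frame face recognition output, in the spirit of InSocialNet's back end; the input format and the networkx co-occurrence weighting are assumptions, not the paper's method.

```python
# Build a character co-occurrence graph from per-frame recognition results,
# a simplified stand-in for the paper's social-network construction.
from itertools import combinations
import networkx as nx

def build_social_network(frames):
    """frames: iterable of lists of character names recognized in each frame.
    Characters co-appearing in a frame get an edge whose weight accumulates
    over the video, approximating interaction strength."""
    g = nx.Graph()
    for characters in frames:
        for a, b in combinations(sorted(set(characters)), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

# Example: three frames of recognized characters.
net = build_social_network([["Ann", "Bob"], ["Ann", "Bob", "Eve"], ["Eve"]])
print(nx.degree_centrality(net))  # a simple social-role indicator
```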

    An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games

    Game developers increasingly consider the degree to which character animation emulates the facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor-intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow for creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from whom to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow runs from developing and annotating a fictional scenario with actors, to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion annotation of the corpus and the neural network's output, and finally to determining how closely its autonomous animation control of a 3D character facial mesh resembles the actor's behavior. The resulting workflow includes a method for developing a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.
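    As a hedged sketch of what such an autonomous animation controller might look like, the network below maps an emotion descriptor to blendshape (morph target) weights for a character's facial mesh; the layer sizes, seven-dimensional emotion input, and 52-blendshape output are assumptions, not the paper's architecture.

```python
# Toy emotion-to-blendshape controller network, under the assumptions above.
import numpy as np
from tensorflow.keras import layers, models

def build_facial_controller(emotion_dim=7, num_blendshapes=52):
    return models.Sequential([
        layers.Input(shape=(emotion_dim,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(128, activation="relu"),
        # Sigmoid keeps each blendshape weight in [0, 1] for the game engine.
        layers.Dense(num_blendshapes, activation="sigmoid"),
    ])

# Example: drive the mesh with a one-hot "happy" emotion vector
# (index 3 being "happy" is itself an assumption).
controller = build_facial_controller()
happy = np.zeros((1, 7), dtype="float32")
happy[0, 3] = 1.0
weights = controller.predict(happy, verbose=0)  # per-blendshape activations
```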