    Analysis on techniques used to recognize and identifying the Human emotions

    Facial expression is a major channel of non-verbal communication in day-to-day life. Statistical analyses suggest that only 7 percent of a message is conveyed verbally, while 55 percent is transmitted through facial expression. Emotional expression has been a subject of physiological research since Darwin's work on the expression of emotion in the 19th century. Psychological theory classifies human emotion into six major categories: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, together with the nature of speech, play a foremost role in conveying these emotions. In 1970, researchers developed the Facial Action Coding System (FACS), a system based on the anatomy of the face, and research in emotion recognition has progressed rapidly ever since. This work presents a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify suitable techniques, algorithms, and methodologies for future research directions. The paper presents an extensive analysis of the recognition techniques used to address the complexity of facial expression recognition, and should help researchers and scholars choose appropriate techniques in the facial expression identification domain.

    Face recognition based on multiple region features

    Abstract. Feature selection is an important step in face recognition: better features should yield better performance. This paper describes a robust face recognition algorithm that uses multiple face region features selected by the AdaBoost algorithm. Conventional face recognition algorithms treat the face region as a whole; here we show that dividing the face into a number of sub-regions can improve recognition performance. We use conventional AdaBoost with a weak learner based on multiple-region orthogonal component principal component analysis (OCPCA) features, where the regions are selected areas of the face such as the eyes, mouth, and nose. The AdaBoost algorithm combines these region features into a strong classifier. Experiments were conducted on the CMU Pose, Illumination, and Expression (PIE) database, comparing single-region OCPCA, our multiple-region OCPCA, and published results from Visionics' FaceIt. Significant performance improvement is demonstrated using multiple facial region OCPCA features.
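    The pipeline the abstract describes — per-region features fed to AdaBoost, which reweights examples and combines weak region classifiers into a strong one — can be sketched as below. This is a minimal illustration, not the paper's method: it substitutes plain per-region PCA for OCPCA, uses decision stumps as weak learners, and runs on synthetic stand-in data rather than the PIE database; all names, dimensions, and parameters are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 200 "face" vectors of 4 regions x 8 dims each.
    # Class +1 faces differ from class -1 mainly in region 2 (say, the mouth area).
    n, n_regions, region_dim = 200, 4, 8
    X = rng.normal(size=(n, n_regions * region_dim))
    y = np.where(rng.random(n) < 0.5, 1, -1)
    X[y == 1, 2 * region_dim:3 * region_dim] += 1.5  # the discriminative region

    def region_pca_features(X, n_regions, region_dim, k=3):
        """Project each face sub-region onto its top-k principal components
        (plain PCA via SVD; the paper uses OCPCA, not reproduced here)."""
        feats = []
        for r in range(n_regions):
            block = X[:, r * region_dim:(r + 1) * region_dim]
            centered = block - block.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            feats.append(centered @ vt[:k].T)   # (n, k) features for region r
        return np.hstack(feats)                 # (n, n_regions * k)

    F = region_pca_features(X, n_regions, region_dim)

    def best_stump(F, y, w):
        """Weak learner: a threshold on one region-PCA feature,
        chosen to minimize the weighted error under weights w."""
        best = (0, 0.0, 1, np.inf)              # (feature, thresh, sign, err)
        for j in range(F.shape[1]):
            for t in np.unique(F[:, j]):
                for s in (1, -1):
                    pred = np.where(F[:, j] > t, s, -s)
                    err = w[pred != y].sum()
                    if err < best[3]:
                        best = (j, t, s, err)
        return best

    # Discrete AdaBoost: upweight misclassified faces each round and
    # accumulate the chosen region stumps into a strong classifier.
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(10):
        j, t, s, err = best_stump(F, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = np.where(F[:, j] > t, s, -s)
        w *= np.exp(-alpha * y * pred)          # reweight examples
        w /= w.sum()
        stumps.append((j, t, s, alpha))

    def strong_classify(F, stumps):
        """Sign of the alpha-weighted vote over all selected region stumps."""
        score = sum(a * np.where(F[:, j] > t, s, -s) for j, t, s, a in stumps)
        return np.sign(score)

    acc = (strong_classify(F, stumps) == y).mean()
    print(f"training accuracy: {acc:.2f}")
    ```

    Because only one region carries the class signal, boosting concentrates its stumps on that region's PCA features — the same intuition behind letting AdaBoost select among per-region features rather than one whole-face feature vector.
    
    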