12 research outputs found

    Fuzzy Model For Human Face Expression Recognition

    Get PDF
    Facial expression recognition plays a vital role in interaction between humans and computers. In this project, a new system based on fuzzy logic is proposed for this purpose. Fuzzy clustering is a useful approach to classification: it can determine the intrinsic divisions in a set of unlabelled data and find representatives for homogeneous groups. The system recognizes seven basic facial expressions, namely fear, surprise, happiness, sadness, disgust, neutral, and anger. The Facial Action Coding System (FACS) was designed for the detailed description of facial expressions. First, we present a novel method for extracting the facial region from a static image; integral projection curves are used to determine the effective facial areas. This method selects regions for a facial expression recognition system with high reliability. The extracted expression features are then fed to a fuzzy rule-based system for recognition. Test results indicate that the proposed scheme for facial expression recognition is robust, achieves good accuracy, and generates superior results compared to other approaches. DOI: 10.17762/ijritcc2321-8169.15052
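    The pipeline described above (locate facial regions via integral projection curves, then feed geometric features to a fuzzy rule base) can be sketched roughly as follows. The feature names, membership breakpoints, and rules here are illustrative assumptions, not the paper's actual rule base:

```python
import numpy as np

def integral_projections(gray):
    """Row and column sums of a grayscale face image.

    Valleys in the row projection typically mark the eye and mouth
    rows; valleys in the column projection help locate eye columns.
    """
    return gray.sum(axis=1), gray.sum(axis=0)

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(mouth_openness, brow_raise):
    """Toy fuzzy rule base over two hypothetical geometric features,
    each normalised to [0, 1] from the projection curves."""
    open_hi = triangular(mouth_openness, 0.4, 1.0, 1.6)
    open_lo = triangular(mouth_openness, -0.6, 0.0, 0.6)
    brow_hi = triangular(brow_raise, 0.4, 1.0, 1.6)
    brow_lo = triangular(brow_raise, -0.6, 0.0, 0.6)
    # Fuzzy AND as min; crisp label by maximum rule strength.
    scores = {
        "surprise": min(open_hi, brow_hi),
        "happy":    min(open_hi, brow_lo),
        "neutral":  min(open_lo, brow_lo),
    }
    return max(scores, key=scores.get)
```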

    Recognition of prototype facial expressions using ICA

    Get PDF
    This paper proposes a methodology for recognising prototype facial expressions, that is, those associated with universal emotions. The methodology comprises three stages: face segmentation using Haar filters and cascade classifiers, feature extraction based on independent component analysis (ICA), and facial expression classification using the nearest-neighbour classifier (KNN). Four emotions are recognised (sadness, happiness, fear, and anger) plus neutral faces. The methodology was validated on image sequences from the FEEDTUM database, achieving an average accuracy of 98.72% for five-class recognition
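    The three stages map directly onto standard tooling. Below is a minimal sketch using OpenCV's bundled Haar cascade and scikit-learn's FastICA and nearest-neighbour classifier; the crop size, component count, and k are assumptions, not values from the paper:

```python
import cv2
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier

# Stage 1: face segmentation with OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(image_bgr, size=(64, 64)):
    """Detect the first face and return it as a flat grayscale vector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], size).ravel().astype(float)

# Stages 2 and 3: ICA features, nearest-neighbour classification.
def train(X_train, y_train, n_components=40):
    """X_train: (n_samples, 64*64) cropped faces; y_train: labels among
    sadness, happiness, fear, anger, neutral."""
    ica = FastICA(n_components=n_components, random_state=0)
    features = ica.fit_transform(X_train)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(features, y_train)
    return ica, knn

def predict(ica, knn, image_bgr):
    face = crop_face(image_bgr)
    return None if face is None else knn.predict(ica.transform([face]))[0]
```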

    Toward an affect-sensitive multimodal human-computer interaction

    No full text
    This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence -- the ability to recognize a user's affective states -- in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In a face-to-face interaction, humans detect and interpret those interactive signals of their communicator with little or no effort. Yet design and development of an automated system that accomplishes these tasks is rather difficult. This paper surveys the past work in solving these problems by a computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI -- an automatic personalized analyzer of a user's nonverbal affective feedback

    Facial expression imitation for human robot interaction

    Get PDF
    Master's thesis (Master of Engineering)

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    No full text
    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved to be particularly difficult. Three dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However they are still susceptible to facial expressions. This can be seen in the decrease in the recognition results using principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition systems, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised in particular subject and facial expression modes. We manipulate this using singular value decomposition on sub-tensors representing one variation mode. This framework possesses the ability to deal with the shortcomings of PCA in less constrained environments and still preserves the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that best describe facial expression effectively. We found that the best placement of landmarks to distinguish different facial expressions are in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and in particular to neutralise facial expressions. The results of the synthesised facial expressions are visually more realistic than facial expressions generated using conventional active shape modelling (ASM). We then used reconstructed neutral faces in the sub-tensor framework for recognition purposes. The recognition results showed slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications
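    The mode-wise decomposition described in the abstract corresponds to the standard higher-order SVD. A minimal sketch, assuming the data tensor is laid out as (subjects x expressions x shape features):

```python
import numpy as np

def unfold(tensor, mode):
    """Matricise a tensor along one mode (mode-n unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(data):
    """Higher-order SVD of a data tensor, e.g. one laid out as
    (subjects x expressions x shape features).

    Returns the core tensor plus one orthogonal factor matrix per
    variation mode, so subject and expression effects are kept in
    separate modes instead of being mixed as in flat PCA.
    """
    # Each mode's factor: left singular vectors of that unfolding.
    factors = [np.linalg.svd(unfold(data, m), full_matrices=False)[0]
               for m in range(data.ndim)]
    # Core tensor: data multiplied by U_m^T along every mode m.
    core = data
    for m, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors
```

    In such a representation, holding a face's subject-mode coefficients fixed while moving its expression-mode coefficients toward the neutral class is one way to synthesise the kind of neutralised face the abstract describes.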

    Intensity based methodologies for facial expression recognition.

    Get PDF
    by Hok Chun Lo. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 136-143). Abstracts in English and Chinese.
    Contents:
    List of Figures --- p.viii
    List of Tables --- p.x
    Chapter 1. Introduction --- p.1
    Chapter 2. Previous Work on Facial Expression Recognition --- p.9
        2.1. Active Deformable Contour --- p.9
        2.2. Facial Feature Points and B-spline Curve --- p.10
        2.3. Optical Flow Approach --- p.11
        2.4. Facial Action Coding System --- p.12
        2.5. Neural Network --- p.13
    Chapter 3. Eigen-Analysis Based Method for Facial Expression Recognition --- p.15
        3.1. Related Topics on Eigen-Analysis Based Method --- p.15
            3.1.1. Terminologies --- p.15
            3.1.2. Principal Component Analysis --- p.17
            3.1.3. Significance of Principal Component Analysis --- p.18
            3.1.4. Graphical Presentation of the Idea of Principal Component Analysis --- p.20
        3.2. EigenFace Method for Face Recognition --- p.21
        3.3. Eigen-Analysis Based Method for Facial Expression Recognition --- p.23
            3.3.1. Person-Dependent Database --- p.23
            3.3.2. Direct Adoption of EigenFace Method --- p.24
            3.3.3. Multiple Subspaces Method --- p.27
        3.4. Detail Description on Our Approaches --- p.29
            3.4.1. Database Formation --- p.29
                a. Conversion of Image to Column Vector --- p.29
                b. Preprocess: Scale Regulation, Orientation Regulation and Cropping --- p.30
                c. Scale Regulation --- p.31
                d. Orientation Regulation --- p.32
                e. Cropping of Images --- p.33
                f. Calculation of Expression Subspace for Direct Adoption Method --- p.35
                g. Calculation of Expression Subspace for Multiple Subspaces Method --- p.38
            3.4.2. Recognition Process for Direct Adoption Method --- p.38
            3.4.3. Recognition Process for Multiple Subspaces Method --- p.39
                a. Intensity Normalization Algorithm --- p.39
                b. Matching --- p.44
        3.5. Experimental Result and Analysis --- p.45
    Chapter 4. Deformable Template Matching Scheme for Facial Expression Recognition --- p.53
        4.1. Background Knowledge --- p.53
            4.1.1. Camera Model --- p.53
                a. Pinhole Camera Model and Perspective Projection --- p.54
                b. Orthographic Camera Model --- p.56
                c. Affine Camera Model --- p.57
            4.1.2. View Synthesis --- p.58
                a. Technique Issue of View Synthesis --- p.59
        4.2. View Synthesis Technique for Facial Expression Recognition --- p.68
            4.2.1. From View Synthesis Technique to Template Deformation --- p.69
        4.3. Database Formation --- p.71
            4.3.1. Person-Dependent Database --- p.72
            4.3.2. Model Images Acquisition --- p.72
            4.3.3. Templates' Structure and Formation Process --- p.73
            4.3.4. Selection of Warping Points and Template Anchor Points --- p.77
                a. Selection of Warping Points --- p.78
                b. Selection of Template Anchor Points --- p.80
        4.4. Recognition Process --- p.81
            4.4.1. Solving Warping Equation --- p.83
            4.4.2. Template Deformation --- p.83
            4.4.3. Template from Input Images --- p.86
            4.4.4. Matching --- p.87
        4.5. Implementation of Automation System --- p.88
            4.5.1. Kalman Filter --- p.89
            4.5.2. Using Kalman Filter for Tracking in Our System --- p.89
            4.5.3. Limitation --- p.92
        4.6. Experimental Result and Analysis --- p.93
    Chapter 5. Conclusion and Future Work --- p.97
    Appendix --- p.100
        I. Image Sample 1 --- p.100
        II. Image Sample 2 --- p.109
        III. Image Sample 3 --- p.119
        IV. Image Sample 4 --- p.135
    Bibliography --- p.136
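    Chapter 3's multiple-subspaces idea (one PCA subspace per expression, with an input face assigned to the subspace that reconstructs it best) can be sketched as follows. The subspace dimension is an assumption, and the preprocessing of sections 3.4.1 and 3.4.3 (scale and orientation regulation, cropping, intensity normalisation) is presumed already done:

```python
import numpy as np

def fit_subspace(faces, k=10):
    """PCA subspace for one expression class.

    faces: (n_samples, n_pixels) matrix of vectorised, cropped,
    intensity-normalised face images of a single expression.
    """
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]            # mean face plus top-k eigenfaces

def residual(face, mean, basis):
    """Reconstruction error of a face against one subspace."""
    centred = face - mean
    projection = basis.T @ (basis @ centred)
    return np.linalg.norm(centred - projection)

def classify_expression(face, subspaces):
    """Assign the expression whose subspace reconstructs the face best.

    subspaces: {label: (mean, basis)} fitted with fit_subspace.
    """
    return min(subspaces, key=lambda lb: residual(face, *subspaces[lb]))
```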