27 research outputs found

    Pengenalan Ekspresi Wajah Berbasis Filter Gabor Dan Backpropagation Neural Network

    An algorithm based on Gabor filters and a Backpropagation (BPP) Neural Network is proposed for facial expression recognition. First, the emotional features of a facial expression are represented with Gabor filters. The features are then used to train a neural network with the Backpropagation training algorithm. Finally, the facial expression is classified by the neural network. With this algorithm, a high recognition rate is obtained. Keywords—Facial expression recognition, Gabor filter, Backpropagation network
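The pipeline described above (Gabor features in, backpropagation-trained classifier out) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the kernel parameters, the mean-response pooling, and the tiny one-hidden-layer network are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel (parameter values are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def gabor_features(img, n_orientations=4):
    """Mean absolute filter response per orientation -- a crude global descriptor."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        # 'valid' convolution via sliding windows, to avoid a SciPy dependency
        win = np.lib.stride_tricks.sliding_window_view(img, kern.shape)
        resp = np.einsum('ijkl,kl->ij', win, kern)
        feats.append(np.abs(resp).mean())
    return np.array(feats)

def train_mlp(X, y, hidden=8, epochs=500, lr=0.5, seed=0):
    """One-hidden-layer sigmoid network trained with plain backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        g2 = (out - y) * out * (1 - out)   # squared-error gradient at the output
        g1 = (g2 @ W2.T) * h * (1 - h)     # backpropagated to the hidden layer
        W2 -= lr * h.T @ g2 / len(X); b2 -= lr * g2.mean(0)
        W1 -= lr * X.T @ g1 / len(X); b1 -= lr * g1.mean(0)
    return W1, b1, W2, b2
```

In a real system one would pool responses over a grid of locations and scales rather than a single global mean, but the overall flow (filter bank, feature vector, backprop classifier) matches the abstract.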

    Face Image Recognition Using Multiple Features with Grey-Level Histogram Processing

    Identification and authentication by face recognition mainly use global face features; however, recognition based on global features alone often performs poorly. This research develops a method that increases recognition accuracy by combining the global face feature with local features from four parts: the left eye, right eye, nose, and mouth. The method uses geometrical techniques to locate the eyes, nose, and mouth in a frontal face image. We used 110 face images for training and testing, applying a histogram-based face recognition technique. The results show a recognition rate of 89.09%.
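The global-plus-local histogram matching described above can be sketched as follows. The paper locates the eyes, nose, and mouth geometrically; the fixed rectangular crops below are placeholder assumptions, as are the bin count and the histogram-intersection score.

```python
import numpy as np

def region_histograms(face, bins=16):
    """Grey-level histograms for the whole face plus four local sub-regions."""
    h, w = face.shape
    regions = [
        face,                                        # global face feature
        face[:h // 2, :w // 2],                      # left-eye area (assumed crop)
        face[:h // 2, w // 2:],                      # right-eye area (assumed crop)
        face[h // 4:3 * h // 4, w // 4:3 * w // 4],  # nose area (assumed crop)
        face[h // 2:, :],                            # mouth area (assumed crop)
    ]
    hists = []
    for r in regions:
        hist, _ = np.histogram(r, bins=bins, range=(0, 256))
        hists.append(hist / max(hist.sum(), 1))      # normalise per region
    return np.concatenate(hists)

def match(probe, gallery):
    """Index of the nearest gallery face by histogram intersection."""
    p = region_histograms(probe)
    scores = [np.minimum(p, region_histograms(g)).sum() for g in gallery]
    return int(np.argmax(scores))
```

Concatenating the region histograms lets a bad global match be outvoted by strong local matches, which is the intuition behind combining the two feature types.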

    Recognition of Facial Expressions using Local Mean Binary Pattern

    In this paper, we propose a novel appearance-based local feature extraction technique called Local Mean Binary Pattern (LMBP), which efficiently encodes the local texture and global shape of the face. The LMBP code is produced by thresholding the neighbor intensity values against the mean of the 3 x 3 patch and weighting the resulting bits. LMBP produces a highly discriminative code compared to other state-of-the-art methods. Because the micro-pattern is derived using the mean of the patch, it is robust against illumination and noise variations. An image is divided into M x N regions, and the feature descriptor is derived by concatenating the LMBP distribution of each region. We also propose a novel template matching strategy called Histogram Normalized Absolute Difference (HNAD) for comparing LMBP histograms. Rigorous experiments demonstrate the effectiveness and robustness of the LMBP operator, as well as the superiority of the HNAD measure over well-known template matching methods such as the L2 norm and the Chi-Square measure. We also investigated LMBP for facial expression recognition at low resolution. The performance of the proposed approach is tested on the well-known CK, JAFFE, and TFEID datasets.
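The LMBP encoding and the HNAD comparison can be sketched as below. The abstract does not give exact formulas, so the neighbour ordering, the bit weighting, and the reading of HNAD as a mean absolute difference of normalised histograms are assumptions made for illustration.

```python
import numpy as np

def lmbp_image(img):
    """LMBP code map: each 3x3 patch is thresholded against its own mean
    (the patch mean, rather than the centre pixel as in classic LBP)."""
    win = np.lib.stride_tricks.sliding_window_view(img.astype(float), (3, 3))
    mean = win.mean(axis=(2, 3), keepdims=True)
    bits = (win >= mean).reshape(win.shape[0], win.shape[1], 9)
    order = [0, 1, 2, 5, 8, 7, 6, 3]    # eight neighbours, centre (index 4) skipped
    weights = 1 << np.arange(8)         # assumed bit weighting
    return (bits[..., order] * weights).sum(-1)

def lmbp_descriptor(img, grid=(2, 2)):
    """Concatenated LMBP histograms over an M x N grid of regions."""
    codes = lmbp_image(img)
    hists = []
    for band in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            h, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

def hnad(h1, h2):
    """Histogram Normalized Absolute Difference -- read here as the mean
    absolute difference of two normalised histograms (an assumption)."""
    return np.abs(h1 - h2).mean()
```

Thresholding against the patch mean (instead of the centre pixel) is what gives the operator its claimed robustness: a single noisy centre pixel no longer flips all eight bits.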

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
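The Motion History Image half of the comparison can be illustrated with the classic MHI update rule, from which a simple motion orientation histogram is then derived. The decay constant, the motion threshold, and the gradient-based orientation descriptor are illustrative assumptions, not the paper's exact extended-MHI method.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=15, delta=10):
    """One Motion History Image update: moving pixels are stamped with tau,
    everywhere else the history decays by one per frame.
    tau and delta are illustrative values."""
    moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > delta
    return np.where(moving, tau, np.maximum(mhi - 1, 0))

def orientation_histogram(mhi, bins=8):
    """Gradient-orientation histogram of the MHI, magnitude-weighted --
    a simple stand-in for the paper's motion orientation descriptors."""
    gy, gx = np.gradient(mhi.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=bins,
                           range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    return hist / max(hist.sum(), 1)
```

Because recent motion is brighter than old motion, the MHI's gradients point along the direction of movement, which is why its orientation histogram summarises how the face region has been moving.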