10 research outputs found

    LITERATURE REVIEW: FACIAL RECOGNITION USING THE CONVOLUTIONAL NEURAL NETWORK ALGORITHM

    Facial recognition to identify the users of an "honesty gallon" water-refill station in a school environment can draw on many methods, such as local, global, and hybrid approaches. The main problem with the honesty gallon is that it is a self-service system: buyers serve themselves unattended. During water refilling, dishonest users are still found, such as those who take water without putting money into the box provided, when the rule is that a user who fills water must also put money into the box. Because the honesty programme is unsupervised, it is difficult to know who was dishonest, and therefore to prevent further dishonesty in the use of the honesty gallon. Facial recognition with the Convolutional Neural Network (CNN) method is used to classify images. A literature review is used to analyse and focus on techniques for performing facial recognition on honesty-gallon use. Keywords: facial recognition, convolutional neural network methods, honesty gallon

    Facial expression recognition via a jointly-learned dual-branch network

    Human emotion recognition depends on facial expressions, and essentially on the extraction of relevant features. Accurate feature extraction is generally difficult due to the influence of external interference factors and the mislabelling of some datasets, such as the Fer2013 dataset. Deep learning approaches permit automatic and intelligent feature extraction based on the input database. However, in the case of poor database distribution or insufficient diversity of database samples, the extracted features will be negatively affected. Furthermore, one of the main challenges for efficient facial feature extraction and accurate facial expression recognition is the facial expression datasets themselves, which are usually considerably small compared to other image datasets. To solve these problems, this paper proposes a new approach based on a dual-branch convolutional neural network for facial expression recognition, formed by three modules: the first two provide the feature-engineering stage with two branches, and the third performs feature fusion and classification. In the first branch, an improved convolutional part of the VGG network is used to benefit from its known robustness; in the second branch, the transfer-learning technique with the EfficientNet network is applied to improve the quality of the limited training samples in the datasets. Finally, to improve recognition performance, the classification decision is made based on the fusion of both branches' feature maps. Based on the experimental results obtained on the Fer2013 and CK+ datasets, the proposed approach shows its superiority compared to several state-of-the-art results, as well as to using one model at a time. These results are very competitive, especially for the CK+ dataset, on which the proposed dual-branch model reaches an accuracy of 99.32%, while on the FER-2013 dataset the VGG-inspired CNN obtains 67.70%, which is considered acceptable given the difficulty of the images in this dataset.
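    As a hedged illustration of the dual-branch idea above, the following sketch concatenates the outputs of two stand-in branch extractors before a single classification head. The branch functions, shapes, and weights here are placeholders, not the paper's actual VGG/EfficientNet architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_vgg_like(image):
    # Stand-in for the VGG-inspired branch: global mean over spatial dims.
    return image.mean(axis=(0, 1))

def branch_efficientnet_like(image):
    # Stand-in for the EfficientNet transfer-learning branch: spatial max.
    return image.max(axis=(0, 1))

def fuse_and_classify(image, weights):
    # Fuse both branches' feature vectors, then apply one linear head.
    fused = np.concatenate([branch_vgg_like(image),
                            branch_efficientnet_like(image)])
    logits = fused @ weights
    return int(np.argmax(logits))

image = rng.random((48, 48, 3))        # FER-2013-sized input (assumption)
weights = rng.standard_normal((6, 7))  # 2 branches x 3 dims -> 7 expressions
label = fuse_and_classify(image, weights)
print(label)
```

In the real model the fusion operates on learned feature maps rather than raw channel statistics, but the decision-level structure (two branches, one fused classifier) is the same.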

    Robust Facial Expression Recognition with Convolutional Visual Transformers

    Facial Expression Recognition (FER) in the wild is extremely challenging due to occlusions, variant head poses, face deformation and motion blur under unconstrained conditions. Although substantial progress has been made in automatic FER in the past few decades, previous studies were mainly designed for lab-controlled FER. Real-world occlusions, variant head poses and other issues definitely increase the difficulty of FER on account of these information-deficient regions and complex backgrounds. Different from previous purely CNN-based methods, we argue that it is feasible and practical to translate facial images into sequences of visual words and perform expression recognition from a global perspective. Therefore, we propose Convolutional Visual Transformers to tackle FER in the wild in two main steps. First, we propose an attentional selective fusion (ASF) for leveraging the feature maps generated by two-branch CNNs. The ASF captures discriminative information by fusing multiple features with global-local attention. The fused feature maps are then flattened and projected into sequences of visual words. Second, inspired by the success of Transformers in natural language processing, we propose to model relationships between these visual words with global self-attention. The proposed method is evaluated on three public in-the-wild facial expression datasets (RAF-DB, FERPlus and AffectNet). Under the same settings, extensive experiments demonstrate that our method shows superior performance over other methods, setting a new state of the art on RAF-DB with 88.14%, FERPlus with 88.81% and AffectNet with 61.85%. We also conduct a cross-dataset evaluation on CK+ to show the generalization capability of the proposed method.
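    The two-branch fusion and the flattening into "visual words" can be sketched as follows; the scoring function, shapes, and softmax weighting are simplified assumptions rather than the paper's ASF implementation:

```python
import numpy as np

def attentional_fusion(feat_a, feat_b):
    # Score each spatial location of each branch (channel mean as a
    # stand-in for a learned attention head).
    score_a = feat_a.mean(axis=-1, keepdims=True)
    score_b = feat_b.mean(axis=-1, keepdims=True)
    scores = np.stack([score_a, score_b])             # (2, H, W, 1)
    weights = np.exp(scores) / np.exp(scores).sum(0)  # softmax over branches
    # Per-location convex combination of the two feature maps.
    return weights[0] * feat_a + weights[1] * feat_b

def to_visual_words(fused):
    # Flatten each spatial location into one "visual word" vector,
    # ready for a Transformer-style self-attention stage.
    h, w, c = fused.shape
    return fused.reshape(h * w, c)

rng = np.random.default_rng(1)
fa, fb = rng.random((7, 7, 16)), rng.random((7, 7, 16))
words = to_visual_words(attentional_fusion(fa, fb))
print(words.shape)  # 49 visual words of dimension 16
```

Because the weights sum to one at each location, the fused map stays within the range of the two branch maps, which is one simple way to combine complementary features without rescaling.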

    Customer’s spontaneous facial expression recognition

    In the field of consumer science, customer facial expression is often categorized as either negative or positive. A customer who shows negative emotion toward a specific product is likely to reject the product, while a customer with positive emotion is more likely to purchase it. To observe customer emotion, many researchers have studied different perspectives and methodologies to obtain high-accuracy results. A convolutional neural network (CNN) is used to recognize customers' spontaneous facial expressions. This paper aims to recognize customers' spontaneous expressions while they observe certain products. We have developed a customer service system using a CNN that is trained to detect three types of facial expression, i.e. happy, sad, and neutral. Facial features are extracted together with their histogram of gradients using a sliding window. The results are then compared with existing works, showing an average success rate of 82.9%.
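    A minimal sketch of the sliding-window feature extraction described above, using a simple gradient-orientation histogram as a stand-in for a full HOG descriptor (the window size, stride, and bin count are assumptions):

```python
import numpy as np

def sliding_windows(image, size=8, stride=8):
    # Slide a fixed window over the face image, yielding each patch.
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

def orientation_histogram(patch, bins=9):
    # Histogram of per-pixel gradient orientations (simplified HOG cell).
    gy, gx = np.gradient(patch.astype(float))
    angles = np.arctan2(gy, gx)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)  # normalised per window

rng = np.random.default_rng(2)
face = rng.random((32, 32))  # stand-in for a cropped face image
features = np.concatenate([orientation_histogram(p)
                           for p in sliding_windows(face)])
print(features.shape)  # 16 windows x 9 bins = (144,)
```

The resulting fixed-length vector is the kind of feature a downstream classifier could consume; real HOG additionally weights bins by gradient magnitude and normalises over blocks.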

    Detecting the Same Pattern in Choreography Balinese Dance Using Convolutional Neural Network and Analysis Suffix Tree

    The Balinese dances that are popular today were created by maestros who have existed since time immemorial. To develop the dances made by an existing maestro, one must know the characteristics of each dance based on the movements used. Digital image processing and string-analysis algorithms can help determine the characteristics of a dance. The algorithm used for the dance analysis is the suffix tree, one of the algorithms that can be used to find patterns in input strings. The string to be analyzed is a series of codes produced by the classifier. The classifier used is a Convolutional Neural Network, which takes an image as its input, performs convolution operations, and passes the result through a fully-connected layer. Using the Convolutional Neural Network method with the AlexNet architecture as the classifier and a confusion matrix to calculate the accuracy on the test set, the best results for the head were obtained with a learning rate of 0.001, 150 epochs, and the RGB color space: 95% accuracy, 88% precision, 78% recall, and an 82% f1-score. For the full body, with a learning rate of 0.01, 150 epochs, and the RGB color space, the accuracy is 85%, precision 79%, recall 64%, and f1-score 69%. For the legs, with a learning rate of 0.001, 150 epochs, and the RGB color space, the accuracy is 92%, precision 84%, recall 59%, and f1-score 65%. The suffix-tree analysis of the ground-truth codes and of the classification results gives similar values, although the movement patterns obtained by the suffix-tree algorithm do not vary much: they are dominated by class A, since class A is the dominant class in each dance.
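    The evaluation step above (a confusion matrix used to derive accuracy, precision, recall, and f1-score) can be sketched as follows; the matrix values are invented for illustration:

```python
import numpy as np

def metrics_from_confusion(cm):
    # Rows = true class, columns = predicted class (macro-averaged metrics).
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()

cm = [[40, 3, 2],   # hypothetical 3-class test set
      [4, 35, 6],
      [1, 5, 30]]
acc, p, r, f1 = metrics_from_confusion(cm)
print(round(acc, 2))
```

Macro averaging, as here, weights every class equally; with the class imbalance the abstract notes (class A dominating), that is why recall and f1 can fall well below the headline accuracy.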

    Discriminative attention-augmented feature learning for facial expression recognition in the wild

    Facial expression recognition (FER) in the wild is challenging due to unconstrained settings such as varying head poses, illumination, and occlusions. In addition, the performance of a FER system degrades significantly due to the large intra-class variation and inter-class similarity of facial expressions in real-world scenarios. To mitigate these problems, we propose a novel approach, the Discriminative Attention-augmented Feature Learning Convolutional Neural Network (DAF-CNN), which learns discriminative expression-related representations for FER. First, we develop a 3D attention mechanism for feature refinement that selectively focuses on attentive channel entries and salient spatial regions of a convolutional neural network feature map. Moreover, a deep metric loss termed the Triplet-Center (TC) loss is incorporated to further enhance the discriminative power of the deeply-learned features with an expression-similarity constraint. It simultaneously minimizes intra-class distance and maximizes inter-class distance to learn features that are both compact and well separated. Extensive experiments have been conducted on two representative facial expression datasets (FER-2013 and SFEW 2.0) to demonstrate that DAF-CNN effectively captures discriminative feature representations and achieves competitive or even superior FER performance compared to state-of-the-art FER methods.
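    A hedged sketch of the Triplet-Center loss idea described above: each embedding is pulled toward its own class centre and pushed at least a margin away from the nearest other-class centre. The embedding dimension, centres, and margin here are illustrative, not the paper's settings:

```python
import numpy as np

def triplet_center_loss(embeddings, labels, centers, margin=1.0):
    # Hinge on (distance to own centre + margin - distance to nearest
    # other centre), averaged over the batch.
    loss = 0.0
    for x, y in zip(embeddings, labels):
        d_pos = np.linalg.norm(x - centers[y])
        d_neg = min(np.linalg.norm(x - c)
                    for j, c in enumerate(centers) if j != y)
        loss += max(0.0, d_pos + margin - d_neg)
    return loss / len(embeddings)

centers = np.array([[0.0, 0.0], [10.0, 0.0]])     # two class centres
embeddings = np.array([[0.1, 0.0], [9.9, 0.1]])   # already near their centres
labels = [0, 1]
print(triplet_center_loss(embeddings, labels, centers))  # -> 0.0
```

When embeddings sit close to their own centre and far from all others, the hinge is inactive, which is exactly the compact-and-separate geometry the loss is meant to enforce.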

    Multi-scale fusion visual attention network for facial micro-expression recognition

    Introduction: Micro-expressions are facial muscle movements that hide genuine emotions. In response to the low intensity of micro-expressions, recent studies have attempted to locate localized areas of facial muscle movement. However, this ignores the feature redundancy caused by inaccurate location of the regions of interest. Methods: This paper proposes a novel multi-scale fusion visual attention network (MFVAN), which learns multi-scale local attention weights to mask regions with redundant features. Specifically, the model extracts multi-scale features of the apex frame in micro-expression video clips with convolutional neural networks. The attention mechanism focuses on the weights of local region features in the multi-scale feature maps. We then mask redundant regions in the multi-scale features and fuse the local features with high attention weights for micro-expression recognition. Self-supervision and transfer learning reduce the influence of individual identity attributes and increase the robustness of the multi-scale feature maps. Finally, the multi-scale classification loss, the mask loss, and the identity-attribute-removal loss are jointly used to optimize the model. Results: The proposed MFVAN method is evaluated on the SMIC, CASME II, SAMM, and 3DB-Combined datasets, achieving state-of-the-art performance. The experimental results show that focusing on local regions at multiple scales contributes to micro-expression recognition. Discussion: The proposed MFVAN model is the first to combine image generation with visual attention mechanisms to solve the combined challenge of individual identity attribute interference and low-intensity facial muscle movements. Meanwhile, the MFVAN model reveals the impact of individual attributes on the localization of local regions of interest. The experimental results show that a multi-scale fusion visual attention network contributes to micro-expression recognition.
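    The masking-and-fusion idea can be sketched as follows; the attention scoring, keep ratio, and shapes are assumptions for illustration, not MFVAN's actual design:

```python
import numpy as np

def mask_and_fuse(feature_maps, keep_ratio=0.5):
    # For each scale: score local regions, mask out the low-attention
    # (redundant) ones, and pool the surviving features; then fuse the
    # per-scale descriptors by concatenation.
    fused = []
    for fmap in feature_maps:                    # one map per scale
        scores = fmap.mean(axis=-1)              # attention score per region
        cutoff = np.quantile(scores, 1 - keep_ratio)
        mask = (scores >= cutoff)[..., None]     # keep high-attention regions
        pooled = (fmap * mask).sum(axis=(0, 1)) / max(mask.sum(), 1)
        fused.append(pooled)
    return np.concatenate(fused)                 # multi-scale descriptor

rng = np.random.default_rng(3)
scales = [rng.random((4, 4, 8)), rng.random((8, 8, 8))]  # two scales
descriptor = mask_and_fuse(scales)
print(descriptor.shape)  # (16,)
```

In the actual network the attention weights are learned and the mask enters a dedicated loss term; here a simple quantile threshold stands in for that mechanism.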

    Changing the brightness of a 2D game based on real facial expressions using the Convolutional Neural Network method

    The success of a game depends on the player's condition. During play, players tend to reveal their state through facial expressions. This state can be exploited in game development by combining the game with the player's facial expressions, as in facial expression recognition (FER). This study discusses the game Flappy Bird, which uses FER to drive changes to a masking layer. During play, a webcam captures the facial image in real time and classifies it into categories using the CNN method. The output of the expression detection is then sent to the game system to change the masking layer, with a darkness level for each class: angry (100%), happy (5%), and neutral (50%). The application ran successfully on both datasets, producing an accuracy of 88% on the CK+ dataset with an error rate of 0.39, and 76% on a custom dataset with an error rate of 0.48, over 100 epochs. The FER detection through the webcam combined with the Flappy Bird application showed a success rate of 78%. The CNN algorithm can produce a low error rate with a high level of accuracy under k-fold cross-validation and ROC testing.
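    The game-side mapping described above (expression class to masking-layer darkness) can be sketched as follows; the percentages follow the abstract, while the function and default behaviour are illustrative:

```python
# Darkness percentage of the masking layer per detected expression,
# as stated in the abstract: angry 100%, happy 5%, neutral 50%.
DARKNESS_BY_EXPRESSION = {"angry": 100, "happy": 5, "neutral": 50}

def masking_layer_alpha(expression: str) -> float:
    """Return the masking-layer opacity in [0, 1] for a detected class.

    Unknown classes fall back to the neutral level (an assumption).
    """
    percent = DARKNESS_BY_EXPRESSION.get(expression, 50)
    return percent / 100.0

print(masking_layer_alpha("happy"))  # -> 0.05
```

In the described system, the CNN's per-frame prediction would be fed into a function like this each time the webcam produces a new classification.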