
    Implementation of Convolutional Neural Networks (CNN) for Facial Expression Classification on the FER-2013 Dataset

    Abstract – Expression recognition is an interesting research topic: given today's technological advances, facial expressions can support several fields such as health and business. Facial expression recognition can be performed by extracting certain features; a Convolutional Neural Network (CNN), in turn, recognizes an object in an image through features it discovers itself during the convolution process. Exploiting this advantage, this study evaluates CNN performance in recognizing happy and sad facial expressions under non-ideal data conditions. On the FER-2013 dataset, the CNN trained with the Adamax optimizer produced fairly good performance, achieving 66% accuracy, compared with Adam, NAdam, and SGD. Keywords – CNN, Facial Expression, FER-2013
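The paper's optimizer comparison hinges on Adamax, a variant of Adam that replaces the second-moment estimate with an infinity-norm running maximum. As a hedged sketch (not the paper's code), the update rule can be written in plain NumPy and applied to a toy quadratic:

```python
import numpy as np

def adamax_step(theta, grad, m, u, t, lr=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adamax update: like Adam, but the second moment is an
    infinity-norm running maximum instead of a squared-gradient average."""
    m = beta1 * m + (1 - beta1) * grad          # biased first-moment estimate
    u = np.maximum(beta2 * u, np.abs(grad))     # infinity-norm second moment
    theta = theta - (lr / (1 - beta1 ** t)) * m / (u + eps)
    return theta, m, u

# Toy example: minimise f(x) = (x - 3)^2 starting from x = 0.
x, m, u = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    grad = 2 * (x - 3)
    x, m, u = adamax_step(x, grad, m, u, t)
print(round(float(x[0]), 1))
```

In the paper the same rule would drive a CNN's weights rather than a scalar; only the parameter shape changes.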

    An examination of applicability of face recognition sensors in public facilities

    Purpose: This study aimed to explore the usability and applicability of face recognition sensors in public spaces to collect customer footfall data, which could then be analysed and evaluated for facility design and planning. Methodology: Nine OMRON sensors were provided for the project and installed at five locations in a public facility for three months. The project was carried out by a local consortium with the cooperation of local technology-based Small and Medium-sized Enterprises (SMEs), business organisations, and a local university. The collected data were analysed using data-mining software to produce a results report with diagrams and to reveal issues and potential for practical application in the future. Findings: It was found that this technology could be applied to further consumer behavioural analysis, for example analysing the relationship between product displays and purchasing behaviour, or the link between consumers’ attributes and their buying behaviour. Moreover, the collected data can be studied further to develop a more detailed analysis of the relationships between data collected at different points of installation. A critical issue concerned how to protect the privacy of the people whose data the sensors collected (i.e., image rights and other privacy-related issues), which suggests the need for guidelines on ethical data collection and raises questions about how to obtain consent from potential participants. Implication and limitation: Although it was acknowledged that this project remained at pilot level and would need to expand before more robust implications and recommendations could be developed, the experimental outcome suggests that face recognition sensors have potential for commercial use. Collecting and analysing customers’ behavioural data can contribute to marketing strategy and planning.
The study also discusses the necessity of enhancing business opportunities through open innovation, in this case via a consortium inviting local technology-oriented SMEs, universities, and other stakeholders to support the local economy. The implications of this study could inspire others to start new businesses and to support the local economy and small enterprises.

    Deception/Truthful Prediction Based on Facial Feature and Machine Learning Analysis

    Automatic deception detection refers to the investigative practices used to determine whether a person is telling the truth or lying. It has been studied extensively, as it can be useful in many real-life scenarios in health, justice, and security systems, and many psychological studies of deception have been reported. Polygraph testing is a currently popular technique for detecting deception, but it requires human intervention and training. In recent times, many machine-learning-based approaches have been applied to detect deception, using modalities such as thermal imaging, brain activity mapping, acoustic analysis, eye tracking, facial micro-expression processing, and linguistic analysis. Machine-learning techniques based on facial feature analysis look like a promising path for automatic deception detection: they work without human intervention and may give better results because they are less affected by race or ethnicity. Moreover, covert facial video recording can be used to find deceit and may capture the real behaviour of deceptive persons. By combining various facial features, such as facial emotion, facial micro-expressions, eye-blink rate, pupil size, and Facial Action Units, better accuracy in deception detection can be achieved.
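The feature-combination idea in the abstract can be illustrated with a minimal sketch: hypothetical per-frame features (emotion score, blink rate, pupil size, action-unit intensity — the names and the synthetic data are assumptions, not the study's data) are concatenated into one vector and fed to a logistic-regression classifier trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused feature vector per frame:
# [emotion score, blink rate, pupil size, AU intensity].
# Synthetic data: "deceptive" samples drawn with a shifted mean (illustration only).
n = 200
truthful  = rng.normal(0.0, 1.0, size=(n, 4))
deceptive = rng.normal(0.8, 1.0, size=(n, 4))
X = np.vstack([truthful, deceptive])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression on the concatenated features, plain gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))      # gradient of the log-loss
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

A real system would extract these features from video (e.g. action-unit detectors) and would need held-out evaluation; the sketch only shows the fusion-then-classify structure.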

    Emotional Intelligence in Robotics: A Scoping Review

    Research suggests that emotionally responsive machines that can simulate empathy increase users’ acceptance of them, as the feeling of affinity towards the machine reduces negative perceptual feedback. In order to endow a robot with emotional intelligence, it must be equipped with sensors capable of capturing users’ emotions (sense), it must appraise the captured emotions to regulate its internal state (compute), and finally it must perform tasks whose actions are regulated by the computed “emotional” state (act). However, despite the impressive progress made in recent years in artificial intelligence, speech recognition and synthesis, computer vision, and many other disciplines directly and indirectly related to artificial emotion recognition and behaviour, we are still far from being able to endow robots with the empathic capabilities of a human being. This article aims to give an overview of the implications of introducing emotional intelligence into robotic constructions by discussing recent advances in emotional intelligence in robotics.

    Integrating Emotion Recognition Tools for Developing Emotionally Intelligent Agents

    Emotionally responsive agents that can simulate emotional intelligence increase users’ acceptance of them, as the feeling of empathy reduces negative perceptual feedback. This has fostered research on emotional intelligence during the last decades, and nowadays numerous cloud-based and local tools for automatic emotion recognition are available, even to inexperienced users. These tools, however, usually focus on the recognition of discrete emotions sensed from a single communication channel, even though multimodal approaches have been shown to have advantages over unimodal ones. The objective of this paper is therefore to present our approach to multimodal emotion recognition, which uses Kalman filters to fuse the outputs of the available discrete emotion recognition tools. The proposed system has been developed modularly, following an evolutionary approach, so that it can be integrated into our digital ecosystems and new emotion recognition sources can be added easily. The results obtained show improvements over unimodal tools when recognizing naturally displayed emotions.
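The paper's Kalman-filter fusion can be illustrated with a scalar sketch: each unimodal tool's score is treated as a noisy measurement of the same underlying emotion intensity, and a measurement update weights each source by its reliability. The scores and variances below are invented for illustration, not taken from the paper:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse the current estimate (x, P)
    with a measurement z whose noise variance is R."""
    K = P / (P + R)           # Kalman gain: how much to trust the new source
    x = x + K * (z - x)       # corrected estimate
    P = (1 - K) * P           # reduced uncertainty after fusion
    return x, P

# Hypothetical per-frame "happiness" scores from two unimodal tools
# (face-based and voice-based; the names and variances are assumptions).
face_score, face_var = 0.70, 0.04
voice_score, voice_var = 0.50, 0.16

# Start from the face estimate and fold in the voice measurement.
x, P = face_score, face_var
x, P = kalman_update(x, P, voice_score, voice_var)
print(round(x, 2), round(P, 3))   # → 0.66 0.032
```

The fused estimate sits closer to the lower-variance (face) source, and the posterior variance is smaller than either input's — the property that lets such a fusion outperform unimodal tools.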

    Facial Expression Recognition Based on Deep Learning Convolution Neural Network: A Review

    Facial emotion processing is one of the most important activities in affective computing, human-computer interaction, machine vision, video game testing, and consumer research. Facial expressions are a form of nonverbal communication, as they reveal a person's inner feelings and emotions. Facial Expression Recognition (FER) has recently received extensive attention, since facial expressions are considered the fastest medium for communicating any kind of information. Facial expression recognition gives a better understanding of a person's thoughts or views, and analysing expressions with currently trending deep learning methods raises the accuracy rate sharply compared to traditional state-of-the-art systems. This article provides a brief overview of the different FER fields of application and the publicly accessible databases used in FER, and surveys the latest reviews of FER using Convolutional Neural Network (CNN) algorithms. Finally, it is observed that all the reviewed studies reached good results, especially in terms of accuracy, though with different rates and on different data sets, which affects the results.

    A Comparative Emotions-detection Review for Non-intrusive Vision-Based Facial Expression Recognition

    Affective computing advocates the development of systems and devices that can recognize, interpret, process, and simulate human emotion. In computing, the field seeks to enhance the user experience by finding less intrusive automated solutions. However, initiatives in this area focus on solitary emotions, which limits the scalability of the approaches. Previous reviews conducted in this area have also focused on solitary emotions, presenting challenges to future researchers when adopting their recommendations. This review aims at highlighting gaps in the application areas of Facial Expression Recognition techniques by conducting a comparative analysis of the emotion detection datasets, algorithms, and results provided in existing studies. The systematic review adopted the PRISMA model and analyzed eighty-three publications. Findings from the review show that different emotions call for different Facial Expression Recognition techniques, which should be analyzed when conducting Facial Expression Recognition. Keywords: Facial Expression Recognition, Emotion Detection, Image Processing, Computer Vision

    Techniques for facial affective computing: A review

    Facial affective computing has gained popularity and become a progressive research area, as it plays a key role in human-computer interaction. However, many researchers lack the right technique to carry out reliable facial affective computing effectively. To address this issue, we present a review of the state-of-the-art artificial intelligence techniques used for facial affective computing. Three research questions were answered by studying and analysing related papers collected from well-established scientific databases according to exclusion and inclusion criteria. The result presents the common artificial intelligence approaches for face detection, face recognition, and emotion detection. The paper finds that the Haar-cascade algorithm has outperformed the other algorithms used for face detection, that Convolutional Neural Network (CNN) based algorithms perform best in face recognition, and that multilayer neural networks give the best performance in emotion detection. A limitation of this research is access to some papers, as some documents require a high subscription cost. Practice implication: The paper provides a comprehensive and unbiased analysis of the existing literature, identifies knowledge gaps and future research directions, and supports evidence-based decision-making. We considered articles and conference papers from well-established databases. The method presents a novel scope for facial affective computing and provides decision support for researchers when selecting approaches for facial affective computing.
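The Haar-cascade detector the review singles out rests on the integral image, which lets the sum over any rectangle (and hence any Haar-like rectangle feature) be computed with four array lookups regardless of rectangle size. A minimal NumPy sketch of that primitive, not OpenCV's implementation:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h x w rectangle with top-left corner (y, x),
    computed from four lookups into the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half
    (responds strongly to vertical edges)."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)

# Toy image: left half dark (0), right half bright (1) -> strong edge response.
img = np.zeros((4, 8))
img[:, 4:] = 1.0
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 8))   # → -16.0
```

A full cascade additionally learns thresholds over thousands of such features with AdaBoost and chains them into early-rejection stages; the constant-time rectangle sum above is what makes that affordable.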

    Hybrid system of emotion evaluation in physiotherapeutic procedures

    Nowadays, the dynamic development of technology allows for the design of systems based on various information sources and their integration into hybrid expert systems. One area of research where such systems are especially helpful is emotion analysis. The sympathetic nervous system controls emotions, and its activity is directly reflected in the electrodermal activity (EDA) signal. The presented study aimed to develop a tool and propose a physiological data set to complement psychological data. The study group consisted of 41 students aged from 19 to 26 years. The research protocol was based on acquisition of the electrodermal activity signal with the Empatica E4 device during three exercises performed in a prototype Disc4Spine system, combined with psychological research methods. Different methods (hierarchical and non-hierarchical) of subsequent data clustering and optimisation in the context of experienced emotions were analysed. The best results were obtained for the k-means classifier during Exercise 3 (80.49%) and for the combination of the EDA signal with negative emotions (80.48%). A comparison of the accuracy of the k-means classification with the independent division made by a psychologist again showed the best results for negative emotions (78.05%).
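The k-means clustering that gave the study its best accuracy can be sketched with plain Lloyd's algorithm on synthetic 2-D EDA features. The feature names and values are invented (not the Disc4Spine data), and deterministic initialisation is used here for reproducibility where a real pipeline would use k-means++:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid recomputation. Centroids start from evenly spaced samples."""
    centroids = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic 2-D EDA features, e.g. [mean skin conductance level, peak rate]:
rng = np.random.default_rng(1)
calm    = rng.normal([2.0, 0.5], 0.2, size=(40, 2))
aroused = rng.normal([5.0, 2.0], 0.2, size=(40, 2))
X = np.vstack([calm, aroused])

labels, centroids = kmeans(X, k=2)
# The two well-separated synthetic groups land in distinct clusters.
print(len(set(labels[:40].tolist())) == 1 and len(set(labels[40:].tolist())) == 1)
```

In the study, the cluster assignments were then compared against a psychologist's independent division to obtain the reported accuracies.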