11 research outputs found

    FMX (EEPIS FACIAL EXPRESSION MECHANISM EXPERIMENT): FACIAL EXPRESSION RECOGNITION USING A BACKPROPAGATION NEURAL NETWORK

    In the near future, robots are expected to interact with humans. Communication takes many forms: not only spoken words, but also body language, including facial expressions. In human communication, facial expressions are used to convey emotions such as happiness, sadness, anger, shock, disappointment, or calm. This final project focuses on building a robot consisting only of a head that can produce a variety of facial expressions, like a human being. The Face Humanoid Robot is divided into several subsystems: an image processing subsystem, a hardware subsystem, and a controller subsystem. In the image processing subsystem, a webcam acquires image data that is processed by a computer; the program is written with the Microsoft Visual C compiler together with the functions of the Open Source Computer Vision Library (OpenCV). This subsystem recognizes human facial expressions: image processing reveals the pattern of an object, and a backpropagation neural network is used to recognize that pattern. The hardware subsystem is the humanoid robot face itself. The controller subsystem is a single ATmega128 microcontroller together with a camera that can capture images at a distance of 50 to 120 cm. The robot operates as follows: images are captured by the webcam; the computer processes them to determine the human facial expression; the result is sent to the controller subsystem over serial communication; and the microcontroller then orders the hardware subsystem to reproduce that facial expression. The result of this final project is that all of the subsystems can be integrated into a robot that responds to human expressions. The method used is simple but proves quite capable of recognizing human facial expressions. Keywords: OpenCV, Backpropagation Neural Network, Humanoid Robot
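The recognition step described in this abstract, a backpropagation neural network trained on image-derived feature vectors, can be sketched roughly as follows. This is an illustrative toy, not the project's code: the 4-dimensional "feature vectors" and the two expression classes are synthetic stand-ins for features that would come out of the OpenCV pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4-dim feature vectors for two made-up
# expression classes (e.g. "neutral" vs "happy"), 20 samples each.
X = rng.normal(0.0, 0.3, (40, 4)) + np.repeat([[0, 0, 0, 0], [1, 1, 1, 1]], 20, axis=0)
y = np.repeat([0, 1], 20)
T = np.eye(2)[y]                       # one-hot targets

# One hidden layer with sigmoid activations, trained by backpropagation.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                   # plain batch gradient descent
    H = sigmoid(X @ W1 + b1)           # forward pass, hidden layer
    O = sigmoid(H @ W2 + b2)           # forward pass, output layer
    dO = (O - T) * O * (1 - O)         # output deltas (squared-error loss)
    dH = (dO @ W2.T) * H * (1 - H)     # deltas backpropagated to hidden layer
    W2 -= 0.5 * H.T @ dO / len(X); b2 -= 0.5 * dO.mean(axis=0)
    W1 -= 0.5 * X.T @ dH / len(X); b1 -= 0.5 * dH.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = (pred == y).mean()
```

In the actual system the predicted class would then be sent over the serial link to the microcontroller, which drives the face mechanism.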

    A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies but also with quantitative methods, in order to raise recognition accuracy. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model, used for tuning the membership functions and increasing the accuracy.
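The tuning step described here, a genetic algorithm adjusting membership-function parameters, can be sketched as follows. Everything concrete is an assumption for illustration, not taken from the paper: a single 1-D feature, Gaussian membership functions whose centers are the tuned parameters, and small GA settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D feature (say, a mouth-corner displacement) for two
# expression classes, 50 samples each.
x = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.8, 0.05, 50)])
y = np.repeat([0, 1], 50)

def accuracy(centers):
    # One Gaussian membership function per class; classify by max membership.
    mu = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.02)
    return (mu.argmax(axis=1) == y).mean()

# Tiny genetic algorithm tuning the two membership-function centers.
pop = rng.uniform(0, 1, (20, 2))
for _ in range(30):
    fit = np.array([accuracy(c) for c in pop])
    parents = pop[np.argsort(fit)[-10:]]                   # keep fittest half
    kids = parents[rng.integers(0, 10, (10, 2)), [0, 1]]   # uniform crossover
    kids += rng.normal(0, 0.05, kids.shape)                # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([accuracy(c) for c in pop])]
```

The fitness function here is classification accuracy, so selection directly pushes the membership functions toward the placement that separates the two classes, which is the role the paper assigns to the genetic algorithm.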

    A Review on Facial Expression Recognition Techniques

    Facial expression has been a topic of active research over the past few decades. Recognizing and extracting various emotions from facial expressions, and validating those emotions, has become very important in human-computer interaction. Interpreting such human expressions remains difficult, and much research is still required into how they relate to human affect. Apart from human-computer interfaces, other applications include awareness systems, medical diagnosis, surveillance, law enforcement, automated tutoring systems, and many more. In recent years, different techniques have been put forward for developing automated facial expression recognition systems. This paper presents a quick survey of some of these techniques. A comparative study is carried out using various feature extraction techniques. We define a taxonomy of the field and cover all the steps from face detection to facial expression classification.

    FACIAL EXPRESSION RECOGNITION BASED ON CULTURAL PARTICLE SWARM OPTIMIZATION AND SUPPORT VECTOR MACHINE

    Facial expressions remain a significant component of human-to-human interaction and have the potential to play a correspondingly essential part in human-computer interaction. The Support Vector Machine (SVM), by virtue of its application in various domains such as bioinformatics, pattern recognition, and other nonlinear problems, has very good generalization capability. However, various studies have shown that its performance drops on problems of large complexity: it consumes a large amount of memory and time as the dataset grows. Optimizing the SVM parameters can improve its performance. Therefore, a Cultural Particle Swarm Optimization (CPSO) technique is developed to improve the performance of SVM in a facial expression recognition system. CPSO is a hybrid of the Cultural Algorithm (CA) and Particle Swarm Optimization (PSO). Six facial expression images each from forty individuals were locally acquired; one hundred and seventy-five images were used for training and the remaining sixty-five for testing. The results showed a training time of 16.32 seconds, a false positive rate of 0%, precision of 100%, and an overall accuracy of 92.31% at 250-by-250-pixel resolution. These results establish that the CPSO-SVM technique is computationally efficient, with better precision, accuracy, and false positive rate, and can construct efficient and realistic facial expression features that would support a more reliable security surveillance system in any security-sensitive organization.
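The PSO half of the hybrid, used here to tune SVM parameters, follows the standard particle-swarm velocity update. A minimal sketch, with a toy quadratic standing in for the SVM validation error over a 2-D parameter vector such as (log C, log gamma); the objective, bounds, and coefficients are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in objective: in the paper this would be the SVM validation error
# as a function of its two hyperparameters; here a toy bowl with its
# minimum at (1.0, -2.0).
def objective(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

# Minimal PSO: each particle is pulled toward its personal best and the
# swarm's global best, with inertia 0.7 and acceleration coefficients 1.5.
pos = rng.uniform(-5, 5, (15, 2))
vel = np.zeros((15, 2))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, 15, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([objective(p) for p in pos])
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

The cultural-algorithm layer of CPSO would additionally maintain a belief space that constrains and guides these updates; that part is omitted here.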

    Design of a Wearable Eye-Movement Detection System Based on Electrooculography Signals and Its Experimental Validation.

    In the assistive research area, human-computer interface (HCI) technology is used to help people with disabilities by conveying their intentions and thoughts to the outside world. Many HCI systems based on eye movement have been proposed to assist people with disabilities. However, due to the complexity of the necessary algorithms and the difficulty of hardware implementation, there are few general-purpose designs that consider practicality and stability in real life. Therefore, to solve these limitations and problems, an HCI system based on electrooculography (EOG) is proposed in this study. The proposed classification algorithm provides eye-state detection, including the fixation, saccade, and blinking states. Moreover, this algorithm can distinguish among ten kinds of saccade movements (i.e., up, down, left, right, farther left, farther right, up-left, down-left, up-right, and down-right). In addition, we developed an HCI system based on an eye-movement classification algorithm. This system provides an eye-dialing interface that can be used to improve the lives of people with disabilities. The results illustrate the good performance of the proposed classification algorithm. Moreover, the EOG-based system, which can detect ten different eye-movement features, can be utilized in real-life applications
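A drastically simplified sketch of the kind of rule that can separate the ten saccade classes from two EOG channels: threshold the horizontal and vertical deflection amplitudes, with a second, larger threshold distinguishing the "farther" classes. The channel encoding, units, and thresholds are assumptions for illustration, not the paper's algorithm.

```python
def classify_saccade(h, v, t=0.5):
    """Map horizontal (h) and vertical (v) EOG deflections, in arbitrary
    units, to one of the ten saccade classes or fixation."""
    # Pure horizontal movement: a second threshold marks the "farther" tier.
    if abs(v) <= t and abs(h) > t:
        side = "right" if h > 0 else "left"
        return f"farther {side}" if abs(h) > 2 * t else side
    hs = "right" if h > t else "left" if h < -t else ""
    vs = "up" if v > t else "down" if v < -t else ""
    if hs and vs:                      # diagonal movements
        return f"{vs}-{hs}"
    return vs or "fixation"            # pure vertical, or no movement
```

A real EOG pipeline would first filter the signals and segment fixation, saccade, and blink states before applying any such direction rule; this sketch covers only the final labeling step.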

    Automatic Recognition of Facial Displays of Unfelt Emotions

    Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with the underlying emotion states. We show that, overall, the problem of recognizing whether facial movements are expressions of authentic emotions or not can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. The performance of the proposed model shows that, on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
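The aggregation idea, pooling per-frame features along fiducial-point trajectories into a single spatio-temporal descriptor, can be illustrated with simple mean pooling over time. This is a stand-in for the learned aggregation in the paper, and the array shapes are hypothetical:

```python
import numpy as np

def aggregate_trajectories(features):
    """Pool per-frame landmark features over time.

    features: array of shape (frames, landmarks, dim), one feature vector
    per tracked fiducial point per frame. Returns one (landmarks, dim)
    descriptor, i.e. one pooled vector per trajectory.
    """
    return features.mean(axis=0)

# Toy input: 4 frames, 3 tracked landmarks, 2-dim features per landmark.
frames = np.arange(24.0).reshape(4, 3, 2)
desc = aggregate_trajectories(frames)
```

Replacing the mean with a learned temporal aggregation is what lets the model pick up the short, subtle responses that distinguish felt from unfelt expressions.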

    Challenges and possibilities of facial recognition software as an authentication tool in virtual learning environments

    This article explores the possibilities that a facial recognition tool could offer for guaranteeing user identity in virtual learning environments, examining the aspects of teaching it could improve as well as the challenges it poses for the user experience. It also analyzes how students might receive the deployment of such an application. To this end, a pilot test was carried out, based on a survey study with 67 students who used this tool within Moodle. The results show a positive assessment, with scores between 5.54 and 6.15 on a seven-point Likert scale.