
    A novel facial expression recognition method using bi-dimensional EMD based edge detection

    Facial expressions provide an important channel of nonverbal communication. Facial recognition techniques detect people’s emotions from their facial expressions and have found applications in technical fields such as Human-Computer Interaction (HCI) and security monitoring. Technical applications generally require fast processing and decision making, so it is imperative to develop innovative recognition methods that can detect facial expressions effectively and efficiently. Traditionally, human facial expressions are recognized using standard images, and existing recognition methods require subjective expertise and high computational costs. This thesis proposes a novel method for facial expression recognition using image edge detection based on Bi-dimensional Empirical Mode Decomposition (BEMD). In this research, a BEMD-based edge detection algorithm was developed, a facial expression measurement metric was created, and intensive database testing was conducted. The recognition success rates suggest that the proposed method could be a potential alternative to traditional methods for human facial expression recognition, at substantially lower computational cost. Furthermore, a possible blind-detection technique was proposed as a result of this research. Initial detection results suggest great potential of the proposed method for blind detection, which may lead to even more efficient techniques for facial expression recognition.
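
    The thesis's algorithm is not reproduced in this listing, but the core idea can be sketched: BEMD sifts an image into bi-dimensional intrinsic mode functions (BIMFs), and the first BIMF concentrates the highest-frequency content, where edges live. The sketch below is a minimal approximation, using smoothed morphological envelopes as a cheap stand-in for the surface interpolation a full BEMD performs; the window size, sifting count, and threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, uniform_filter

def first_bimf(image, window=7, n_sift=5):
    """Crude first bi-dimensional IMF: repeatedly subtract the local mean
    surface, estimated from smoothed morphological max/min envelopes."""
    h = image.astype(float)
    for _ in range(n_sift):
        upper = uniform_filter(grey_dilation(h, size=window), window)
        lower = uniform_filter(grey_erosion(h, size=window), window)
        h = h - (upper + lower) / 2.0
    return h

def bemd_edges(image, thresh_ratio=0.2):
    """Edge map from the magnitude of the first BIMF."""
    mag = np.abs(first_bimf(image))
    return mag > thresh_ratio * mag.max()
```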

    Emotion Recognition by Using Bimodal Fusion

    In order to improve the single-mode emotion recognition rate, a bimodal fusion method based on speech and facial expression was proposed. Here, the emotion recognition rate is defined as the ratio of the number of images correctly recognized to the number of input images. Single-mode emotion recognition refers to emotion recognition through either speech or facial expression alone. To increase the rate, we combine these two methods using bimodal fusion. For emotion detection through facial expression we use an adaptive sub-layer compensation (ASLC) based facial edge detection method, and for emotion detection through speech we use the well-known SVM. Bimodal emotion detection is then obtained by probability analysis.
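
    The abstract does not spell out the probability analysis; below is a minimal late-fusion sketch, assuming each modality yields per-class probabilities and the modality weight is tunable.

```python
import numpy as np

def fuse(p_face, p_speech, w_face=0.5):
    """Weighted-sum (late) fusion of per-class probabilities from the
    facial-expression classifier and the speech SVM."""
    fused = (w_face * np.asarray(p_face, float)
             + (1 - w_face) * np.asarray(p_speech, float))
    return fused / fused.sum()

# Illustrative probabilities over (happy, sad, angry, neutral):
p_face = [0.55, 0.10, 0.25, 0.10]
p_speech = [0.30, 0.15, 0.45, 0.10]
emotion = int(np.argmax(fuse(p_face, p_speech)))   # -> 0 (happy)
```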

    DETECTION OF SOMEONE'S CHARACTER BASED ON FACE SHAPE USING THE CANNY METHOD

    Character is the unique way individuals interact when building relationships. Interacting with people requires us to be face to face, and the face is a very important element in communication: from the face we can see a person's expression and facial pattern, from which their character can be known. Because the face is considered a reflection of a person's character, a science called physiognomy has emerged. Physiognomy is usually known only to experts, so technology can help provide an easier way. The solution is to use a camera to capture an image of the face whose character is to be understood and then apply digital image processing (PCD). The PCD pipeline comprises several processing steps that extract information from the image. One of them is Canny edge detection, which identifies the boundary lines of objects in the image. After Canny edge detection is completed, the next step is to recognize the face pattern by adding the Euclidean distance method, so that the face shape pattern can be recognized. In tests of the facial recognition method using Canny edge detection and Euclidean distance on 40 facial images, the success rate was 80%.
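
    A minimal sketch of the two stages named above, using OpenCV's Canny detector and nearest-template matching by Euclidean distance; the hysteresis thresholds, signature size, and face-shape labels are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def edge_signature(gray, size=64):
    """Canny edge map, resized to a fixed-length feature vector."""
    edges = cv2.Canny(gray, 100, 200)   # hysteresis thresholds (illustrative)
    return cv2.resize(edges, (size, size)).astype(float).ravel() / 255.0

def nearest_shape(gray, templates):
    """Label of the stored template signature closest in Euclidean distance."""
    sig = edge_signature(gray)
    return min(templates, key=lambda label: np.linalg.norm(sig - templates[label]))

# templates, e.g. {"oval": vec, "round": vec, "square": vec}, would be built
# by averaging edge_signature() over reference images of each face shape.
```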

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual’s inner world. It is, therefore, possible to determine a person’s attitudes, and the effects of others’ behaviour on their deeper feelings, by examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition. This recognition holds advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. An AU activation is a set of local, individual facial muscle movements that occur in unison to constitute a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting static and dynamic features, in both hand-crafted and deep learning representations, from each static image of a video. This confirmed the superior ability of a pretrained model, which delivers a leap in performance. Next, temporal modelling was investigated to detect the underlying temporal variation phases using supervised and unsupervised methods on dynamic sequences. During these processes, the importance of stacking dynamic features on top of static ones was discovered when encoding deep features to learn temporal information, combining the spatial and temporal schemes simultaneously. This study also found that fusing spatial and temporal features gives more long-term temporal pattern information. Moreover, we hypothesised that an unsupervised method would enable the learning of invariant information from dynamic textures. Recently, fresh cutting-edge developments have been achieved by approaches based on Generative Adversarial Networks (GANs). In the second part of this thesis, we propose a model based on an unsupervised DCGAN for facial feature extraction and classification, to achieve the following: the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild), and the recognition of emotion categories and AUs, in an attempt to resolve the problem of recognising the seven static emotion classes in the wild. Thorough experimentation in the proposed cross-database setting demonstrates that this approach can improve generalization. Additionally, we show that the features learnt by the DCGAN are poorly suited to encoding facial expressions observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial image videos rich in variations and distribution of facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters of a 3D Morphable Model jointly with a back-end classifier.
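
    As a rough illustration of "stacking dynamic on top of static" deep features (a sketch under assumptions, not the thesis's exact pipeline), per-frame features from a pretrained ResNet can be augmented with frame-to-frame differences before temporal modelling:

```python
import torch
import torchvision.models as models

# Pretrained ResNet-18 as the static per-frame extractor; the fully
# connected head is replaced so the 512-d pooled features are exposed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def video_features(frames):             # frames: (T, 3, 224, 224) tensor
    static = backbone(frames)           # (T, 512) static features
    dynamic = static[1:] - static[:-1]  # (T-1, 512) temporal deltas
    return torch.cat([static[1:], dynamic], dim=1)   # (T-1, 1024) stacked
```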

    Pose-disentangled Contrastive Learning for Self-supervised Facial Representation

    Self-supervised facial representation has recently attracted increasing attention due to its ability to perform face understanding without relying heavily on large-scale annotated datasets. However, current contrastive-based self-supervised learning still performs unsatisfactorily for learning facial representation. More specifically, existing contrastive learning (CL) tends to learn pose-invariant features that cannot depict the pose details of faces, compromising learning performance. To overcome this limitation of CL, we propose a novel Pose-disentangled Contrastive Learning (PCL) method for general self-supervised facial representation. Our PCL first devises a pose-disentangled decoder (PDD) with a carefully designed orthogonalizing regulation, which disentangles pose-related features from face-aware features; pose-related and pose-unrelated facial information can therefore be processed in separate subnetworks without affecting each other's training. Furthermore, we introduce a pose-related contrastive learning scheme that learns pose-related information based on data augmentation of the same image, delivering more effective face-aware representation for various downstream tasks. We conducted a comprehensive linear evaluation on three challenging downstream facial understanding tasks, i.e., facial expression recognition, face recognition, and AU detection. Experimental results demonstrate that our method outperforms cutting-edge contrastive and other self-supervised learning methods by a large margin.
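
    The paper's exact losses are not reproduced in this abstract; below is a plausible sketch of the two ingredients it describes, an orthogonality penalty between pose-related and face-aware features plus a standard InfoNCE contrastive term per branch, assuming positive pairs are aligned by batch index.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(pose_feat, face_feat):
    """Push pose-related and face-aware features toward orthogonality."""
    p = F.normalize(pose_feat, dim=1)
    f = F.normalize(face_feat, dim=1)
    return (p * f).sum(dim=1).abs().mean()     # mean |cosine similarity|

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss between two augmented views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)    # diagonal pairs are positives
```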

    Face detection in profile views using fast discrete curvelet transform (FDCT) and support vector machine (SVM)

    Human face detection is an indispensable component in face processing applications, including automatic face recognition, security surveillance, facial expression recognition, and the like. This paper presents a profile face detection algorithm based on curvelet features, as the curvelet transform offers good directional representation and can capture edge information in human faces from different angles. First, a simple skin color segmentation scheme based on the HSV (Hue - Saturation - Value) and YCgCr (luminance - green chrominance - red chrominance) color models is used to extract skin blocks. The segmentation scheme utilizes only the S and CgCr components, and is therefore luminance independent. Features extracted from three frequency bands of the curvelet decomposition are used to detect a face in each block. A support vector machine (SVM) classifier is trained for the classification task. In the performance test, the results showed that the proposed algorithm can detect profile faces in color images with a good detection rate and a low misdetection rate.
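
    A minimal sketch of the luminance-independent skin segmentation step, assuming the common YCgCr definition of de Dios and Garcia; the threshold ranges are illustrative placeholders rather than the paper's tuned values.

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Skin mask from S (HSV) and Cg/Cr only, ignoring luminance."""
    s = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 1].astype(float) / 255.0
    b, g, r = [c.astype(float) / 255.0 for c in cv2.split(bgr)]
    cg = 128 - 81.085 * r + 112 * g - 30.915 * b   # green chrominance
    cr = 128 + 112 * r - 93.786 * g - 18.214 * b   # red chrominance
    # Illustrative skin ranges; the paper tunes its own thresholds.
    return ((0.1 < s) & (s < 0.9) & (85 < cg) & (cg < 128)
            & (135 < cr) & (cr < 180))
```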

    FMX (EEPIS FACIAL EXPRESSION MECHANISM EXPERIMENT): FACIAL EXPRESSION RECOGNITION USING A BACKPROPAGATION NEURAL NETWORK

    In the near future, it is expected that robots will be able to interact with humans. Communication itself takes many forms: not only words, but also body language, including facial expressions. Facial expressions in human communication are always used to show human emotions, whether happy, sad, angry, shocked, disappointed, or even relaxed. This final project focused on how to build a robot consisting only of a head that can make a variety of facial expressions like a human being. This Face Humanoid Robot is divided into several subsystems: an image processing subsystem, a hardware subsystem, and a controller subsystem. In the image processing subsystem, a webcam is used for image data acquisition, processed by a computer. This process uses the Microsoft Visual C compiler with the functions of the Open Source Computer Vision Library (OpenCV) installed. The image processing subsystem is used to recognize human facial expressions; with image processing, the pattern of an object can be seen, and a backpropagation neural network is used to recognize the object pattern. The hardware subsystem is the Face Humanoid Robot itself. The controller subsystem is a single ATmega128 microcontroller and a camera that can capture images at a distance of 50 to 120 cm. The robot operates as follows: images are captured by the webcam; from the images processed by the computer, the human facial expression is obtained; the result is sent to the controller subsystem via serial communication; and the microcontroller then orders the hardware subsystem to make that facial expression. The result of this final project is that all of the subsystems can be integrated into a robot that can respond to human expressions. The method used is simple but appears quite capable of recognizing human facial expressions. Keywords: OpenCV, Backpropagation Neural Network, Humanoid Robot
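
    A minimal sketch of the recognition side, assuming OpenCV's built-in backpropagation MLP (cv2.ml.ANN_MLP); the layer sizes, feature encoding, and six-expression output are illustrative, not the project's exact network.

```python
import cv2
import numpy as np

# Backpropagation neural network via OpenCV's ml module.
nn = cv2.ml.ANN_MLP_create()
nn.setLayerSizes(np.array([32 * 32, 64, 6]))   # input, hidden, 6 expressions
nn.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
nn.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)

def to_feature(gray):
    """Flatten a grayscale face crop into a normalized row vector."""
    return cv2.resize(gray, (32, 32)).astype(np.float32).reshape(1, -1) / 255.0

# Training: nn.train(X, cv2.ml.ROW_SAMPLE, Y) with float32 one-hot rows Y.
# Prediction: _, out = nn.predict(to_feature(face)); label = int(out.argmax())
# The predicted label would then be sent to the ATmega128 over serial.
```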