
    Muscle Sensor Model Using Small Scale Optical Device for Pattern Recognitions

    A new sensor system for measuring the contraction and relaxation of muscles using a PANDA ring resonator is proposed. The small-scale optical device is designed and configured so that the coupling between changes in the device's optical phase shift and human facial muscle movement can be exploited, establishing a relationship between optical phase shift and muscle movement. Simulations with the Optiwave and MATLAB programs show that the contraction and relaxation of muscles can be measured from the muscle movements, so that a unique pattern for each individual muscle movement in a facial expression can be established. The obtained simulation results, that is, the interference signal patterns, can be used to form various recognition patterns, which are useful for human-machine interface and human-computer interface applications and are discussed in detail.
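    The relationship between muscle-induced phase shift and the measured interference signal can be illustrated with a minimal numerical sketch, written here in Python rather than the authors' Optiwave/MATLAB setup. It assumes a simple interferometric model in which muscle movement changes the effective optical path length of the ring; the wavelength, fringe visibility and path-change values are illustrative assumptions, not parameters from the paper.

        import numpy as np

        # Assumed toy model: muscle movement changes the effective optical
        # path length of the ring, producing a phase shift that modulates
        # the interference signal I = I0 * (1 + V * cos(phi)).
        WAVELENGTH = 1.55e-6        # operating wavelength in metres (assumed)
        I0, VISIBILITY = 1.0, 0.9   # nominal intensity and fringe visibility (assumed)

        def phase_shift(delta_L):
            """Phase shift caused by an optical path-length change delta_L (m)."""
            return 2.0 * np.pi * delta_L / WAVELENGTH

        def interference_signal(delta_L):
            """Detected intensity for a given path-length change."""
            return I0 * (1.0 + VISIBILITY * np.cos(phase_shift(delta_L)))

        # Simulate one contraction/relaxation cycle as a smooth path change;
        # different movements yield different fringe patterns, which is what
        # the paper exploits for pattern recognition.
        t = np.linspace(0.0, 1.0, 500)              # one movement cycle (s)
        delta_L = 2e-6 * np.sin(np.pi * t) ** 2     # up to 2 um of path change (assumed)
        pattern = interference_signal(delta_L)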

    A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction

    With the advance of Artificial Intelligence, humanoid robots have started to interact with ordinary people on the basis of a growing understanding of psychological processes. Accumulating evidence in Human-Robot Interaction (HRI) suggests that research is focusing on emotional communication between human and robot, so as to create social perception, cognition, the desired interaction and sensation. Furthermore, robots need to perceive human emotion and optimize their behavior to help and interact with human beings in various environments. The most natural way to recognize basic emotions is to extract sets of features from human speech, facial expression and body gesture. A system for recognizing emotions based on speech analysis and facial feature extraction can have interesting applications in Human-Robot Interaction. The Human-Robot Interaction ontology thus explains how knowledge from the fundamental sciences is applied: physics (sound analysis), mathematics (face detection and perception), philosophical theory (behavior) and robotics. In this project, we carry out a study to recognize the basic emotions (sadness, surprise, happiness, anger, fear and disgust), and we propose a methodology and a software program for classifying emotions based on speech analysis and facial feature extraction. The speech-analysis phase investigated the appropriateness of using acoustic (pitch value, pitch peak, pitch range, intensity and formant) and phonetic (speech rate) properties of emotive speech with the freeware program PRAAT, and consists of generating and analyzing a graph of the speech signal. The proposed architecture investigated the appropriateness of analyzing emotive speech with minimal use of signal-processing algorithms. Thirty participants in the experiment had to repeat five sentences in English (with durations typically between 0.40 s and 2.5 s) in order to extract data on pitch (value, range and peak) and rising-falling intonation. Pitch alignments (peak, value and range) were evaluated and the results were compared with intensity and speech rate. The facial-feature-extraction phase uses a mathematical formulation (Bézier curves) and geometric analysis of the facial image, based on measurements of a set of Action Units (AUs), to classify the emotion. The proposed technique consists of three steps: (i) detecting the facial region within the image, (ii) extracting and classifying the facial features, and (iii) recognizing the emotion. The new data were then merged with reference data in order to recognize the basic emotion. Finally, we combined the two proposed algorithms (speech analysis and facial expression) to design a hybrid technique for emotion recognition. This technique has been implemented in a software program that can be employed in Human-Robot Interaction. The efficiency of the methodology was evaluated by experimental tests on 30 individuals (15 female and 15 male, 20 to 48 years old) from different ethnic groups, namely: (i) ten European adults, (ii) ten Asian (Middle East) adults and (iii) ten American adults. Ultimately, the proposed technique made it possible to recognize the basic emotion in most cases.
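    As a concrete illustration of the speech features named above (pitch value, peak and range), here is a minimal self-contained Python sketch. The paper extracts these values with PRAAT; the autocorrelation pitch tracker below is a generic stand-in, not PRAAT's algorithm, and the frame sizes and voicing threshold are assumptions.

        import numpy as np

        def frame_pitch(frame, sr, fmin=75.0, fmax=500.0):
            """Estimate F0 of one frame by autocorrelation; 0.0 if unvoiced."""
            frame = frame - frame.mean()
            ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            lo, hi = int(sr / fmax), int(sr / fmin)
            if hi >= len(ac):
                return 0.0
            lag = lo + np.argmax(ac[lo:hi])
            # Crude voicing decision: the lag peak must be strong enough.
            return sr / lag if ac[lag] > 0.3 * ac[0] else 0.0

        def pitch_features(signal, sr, frame_ms=40, hop_ms=10):
            """Pitch value (mean), peak (max) and range over voiced frames."""
            n, h = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
            f0 = [frame_pitch(signal[i:i + n], sr)
                  for i in range(0, len(signal) - n, h)]
            voiced = np.array([f for f in f0 if f > 0.0])
            if voiced.size == 0:
                return {"value": 0.0, "peak": 0.0, "range": 0.0}
            return {"value": float(voiced.mean()),
                    "peak": float(voiced.max()),
                    "range": float(voiced.max() - voiced.min())}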

    Automatic facial expression recognition based on spatiotemporal descriptors

    Machine analysis of facial expressions is one of the most challenging problems in Human-Computer Interaction (HCI). Naturally, facial expressions depend on subtle movements of the facial muscles that reveal emotional states. After studying the relations between the basic expressions and the corresponding facial deformation models, we propose two new textons, VTB and moments on the spatiotemporal plane, to describe the transformation of the human face during facial expressions. These descriptors aim at capturing both general shape changes and motion-texture details, so that modelling the temporal behaviour of a facial expression captures the dynamic deformation of the facial components. Finally, an SVM-based system is used to efficiently recognize the expression in each single image of a sequence, and the probabilities over all frames are then used to predict the class of the current sequence. The experimental results are evaluated on both the Cohn–Kanade and MMI databases; by comparison with other methods, the effectiveness of our method is clearly demonstrated.
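    The per-frame classification and sequence-level probability voting described above can be sketched as follows in Python with scikit-learn. The descriptor extraction (the VTB and moment textons) is not reproduced; random placeholder vectors stand in for the real spatiotemporal descriptors, and the 64-dimensional size is an assumption.

        import numpy as np
        from sklearn.svm import SVC

        EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

        # Train a probabilistic SVM on single-frame descriptors (placeholders
        # here; in the paper these would be the VTB/moment textons).
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(600, 64))
        y_train = np.tile(np.arange(len(EMOTIONS)), 100)
        clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

        def classify_sequence(frame_descriptors):
            """Average per-frame class probabilities, then pick the winner."""
            probs = clf.predict_proba(np.asarray(frame_descriptors))
            return EMOTIONS[int(np.argmax(probs.mean(axis=0)))]

        # Example: a 30-frame sequence of 64-dimensional frame descriptors.
        print(classify_sequence(rng.normal(size=(30, 64))))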