523 research outputs found

    Improved facial feature fitting for model based coding and animation

    Get PDF
    EThOS - Electronic Theses Online Service (United Kingdom)

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Get PDF
    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to manipulate expressions directly and see immediate results. Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden computer graphics and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head, as well as relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean position of the vertices in each group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained
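
    As an illustration of the expression-transfer step sketched at the end of this abstract, the following minimal Python/NumPy example applies per-vertex displacement vectors from a source expression to a target mesh, assuming the meshes have already been brought into vertex-to-vertex correspondence. The function name and the per-vertex scaling heuristic are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def transfer_expression(src_neutral, src_expression, tgt_neutral):
    """Apply per-vertex displacement vectors from a source expression
    to a target face mesh with the same topology (vertex i corresponds
    to the same facial location in all three (N, 3) arrays)."""
    # Displacement of each source vertex caused by the expression.
    displacements = src_expression - src_neutral
    # Crude per-vertex scale to compensate for size differences between
    # the faces (a stand-in for the spatial constraints the thesis uses).
    src_r = np.linalg.norm(src_neutral - src_neutral.mean(axis=0), axis=1)
    tgt_r = np.linalg.norm(tgt_neutral - tgt_neutral.mean(axis=0), axis=1)
    scale = (tgt_r / np.maximum(src_r, 1e-8))[:, None]
    return tgt_neutral + displacements * scale

# Toy usage with random stand-ins for real meshes.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(1000, 3))
expression = neutral + rng.normal(scale=0.01, size=(1000, 3))
target = neutral * 1.2            # larger face, same topology
print(transfer_expression(neutral, expression, target).shape)  # (1000, 3)
```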

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Get PDF
    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts, occlusion of the pupil boundary by the eyelid and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions like the suitability of using synthetic datasets to improve eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
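
    For contribution (I), one minimal way to combine iris-texture velocity with absolute pupil position is a complementary filter: integrate the precise velocity estimates and correct the accumulating drift with the noisier position samples. The sketch below is a hedged illustration of that idea, not the dissertation's actual method; the 500 Hz sampling rate, the gain alpha, and all names are assumptions.

```python
import numpy as np

def fuse_gaze(positions, velocities, dt=1 / 500, alpha=0.95):
    """Complementary filter: dead-reckon gaze from frame-to-frame
    velocity (e.g. estimated from iris texture) and correct the
    accumulating drift with absolute, but noisier, pupil positions.
    positions, velocities: (T, 2) arrays in degrees and deg/s."""
    fused = np.empty_like(positions)
    fused[0] = positions[0]
    for t in range(1, len(positions)):
        predicted = fused[t - 1] + velocities[t] * dt   # integrate velocity
        fused[t] = alpha * predicted + (1 - alpha) * positions[t]
    return fused

# Toy demo: smooth pursuit along x, noisy position samples at 500 Hz.
rng = np.random.default_rng(0)
t = np.arange(1000) / 500
true_gaze = np.stack([5 * np.sin(2 * np.pi * 0.5 * t), np.zeros_like(t)], axis=1)
noisy_pos = true_gaze + rng.normal(scale=0.5, size=true_gaze.shape)
true_vel = np.gradient(true_gaze, 1 / 500, axis=0)
fused = fuse_gaze(noisy_pos, true_vel)
print("raw RMS error:  ", np.sqrt(((noisy_pos - true_gaze) ** 2).mean()))
print("fused RMS error:", np.sqrt(((fused - true_gaze) ** 2).mean()))
```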

    EMG-to-Speech: Direct Generation of Speech from Facial Electromyographic Signals

    Get PDF
    The general objective of this work is the design, implementation, improvement and evaluation of a system that uses surface electromyographic (EMG) signals and directly synthesizes an audible speech output: EMG-to-speech.

    Unifying the Visible and Passive Infrared Bands: Homogeneous and Heterogeneous Multi-Spectral Face Recognition

    Get PDF
    Face biometrics leverages tools and technology in order to automate the identification of individuals. In most cases, biometric face recognition (FR) can be used for forensic purposes, but there remains the issue of integrating the technology into the legal system of the court. The biggest challenge with the acceptance of the face as a modality used in court is the reliability of such systems under varying pose, illumination and expression, which has been an active and widely explored area of research over the last few decades (e.g. same-spectrum or homogeneous matching). The heterogeneous FR problem, which deals with matching face images from different sensors, should be examined for the benefit of military and law enforcement applications as well. In this work we are concerned primarily with visible band images (380-750 nm) and the infrared (IR) spectrum, which has become an area of growing interest. For homogeneous FR systems, we formulate and develop an efficient, semi-automated, direct matching-based FR framework that is designed to operate efficiently when face data is captured using either visible or passive IR sensors; thus, it can be applied in both daytime and nighttime environments. First, input face images are geometrically normalized using our pre-processing pipeline prior to feature extraction. Then, face-based features including wrinkles, veins, as well as edges of facial characteristics, are detected and extracted for each operational band (visible, MWIR, and LWIR). Finally, global and local face-based matching is applied, before fusion is performed at the score level. Although this proposed matcher performs well when same-spectrum FR is performed, regardless of spectrum, a challenge exists when cross-spectral FR matching is performed. The second framework is for the heterogeneous FR problem, and deals with the issue of bridging the gap across the visible and passive infrared (MWIR and LWIR) spectra. Specifically, we investigate the benefits and limitations of using synthesized visible face images from thermal images and vice versa in cross-spectral face recognition systems when utilizing canonical correlation analysis (CCA) and locally linear embedding (LLE), a manifold learning technique for dimensionality reduction. Finally, by conducting an extensive experimental study we establish that the combination of the proposed synthesis and demographic filtering scheme increases system performance in terms of rank-1 identification rate.
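
    A minimal sketch of the CCA idea behind the cross-spectral matching, on synthetic paired visible/thermal features with scikit-learn: paired descriptors are projected into a shared subspace where they correlate, and matching is done there. The descriptor dimensions and the cosine-similarity matcher are illustrative assumptions, not the thesis's pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
# Synthetic paired features: 200 subjects with 128-D visible and
# thermal descriptors that share a common 32-D latent identity code.
latent = rng.normal(size=(200, 32))
vis = latent @ rng.normal(size=(32, 128)) + 0.1 * rng.normal(size=(200, 128))
thm = latent @ rng.normal(size=(32, 128)) + 0.1 * rng.normal(size=(200, 128))

# Learn a shared subspace where paired visible/thermal samples correlate.
cca = CCA(n_components=16, max_iter=1000)
cca.fit(vis, thm)
vis_c, thm_c = cca.transform(vis, thm)

# Match thermal probes against a visible gallery by cosine similarity.
def unit(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

scores = unit(thm_c) @ unit(vis_c).T
rank1 = (scores.argmax(axis=1) == np.arange(len(scores))).mean()
print(f"rank-1 identification rate on training pairs: {rank1:.2f}")
```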

    A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction

    Get PDF
    With the advance of Artificial Intelligence, humanoid robots have started to interact with ordinary people based on a growing understanding of psychological processes. Accumulating evidence in Human-Robot Interaction (HRI) suggests that research is focusing on establishing emotional communication between human and robot in order to create social perception, cognition, desired interaction and sensation. Furthermore, robots need to perceive human emotion and optimize their behavior to help and interact with a human being in various environments. The most natural way to recognize basic emotions is to extract sets of features from human speech, facial expression and body gesture. A system for recognition of emotions based on speech analysis and facial feature extraction can have interesting applications in Human-Robot Interaction. Thus, the Human-Robot Interaction ontology explains how knowledge from these fundamental sciences is applied in the contexts of physics (sound analysis), mathematics (face detection and perception), philosophy (behavior theory) and robotic science. In this project, we carry out a study to recognize basic emotions (sadness, surprise, happiness, anger, fear and disgust). We also propose a methodology and a software program for classification of emotions based on speech analysis and facial feature extraction. The speech analysis phase investigated the appropriateness of using acoustic (pitch value, pitch peak, pitch range, intensity and formant) and phonetic (speech rate) properties of emotive speech with the freeware program PRAAT, and consists of generating and analyzing a graph of speech signals. The proposed architecture investigated the appropriateness of analyzing emotive speech with minimal use of signal processing algorithms. Thirty participants in the experiment had to repeat five sentences in English (with durations typically between 0.40 s and 2.5 s) in order to extract data relative to pitch (value, range and peak) and rising-falling intonation. Pitch alignments (peak, value and range) were evaluated and the results were compared with intensity and speech rate. The facial feature extraction phase uses a mathematical formulation (Bézier curves) and geometric analysis of the facial image, based on measurements of a set of Action Units (AUs), for classifying the emotion. The proposed technique consists of three steps: (i) detecting the facial region within the image, (ii) extracting and classifying the facial features, and (iii) recognizing the emotion. The new data were then merged with reference data in order to recognize the basic emotion. Finally, we combined the two proposed algorithms (speech analysis and facial expression) to design a hybrid technique for emotion recognition. This technique has been implemented in a software program which can be employed in Human-Robot Interaction. The efficiency of the methodology was evaluated by experimental tests on 30 individuals (15 female and 15 male, 20 to 48 years old) from different ethnic groups, namely: (i) ten European adults, (ii) ten Asian (Middle Eastern) adults and (iii) ten American adults. Ultimately, the proposed technique made it possible to recognize the basic emotion in most cases.
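
    The Bézier-curve component of the facial analysis can be illustrated with a short sketch: a cubic curve is sampled from four control points placed on hypothetical mouth landmarks, and a simple curvature feature is derived from it. The landmark coordinates and the "smile" feature are assumptions for illustration, not the thesis's actual AU formulation.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample n points on a cubic Bezier curve from four control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical mouth landmarks (x, y): two corners and two mid-lip points.
left = np.array([0.0, 0.0])
ctrl1 = np.array([1.0, 0.4])
ctrl2 = np.array([2.0, 0.4])
right = np.array([3.0, 0.0])
curve = cubic_bezier(left, ctrl1, ctrl2, right)

# Crude "smile" feature: how far the lip curve rises above the corners.
smile = curve[:, 1].max() - 0.5 * (left[1] + right[1])
print(f"mouth curvature feature: {smile:.3f}")
```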

    A Textbook of Advanced Oral and Maxillofacial Surgery

    Get PDF
    The scope of OMF surgery has expanded, encompassing the treatment of diseases, disorders, defects and injuries of the head, face, jaws and oral cavity. This internationally recognized specialty is evolving with advancements in technology and instrumentation. Specialists of this discipline treat patients with impacted teeth, facial pain, misaligned jaws, facial trauma, oral cancer, cysts and tumors; they also perform facial cosmetic surgery and place dental implants. The contents of this volume essentially complement volume 1, with chapters that cover both basic and advanced concepts on complex topics in oral and maxillofacial surgery.

    A head model with anatomical structure for facial modelling and animation

    Get PDF
    In this dissertation, I describe a virtual head model with anatomical structure. The model is animated in a physics-based manner by use of muscle contractions that in turn cause skin deformations; the simulation is efficient enough to achieve real-time frame rates on current PC hardware. Construction of head models is eased in my approach by deriving new models from a prototype, employing a deformation method that reshapes the complete virtual head structure. Without additional modeling tasks, this results in an immediately animatable model. The general deformation method allows for several applications such as adaptation to individual scan data for the creation of animated head models of real persons. The basis for the deformation method is a set of facial feature points, which leads to other interesting uses when this set is chosen according to an anthropometric standard set of facial landmarks: I present algorithms for simulation of human head growth and reconstruction of a face from a skull.
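
    The landmark-driven deformation described here can be illustrated with radial-basis-function interpolation: displacements prescribed at facial feature points are smoothly propagated to every vertex of the prototype head. The sketch below uses SciPy's RBFInterpolator as a stand-in; the thesis's own deformation method may differ, and all names are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_head(vertices, src_landmarks, dst_landmarks):
    """Warp every prototype vertex so that the source facial landmarks
    move onto the target landmarks, interpolating the displacement
    field smoothly with radial basis functions."""
    rbf = RBFInterpolator(src_landmarks, dst_landmarks - src_landmarks,
                          kernel="thin_plate_spline")
    return vertices + rbf(vertices)

# Toy usage: 2000-vertex prototype head, 20 feature points nudged outward.
rng = np.random.default_rng(3)
verts = rng.normal(size=(2000, 3))
src = rng.normal(size=(20, 3))
dst = src * 1.1                   # target landmarks on a slightly wider head
print(deform_head(verts, src, dst).shape)  # (2000, 3)
```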

    Three Dimensional (3D) Forensic Facial Reconstruction in an Egyptian Population using Computed Tomography Scanned Skulls and Average Facial Templates: A Study Examining Subjective and Objective Assessment Methods of 3D Forensic Facial Reconstructions

    Get PDF
    Forensic facial reconstruction can assist identification by reconstructing the face of an unknown person with the aim of its recognition by his/her family or friends. In the facial reconstruction approach adopted in this study, a 3D average face template was digitally warped onto a 3D scanned skull image. This study was carried out entirely on an Egyptian population, and was the first of its kind. Aims: This study aimed to demonstrate that 3D facial reconstructions using the novel methodology described could show significant resemblance to the faces of the persons in question when they were alive. Moreover, using techniques previously validated for facial reconstruction, the aim was to compare them to the method developed, and to assess the approaches used to determine the accuracy of 3D facial reconstructions. Methods: Initially, a pilot study was conducted using a database of laser scanned skulls and faces. The faces were reconstructed using an average facial template generated by merging a number of faces of similar population, sex, and age. The applicability, as well as the main components of the facial reconstruction method, the single and average facial templates, and the facial soft tissue thickness measurements, were investigated. Furthermore, in the main study, the faces of computed tomography (CT) scanned heads of an Egyptian population were reconstructed using average facial templates. The accuracy of the reconstructed faces was assessed subjectively by face pool and face resemblance tests, and objectively by measuring the surface distances between the real and reconstructed faces. In addition, a number of novel subjective and objective assessment methods were developed. These included assessment of individual facial regions using subjective resemblance scores and objective surface distance comparisons. A new objective method, craniofacial anthropometry, was developed by taking direct measurements from the skull and comparing them with measurements from the real and reconstructed faces. The studied cases were ranked according to all subjective and objective tests, and the rankings were statistically correlated. Results and Conclusions: The average facial templates showed a higher identification rate than the single face templates. The approach to facial reconstruction used in this thesis showed accuracy comparable to many other facial reconstruction methods, yet was superior in terms of its applicability, transferability, and ease of use. In the face pool tests, younger assessors were able to correctly identify the reconstructed faces better than older assessors. Furthermore, the identification rate of the forensic anthropology experts was higher than that of the non-experts; the former group also showed the highest inter-observer agreement in giving the resemblance scores. Although there was a significant rank correlation between the subjective and objective assessment tests, the subjective tests are influenced by the assessors' characteristics (e.g., age, professional experience), making objective assessment more reliable. However, in situations where subjective tests are used, it is better to use the face resemblance tests and to consult forensic anthropologists. Also, craniofacial anthropometry, particularly the craniofacial angles, can successfully indicate the accuracy of the facial reconstructions. Importantly, this study shows that certain facial regions, particularly the cheek and the jaw, are more reliable than other areas in the subjective and objective assessment of the facial reconstruction.
    Funding: Egyptian Ministry of Higher Education and the Egyptian Cultural Affairs and Mission Sector, as well as the Egyptian Cultural Counsellor and the staff of the Egyptian Cultural Centre and Educational Bureau in London, UK.
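
    The objective surface-distance assessment can be illustrated with a short sketch: the mean nearest-neighbour distance from the reconstructed face to the real face, computed over point clouds with a k-d tree. This is an illustrative assumption about the distance measure; the thesis's actual comparison method may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(recon_pts, real_pts):
    """Mean nearest-neighbour distance from reconstructed face points
    to the real face surface (both given as (N, 3) point clouds)."""
    dists, _ = cKDTree(real_pts).query(recon_pts)
    return dists.mean()

# Toy usage: a "reconstruction" that is a noisy copy of the real surface.
rng = np.random.default_rng(2)
real = rng.normal(size=(5000, 3))
recon = real + rng.normal(scale=0.05, size=real.shape)
print(f"mean surface distance: {mean_surface_distance(recon, real):.3f}")
```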