
    Photorealistic retrieval of occluded facial information using a performance-driven face model

    Facial occlusions can cause both human observers and computer algorithms to fail in a variety of important tasks such as facial action analysis and expression classification. This is because the missing information is not reconstructed accurately enough for the purpose of the task at hand. Most current computer methods used to tackle this problem implement complex three-dimensional polygonal face models that are generally time-consuming to produce and unsuitable for photorealistic reconstruction of missing facial features and behaviour. In this thesis, an image-based approach is adopted to solve the occlusion problem. A dynamic computer model of the face is used to retrieve the occluded facial information from the driver faces. The model consists of a set of orthogonal basis actions obtained by applying principal component analysis (PCA) to image changes and motion fields extracted from a sequence of natural facial motion (Cowe 2003). Examples of occlusion-affected facial behaviour can then be projected onto the model to compute coefficients of the basis actions and thus produce photorealistic performance-driven animations. Visual inspection shows that the PCA face model recovers aspects of expressions in those areas occluded in the driver sequence, but the expression is generally muted. To investigate this finding further, a database of test sequences affected by a considerable set of artificial and natural occlusions is created, and a number of suitable metrics are developed to measure the accuracy of the reconstructions. Regions of the face that are most important for performance-driven mimicry, and that seem to carry the best information about global facial configurations, are revealed using Bubbles, in effect identifying the facial areas that are most sensitive to occlusion. Recovery of occluded facial information is enhanced by applying an appropriate scaling factor to the respective coefficients of the basis actions obtained by PCA.
This method improves the reconstruction of the facial actions emanating from the occluded areas of the face. However, because PCA produces bases that encode composite, correlated actions, such an enhancement also tends to affect actions in non-occluded areas of the face. To avoid this, more localised controls for facial actions are produced using independent component analysis (ICA). Simple projection of the data onto an ICA model is not viable because the extracted bases are non-orthogonal. Thus occlusion-affected mimicry is first generated using the PCA model and then enhanced by manipulating the independent components subsequently extracted from the mimicry. This combination of methods yields significant improvements and results in photorealistic reconstructions of occluded facial actions.
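The projection-and-scaling scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the thesis's implementation: the data are random stand-ins for vectorised face frames, the basis size is arbitrary, and the 1.5 gain is an invented example of the coefficient scaling factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row is a vectorised face frame (image changes + motion fields).
frames = rng.normal(size=(50, 200))
mean_face = frames.mean(axis=0)

# Orthogonal basis actions via PCA (SVD of the centred data); rows of `basis` are orthonormal.
_, _, vt = np.linalg.svd(frames - mean_face, full_matrices=False)
basis = vt[:10]  # keep the top 10 basis actions

def reconstruct(frame, basis, mean_face, gain=1.0):
    """Project a driver frame onto the basis actions and resynthesise it.

    `gain` scales the coefficients, mimicking the enhancement of muted
    expressions recovered from occluded regions.
    """
    coeffs = basis @ (frame - mean_face)   # valid projection because rows are orthonormal
    return mean_face + (gain * coeffs) @ basis

frame = frames[0]
plain = reconstruct(frame, basis, mean_face)
boosted = reconstruct(frame, basis, mean_face, gain=1.5)
```

Because the basis is orthonormal, scaling the coefficients by 1.5 scales the reconstruction's deviation from the mean face by exactly that factor.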

    Computer analysis of face beauty: a survey

    The human face conveys to other human beings, and potentially to computer systems, information such as identity, intentions, emotional and health states, attractiveness, age, gender and ethnicity. In most cases analyzing this information involves computer science as well as the human and medical sciences. The most studied multidisciplinary problems are analyzing emotions, estimating age and modeling aging effects. An emerging area is the analysis of human attractiveness. The purpose of this paper is to survey recent research on the computer analysis of human beauty. First we present results in human sciences and medicine pointing to a largely shared and data-driven perception of attractiveness, which provides a rationale for computer beauty analysis. After discussing practical application areas, we survey current studies on the automatic analysis of facial attractiveness aimed at: i) relating attractiveness to particular facial features; ii) assessing attractiveness automatically; iii) improving the attractiveness of 2D or 3D face images. Finally we discuss open problems and possible lines of research.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Timing is everything: A spatio-temporal approach to the analysis of facial actions

    This thesis presents a fully automatic facial expression analysis system based on the Facial Action Coding System (FACS). FACS is the best known and most commonly used system for describing facial activity in terms of facial muscle actions (i.e., action units, AUs). We present our research on the analysis of the morphological, spatio-temporal and behavioural aspects of facial expressions. In contrast with most other researchers in the field, who use appearance-based techniques, we use a geometric-feature-based approach, and we argue that this approach is more suitable for analysing the temporal dynamics of facial expressions. Our system is capable of explicitly exploring the temporal aspects of facial expressions from an input colour video in terms of their onset (start), apex (peak) and offset (end). The fully automatic system presented here detects 20 facial points in the first frame and tracks them throughout the video. From the tracked points we compute geometry-based features which serve as the input to the remainder of our system. The AU activation detection system uses GentleBoost feature selection and a Support Vector Machine (SVM) classifier to determine which AUs were present in an expression. Temporal dynamics of active AUs are recognised by a hybrid GentleBoost-SVM-Hidden Markov Model classifier. The system is capable of analysing 23 of the 27 existing AUs with high accuracy. The main contributions of the work presented in this thesis are the following: we have created a method for fully automatic AU analysis with state-of-the-art recognition results; we have proposed, for the first time, a method for recognising the four temporal phases of an AU; we have built the largest comprehensive database of facial expressions to date; and we present, for the first time in the literature, two studies on the automatic distinction between posed and spontaneous expressions.
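The geometry-based feature extraction step can be illustrated with a small NumPy sketch. Everything here is an assumption for illustration, not the thesis's actual feature set: pairwise distances between the tracked points are one common choice of geometric feature, and deviation from a neutral first frame stands in for the input passed to the AU classifiers.

```python
import numpy as np
from itertools import combinations

def geometric_features(points):
    """points: (20, 2) array of tracked facial landmarks for one frame.
    Returns all pairwise inter-point distances as one feature vector."""
    pairs = list(combinations(range(len(points)), 2))
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])

# Hypothetical tracked sequence: 5 frames of 20 points; frame 0 is neutral.
rng = np.random.default_rng(1)
neutral = rng.uniform(0, 100, size=(20, 2))
sequence = neutral + rng.normal(scale=0.5, size=(5, 20, 2))

baseline = geometric_features(sequence[0])
# Classifier input: per-frame deviation of each distance from the neutral frame.
features = np.stack([geometric_features(f) - baseline for f in sequence])
```

With 20 points this yields 190 distances per frame; a feature selector such as GentleBoost would then pick the most discriminative ones before SVM classification.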

    Shading with Painterly Filtered Layers: A Process to Obtain Painterly Portraits

    In this thesis, I study how color data from different styles of paintings can be extracted from photography, with the end result maintaining the artistic integrity of the art style while having the look and feel of skin. My inspiration for this work came from the impasto-style portraitures of painters such as Rembrandt and Greg Cartmell. I analyzed and studied the important visual characteristics of both Rembrandt's and Cartmell's styles of painting. These include how the artist develops shadow and shading, creates the illusion of subsurface scattering, and applies color to the canvas, which are used as references to help develop the final renders in computer graphics. I also examined how color information can be extracted from portrait photography in order to gather accurate dark, medium, and light skin shades. Based on this analysis, I have developed a process for creating portrait paintings from 3D facial models. My process consists of four stages: (1) modeling a 3D portrait of the subject, (2) collecting data by photographing the subject, (3) developing a Barycentric shader using the photographs, and (4) compositing with filtered layers. My contributions have been in stages (3) and (4), as follows: development of an impasto-style Barycentric shader by extracting color information from the gathered photographic images, which can produce realistic-looking skin rendering; and development of a compositing technique that involves filtering layers of images corresponding to different effects such as diffuse, specular and ambient. To demonstrate proof of concept, I have created several animations of the impasto-style portrait painting for a single subject. For these animations, I have also sculpted a high-polygon-count 3D model of the torso and head of my subject.
Using my shading and compositing techniques, I have created rigid-body animations that demonstrate the power of my techniques to obtain impasto-style portraiture during animation under different lighting conditions.
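A Barycentric blend of extracted skin shades, as in stage (3), might look like the following sketch. The three reference colors are invented placeholders, not values taken from the thesis, and this is a minimal model of the idea rather than the actual shader.

```python
import numpy as np

# Hypothetical RGB shades (0-1) extracted from portrait photographs.
dark   = np.array([0.25, 0.15, 0.10])
medium = np.array([0.65, 0.45, 0.35])
light  = np.array([0.95, 0.80, 0.70])

def barycentric_shade(w_dark, w_medium, w_light):
    """Blend the three reference shades with barycentric weights.
    Weights are normalised to sum to one, which keeps the result inside
    the colour triangle spanned by the three references."""
    w = np.array([w_dark, w_medium, w_light], dtype=float)
    w = w / w.sum()
    return w[0] * dark + w[1] * medium + w[2] * light
```

In a renderer, the weights would typically be driven by per-pixel lighting, so shadowed regions pull toward the dark shade and highlights toward the light one.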

    Perception and recognition of computer-enhanced facial attributes and abstracted prototypes

    The influence of the human facial image was surveyed and the nature of its many interpretations was examined. The role of distinctiveness was considered particularly relevant, as it accounted for many of the impressions of character and identity ascribed to individuals. The notion of structural differences with respect to some selective essence of normality is especially important, as it allows a wide range of complex facial types to be considered and understood in an objective manner. A software tool was developed which permitted the manipulation of facial images. Quantitative distortions of digital images were examined using perceptual and recognition-memory paradigms. Seven experiments investigated the role of distinctiveness in memory for faces using synthesised caricatures. The results showed that caricatures, both photographic and line-drawing, improved recognition speed and accuracy, indicating that both veridical and distinctiveness information are coded for familiar faces in long-term memory. The impact of feature metrics on perceptual estimates of facial age was examined using 'age-caricatured' images, and the estimates were found to be in relative accordance with the 'intended' computed age. Further modifying the semantics permitted the differences between individual faces to be visualised in terms of facial structure and skin-texture patterns. Transformations of identity between two or more faces established the necessary matrices, which can offer an understanding of facial expression in a categorical manner, along with the inherent interactions. A procedural extension allowed the generation of composite images in which all features are perfectly aligned. Prototypical facial types specified in this manner enabled high-level manipulations of gender and attractiveness; two experiments corroborated previously speculative material and thus gave credence to the prototype model.
In summary, psychological assessment of computer-manipulated facial images demonstrated the validity of the objective techniques and highlighted particular parameters that contribute to our perception and recognition of the individual and of underlying facial types.
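Synthesised caricatures of the kind used in these experiments are conventionally produced by exaggerating a face's deviation from a norm (average) face. A minimal sketch, using invented landmark coordinates rather than any data from the thesis:

```python
import numpy as np

def caricature(face, norm, k):
    """Exaggerate (k > 1) or mute (k < 1) a face's deviation from the norm.
    `face` and `norm` are landmark arrays of matching shape; k = 1 returns
    the veridical face, k = 0 returns the norm itself."""
    return norm + k * (face - norm)

# Hypothetical 2D landmarks for a norm face and an individual face.
norm = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
face = norm + np.array([[1.0, 0.5], [-0.5, 0.2], [0.0, -1.0]])

exaggerated = caricature(face, norm, 1.5)   # caricature
veridical = caricature(face, norm, 1.0)     # unchanged
```

The same linear scheme, applied along an age or identity axis instead of a distinctiveness axis, gives the 'age-caricatured' and identity-transformation images described above.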

    Three-dimensional morphanalysis of the face

    The aim of the work reported in this thesis was to determine the extent to which orthogonal two-dimensional morphanalytic (universally relatable) craniofacial imaging methods can be extended into the realm of computer-based three-dimensional imaging. New methods are presented for capturing universally relatable laser-video surface data, for inter-relating facial surface scans and for constructing probabilistic facial averages. Universally relatable surface scans are captured using the fixed-relations principle combined with a new laser-video scanner calibration method. Inter-subject comparison of facial surface scans is achieved using interactive feature labelling and warping methods. These methods have been extended to groups of subjects to allow the construction of three-dimensional probabilistic facial averages. The potential of universally relatable facial surface data for applications such as growth studies and patient assessment is demonstrated. In addition, new methods for scattered-data interpolation and for controlling overlap in image warping, together with a fast, high-resolution method for simulating craniofacial surgery, are described. The results demonstrate that it is not only possible to extend universally relatable imaging into three dimensions, but that the extension also enhances the established methods, providing a wide range of new applications.
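Scattered-data interpolation of the kind mentioned above is commonly done with radial basis functions. The following sketch uses a Gaussian kernel and invented sample points; it illustrates the general technique, not the thesis's actual method.

```python
import numpy as np

def rbf_interpolator(points, values, eps=1.0):
    """Gaussian radial-basis-function interpolation of scattered data.
    Solves a linear system so the interpolant passes through every sample."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)            # kernel matrix between samples
    weights = np.linalg.solve(phi, values)

    def interp(query):
        dq = np.linalg.norm(query[:, None] - points[None, :], axis=-1)
        return np.exp(-(eps * dq) ** 2) @ weights

    return interp

# Hypothetical scattered surface samples on the unit square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
f = rbf_interpolator(pts, vals)
```

The same machinery, applied to landmark displacements rather than scalar values, underlies the feature-labelling and warping used to inter-relate surface scans.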

    Facial Makeup Detection Using HSV Color Space and Texture Analysis

    In recent decades, 2D and 3D face analysis in digital systems has become increasingly important because of its vast applications in security systems and other digital systems that interact with humans. In fact, the human face expresses many of an individual's characteristics, such as gender, ethnicity, emotion, age, beauty and health. Makeup is one of the common techniques people use to alter the appearance of their faces. Analyzing face beauty by computer is of value to aestheticians and computer scientists alike. The objective of this research is to detect makeup in images of human faces using image processing and pattern recognition techniques. Detecting changes to the face caused by cosmetics such as eye-shadow, lipstick and liquid foundation is the target of this study. A proper facial database containing makeup-related information is necessary. Collecting the first facial makeup database was a valuable achievement of this research. The database consists of almost 1290 frontal pictures of 21 individuals before and after makeup. Along with the images, metadata such as ethnicity, country of origin, smoking habits, drinking habits, age, and occupation are provided. The uniqueness of this database stems from, first, being the only database that has images of women both before and after makeup and, second, having light sources from different angles as well as metadata collected during the process. Selecting the features that lead to the best classification result is a challenging issue, since any variation in head pose, lighting conditions and face orientation adds complexity to a proper evaluation of whether any makeup has been applied. In addition, the similarity of cosmetics' colors to skin color adds another level of difficulty.
In this effort, the problem was addressed by choosing the best possible features, related to edge information, color specification and texture characteristics. Because hue, saturation and intensity can be studied separately in the HSV (Hue, Saturation, Value) color space, it was selected for this application. The proposed technique was tested on 120 selected images from our new database. A supervised learning model, the SVM (Support Vector Machine) classifier, was used, and the accuracies obtained were 90.62% for eye-shadow detection, 93.33% for lipstick detection and 52.5% for liquid foundation detection. A main highlight of this technique is that it specifies where makeup has been applied on the face, which can be used to identify the proper makeup style for the individual. This application will be a great improvement for the aesthetic field, through which aestheticians can facilitate their work by identifying the type of makeup appropriate for each person and giving proper suggestions while reducing the number of trials.
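The rationale for HSV is easy to illustrate with Python's standard colorsys module: heavily pigmented cosmetics tend to raise a region's saturation relative to bare skin, and HSV exposes that channel directly. The pixel values below are invented placeholders, and mean HSV per region is only a minimal stand-in for the feature set used in the study.

```python
import colorsys

def hsv_features(pixels):
    """Convert RGB pixels (0-1 floats) of a face region to HSV and return
    the mean hue, saturation, and value - a minimal region-level feature
    vector for a makeup classifier."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(p[i] for p in hsv) / n for i in range(3))

# Hypothetical lip-region samples: bare skin vs. saturated lipstick red.
bare = [(0.80, 0.60, 0.55), (0.78, 0.58, 0.52)]
lipstick = [(0.85, 0.10, 0.15), (0.80, 0.05, 0.12)]

_, sat_bare, _ = hsv_features(bare)
_, sat_lip, _ = hsv_features(lipstick)
```

Here the lipstick region's mean saturation clearly exceeds the bare-skin region's, which is the kind of separation an SVM can exploit; low-saturation products such as liquid foundation blur this margin, consistent with the lower accuracy reported for foundation detection.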