122 research outputs found

    Real-time content-aware texturing for deformable surfaces

    Animation of models often introduces distortions to their parameterisation, as parameterisations are typically optimised for a single frame. The net effect is that, under deformation, the mapped features, i.e. UV texture maps, bump maps or displacement maps, may stretch or scale in an undesirable way. Ideally, the appearance of such features should remain plausible under any underlying deformation. In this paper we introduce a real-time technique that reduces such distortions based on a distortion-control (rigidity) map. In two versions of the proposed technique, the parameter space is warped in either an axis-aligned or a non-axis-aligned manner based on the minimisation of a non-linear distortion metric, which in turn is solved using a highly optimised hybrid CPU-GPU strategy. The result is real-time, dynamic, content-aware texturing that reduces distortions in a controlled way. The technique can be applied to reduce distortions in a variety of scenarios, including reusing a low-geometric-complexity animated sequence with a multitude of detail maps, mapping dynamic procedurally defined features onto deformable geometry, and previewing animation authoring on texture-mapped models. © 2013 ACM
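
    The paper's optimisation is a non-linear minimisation solved on a hybrid CPU-GPU pipeline, but the core idea of the axis-aligned variant can be sketched in a few lines: warp one parameter axis so that texels marked rigid keep their apparent size under surface stretch, while low-rigidity texels absorb the distortion. The 1D simplification, the blend rule, and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def axis_aligned_warp(stretch, rigidity):
    """Warp one parameter axis given per-texel surface stretch.

    stretch  : (n,) stretch factor per texel (1.0 = undeformed)
    rigidity : (n,) distortion-control weights in [0, 1]
    Returns n + 1 warped knot coordinates in [0, 1].
    """
    # Where the map is rigid, the warp slope tracks the stretch so mapped
    # features keep their apparent size; elsewhere the slope stays uniform
    # and those regions absorb the distortion instead.
    widths = rigidity * stretch + (1.0 - rigidity)
    knots = np.concatenate(([0.0], np.cumsum(widths)))
    return knots / knots[-1]

# Example: the middle of the surface stretches 2x and is marked rigid,
# so it is granted more parameter space and its features keep their scale.
stretch  = np.array([1.0, 1.0, 2.0, 2.0, 1.0, 1.0])
rigidity = np.array([0.2, 0.2, 1.0, 1.0, 0.2, 0.2])
print(axis_aligned_warp(stretch, rigidity))
```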

    Interactive Shadow Removal and Ground Truth for Variable Scene Categories

    We present an interactive, robust and high-quality method for fast shadow removal. To perform detection we use an on-the-fly learning approach guided by two rough user inputs marking the shadowed and lit areas. From these we derive a fusion image that magnifies the shadow-boundary intensity change due to illumination variation. After detection, we remove the shadow by registering the penumbra to a normalised frame, which allows us to efficiently estimate non-uniform changes in shadow illumination, resulting in accurate and robust removal. We also present the first reliable, validated, multi-scene-category ground truth for shadow removal algorithms, which overcomes limitations in existing data sets, such as inconsistencies between shadow and shadow-free images and limited variation of shadows. Using our data, we perform the most thorough comparison of state-of-the-art shadow removal methods to date. Our algorithm outperforms the state of the art, and we supply our P-code, evaluation data and scripts to encourage future open comparisons.
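
    As a much-simplified illustration of the stroke-guided idea, the sketch below estimates a single per-channel illumination ratio from the two user strokes and relights the shadow region. The paper estimates non-uniform illumination changes across the penumbra; this constant-ratio version, and all array names, are assumptions for illustration only.

```python
import numpy as np

def remove_shadow(image, shadow_mask, lit_mask):
    """image: (H, W, 3) float RGB in [0, 1]; masks: boolean user strokes."""
    img = image.astype(np.float64) + 1e-6        # avoid division by zero
    shadow_mean = img[shadow_mask].mean(axis=0)  # mean colour under shadow
    lit_mean = img[lit_mask].mean(axis=0)        # mean colour in the lit area
    ratio = lit_mean / shadow_mean               # per-channel relight factor
    out = img.copy()
    out[shadow_mask] *= ratio                    # cancel the illumination drop
    return np.clip(out, 0.0, 1.0)
```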

    Genetic algorithms reveal profound individual differences in emotion recognition.

    Emotional communication relies on a mutual understanding, between expresser and viewer, of the facial configurations that broadcast specific emotions. However, we do not know whether people share a common understanding of how emotional states map onto facial expressions, because expressions exist in a high-dimensional space too large to explore with conventional experimental paradigms. Here, we address this by adapting genetic algorithms and combining them with photorealistic three-dimensional avatars to efficiently explore the high-dimensional expression space. A total of 336 people used these tools to generate facial expressions representing happiness, fear, sadness, and anger. We found substantial variability in the expressions generated via our procedure, suggesting that different people associate different facial expressions with the same emotional state. We then examined whether this variability could account for differences in performance on standard emotion recognition tasks by asking people to categorize different test expressions. We found that emotion categorization performance was explained by the extent to which the test expressions matched the expressions generated by each individual. Our findings reveal the breadth of variability in people's representations of facial emotions, even among typical adult populations. This has profound implications for the interpretation of responses to emotional stimuli, which may reflect individual differences in the emotional category people attribute to a particular facial expression rather than differences in the brain mechanisms that produce emotional responses.
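
    The evolutionary loop itself is conventional and can be sketched compactly: a population of expression-weight vectors is rated, the best survive, and crossover plus mutation generate the next generation. Here a synthetic fitness function stands in for the human rater, and the dimensionality, rates and population size are illustrative assumptions rather than the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, GENERATIONS = 30, 20, 50      # expression-space dims, population size
target = rng.normal(size=DIM)           # stand-in for one rater's ideal point

def fitness(population):
    # In the experiments this step is the participant rating each avatar.
    return -np.linalg.norm(population - target, axis=1)

population = rng.normal(size=(POP, DIM))
for _ in range(GENERATIONS):
    scores = fitness(population)
    parents = population[np.argsort(scores)[-POP // 2:]]     # keep best half
    mates = parents[rng.integers(len(parents), size=len(parents))]
    mask = rng.random(parents.shape) < 0.5                   # uniform crossover
    children = np.where(mask, parents, mates)
    children += rng.normal(scale=0.1, size=children.shape)   # mutation
    population = np.vstack([parents, children])

best = population[np.argmax(fitness(population))]  # the evolved expression
```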

    A FACS Valid 3D Dynamic Action Unit Database with Applications to 3D Dynamic Morphable Facial Modeling

    This paper presents the first dynamic 3D FACS data set for facial expression research, containing 10 subjects performing between 19 and 97 different AUs, both individually and in combination. In total the corpus contains 519 AU sequences. The peak expression frame of each sequence has been manually FACS-coded by certified FACS experts, providing a ground truth for 3D FACS-based AU recognition systems. To make use of this data, we describe the first framework for building dynamic 3D morphable models, which includes a novel Active Appearance Model (AAM) based 3D facial registration and mesh correspondence scheme. The approach overcomes limitations of existing methods that require facial markers or are prone to optical-flow drift. We provide the first quantitative assessment of such 3D facial mesh registration techniques and show that our proposed method provides more reliable correspondence.
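
    Once the AU sequences are in dense correspondence, the final modelling step, building a morphable model, reduces to PCA over the stacked vertex coordinates. The sketch below shows that step only; the registration itself (the paper's AAM-based scheme) is assumed already done, and the shapes and names are illustrative.

```python
import numpy as np

def build_morphable_model(meshes, var_kept=0.98):
    """meshes: (n_frames, n_vertices, 3) registered vertex positions."""
    X = meshes.reshape(len(meshes), -1)            # one flat vector per frame
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    ratio = np.cumsum(S**2) / np.sum(S**2)         # explained-variance curve
    k = int(np.searchsorted(ratio, var_kept)) + 1  # number of modes kept
    return mean, Vt[:k], S[:k] ** 2 / (len(X) - 1)

def synthesise(mean, modes, coeffs):
    """Reconstruct a face mesh from morphable-model coefficients."""
    return (mean + coeffs @ modes).reshape(-1, 3)
```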

    Genetic algorithms reveal identity independent representation of emotional expressions.

    People readily and automatically process facial emotion and identity, and it has been reported that these cues are processed both dependently and independently. However, the question of identity-independent encoding of emotions has only been examined using posed, often exaggerated, expressions of emotion that do not account for the substantial individual differences in emotion recognition. In this study, we ask whether people's unique beliefs about how emotions should be reflected in facial expressions depend on the identity of the face. To do this, we employed a genetic algorithm in which participants created facial expressions to represent different emotions. Participants generated facial expressions of anger, fear, happiness, and sadness on two different identities. Facial features were controlled by manipulating a set of weights, allowing us to probe the exact positions of faces in a high-dimensional expression space. We found that, for angry, fearful, and happy expressions, but not sad ones, participants created facial expressions on each identity in a similar region of the space that was unique to the participant. However, using a machine learning algorithm that examined the positions of faces in expression space, we also found systematic differences between the two identities' expressions across participants. This suggests that participants' beliefs about how an emotion should be reflected in a facial expression are unique to them and identity-independent, although there are also some systematic differences between the two identities' expressions that are common across all individuals. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
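
    The identity analysis hinted at above can be illustrated with a simple cross-validated classifier: if, from the expression-space coordinates alone, it can tell which identity an expression was generated on at above-chance accuracy, the two identities differ systematically. Nearest-centroid with leave-one-out below is a stand-in for whatever classifier the authors used, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 100, 30
offset = np.zeros(dim)
offset[0] = 0.5                          # a small systematic identity shift
X = np.vstack([rng.normal(size=(n, dim)),
               rng.normal(size=(n, dim)) + offset])
y = np.repeat([0, 1], n)

correct = 0
for i in range(len(X)):                  # leave-one-out cross-validation
    mask = np.arange(len(X)) != i
    c0 = X[mask & (y == 0)].mean(axis=0) # centroid of identity A
    c1 = X[mask & (y == 1)].mean(axis=0) # centroid of identity B
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]
print(f"accuracy: {correct / len(X):.2f} (0.50 = no identity information)")
```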

    Expression perceptive fields explain individual differences in the recognition of facial emotions

    Humans can use the facial expressions of another person to infer their emotional state, although it remains unknown how this process occurs. Here we hypothesise the existence of perceptive fields within expression space, analogous to the feature-tuned receptive fields of early visual cortex. We developed genetic algorithms to explore a multidimensional space of possible expressions and identify those that individuals associated with different emotions. We next defined perceptive fields as probabilistic maps within expression space, and found that they could predict the emotions that individuals infer from expressions presented in a separate task. We found profound individual variability in the size, location, and specificity of these fields, and that individuals with more similar perceptive fields had more similar interpretations of the emotion communicated by an expression, providing possible channels for social communication. Modelling perceptive fields therefore provides a predictive framework in which to understand how individuals infer emotions from facial expressions.
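
    One concrete way to realise a perceptive field as a "probabilistic map" is a kernel density estimate over the expressions an individual generated for each emotion; a test expression is then assigned to the field in which it is most probable. The Gaussian kernel, the bandwidth, and all names below are assumptions for illustration, not the paper's model.

```python
import numpy as np

def field_density(test_points, generated, bandwidth=0.5):
    """Gaussian-KDE density of each test point under one perceptive field."""
    d2 = ((test_points[:, None, :] - generated[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2)).mean(axis=1)

def predict_emotion(test_points, fields):
    """fields: dict mapping emotion name -> (n_i, dim) generated points."""
    names = list(fields)
    dens = np.stack([field_density(test_points, fields[e]) for e in names])
    return [names[i] for i in dens.argmax(axis=0)]  # most probable field
```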

    CONSTRUCTION AND PERCEPTUAL EVALUATION OF A 3D HEAD MODEL

    This paper presents a method to construct a compact 3D head model capable of synthesizing realistic face expressions with subtle details such as wrinkles and muscle folds. The model is assessed by psychologists using the certified FACS coding method. Such a compact and accurate model offers large market potential, not only in the computer graphics industry but also in low-bandwidth applications, e.g. tele-conferencing, and provides a valuable novel tool for perceptual studies.

    Method and Implementation: The method used to construct the 3D head model in this work is inspired by the 2D Active Appearance Model. Moreover, a synthesized face looks more authentic if it not only appears human but also moves like a human, so it is very important to accurately model the dynamics of facial expressions. Few studies have achieved this in 3D animation, mostly due to the limitations of their data-capture equipment. In this research, we use a fast 3D video camera (48 fps) to capture our training data, which allows us to model the fine temporal dynamics of facial movements. Finally, we combine the method described above with FACS coding, a certified method used in psychology to study facial movements, to further improve the precision of our head model.

    Results: Our training data consist of short video sequences of Action Units (about 60 frames each). After building a joint PCA model of shape and texture, we obtain a set of eigenvectors that represent the different modes of variation of the facial changes.

    Conclusion: We have successfully built a 3D head model capable of synthesizing realistic-looking face expressions, reproducing accurate skin folds and expression dynamics. We plan to use this model to study and model facial idiosyncrasies.
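
    The joint PCA step mentioned in the Results can be sketched as follows: mean-centre the shape and texture blocks, rescale one so the two carry comparable variance, concatenate, and decompose, so that each resulting eigenvector couples geometry with appearance. The scalar weighting scheme and all names are illustrative assumptions.

```python
import numpy as np

def joint_pca(shapes, textures):
    """shapes: (n, s) flattened vertices; textures: (n, t) flattened pixels."""
    s_mean, t_mean = shapes.mean(axis=0), textures.mean(axis=0)
    s0, t0 = shapes - s_mean, textures - t_mean
    w = np.sqrt(np.sum(t0**2) / np.sum(s0**2))  # balance the two blocks
    X = np.hstack([w * s0, t0])                 # joint observation matrix
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return (s_mean, t_mean, w), Vt              # means, weight, joint modes
```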

    COVID-19 and anatomy: Stimulus and initial response.

    The outbreak of COVID-19, resulting from widespread transmission of the SARS-CoV-2 virus, represents one of the foremost current challenges to societies across the globe, with few areas of life remaining untouched. Here, we detail the immediate impact that COVID-19 has had on the teaching and practice of anatomy, providing specific examples of the varied responses from several UK, Irish and German universities and medical schools. Alongside significant issues for, and the suspension of, body donation programmes, the widespread closure of university campuses has led to challenges in delivering anatomy education online, a particular problem for a practical, experience-based subject such as anatomy. We discuss the short-term consequences of COVID-19 for body donation programmes and anatomical education, and highlight issues and challenges that will need to be addressed in the medium to long term in order to restore anatomy education and practice throughout the world.

    Liaison Old Age Psychiatry Service in a Medical Setting: Description of the Newcastle Clinical Service

    Liaison Old Age Psychiatry (LOAP) services have begun to emerge in the UK, and further development of the service is supported by the latest health policies. Since qualitative and quantitative studies in this area are lacking, we have undertaken a detailed quantitative prospective review of referrals to the Newcastle LOAP to evaluate the clinical activity of the service. We report high referral rates and turnover for the LOAP service. Reasons for referral are diverse, ranging from requests for level-of-care and capacity assessments, and transfer to other clinical services, to management of behaviour, diagnosis, and treatment. We outline the value of a multidisciplinary model of LOAP activity, including the important role of the liaison nursing team, in providing rapid response, screening, and follow-up for the high number of clinical referrals to the service.