1,064 research outputs found

    3D Human Face Reconstruction and 2D Appearance Synthesis

    Get PDF
    3D human face reconstruction has been an active research area for decades due to its wide range of applications, such as animation, recognition, and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains significantly valuable because images are much easier to access and store. In this dissertation, we first propose three image-based face reconstruction approaches with different assumptions about the input. In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses; the camera must be calibrated under this assumption. As the first approach is limited to videos, our second approach focuses on a single image. This approach also refines the geometry with fine-grained detail using shading cues, for which we propose a novel albedo estimation and linear optimization algorithm. In the third approach, we further relax the constraints on the input to arbitrary in-the-wild images. The proposed approach robustly reconstructs high-quality models even with extreme expressions and large poses. We then explore the applicability of our face reconstructions in four applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylization, and video face replacement. We demonstrate the great potential of our reconstruction approaches on these real-world applications. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large occlusion of the face is a major obstacle to face-to-face communication. In a further application, we therefore explore hardware/software solutions for synthesizing the face image in the presence of HMDs. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem.
    With our algorithm and prototype, we achieve photo-realistic results. We further propose a deep neural network that solves the HMD removal problem by treating it as a face inpainting problem. This approach doesn't need special hardware and runs in real time with satisfying results.
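The shading-based geometry refinement described above typically rests on a Lambertian image-formation model. The following toy sketch is an illustration under a single-directional-light assumption, not the dissertation's actual algorithm: it shows how per-pixel albedo can be recovered once normals and lighting are known.

```python
import numpy as np

def render_lambertian(normals, albedo, light):
    """Per-pixel intensity = albedo * max(0, n . l) for unit normals
    (N, 3), per-pixel albedo (N,), and a unit light direction (3,)."""
    shading = np.clip(normals @ light, 0.0, None)
    return albedo * shading

def estimate_albedo(image, normals, light, eps=1e-6):
    """Invert the model where the surface is lit: albedo = I / shading."""
    shading = np.clip(normals @ light, 0.0, None)
    return np.where(shading > eps, image / np.maximum(shading, eps), 0.0)
```

In a real shape-from-shading refinement, the lighting itself is unknown and is usually parameterized (e.g. with spherical harmonics) and optimized jointly with the geometry.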

    The Bison, April 15, 2016

    Get PDF

    Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation

    Full text link
    Virtual facial avatars will play an increasingly important role in immersive communication, games and the metaverse, and it is therefore critical that they be inclusive. This requires accurate recovery of the appearance, represented by albedo, regardless of age, sex, or ethnicity. While significant progress has been made on estimating 3D facial geometry, albedo estimation has received less attention. The task is fundamentally ambiguous because the observed color is a function of albedo and lighting, both of which are unknown. We find that current methods are biased towards light skin tones due to (1) strongly biased priors that prefer lighter pigmentation and (2) algorithmic solutions that disregard the light/albedo ambiguity. To address this, we propose a new evaluation dataset (FAIR) and an algorithm (TRUST) to improve albedo estimation and, hence, fairness. Specifically, we create the first facial albedo evaluation benchmark where subjects are balanced in terms of skin color, and measure accuracy using the Individual Typology Angle (ITA) metric. We then address the light/albedo ambiguity by building on a key observation: the image of the full scene -- as opposed to a cropped image of the face -- contains important information about lighting that can be used for disambiguation. TRUST regresses facial albedo by conditioning both on the face region and a global illumination signal obtained from the scene image. Our experimental results show significant improvement compared to state-of-the-art methods on albedo estimation, both in terms of accuracy and fairness. The evaluation benchmark and code will be made available for research purposes at https://trust.is.tue.mpg.de. (Comment: camera-ready version, accepted at ECCV202)
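The Individual Typology Angle used as the accuracy metric above has a simple closed form in CIELAB space: ITA = arctan((L* − 50)/b*) · 180/π, with larger angles corresponding to lighter skin tones. A minimal sketch (the six category thresholds follow the common ITA classification; function names are ours):

```python
import math

def ita_degrees(L_star: float, b_star: float) -> float:
    """Individual Typology Angle: ITA = arctan((L* - 50) / b*) in degrees."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))

def ita_category(ita: float) -> str:
    """Map an ITA value to the common six skin-tone categories."""
    if ita > 55:
        return "very light"
    if ita > 41:
        return "light"
    if ita > 28:
        return "intermediate"
    if ita > 10:
        return "tan"
    if ita > -30:
        return "brown"
    return "dark"

print(ita_category(ita_degrees(70.0, 20.0)))  # ITA = 45.0 -> "light"
```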

    Data driven analysis of faces from images

    Get PDF
    This thesis proposes three new data-driven approaches to detect, analyze, or modify faces in images. All presented contributions are inspired by the use of prior knowledge, deriving information about facial appearance from pre-collected databases of images or 3D face models. First, we contribute an approach that extends a widely used monocular face detector with an additional classifier that evaluates disparity maps from a passive stereo camera. The algorithm runs in real time and significantly reduces the number of false positives compared to the monocular approach. Next, using a many-core implementation of the detector, we train view-dependent face detectors on tailored views that guarantee the statistical variability is fully covered. These detectors outperform the state of the art on a challenging dataset and can be trained in an automated procedure. Finally, we contribute a model describing the relation between facial appearance and makeup. The approach extracts makeup from before/after images of faces and allows faces in images to be modified. Applications such as machine-suggested makeup can improve perceived attractiveness, as shown in a perceptual study. In summary, the presented methods improve the outcome of face detection algorithms and ease and automate both their training procedures and the modification of faces in images. Moreover, their data-driven nature enables new and powerful applications arising from the use of prior knowledge and statistical analyses.
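The disparity-based rejection of false positives can be illustrated with the pinhole stereo relation Z = fB/d: a detection of pixel width w at median disparity d then implies a physical width W = wB/d, and boxes whose implied width is implausible for a face can be discarded. A hypothetical sketch (the width range and function names are assumptions, not the thesis's actual classifier):

```python
import numpy as np

# Plausible physical face widths in metres (an assumption for illustration).
FACE_WIDTH_M = (0.10, 0.25)

def plausible_face(box, disparity_map, baseline_m):
    """Reject a 2D detection whose stereo disparity implies an implausible
    physical size. box = (x, y, w, h) in pixels; disparities in pixels."""
    x, y, w, h = box
    patch = disparity_map[y:y + h, x:x + w]
    valid = patch[patch > 0]                 # ignore invalid disparities
    if valid.size == 0:
        return False
    d = float(np.median(valid))
    width_m = w * baseline_m / d             # implied physical width W = wB/d
    return FACE_WIDTH_M[0] <= width_m <= FACE_WIDTH_M[1]
```

The thesis's classifier evaluates the disparity map directly; this size gate is merely the simplest consistency check such a cue enables.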

    Artificial Intelligence Applied to Facial Image Analysis and Feature Measurement

    Get PDF
    Beauty has always played an essential role in society, influencing both everyday human interactions and more significant matters such as mate selection. The continued and expanding use of beauty products by women and, increasingly, men worldwide has motivated several companies to develop platforms for the beauty and cosmetics sector that improve the customer experience by combining data with personalisation. Global cosmetics spending is worth billions of dollars, and much of it is wasted on unsuitable or incompatible products. Artificial intelligence can change this, using computer vision and deep learning approaches to leave customers completely satisfied. With the advanced feature extraction offered by deep learning, especially convolutional neural networks, automatic facial feature analysis from images for beauty and beautification has become an emerging subject of study. Scholars studying facial aesthetics have recently made breakthroughs in facial shape beautification and beauty prediction, and in the cosmetics sector a new line of recommendation-system research has arisen. Recommendation systems benefit users by helping them narrow down their options. This thesis lays the groundwork for a beautification-oriented recommendation system for hairstyles and eyelashes that leverages artificial intelligence techniques. Facial attributes are among the most potent descriptors of personality. Various types of facial attributes are extracted in this thesis, including geometrical, automatic, and hand-crafted features. The extracted attributes provide rich information for the recommendation system to produce its final outcome. External effects on faces, such as makeup or retouching, can disguise facial features.
    This can degrade the performance of facial feature extraction and, subsequently, of the recommendation system. Three methods are therefore developed to detect faces wearing makeup before images are passed into the recommendation system, helping to provide more reliable and accurate feature extraction and more suitable recommendations. This thesis also presents a method for segmenting the facial region, with the goal of extending the recommendation system by virtually superimposing a synthesised hairstyle onto the facial region, thereby harnessing the hairstyle recommended by the developed system. The work presented in this thesis thus shows the benefits of applying computational intelligence methods in the beauty and cosmetics sector. It also demonstrates that computational intelligence techniques have redefined the notion of beauty and how consumers interact with emerging intelligent services that bring solutions to our fingertips.
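The geometrical facial attributes mentioned above are commonly built from ratios of inter-landmark distances, which makes them invariant to image scale. A minimal sketch (the landmark names and chosen ratios are illustrative assumptions, not the thesis's feature set):

```python
import numpy as np

def geometric_features(lm):
    """Scale-invariant ratios of inter-landmark distances from a dict of
    named 2D landmark coordinates (names here are illustrative)."""
    d = lambda a, b: float(np.linalg.norm(np.subtract(lm[a], lm[b])))
    face_w = d("jaw_left", "jaw_right")      # normalising distance
    return {
        "eye_distance_ratio": d("eye_left", "eye_right") / face_w,
        "nose_length_ratio": d("nose_bridge", "nose_tip") / face_w,
        "mouth_width_ratio": d("mouth_left", "mouth_right") / face_w,
    }
```

Ratios like these complement learned (automatic) and hand-crafted descriptors as inputs to a recommendation model.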

    Olfaction scaffolds the developing human from neonate to adolescent and beyond

    Get PDF
    The impact of the olfactory sense is regularly apparent across development. The foetus is bathed in amniotic fluid that conveys the mother’s chemical ecology. Transnatal olfactory continuity between the odours of amniotic fluid and milk assists in the transition to nursing. At the same time, odours emanating from the mammary areas provoke appetitive responses in newborns. Odours experienced from the mother’s diet during breastfeeding, and from practices such as pre-mastication, may assist in the dietary transition at weaning. In parallel, infants are attracted to and recognise their mother’s odours; later, children are able to recognise other kin and peers based on their odours. Familiar odours, such as those of the mother, regulate the child’s emotions, and scaffold perception and learning through non-olfactory senses. During adolescence, individuals become more sensitive to some bodily odours, while the timing of adolescence itself has been speculated to draw from the chemical ecology of the family unit. Odours learnt early in life and within the family niche continue to influence preferences as mate choice becomes relevant. Olfaction thus appears significant in turning on, sustaining and, in cases when mother odour is altered, disturbing adaptive reciprocity between offspring and caregiver during the multiple transitions of development between birth and adolescence.

    Physical Adversarial Attacks for Surveillance: A Survey

    Full text link
    Modern automated surveillance techniques rely heavily on deep learning methods. Despite their superior performance, these learning systems are inherently vulnerable to adversarial attacks: maliciously crafted inputs designed to mislead, or trick, models into making incorrect predictions. An adversary can physically change their appearance by wearing adversarial t-shirts, glasses, or hats, or through specific behavior, to potentially evade various forms of detection, tracking, and recognition by surveillance systems, and thereby obtain unauthorized access to secure properties and assets. This poses a severe threat to the security and safety of modern surveillance systems. This paper reviews recent attempts and findings in learning and designing physical adversarial attacks for surveillance applications. In particular, we propose a framework for analyzing physical adversarial attacks and, under this framework, provide a comprehensive survey of physical adversarial attacks on four key surveillance tasks: detection, identification, tracking, and action recognition. Furthermore, we review and analyze strategies for defending against physical adversarial attacks and methods for evaluating the strength of those defenses. The insights in this paper represent an important step toward building resilience within surveillance systems to physical adversarial attacks.
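The physical attacks surveyed here build on digital adversarial examples, of which the Fast Gradient Sign Method (FGSM) is the canonical instance: x' = x + ε · sign(∂L/∂x). A toy sketch against a logistic-regression classifier (the model and its parameters are illustrative assumptions, not any system from the survey):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against the binary cross-entropy loss of a logistic
    classifier p = sigmoid(w . x + b); the input gradient is (p - y) * w."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

Physical attacks differ in that the perturbation must be printable and survive viewpoint, lighting, and deformation changes, which is why they are typically optimized over distributions of physical transformations rather than in a single gradient step.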