3D Human Face Reconstruction and 2D Appearance Synthesis
3D human face reconstruction has been an active research area for decades due to its wide range of applications, such as animation, recognition, and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains highly valuable because images are much easier to acquire and store.
In this dissertation, we first propose three image-based face reconstruction approaches, each tailored to a different assumption about the input.
In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses; this approach assumes a calibrated camera.
Because the first approach is limited to videos, our second approach focuses on a single input image. It also refines the geometry with fine-grained detail recovered from shading cues, using a novel albedo estimation and linear optimization algorithm.
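Shading-based refinement of this kind typically builds on a Lambertian image-formation model. As a generic illustration (a textbook sketch, not the dissertation's actual algorithm), per-pixel intensity can be modeled as albedo times the clamped dot product of the surface normal and the light direction:

```python
import numpy as np

def lambertian_shading(albedo, normals, light_dir):
    """Generic Lambertian image formation: I = albedo * max(0, n . l).
    albedo: (H, W) per-pixel reflectance; normals: (H, W, 3) unit surface
    normals; light_dir: (3,) directional light. Shape-from-shading methods
    invert a model of this kind to recover fine geometric detail."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)  # clamp back-facing pixels to 0
    return albedo * shading
```

Inverting this forward model (solving for albedo and normals given observed intensities) is what lets fine geometric grains be added on top of a coarse reconstruction.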
In the third approach, we further relax the constraints on the input to arbitrary in-the-wild images. The proposed approach robustly reconstructs high-quality models even under extreme expressions and large poses.
We then explore the applicability of our face reconstructions in four interesting applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylization, and video face replacement. We demonstrate the great potential of our reconstruction approaches in these real-world applications.

In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large facial occlusion is a major obstacle to communicating in a face-to-face manner. In a further application, we explore hardware/software solutions for synthesizing the face image in the presence of HMDs. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem. With our algorithm and prototype, we achieve photo-realistic results.
We further propose a deep neural network to solve the HMD removal problem by treating it as a face inpainting problem. This approach requires no special hardware and runs in real time with satisfying results.
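The inpainting formulation can be illustrated with a masked reconstruction objective. This is a minimal sketch under the common assumption that the loss is restricted to the occluded region; the dissertation's actual network and loss design may differ:

```python
import numpy as np

def masked_inpainting_loss(original, predicted, hmd_mask):
    """Mean L1 error computed only inside the HMD-occluded region.
    hmd_mask is 1 where the display occludes the face, 0 elsewhere;
    unoccluded pixels are already known and carry no penalty."""
    diff = hmd_mask * np.abs(original - predicted)
    return float(np.sum(diff) / (np.sum(hmd_mask) + 1e-8))
```

Restricting the penalty to the masked region is what lets the network focus capacity on hallucinating the hidden upper face rather than re-encoding the visible pixels.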
Periocular in the Wild Embedding Learning with Cross-Modal Consistent Knowledge Distillation
Periocular biometrics, i.e., recognition from the peripheral region of the eye, is a complementary alternative to face recognition, especially when the face is occluded or masked. In practice, the periocular region alone captures only the least salient facial features, and thus suffers from poor intra-class compactness and limited inter-class dispersion, particularly in the wild. To address these problems, we transfer useful information from the face to support the periocular modality by means of knowledge distillation (KD) for embedding learning. However, directly applying typical KD techniques to heterogeneous modalities is suboptimal. In this paper, we put forward a deep face-to-periocular distillation network, coined cross-modal consistent knowledge distillation (CM-CKD). The three key ingredients of CM-CKD are (1) shared-weight networks, (2) consistent batch normalization, and (3) bidirectional consistency distillation between the face and periocular modalities through an effective CKD loss. To be more specific, we leverage the face modality for periocular embedding learning, but only periocular images are targeted for identification and verification tasks. Extensive experiments on six constrained and unconstrained periocular datasets show that the CM-CKD-learned periocular embeddings improve identification and verification performance by 50% in terms of relative performance gain over face and periocular baselines. The experiments also reveal that the CM-CKD-learned periocular features enjoy better subject-wise cluster separation, thereby improving overall accuracy.
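The bidirectional consistency idea can be sketched as a cosine-alignment loss between face and periocular embeddings of the same subject. The function below is an illustrative simplification; the exact CKD loss, and how it interacts with the shared-weight networks and consistent batch normalization, is specific to the paper:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Normalize each row to unit length so only direction matters."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def ckd_consistency_loss(face_emb, peri_emb):
    """Pull periocular embeddings toward the face embeddings of the same
    subjects (and vice versa, since the loss is symmetric) by maximizing
    cosine similarity. Inputs are (batch, dim) arrays of paired embeddings;
    returns the mean of (1 - cosine similarity) over the batch."""
    cos = np.sum(l2_normalize(face_emb) * l2_normalize(peri_emb), axis=-1)
    return float(np.mean(1.0 - cos))
```

Minimizing a loss of this shape drives the periocular branch to inherit the discriminative structure of the stronger face modality, even though only periocular images are used at test time.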
Joint optimization of manifold learning and sparse representations for face and gesture analysis
Face and gesture understanding algorithms are powerful enablers in intelligent vision systems for surveillance, security, entertainment, and smart spaces. In the future, complex networks of sensors and cameras may dispense directions to lost tourists, perform directory lookups in the office lobby, or contact the proper authorities in case of an emergency. To be effective, these systems will need to embrace human subtleties while interacting with people in their natural conditions. Computer vision and machine learning techniques have recently become adept at solving face and gesture tasks using posed datasets collected in controlled conditions. However, spontaneous human behavior under unconstrained conditions, or in the wild, is more complex and is subject to considerable variability from one person to the next. Uncontrolled conditions such as lighting, resolution, noise, occlusions, pose, and temporal variations complicate the matter further.

This thesis advances the field of face and gesture analysis by introducing a new machine learning framework based on dimensionality reduction and sparse representations that is shown to be robust in posed as well as natural conditions. Dimensionality reduction methods take complex objects, such as facial images, and attempt to learn lower-dimensional representations embedded in the higher-dimensional data. These alternate feature spaces are computationally more efficient and often more discriminative. The performance of various dimensionality reduction methods on geometric and appearance-based facial attributes is studied, leading to robust facial pose and expression recognition models. The parsimonious nature of sparse representations (SR) has successfully been exploited for the development of highly accurate classifiers for various applications. Despite the successes of SR techniques, large dictionaries and high-dimensional data can make these classifiers computationally demanding.
Further, sparse classifiers are subject to the adverse effects of a phenomenon known as coefficient contamination, where, for example, variations in pose may affect identity and expression recognition. This thesis analyzes the interaction between dimensionality reduction and sparse representations to present a unified sparse representation classification framework that addresses both issues of computational complexity and coefficient contamination. Semi-supervised dimensionality reduction is shown to mitigate the coefficient contamination problems associated with SR classifiers. The combination of semi-supervised dimensionality reduction with SR systems forms the cornerstone of a new face and gesture framework called Manifold-based Sparse Representations (MSR). MSR is shown to deliver state-of-the-art facial understanding capabilities. To demonstrate the applicability of MSR to new domains, it is expanded to include temporal dynamics.

The joint optimization of dimensionality reduction and SRs for classification purposes is a relatively new field. The combination of both concepts into a single objective function produces a relation that is neither convex nor directly solvable. This thesis studies this problem and introduces a new jointly optimized framework. This framework, termed LGE-KSVD, utilizes variants of the Linear extension of Graph Embedding (LGE) along with modified K-SVD dictionary learning to jointly learn the dimensionality reduction matrix, sparse representation dictionary, sparse coefficients, and a sparsity-based classifier. By injecting LGE concepts directly into the K-SVD learning procedure, this research removes the support constraints K-SVD imposes on dictionary element discovery. Results are shown for facial recognition, facial expression recognition, and human activity analysis; with the addition of a concept called active difference signatures, the framework delivers robust gesture recognition from Kinect or similar depth cameras.
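The sparse representation classification that MSR builds on can be sketched as follows. The greedy pursuit and minimum-class-residual rule below are a textbook-style illustration of SR classification, not the LGE-KSVD algorithm itself:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select k atoms of dictionary D
    (columns assumed unit-norm) that best explain the signal y."""
    residual = y.astype(float).copy()
    idx = []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[idx] = -np.inf            # never reselect a chosen atom
        idx.append(int(np.argmax(corr)))
        # re-fit coefficients over all selected atoms, then update residual
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=2):
    """Sparse representation classification: code y over the whole
    dictionary, then assign the class whose atoms alone reconstruct y
    with the smallest residual."""
    x = omp(D, y, k)
    labels = np.asarray(labels)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)  # keep only this class's coefficients
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best_class, best_res = int(c), res
    return best_class
```

Coefficient contamination in this setting is exactly the failure mode where nuisance variation (e.g., pose) spreads energy onto atoms of the wrong class, corrupting the per-class residuals; the thesis's semi-supervised dimensionality reduction is aimed at suppressing that spread before coding.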
Generative RGB-D face completion for head-mounted display removal
Head-mounted displays (HMDs) are an essential display device for the observation of virtual reality (VR) environments. However, HMDs obstruct external capturing methods from recording the user's upper face. This severely impacts social VR applications, such as teleconferencing, which commonly rely on external RGB-D sensors to capture a volumetric representation of the user. In this paper, we introduce an HMD removal framework based on generative adversarial networks (GANs), capable of jointly filling in missing color and depth data in RGB-D face images. Our framework includes an RGB-based identity loss function for identity preservation and several components aimed at surface reproduction. Our results demonstrate that the framework is able to remove HMDs from synthetic RGB-D face images while preserving the subject's identity.
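A standard step in generative completion pipelines of this kind (shown here as a generic sketch, not necessarily this paper's exact formulation) is to composite the generator output into the occluded region only, so captured RGB-D pixels pass through unchanged:

```python
import numpy as np

def composite_rgbd(rgbd_input, generated, mask):
    """Blend generated content into the HMD region of an RGB-D frame.
    mask is 1 where pixels must be synthesized and 0 where captured values
    are kept; broadcasting applies it channel-wise, e.g. an (H, W, 1) mask
    over (H, W, 4) RGB-D arrays (three color channels plus depth)."""
    return mask * generated + (1.0 - mask) * rgbd_input
```

Compositing this way guarantees the visible lower face and its depth values are exactly those of the sensor, so any identity or surface losses only have to constrain the synthesized upper-face region.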
A Survey on Computer Vision based Human Analysis in the COVID-19 Era
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) supporting the prevention measures through automated analysis of visual data, on the one hand, and (ii) facilitating the normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data, which have been affected the most by the partial occlusions introduced by face-mask mandates. Such computer-vision-based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions, and many others, and have seen considerable attention in recent years. The goal of this survey is to provide an introduction to the problems COVID-19 has induced in this research area and to present a comprehensive review of the work done in the field of computer-vision-based human analysis. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions for mitigating this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19-related applications is provided. Finally, to help advance the field further, a discussion of the main open challenges and future research directions is given. (Submitted to Image and Vision Computing.)