State of the Art on Neural Rendering
Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This state-of-the-art report is focused on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
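The "integration of differentiable rendering into network training" mentioned in this abstract can be illustrated with a deliberately tiny sketch. The "renderer" below just fills an image with one RGB color; because the output is differentiable with respect to that color, the scene parameter can be fitted to a target image by gradient descent. All names and the toy model are illustrative, not taken from any system surveyed in the report.

```python
import numpy as np

def render(color, shape=(4, 4, 3)):
    """Toy 'renderer': a flat image produced from a single RGB color parameter."""
    return np.ones(shape) * color  # broadcast the color over all pixels

target = render(np.array([0.2, 0.6, 0.9]))  # the "photograph" we want to match
color = np.zeros(3)                          # initial scene parameter

for _ in range(200):
    # Analytic gradient of the per-channel squared error w.r.t. the color;
    # a differentiable renderer lets this gradient flow into network training.
    grad = 2.0 * np.mean(render(color) - target, axis=(0, 1))
    color -= 0.5 * grad

print(np.round(color, 3))  # converges to [0.2, 0.6, 0.9]
```

Real differentiable renderers differentiate through geometry, materials, and lighting rather than a single color, but the optimization loop has this same shape.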
Cultural-based visual expression: Emotional analysis of human face via Peking Opera Painted Faces (POPF)
© 2015 The Author(s). Peking Opera, a branch of traditional Chinese culture and art, uses very distinct, colourful facial make-up for all actors in stage performance. Such make-up is stylised into nonverbal symbolic semantics which combine to form the painted faces that describe and symbolise the background, character, and emotional status of specific roles. A study of Peking Opera Painted Faces (POPF) was taken as an example of how information and meaning can be effectively expressed through changes of facial expression based on facial motion, in both its natural and emotional aspects. The study found that POPF provides exaggerated features of facial motion through images, and that the symbolic semantics of POPF provide a high-level expression of human facial information. The study has presented and demonstrated a creative structure of information analysis and expression based on POPF to improve the understanding of human facial motion and emotion.
Automated Face Recognition: Challenges and Solutions
Automated face recognition (AFR) aims to identify people in images or videos using pattern-recognition techniques. It is widely used in applications ranging from social media to advanced authentication systems. Whilst techniques for face recognition are well established, the automatic recognition of faces captured by digital cameras in unconstrained, real-world environments is still very challenging, since it involves large variations in acquisition conditions as well as in facial expression and pose. This chapter therefore introduces the topic of computer-automated face recognition in light of the main challenges in that research field and the solutions and applications developed using image processing and artificial intelligence methods.
Structure-aware Editable Morphable Model for 3D Facial Detail Animation and Manipulation
Morphable models are essential for the statistical modeling of 3D faces.
Previous works on morphable models mostly focus on large-scale facial geometry
but ignore facial details. This paper augments morphable models in representing
facial details by learning a Structure-aware Editable Morphable Model (SEMM).
SEMM introduces a detail structure representation based on the distance field
of wrinkle lines, jointly modeled with detail displacements to establish better
correspondences and enable intuitive manipulation of wrinkle structure.
Besides, SEMM introduces two transformation modules to translate expression
blendshape weights and age values into changes in latent space, allowing
effective semantic detail editing while maintaining identity. Extensive
experiments demonstrate that the proposed model compactly represents facial
details, outperforms previous methods in expression animation qualitatively and
quantitatively, and achieves effective age editing and wrinkle line editing of
facial details. Code and model are available at
https://github.com/gerwang/facial-detail-manipulation. Comment: ECCV 202
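The "detail structure representation based on the distance field of wrinkle lines" can be sketched concretely: rasterize a wrinkle polyline onto a small grid and store, for each texel, the distance to the nearest point on the line. The polyline coordinates, grid size, and function names below are hypothetical, not taken from the SEMM codebase.

```python
import numpy as np

def wrinkle_distance_field(polyline, grid_size=32, samples_per_seg=16):
    """Distance from every texel of a grid to a densely sampled polyline."""
    pts = []
    for (x0, y0), (x1, y1) in zip(polyline[:-1], polyline[1:]):
        t = np.linspace(0.0, 1.0, samples_per_seg)
        pts.append(np.stack([x0 + t * (x1 - x0), y0 + t * (y1 - y0)], axis=1))
    pts = np.concatenate(pts)                          # (P, 2) points on the line

    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(float)
    d = np.linalg.norm(grid - pts[None, :, :], axis=-1)  # (texels, P) distances
    return d.min(axis=1).reshape(grid_size, grid_size)

# One hypothetical wrinkle line across a 32x32 UV patch.
field = wrinkle_distance_field([(4, 16), (16, 12), (28, 16)])
print(field.shape)  # (32, 32)
print(field.min())  # 0.0 — texels on the line have zero distance
```

A representation like this gives a smooth, spatially meaningful encoding of wrinkle structure, which is what makes joint modeling with detail displacements and intuitive line editing possible.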
LiveCap: Real-time Human Performance Capture from Monocular Video
We present the first real-time human performance capture approach that
reconstructs dense, space-time coherent deforming geometry of entire humans in
general everyday clothing from just a single RGB video. We propose a novel
two-stage analysis-by-synthesis optimization whose formulation and
implementation are designed for high performance. In the first stage, a skinned
template model is jointly fitted to background subtracted input video, 2D and
3D skeleton joint positions found using a deep neural network, and a set of
sparse facial landmark detections. In the second stage, dense non-rigid 3D
deformations of skin and even loose apparel are captured based on a novel
real-time capable algorithm for non-rigid tracking using dense photometric and
silhouette constraints. Our novel energy formulation leverages automatically
identified material regions on the template to model the differing non-rigid
deformation behavior of skin and apparel. The two resulting non-linear
optimization problems per-frame are solved with specially-tailored
data-parallel Gauss-Newton solvers. In order to achieve real-time performance
of over 25 Hz, we design a pipelined parallel architecture using the CPU and two
commodity GPUs. Our method is the first real-time monocular approach for
full-body performance capture. It yields accuracy comparable to off-line
performance capture techniques while being orders of magnitude faster.
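The "specially-tailored data-parallel Gauss-Newton solvers" used per frame belong to a standard solver family that can be sketched on a toy problem. The snippet below reproduces the solver pattern only, not the paper's energy terms: it fits a two-parameter exponential by repeatedly solving the normal equations J^T J dx = -J^T r. Model and variable names are illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize ||residual(x)||^2 by repeated linearization (Gauss-Newton)."""
    x = x0.astype(float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations
        x = x + dx
    return x

# Toy model: y = a * exp(b * t), observed without noise so the fit is exact.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)

residual = lambda x: x[0] * np.exp(x[1] * t) - y
jacobian = lambda x: np.stack([np.exp(x[1] * t),
                               x[0] * t * np.exp(x[1] * t)], axis=1)

x = gauss_newton(residual, jacobian, np.array([1.0, 0.0]))
print(np.round(x, 3))  # ≈ [2.0, -1.5]
```

In a real-time capture system the same iteration runs on thousands of residuals per frame, which is why the paper maps the linear-algebra inner loop onto data-parallel GPU solvers.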
Exploring the Affective Loop
Research in psychology and neurology shows that both body and mind are
involved when experiencing emotions (Damasio 1994, Davidson et al.
2003). People are also very physical when they try to communicate their
emotions. Somewhere in between being consciously and unconsciously
aware of it ourselves, we produce both verbal and physical signs to make
other people understand how we feel. Simultaneously, this production of
signs involves us in a stronger personal experience of the emotions we
express.
Emotions are also communicated in the digital world, but there is little
focus on users' personal as well as physical experience of emotions in
the available digital media. In order to explore whether and how we can
expand existing media, we have designed, implemented and evaluated
/eMoto/, a mobile service for sending affective messages to others. With
eMoto, we explicitly aim to address both cognitive and physical
experiences of human emotions. Through combining affective gestures for
input with affective expressions that make use of colors, shapes and
animations for the background of messages, the interaction "pulls" the
user into an /affective loop/. In this thesis we define what we mean by
affective loop and present a user-centered design approach expressed
through four design principles inspired by previous work within Human
Computer Interaction (HCI) but adjusted to our purposes; /embodiment/
(Dourish 2001) as a means to address how people communicate emotions in
real life, /flow/ (Csikszentmihalyi 1990) to reach a state of
involvement that goes further than the current context, /ambiguity/ of
the designed expressions (Gaver et al. 2003) to allow for open-ended
interpretation by the end-users instead of simplistic, one-emotion
one-expression pairs and /natural but designed expressions/ to address
people's natural couplings between cognitively and physically
experienced emotions. We also present results from an end-user study of
eMoto that indicates that subjects got both physically and emotionally
involved in the interaction, and that the designed "openness" and
ambiguity of the expressions were appreciated and understood by our
subjects. Through the user study, we identified four potential design
problems that have to be tackled in order to achieve an affective loop
effect: the extent to which users /feel in control/ of the interaction,
/harmony and coherence/ between cognitive and physical expressions,
/timing/ of expressions and feedback in a communicational setting, and
the effects of users' /personality/ on their emotional expressions and
experiences of the interaction.