Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing
Despite recent progress in developing animatable full-body avatars, realistic
modeling of clothing - one of the core aspects of human self-expression -
remains an open challenge. State-of-the-art physical simulation methods can
generate realistically behaving clothing geometry at interactive rates.
Modeling photorealistic appearance, however, usually requires physically-based
rendering which is too expensive for interactive applications. On the other
hand, data-driven deep appearance models are capable of efficiently producing
realistic appearance, but struggle at synthesizing geometry of highly dynamic
clothing and handling challenging body-clothing configurations. To this end, we
introduce pose-driven avatars with explicit modeling of clothing that exhibit
both photorealistic appearance learned from real-world data and realistic
clothing dynamics. The key idea is to introduce a neural clothing appearance
model that operates on top of explicit geometry: at training time we use
high-fidelity tracking, whereas at animation time we rely on physically
simulated geometry. Our core contribution is a physically-inspired appearance
network, capable of generating photorealistic appearance with view-dependent
and dynamic shadowing effects even for unseen body-clothing configurations. We
conduct a thorough evaluation of our model and demonstrate diverse animation
results on several subjects and different types of clothing. Unlike previous
work on photorealistic full-body avatars, our approach can produce much richer
dynamics and more realistic deformations even for many examples of loose
clothing. We also demonstrate that our formulation naturally allows clothing to
be used with avatars of different people while staying fully animatable, thus
enabling, for the first time, photorealistic avatars with novel clothing.
Comment: SIGGRAPH Asia 2022 (ACM ToG) camera-ready. The supplementary video can be found at https://research.facebook.com/publications/dressing-avatars-deep-photorealistic-appearance-for-physically-simulated-clothing
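As a rough illustration of the hybrid pipeline this abstract describes (tracked geometry at training time, physically simulated geometry at animation time, one shared appearance model on top), here is a toy sketch. Every name and formula below is a hypothetical stand-in; the actual system uses a learned neural appearance network, not these arithmetic placeholders.

```python
# Toy sketch of the train/animate split described above. All names are
# hypothetical stand-ins: the real system uses a learned neural
# appearance network on top of tracked or physically simulated geometry.

def appearance(geometry, view_dir):
    """Stand-in appearance model: maps per-vertex geometry plus a
    scalar view direction to shade values."""
    return [round(g * 0.8 + view_dir * 0.2, 3) for g in geometry]

def tracked_geometry(frame):    # hypothetical high-fidelity tracker
    return [0.1 * frame, 0.2 * frame]

def simulated_geometry(frame):  # hypothetical physics simulator
    return [0.1 * frame + 0.01, 0.2 * frame - 0.01]

def get_geometry(mode, frame):
    # Training uses tracked geometry; animation uses simulated geometry.
    return tracked_geometry(frame) if mode == "train" else simulated_geometry(frame)

# The same appearance model consumes either geometry source, which is
# what lets simulated clothing drive photoreal appearance at test time.
print(appearance(get_geometry("train", 2), view_dir=1.0))
print(appearance(get_geometry("animate", 2), view_dir=1.0))
```

The point of the sketch is only the control flow: the appearance model never needs to know whether its input geometry was tracked or simulated.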
Enabling Social Virtual Reality Experiences Using Pass-Through Video
Appropriately segmented portions of a video stream, captured by an outward-facing camera mounted on a virtual reality (VR) device, are inserted into a VR environment. The outward-facing camera can be the onboard camera of the VR device or a separate camera mounted on it. Because the camera view from the VR user's point of view approximates a full three-dimensional (3D) model of the user's environment rendered from the user's location, the video stream can be inserted directly into the VR environment by segmenting out the relevant pixels and placing them into the VR environment as a 3D object. In this way, a high-quality VR experience, including desired aspects of the user's physical and social environment, can be provided in most settings without expensive 3D modeling or avatar generation.
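The segment-and-place idea above can be sketched in a few lines. This is a minimal toy, not the patented system: the frame, mask, and scene-object fields are all hypothetical, and a real implementation would use a learned person segmenter and the VR runtime's compositor.

```python
# Hedged sketch of the pass-through idea: keep only the pixels a
# segmentation mask marks as relevant, then place the result in the VR
# scene as a flat textured quad at an estimated depth. All names here
# are hypothetical illustrations.

def segment_frame(frame, mask):
    """Zero out pixels the mask excludes; keep the rest."""
    return [[px if keep else 0 for px, keep in zip(row, mrow)]
            for row, mrow in zip(frame, mask)]

def place_in_scene(segmented, depth_m):
    """Wrap the segmented pixels as a quad facing the user. Because the
    outward camera roughly shares the user's viewpoint, no full 3D
    reconstruction is needed -- the quad sits at depth_m in front."""
    return {"texture": segmented, "transform": (0.0, 0.0, -depth_m)}

frame = [[10, 20], [30, 40]]            # toy 2x2 grayscale frame
mask = [[True, False], [False, True]]   # pixels belonging to, e.g., a person
obj = place_in_scene(segment_frame(frame, mask), depth_m=1.5)
print(obj)
```

The billboard placement is the key simplification the abstract relies on: since camera and eye viewpoints nearly coincide, a 2D cutout at plausible depth reads as a 3D object.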
Influence of Narrative Elements on User Behaviour in Photorealistic Social VR
Social Virtual Reality (VR) applications are becoming the next big
revolution in the field of remote communication. Social VR lets
participants explore and interact with a virtual environment and its
objects while experiencing a full sense of immersion and of being
together. Understanding how user behaviour is influenced by the shared
virtual space and its elements is key to designing and optimizing novel
immersive experiences that account for the interaction between users
and virtual objects. This paper presents a behavioural analysis of user
navigation trajectories in a six-degrees-of-freedom social VR movie. We
analysed 48 user trajectories from a photorealistic telepresence
experiment in which subjects watched a crime movie together in VR. We
investigate how users are affected by salient agents (i.e., virtual
characters) and by the narrative elements of the VR movie (i.e.,
dialogue versus interactive parts), and we complete our assessment with
a statistical analysis of the collected data. Results indicate that
user behaviour is affected by the different narrative and interactive
elements. We present our observations and draw conclusions on future
paths for social VR experiences.
LiveHand: Real-time and Photorealistic Neural Hand Rendering
The human hand is the main medium through which we interact with our
surroundings. Hence, its digitization is of utmost importance, with direct
applications in VR/AR, gaming, and media production, among other areas. While
there are several works on modeling the geometry and articulation of hands,
little attention has been dedicated to capturing photo-realistic appearance. In
addition, for applications in extended reality and gaming, real-time rendering
is critical. In this work, we present the first neural-implicit approach to
photo-realistically render hands in real-time. This is a challenging problem as
hands are textured and undergo strong articulations with various pose-dependent
effects. However, we show that this can be achieved through our carefully
designed method. This includes training on a low-resolution rendering of a
neural radiance field, together with a 3D-consistent super-resolution module
and mesh-guided space canonicalization and sampling. In addition, we show that
the novel application of a perceptual loss in image space is critical for
achieving photorealism. We show rendering results for several identities, and
demonstrate that our method captures pose- and view-dependent appearance
effects. We also show a live demo of our method in which we photo-realistically
render the human hand in real-time, for the first time in the literature. We
ablate all our design choices and show that they optimize for both photorealism
and rendering speed. Our code will be released to encourage further research in
this area.
Comment: 11 pages, 8 figures
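The training recipe this abstract outlines (low-resolution radiance-field render, 3D-consistent super-resolution, pixel plus perceptual supervision) can be caricatured as follows. Every component here is a stand-in under stated assumptions: the renderer, upsampler, and "feature extractor" are toys, whereas the real method uses a neural radiance field, a learned super-resolution module, and a deep perceptual loss.

```python
# Toy sketch of the training recipe: render at low resolution, upsample
# via a "super-resolution" step, then score with a combined pixel +
# perceptual loss. Every piece is a hypothetical stand-in.

def render_low_res():
    # Stand-in for a cheap low-resolution NeRF render (2x2 grayscale).
    return [[0.2, 0.4], [0.6, 0.8]]

def super_resolve(img):
    """Nearest-neighbour 2x upsample, standing in for the learned,
    3D-consistent super-resolution module."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def pixel_loss(pred, target):
    # Mean squared error over all pixels.
    return sum((p - t) ** 2
               for pr, tr in zip(pred, target)
               for p, t in zip(pr, tr)) / (len(pred) * len(pred[0]))

def perceptual_loss(pred, target):
    # Stand-in "feature extractor": per-row mean intensity, compared
    # in feature space rather than pixel space.
    feat = lambda img: [sum(r) / len(r) for r in img]
    return sum((a - b) ** 2 for a, b in zip(feat(pred), feat(target)))

pred = super_resolve(render_low_res())
target = [[v + 0.1 for v in row] for row in pred]  # toy ground-truth image
total = pixel_loss(pred, target) + 0.1 * perceptual_loss(pred, target)
print(round(total, 4))
```

The only claim carried over from the abstract is the structure of the objective: a pixel term alone is not enough, and adding a feature-space (perceptual) term is what the authors report as critical for photorealism.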