Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video
Dynamic radiance fields have emerged as a promising approach for generating
novel views from a monocular video. However, previous methods enforce
geometric consistency only between adjacent input frames, making it difficult
to represent the global scene geometry and causing degradation at viewpoints
that are spatio-temporally distant from the input camera trajectory. To solve
this problem, we introduce point-based dynamic radiance fields
(Point-DynRF), a novel framework in which the global
geometric information and the volume rendering process are trained by neural
point clouds and dynamic radiance fields, respectively. Specifically, we
reconstruct neural point clouds directly from geometric proxies and optimize
both radiance fields and the geometric proxies using our proposed losses,
allowing them to complement each other. We validate the effectiveness of our
method with experiments on the NVIDIA Dynamic Scenes Dataset and several
casually captured monocular video clips. Comment: WACV202
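The volume rendering process the abstract refers to is, in radiance-field methods, typically the standard alpha-compositing integral along each camera ray. As a minimal sketch (not the paper's exact implementation), the per-sample compositing weights can be computed from densities and sample spacings like this:

```python
import numpy as np

def volume_render_weights(sigmas, deltas):
    """Standard radiance-field compositing weights along one ray:
    w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the
    transmittance accumulated before sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance before each sample: product of (1 - alpha) so far.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return trans * alphas

# Hypothetical example: four density samples at equal spacing along a ray.
sigmas = np.array([0.1, 0.5, 2.0, 0.3])
deltas = np.full(4, 0.25)
weights = volume_render_weights(sigmas, deltas)
# Composite per-sample colors (here all red) into one pixel color.
color = weights @ np.array([[1.0, 0.0, 0.0]] * 4)
```

In Point-DynRF the geometry behind these densities is anchored by neural point clouds rather than a purely implicit field, but the compositing step itself follows this familiar form.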
GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians
Hairstyle reflects culture and ethnicity at first glance. In the digital era,
various realistic human hairstyles are also critical to high-fidelity digital
human assets for beauty and inclusivity. Yet, realistic hair modeling and
real-time rendering for animation remain a formidable challenge due to the
sheer number of strands, their complex geometric structure, and their
sophisticated interaction with light. This paper presents GaussianHair, a novel explicit hair
representation. It enables comprehensive modeling of hair geometry and
appearance from images, fostering innovative illumination effects and dynamic
animation capabilities. At the heart of GaussianHair is the novel concept of
representing each hair strand as a sequence of connected cylindrical 3D
Gaussian primitives. This approach not only retains the hair's geometric
structure and appearance but also allows for efficient rasterization onto a 2D
image plane, facilitating differentiable volumetric rendering. We further
enhance this model with the "GaussianHair Scattering Model", adept at
recreating the slender structure of hair strands and accurately capturing their
local diffuse color in uniform lighting. Through extensive experiments, we
substantiate that GaussianHair achieves breakthroughs in both geometric and
appearance fidelity, transcending the limitations encountered in
state-of-the-art methods for hair reconstruction. Beyond representation,
GaussianHair extends to support editing, relighting, and dynamic rendering of
hair, offering seamless integration with conventional CG pipeline workflows.
Complementing these advancements, we have compiled an extensive dataset of real
human hair, each with meticulously detailed strand geometry, to propel further
research in this field.
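The core idea of representing each strand as a sequence of connected cylindrical 3D Gaussian primitives can be sketched geometrically: each polyline segment becomes one anisotropic Gaussian whose mean sits at the segment midpoint, whose dominant axis follows the segment direction, and whose two minor axes match the strand radius. The function and parameter names below are illustrative assumptions, not the paper's API:

```python
import numpy as np

def strand_to_gaussians(points, radius=1e-3):
    """Convert a hair-strand polyline of shape (N, 3) into N-1 anisotropic
    3D Gaussians, one per segment.

    Returns:
      means:  (N-1, 3) segment midpoints.
      dirs:   (N-1, 3) unit vectors along each segment (dominant axis).
      scales: (N-1, 3) per-Gaussian scales: half the segment length along
              the strand, and the strand radius across it.
    """
    p0, p1 = points[:-1], points[1:]
    means = 0.5 * (p0 + p1)
    seg = p1 - p0
    lengths = np.linalg.norm(seg, axis=1, keepdims=True)
    dirs = seg / lengths
    scales = np.concatenate(
        [0.5 * lengths,                    # elongated along the strand
         np.full_like(lengths, radius),    # thin across it
         np.full_like(lengths, radius)],
        axis=1,
    )
    return means, dirs, scales

# Hypothetical three-point strand (units in meters).
strand = np.array([[0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.01],
                   [0.0, 0.001, 0.02]])
means, dirs, scales = strand_to_gaussians(strand)
```

Because each primitive is an ordinary 3D Gaussian, the resulting strands can be splatted with a standard differentiable Gaussian rasterizer, which is what makes the representation both trainable from images and fast to render.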