GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians
Hairstyle reflects culture and ethnicity at first glance. In the digital era,
various realistic human hairstyles are also critical to high-fidelity digital
human assets for beauty and inclusivity. Yet, realistic hair modeling and
real-time rendering for animation remain a formidable challenge due to the sheer
number of strands, the complexity of their geometry, and their sophisticated
interaction with light. This paper presents GaussianHair, a novel explicit hair
representation. It enables comprehensive modeling of hair geometry and
appearance from images, fostering innovative illumination effects and dynamic
animation capabilities. At the heart of GaussianHair is the novel concept of
representing each hair strand as a sequence of connected cylindrical 3D
Gaussian primitives. This approach not only retains the hair's geometric
structure and appearance but also allows for efficient rasterization onto a 2D
image plane, facilitating differentiable volumetric rendering. We further
enhance this model with the "GaussianHair Scattering Model", adept at
recreating the slender structure of hair strands and accurately capturing their
local diffuse color in uniform lighting. Through extensive experiments, we
substantiate that GaussianHair achieves breakthroughs in both geometric and
appearance fidelity, transcending the limitations encountered in
state-of-the-art methods for hair reconstruction. Beyond representation,
GaussianHair extends to support editing, relighting, and dynamic rendering of
hair, offering seamless integration with conventional CG pipeline workflows.
Complementing these advancements, we have compiled an extensive dataset of real
human hair, each sample with meticulously detailed strand geometry, to propel
further research in this field.
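To make the core representation concrete, here is a minimal sketch, assuming the Gaussian means sit at segment midpoints, one scale axis runs along each segment, and the two radial axes stay hair-thin; the function name and parameterization are illustrative assumptions, not the authors' code.

```python
import numpy as np

def strand_to_gaussians(points, radius=1e-4):
    """Convert one hair-strand polyline (N, 3) into per-segment Gaussian
    parameters (mean, rotation, anisotropic scale), approximating each
    segment with a thin, cylinder-like 3D Gaussian.

    Hypothetical sketch of the 'chain of cylindrical Gaussians' idea;
    the paper's exact parameterization may differ."""
    segs = points[1:] - points[:-1]            # (N-1, 3) segment vectors
    means = 0.5 * (points[1:] + points[:-1])   # Gaussian centers at midpoints
    lengths = np.linalg.norm(segs, axis=1)
    dirs = segs / lengths[:, None]             # unit tangent per segment

    rotations = []
    for d in dirs:
        # Orthonormal frame whose first axis is the segment tangent.
        a = np.array([1.0, 0, 0]) if abs(d[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(d, a); u /= np.linalg.norm(u)
        v = np.cross(d, u)
        rotations.append(np.stack([d, u, v], axis=1))  # columns = local axes

    # The long axis spans the half-segment; the radial axes stay hair-thin,
    # which is what makes each Gaussian approximate a short cylinder.
    scales = np.stack([lengths / 2,
                       np.full_like(lengths, radius),
                       np.full_like(lengths, radius)], axis=1)
    return means, np.stack(rotations), scales
```

The resulting (mean, rotation, scale) triples are the same primitive parameters a standard 3D Gaussian splatting rasterizer consumes, which is what would make such a representation directly amenable to differentiable volumetric rendering.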
Interactive Virtual Hair Salon
User interaction with animated hair is desirable for various applications but difficult because it requires real-time animation and rendering of hair. Hair modeling, including styling, simulation, and rendering, is computationally challenging due to the enormous number of deformable hair strands on a human head, elevating the computational complexity of many essential steps, such as collision detection and self-shadowing for hair. Using simulation localization techniques, multi-resolution representations, and graphics hardware rendering acceleration, we have developed a physically-based virtual hair salon system that simulates and renders hair at accelerated rates, enabling users to interactively style virtual hair. With a 3D haptic interface, users can directly manipulate and position hair strands, as well as employ real-world styling applications (cutting, blow-drying, etc.) to create hairstyles more intuitively than with previous techniques.
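As a rough illustration of the simulation-localization idea mentioned above, the hypothetical sketch below restricts full-rate dynamics to wisps near the styling tool; the influence radius and names are assumptions for illustration, not the system's actual code.

```python
import numpy as np

def active_wisps(wisp_centers, tool_pos, influence_radius=0.05):
    """Simulation-localization sketch: run full-rate dynamics only for
    wisps near the styling tool; the rest can be updated at a coarser
    rate or left at rest. (Illustrative assumption, not the paper's code.)"""
    dist = np.linalg.norm(wisp_centers - tool_pos, axis=1)
    return dist < influence_radius  # mask of wisps to simulate this frame

# Example: 1000 wisps, haptic tool near the crown of the head.
centers = np.random.rand(1000, 3)
mask = active_wisps(centers, tool_pos=np.array([0.5, 0.9, 0.5]))
```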
Adaptive Wisp Tree - a multiresolution control structure for simulating dynamic clustering in hair motion
Realistic animation of long human hair is difficult due to the number of hair strands and to the complexity of their interactions. Existing methods remain limited to smooth, uniform, and relatively simple hair motion. We present a powerful adaptive approach to modeling the dynamic clustering behavior that characterizes complex long-hair motion. The Adaptive Wisp Tree (AWT) is a novel control structure that approximates the large-scale coherent motion of hair clusters as well as the small-scale variation of individual hair strands. The AWT also aids computational efficiency by identifying regions where visible hair motions are likely to occur. The AWT is coupled with a multiresolution geometry used to define the initial hair model. This combined system produces stable animations that exhibit the natural effects of clustering and mutual hair interaction. Our results show that the method is applicable to a wide variety of hairstyles.
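One plausible reading of the AWT's adaptive split/merge control is sketched below: a node is simulated as a single wisp until the motion of its member strands becomes incoherent, at which point its children take over, and it coarsens back once they move together again. Thresholds and field names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class WispNode:
    """Adaptive-wisp-tree sketch (illustrative assumptions throughout)."""
    def __init__(self, strand_velocities, children=None):
        self.v = np.asarray(strand_velocities)  # (K, 3) member-strand velocities
        self.children = children or []
        self.active = True                      # simulated as one wisp if True

    def motion_incoherence(self):
        # Largest deviation of any member strand from the cluster's mean motion.
        return float(np.linalg.norm(self.v - self.v.mean(axis=0), axis=1).max())

    def adapt(self, split_thresh=0.1, merge_thresh=0.02):
        if self.children and self.active and self.motion_incoherence() > split_thresh:
            self.active = False                 # refine: simulate children instead
        elif self.children and not self.active and self.motion_incoherence() < merge_thresh:
            self.active = True                  # coarsen: collapse back to one wisp
        for c in self.children:
            c.adapt(split_thresh, merge_thresh)
```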
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show that it enables significantly greater flexibility
in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1
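To convey the flavor of view- and pose-dependent texturing, the sketch below weights the stored target-video frames by how well their capture direction matches the novel viewing direction and composites them; this weighting is a hypothetical stand-in, not the paper's exact compositing scheme.

```python
import numpy as np

def blend_weights(frame_view_dirs, novel_view_dir, sharpness=8.0):
    """Weight each stored frame by view alignment (hypothetical scheme)."""
    cos = frame_view_dirs @ novel_view_dir   # (F,) alignment per stored frame
    w = np.exp(sharpness * (cos - 1.0))      # peaked at perfect alignment
    return w / w.sum()

def composite(textures, frame_view_dirs, novel_view_dir):
    """Blend per-frame textures (F, H, W, 3) into one view-dependent image."""
    w = blend_weights(frame_view_dirs, novel_view_dir)
    return np.tensordot(w, textures, axes=1)  # (H, W, 3)
```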
Modelling Rod-like Flexible Biological Tissues for Medical Training
This paper outlines a framework for the modelling of slender rod-like biological tissue structures at both global and local scales. Volumetric discretization of a rod-like structure is computationally expensive and therefore not ideal for applications where real-time performance is essential. In our approach, the Cosserat rod model is introduced to capture global shape changes, modelling the structure as a one-dimensional entity, while local deformation is handled separately. In this way a good balance between accuracy and efficiency is achieved. These advantages make our method appropriate for the modelling of soft tissues in medical training applications.
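As a minimal illustration of treating the rod as a one-dimensional entity, the sketch below evaluates just the stretch term of a discrete rod energy over centerline nodes; a full Cosserat model would add bending and twist via per-segment material frames, which are omitted here, and all names are assumptions for illustration.

```python
import numpy as np

def stretch_energy(x, rest_len, ks=1.0):
    """Stretch term of a discrete 1D rod energy (illustrative sketch).

    x        : (N, 3) centerline node positions
    rest_len : (N-1,) rest lengths of the segments
    ks       : stretch stiffness (assumed constant along the rod)"""
    lengths = np.linalg.norm(x[1:] - x[:-1], axis=1)
    strain = lengths / rest_len - 1.0            # axial strain per segment
    return 0.5 * ks * np.sum(rest_len * strain**2)
```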
Modelling 3D product visualization on the online retailer
-Purpose: An emerging body of research has investigated the notions of telepresence and presence on online retailers' websites over the past two decades. Since that time, considerable research has been published across different fields to explain the meanings and applications of these notions. This study aims to investigate the antecedents and consequences of 3D product simulation telepresence and the effects of those consequences on consumers' behavioural intentions on an online retailer's website.
-Design/methodology/approach: This study developed a retailer website on which a variety of laptops are presented using 3D product visualizations. The research used a within-subjects design and employed two laboratory experiments. In the first experiment, a two-way repeated-measures ANOVA was conducted to determine the effects of the manipulated conditions on the dependent variable (i.e., 3D telepresence); an illustrative sketch of this analysis follows the abstract. Finally, Amos 16 was used to test the overall goodness of fit of the proposed conceptual model.
-Originality/value: To the best of the authors' knowledge, this research is the first in the UK to use a UK sample to investigate the effects of using 3D product visualization in the electrical goods industry (i.e., laptops) on consumers' experiences. Secondly, this paper merges constructs from the human-computer interaction (HCI) field (i.e., control, vividness, and telepresence) into the proposed model. Moreover, the way this paper defines interactivity and telepresence adds value to this study. Thirdly, we developed new scales to measure the telepresence and control constructs to suit consumers' experience in the online retailer context. Finally, the design of this study is original in using a website that contains 3D product visualization with both utilitarian and hedonic values.
-Findings: The manipulation checks showed that high control and animation provide the most effective representation of telepresence. The overall goodness of fit of the conceptual model met the required standards and showed that all the hypothesized paths were valid.
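As referenced in the design section above, here is a minimal sketch of how such a two-way repeated-measures ANOVA could be run with statsmodels; the factor names ('control', 'animation') are assumptions drawn from the manipulations described, and the scores are fabricated placeholders, not the study's data.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Placeholder long-format data: one telepresence score per participant
# per condition (factor names and values are illustrative assumptions).
df = pd.DataFrame({
    "subject":      [1, 1, 1, 1, 2, 2, 2, 2],
    "control":      ["low", "low", "high", "high"] * 2,
    "animation":    ["off", "on", "off", "on"] * 2,
    "telepresence": [3.1, 3.8, 4.0, 4.9, 2.9, 3.5, 4.2, 5.1],
})

# Two-way repeated-measures ANOVA: main effects of control and animation
# (and their interaction) on the telepresence score.
result = AnovaRM(df, depvar="telepresence", subject="subject",
                 within=["control", "animation"]).fit()
print(result)
```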
Learning to Reconstruct People in Clothing from a Single RGB Camera
We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
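A toy sketch of the fusion idea follows: encode each frame into a pose-invariant latent code, average the codes across the variable number of frames, and regress body-model parameters plus per-vertex displacements in canonical T-pose space. The architecture, layer sizes, and the SMPL-like vertex count are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class ShapeFromFrames(nn.Module):
    """Fusion sketch (all architectural details are assumptions)."""
    def __init__(self, latent=256, n_betas=10, n_verts=6890):  # 6890: SMPL-like template (assumed)
        super().__init__()
        self.encoder = nn.Sequential(               # stand-in image encoder
            nn.Conv2d(3, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, latent))
        self.shape_head = nn.Linear(latent, n_betas)      # body-shape parameters
        self.disp_head = nn.Linear(latent, n_verts * 3)   # clothing/hair offsets

    def forward(self, frames):                      # frames: (F, 3, H, W)
        z = self.encoder(frames).mean(dim=0)        # fuse any number of frames
        return self.shape_head(z), self.disp_head(z).view(-1, 3)

model = ShapeFromFrames()
betas, displacements = model(torch.rand(8, 3, 128, 128))  # 8 input frames
```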