EMS: 3D Eyebrow Modeling from Single-view Images
Eyebrows play a critical role in facial expression and appearance. Although
the 3D digitization of faces is well explored, less attention has been drawn to
3D eyebrow modeling. In this work, we propose EMS, the first learning-based
framework for single-view 3D eyebrow reconstruction. Following the methods of
scalp hair reconstruction, we also represent the eyebrow as a set of fiber
curves and convert the reconstruction to a fiber-growing problem. Three modules
are then carefully designed: RootFinder first localizes the fiber root
positions, which indicate where to grow; OriPredictor predicts an orientation
field in the 3D space to guide the growing of fibers; FiberEnder is designed to
determine when to stop the growth of each fiber. Our OriPredictor directly
borrows the method used in hair reconstruction. Considering the differences
between hair and eyebrows, both RootFinder and FiberEnder are newly proposed.
Specifically, to cope with the challenge that the root location is severely
occluded, we formulate root localization as a density map estimation task.
Given the predicted density map, a density-based clustering method is further
used for finding the roots. For each fiber, the growth starts from the root
point and moves step by step until the ending, where each step is defined as an
oriented line with a constant length according to the predicted orientation
field. To determine when to end, a pixel-aligned RNN architecture is designed
to form a binary classifier, which outputs stop or not for each growing step.
To support the training of all proposed networks, we build the first 3D
synthetic eyebrow dataset that contains 400 high-quality eyebrow models
manually created by artists. Extensive experiments have demonstrated the
effectiveness of the proposed EMS pipeline on a variety of different eyebrow
styles and lengths, ranging from short and sparse to long bushy eyebrows.
Comment: To appear in SIGGRAPH Asia 2023 (Journal Track). 19 pages, 19
figures, 6 tables
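The growth procedure described above (constant-length steps along a predicted orientation field, terminated by a stop classifier) can be sketched as follows. The function names, the toy field, and the stopper are hypothetical stand-ins for the paper's OriPredictor and FiberEnder, not the released code.

```python
import numpy as np

STEP_LEN = 0.5   # constant step length (assumed value)
MAX_STEPS = 100  # safety cap on fiber length

def grow_fiber(root, orientation_field, should_stop):
    """Grow one fiber from a root point, step by step.

    orientation_field(p) -> unit 3D direction at point p
    should_stop(points)  -> True when the stop classifier says to end
    (both are stand-ins for the paper's OriPredictor and FiberEnder)
    """
    points = [np.asarray(root, dtype=float)]
    for _ in range(MAX_STEPS):
        direction = orientation_field(points[-1])
        points.append(points[-1] + STEP_LEN * direction)
        if should_stop(points):
            break
    return np.stack(points)

# toy example: a constant upward field and a fixed-length stopper
field = lambda p: np.array([0.0, 0.0, 1.0])
stopper = lambda pts: len(pts) >= 5
fiber = grow_fiber([0, 0, 0], field, stopper)
print(fiber.shape)  # (5, 3)
```

In the real pipeline the stopper would be the pixel-aligned RNN classifier evaluated on the growing fiber at each step.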
HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling
In this work, we tackle the challenging problem of learning-based single-view
3D hair modeling. Due to the great difficulty of collecting paired real image
and 3D hair data, using synthetic data to provide prior knowledge for real
domain becomes a leading solution. This unfortunately introduces the challenge
of domain gap. Due to the inherent difficulty of realistic hair rendering,
existing methods typically use orientation maps instead of hair images as input
to bridge the gap. We firmly believe an intermediate representation is
essential, but we argue that the orientation map produced by the dominant
filtering-based methods is sensitive to noise and far from a competent
representation. Thus, we first raise this issue and propose a novel
intermediate representation, termed HairStep, which consists of a strand map
and a depth map. HairStep not only provides sufficient information for
accurate 3D hair modeling, but is also feasible to infer from real images.
Specifically,
we collect a dataset of 1,250 portrait images with two types of annotations. A
learning framework is further designed to transfer real images to the strand
map and depth map. Notably, an extra bonus of our new dataset is the
first quantitative metric for 3D hair modeling. Our experiments show that
HairStep narrows the domain gap between synthetic and real and achieves
state-of-the-art performance on single-view 3D hair reconstruction.
Comment: CVPR 2023 Highlight, project page:
https://paulyzheng.github.io/research/hairstep
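As a rough illustration of a strand-map-plus-depth-map representation, the sketch below packs per-pixel 2D growth directions and normalized depth into image-like arrays. The 3-channel encoding (mask, dx, dy mapped to [0, 1]) is an assumption for illustration, not HairStep's exact colour coding.

```python
import numpy as np

def make_hairstep(mask, orient2d, depth):
    """Pack a HairStep-style representation: a strand map plus a depth map.

    mask:     (H, W) bool hair region
    orient2d: (H, W, 2) unit 2D growth directions per hair pixel
    depth:    (H, W) normalized depth in [0, 1]
    """
    h, w = mask.shape
    strand_map = np.zeros((h, w, 3), dtype=np.float32)
    strand_map[..., 0] = mask                     # hair-region channel
    strand_map[..., 1:] = (orient2d + 1.0) / 2.0  # directions -> [0, 1]
    strand_map[~mask] = 0.0                       # zero out non-hair pixels
    depth_map = np.where(mask, depth, 0.0).astype(np.float32)
    return strand_map, depth_map

# toy 2x2 example: one non-hair pixel, uniform upward direction
mask = np.array([[True, False], [True, True]])
orient = np.tile(np.array([0.0, 1.0], dtype=np.float32), (2, 2, 1))
depth = np.full((2, 2), 0.5, dtype=np.float32)
s, d = make_hairstep(mask, orient, depth)
print(s.shape, d.shape)  # (2, 2, 3) (2, 2)
```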
Automatic Animation of Hair Blowing in Still Portrait Photos
We propose a novel approach to animate human hair in a still portrait photo.
Existing work has largely studied the animation of fluid elements such as water
and fire. However, hair animation for a real image remains underexplored; it
is a challenging problem due to the high complexity of hair structure and
dynamics. Considering the complexity of hair structure, we innovatively treat
hair wisp extraction as an instance segmentation problem, where a hair wisp is
referred to as an instance. With advanced instance segmentation networks, our
method extracts meaningful and natural hair wisps. Furthermore, we propose a
wisp-aware animation module that animates hair wisps with pleasing motions
without noticeable artifacts. The extensive experiments show the superiority of
our method. Our method provides the most pleasing and compelling viewing
experience in the qualitative experiments and outperforms state-of-the-art
still-image animation methods by a large margin in the quantitative evaluation.
Project url: \url{https://nevergiveu.github.io/AutomaticHairBlowing/}
Comment: Accepted to ICCV 2023
HairBrush for Immersive Data-Driven Hair Modeling
While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, such structures are inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained from a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
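The global blending step, fitting blend-shape weights of dataset variations to points sampled from user-drawn guide strips, can be approximated by a least-squares solve. The function below is an illustrative stand-in under that assumption and omits the local deformation stage.

```python
import numpy as np

def fit_blend_weights(base, variations, target):
    """Fit global blend weights so base + sum_i w_i * delta_i ~= target.

    base:       (N, 3) vertices of the neutral hairstyle
    variations: list of (N, 3) variation shapes (the dataset blend-shapes)
    target:     (N, 3) points matched to the user's guide strips
    """
    deltas = np.stack([v - base for v in variations])  # (K, N, 3)
    A = deltas.reshape(len(variations), -1).T          # (3N, K) design matrix
    b = (target - base).reshape(-1)                    # (3N,) residual to fit
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# toy example: one variation, target halfway between base and variation
base = np.zeros((4, 3))
var1 = np.ones((4, 3))
target = 0.5 * np.ones((4, 3))
w = fit_blend_weights(base, [var1], target)
print(np.round(w, 3))  # [0.5]
```

A real system would regularize the weights and establish correspondences between guide strips and hairstyle geometry before solving.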
Multi-Scale Models to Simulate Interactions between Liquid and Thin Structures
In this dissertation, we introduce a framework for simulating the dynamics between liquid and thin structures, including the effects of buoyancy, drag, capillary cohesion, dripping, and diffusion. After introducing related works, Part I begins with a discussion on the interactions between Newtonian fluid and fabrics. In this discussion, we treat both the fluid and the fabrics as continuum media; thus, the physical model is built from mixture theory. In Part II, we discuss the interactions between Newtonian fluid and hairs. To obtain more detailed dynamics, we no longer treat the hairs as continuum media. Instead, we treat them as discrete Kirchhoff rods. To deal with the thin layer of liquid that clings to the hairs, we augment each hair strand with a height field representation, through which we introduce a new reduced-dimensional flow model to solve the motion of liquid along the longitudinal direction of each hair. In addition, we develop a faithful model for the hairs' cohesion induced by surface tension, where a penalty force is applied to simulate the collision and cohesion between hairs. To enable the discrete strands to interact with continuum-based, shear-dependent liquid, in Part III, we develop models that account for the volume change of the liquid as it passes through strands and the momentum exchange between the strands and the liquid. Accordingly, we extend the reduced-dimensional flow model to simulate liquid with elastoviscoplastic behavior. Furthermore, we use a constraint-based model to replace the penalty-force model to handle contact, which enables an accurate simulation of the frictional and adhesive effects between wet strands. We also present a principled method to preserve the total momentum of a strand and its surface flow, as well as an analytic plastic flow approach for Herschel-Bulkley fluid that enables stable semi-implicit integration at larger time steps.
We demonstrate a wide range of effects, including the challenging animation scenarios involving splashing, wringing, and colliding of wet clothes, as well as flipping of hair, animals shaking, spinning roller brushes from car washes being dunked in water, and intricate hair coalescence effects. For complex liquids, we explore a series of challenging scenarios, including strands interacting with oil paint, mud, cream, melted chocolate, and pasta sauce.
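The penalty-force cohesion model mentioned in Part II can be illustrated with a minimal spring-style force between two nearby strand points. The specific force law below is an assumption for illustration, not the dissertation's exact formulation.

```python
import numpy as np

def cohesion_penalty_force(p, q, rest_dist, stiffness, cutoff):
    """Penalty force on point p due to a nearby strand point q.

    Pulls p toward q beyond rest_dist (capillary cohesion) and pushes
    it away inside rest_dist (collision); zero beyond the cutoff radius.
    """
    d = q - p
    dist = np.linalg.norm(d)
    if dist < 1e-12 or dist > cutoff:
        return np.zeros(3)
    direction = d / dist
    # positive magnitude pulls p toward q, negative pushes it away
    return stiffness * (dist - rest_dist) * direction

# two points 2 units apart, rest distance 1: cohesive pull of magnitude 10
f = cohesion_penalty_force(np.zeros(3), np.array([2.0, 0.0, 0.0]),
                           rest_dist=1.0, stiffness=10.0, cutoff=5.0)
print(f)  # [10.  0.  0.]
```

Part III replaces this penalty model with a constraint-based contact model, which avoids tuning the stiffness and handles friction and adhesion accurately.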
RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars
Synthesizing high-fidelity head avatars is a central problem for computer
vision and graphics. While head avatar synthesis algorithms have advanced
rapidly, the best ones still face great obstacles in real-world scenarios. One
of the main causes is inadequate datasets: 1) current public datasets can
only support researchers to explore high-fidelity head avatars in one or two
task directions; 2) these datasets usually contain digital head assets with
limited data volume, and narrow distribution over different attributes. In this
paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive
advances in head avatar research. It contains massive data assets, with 243+
million complete head frames, and over 800k video sequences from 500 different
identities captured by synchronized multi-view cameras at 30 FPS. It is a
large-scale digital library for head avatars with three key attributes: 1) High
Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K
cameras in 360 degrees. 2) High Diversity: The collected subjects vary from
different ages, eras, ethnicities, and cultures, providing abundant materials
with distinctive styles in appearance and geometry. Moreover, each subject is
asked to perform various motions, such as expressions and head rotations, which
further extend the richness of assets. 3) Rich Annotations: we provide
annotations with different granularities: cameras' parameters, matting, scan,
2D/3D facial landmarks, FLAME fitting, and text description.
Based on the dataset, we build a comprehensive benchmark for head avatar
research, with 16 state-of-the-art methods performed on five main tasks: novel
view synthesis, novel expression synthesis, hair rendering, hair editing, and
talking head generation. Our experiments uncover the strengths and weaknesses
of current methods. RenderMe-360 opens the door for future exploration in head
avatars.
Comment: Technical Report; Project Page: 36; Github Link:
https://github.com/RenderMe-360/RenderMe-360
DeepSketchHair: Deep Sketch-based 3D Hair Modeling
We present DeepSketchHair, a deep-learning-based tool for interactive modeling of
3D hair from 2D sketches. Given a 3D bust model as reference, our sketching
system takes as input a user-drawn sketch (consisting of hair contour and a few
strokes indicating the hair growing direction within a hair region), and
automatically generates a 3D hair model, which matches the input sketch both
globally and locally. The key enablers of our system are two carefully designed
neural networks, namely, S2ONet, which converts an input sketch to a dense 2D
hair orientation field; and O2VNet, which maps the 2D orientation field to a 3D
vector field. Our system also supports hair editing with additional sketches in
new views. This is enabled by another deep neural network, V2VNet, which
updates the 3D vector field with respect to the new sketches. All the three
networks are trained with synthetic data generated from a 3D hairstyle
database. We demonstrate the effectiveness and expressiveness of our tool using
a variety of hairstyles and also compare our method with prior art.
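Once O2VNet produces a 3D vector field, hair strands can be generated by tracing through that field. The minimal tracer below (grid-sampled field, nearest-neighbour lookup) is an illustrative sketch of that final step, not the authors' implementation.

```python
import numpy as np

def trace_strand(start, field, step=0.1, n_steps=50):
    """Trace one strand through a 3D vector field sampled on a grid.

    field: (D, H, W, 3) grid of directions; start: 3D point in grid coords.
    Nearest-neighbour lookup keeps the sketch short; real systems
    interpolate the field trilinearly.
    """
    p = np.asarray(start, dtype=float)
    pts = [p.copy()]
    for _ in range(n_steps):
        # snap to the nearest grid cell, clamped to the grid bounds
        idx = np.clip(np.round(p).astype(int), 0,
                      np.array(field.shape[:3]) - 1)
        p = p + step * field[tuple(idx)]
        pts.append(p.copy())
    return np.stack(pts)

# toy field pointing along +x everywhere
field = np.zeros((4, 4, 4, 3))
field[..., 0] = 1.0
strand = trace_strand([0.0, 0.0, 0.0], field)
print(strand.shape)  # (51, 3)
```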
Multicolour lineage tracing reveals clonal dynamics of squamous carcinoma evolution from initiation to metastasis.
Tumour cells are subjected to evolutionary selection pressures during progression from initiation to metastasis. We analysed the clonal evolution of squamous skin carcinomas induced by DMBA/TPA treatment using the K5CreER-Confetti mouse and stage-specific lineage tracing. We show that benign tumours are polyclonal, but only one population contains the Hras driver mutation. Thus, benign papillomas are monoclonal in origin but recruit neighbouring epithelial cells during growth. Papillomas that never progress to malignancy retain several distinct clones, whereas progression to carcinoma is associated with a clonal sweep. Newly generated clones within carcinomas demonstrate intratumoural invasion and clonal intermixing, often giving rise to metastases containing two or more distinct clones derived from the matched primary tumour. These data demonstrate that late-stage tumour progression and dissemination are governed by evolutionary selection pressures that operate at a multicellular level and, therefore, differ from the clonal events that drive initiation and the benign-malignant transition.
Image-Based Approaches to Hair Modeling
Hair is a key characteristic of virtual characters; therefore, the modeling of plausible facial hair and hairstyles is an essential step in the generation of computer generated (CG) avatars. However, the inherent geometric complexity of hair together with the huge number of filaments of an average human head make the task of modeling hairstyles a very challenging one. To date, this is commonly a manual process which requires artistic skills or very specialized and costly acquisition software. In this work we present an image-based approach to model facial hair (beard and eyebrows) and (head) hairstyles. Since facial hair is usually much shorter than the average head hair, two different methods are presented, adapted to the characteristics of the hair to be modeled. Facial hair is modeled using data extracted from facial texture images and missing information is inferred by means of a database-driven prior model. Our hairstyle reconstruction technique employs images of the hair to be modeled taken with a thermal camera. The major advantage of our thermal image-based method over conventional image-based techniques lies in the fact that during data capture the hairstyle is "lit from the inside": the thermal camera captures heat irradiated by the head and actively re-emitted by the hair filaments almost isotropically. Following this approach we can avoid several issues of conventional image-based techniques, like shadowing or anisotropy in reflectance. The presented technique requires minimal user interaction and a simple acquisition setup. Several challenging examples demonstrate the potential of the proposed approach.