Simple Methods to Represent Shapes with Sample Spheres
Representing complex shapes with simple primitives at high accuracy is
important for a variety of applications in computer graphics and geometry
processing. Existing solutions may produce suboptimal samples or are complex to
implement. We present methods to approximate a given shape with a user-tunable
number of spheres, balancing accuracy against simplicity: touching
medial/scale-axis polar balls and k-means smallest enclosing circles. Our
methods are easy to implement, run efficiently, and can approach quality
similar to manual construction. (SIGGRAPH Asia 2020 Technical Communication)
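As a rough sketch of the k-means half of the approach (hypothetical code, not the authors' implementation): cluster surface samples with k-means, then bound each cluster with a sphere. For simplicity the exact smallest enclosing sphere is replaced here by a centroid-plus-max-distance bound; a faithful version would use e.g. Welzl's algorithm per cluster.

```python
import numpy as np

def kmeans_bounding_spheres(points, k, iters=50, seed=0):
    """Approximate a point-sampled shape with k spheres: cluster the
    samples with k-means, then bound each cluster with a sphere centered
    at the cluster mean (radius = max distance to a member point)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # move each center to its cluster mean
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    radii = np.array([
        np.linalg.norm(points[labels == j] - centers[j], axis=1).max()
        if np.any(labels == j) else 0.0
        for j in range(k)
    ])
    return centers, radii

# approximate the unit sphere's surface samples with 4 bounding spheres
pts = np.random.default_rng(1).normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
centers, radii = kmeans_bounding_spheres(pts, k=4)
```

By construction every sample lies inside its cluster's sphere, so the union of spheres covers the sample set; the trade-off between accuracy and simplicity is exposed through `k`, matching the user-tunable sphere count described above.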
Video Interpolation using Optical Flow and Laplacian Smoothness
Non-rigid video interpolation is a common computer vision task. In this paper
we present an optical flow approach which adopts a Laplacian Cotangent Mesh
constraint to enhance local smoothness. Similar to Li et al., our approach
fits a mesh to the image with a resolution of up to one vertex per pixel and
uses angle constraints to ensure sensible local deformations between image
pairs. The Laplacian Mesh constraints are expressed wholly inside the optical
flow optimization, and can be applied in a straightforward manner to a wide
range of image tracking and registration problems. We evaluate our approach by
testing on several benchmark datasets, including the Middlebury and Garg et al.
datasets. In addition, we show an application of our method to constructing 3D
Morphable Facial Models from dynamic 3D data.
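To illustrate how a Laplacian smoothness term regularizes a dense per-pixel field, here is a minimal sketch (hypothetical; it uses a uniform 4-neighbour grid Laplacian rather than the paper's cotangent-weighted mesh Laplacian, and a plain data-fidelity term in place of an optical-flow data term):

```python
import numpy as np

def laplacian(f):
    # uniform 4-neighbour Laplacian with replicated borders
    p = np.pad(f, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * f

def regularize_flow(u0, lam=1.0, iters=200, step=0.2):
    """Gradient descent on E(u) = 0.5*||u - u0||^2 + 0.5*lam*||grad u||^2.
    The gradient of the smoothness term is -lam * Laplacian(u), so each
    step diffuses the field while the data term anchors it to u0."""
    u = u0.copy()
    for _ in range(iters):
        u += step * (lam * laplacian(u) - (u - u0))
    return u

# a noisy, nominally constant flow component: smoothing lowers its variance
rng = np.random.default_rng(0)
u0 = 1.0 + 0.3 * rng.standard_normal((32, 32))
u = regularize_flow(u0)
```

The same structure carries over when the quadratic data term is replaced by a brightness-constancy term, which is how a mesh-Laplacian constraint can be expressed wholly inside the flow optimization.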
Drivable 3D Gaussian Avatars
We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable
model for human bodies rendered with Gaussian splats. Current photorealistic
drivable avatars require either accurate 3D registrations during training,
dense input images during testing, or both. The ones based on neural radiance
fields also tend to be prohibitively slow for telepresence applications. This
work uses the recently presented 3D Gaussian Splatting (3DGS) technique to
render realistic humans at real-time framerates, using dense calibrated
multi-view videos as input. To deform those primitives, we depart from the
commonly used point deformation method of linear blend skinning (LBS) and use a
classic volumetric deformation method: cage deformations. Given their smaller
size, we drive these deformations with joint angles and keypoints, which are
more suitable for communication applications. Our experiments on nine subjects
with varied body shapes, clothes, and motions yield higher-quality results
than state-of-the-art methods when using the same training and test data.
(Project website: https://zielon.github.io/d3ga)
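The cage-deformation idea can be sketched with the simplest possible cage, an axis-aligned box with trilinear weights (a hypothetical illustration only: D3GA's cages and the Gaussian-splat primitives they deform are more general than this):

```python
import numpy as np

def trilinear_cage_deform(points, cage_rest, cage_posed):
    """Cage deformation with an axis-aligned box cage: compute each
    point's trilinear coordinates inside the rest cage, then blend the
    8 posed corner positions with those same weights."""
    lo, hi = cage_rest.min(axis=0), cage_rest.max(axis=0)
    t = (points - lo) / (hi - lo)          # normalized coords in [0,1]^3
    out = np.zeros_like(points)
    for i in range(8):                     # corner i encodes bits (x, y, z)
        b = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1])
        w = np.prod(np.where(b, t, 1.0 - t), axis=1)  # trilinear weight
        out += w[:, None] * cage_posed[i]
    return out

# rest cage: the unit cube, corners ordered to match the bit scheme above
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
pts = np.random.default_rng(0).random((100, 3))
# moving every cage corner moves the embedded points accordingly
moved = trilinear_cage_deform(pts, corners, corners + np.array([0.5, 0.0, 0.0]))
```

Unlike linear blend skinning, which transforms each point by a weighted blend of bone transforms, the cage moves points purely through the positions of a small set of control vertices, which is why a low-dimensional signal such as joint angles and keypoints can drive it.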