76,424 research outputs found
Polyhedral Surface: Self-supervised Point Cloud Reconstruction Based on Polyhedral Surface
Surface reconstruction from raw point clouds has been an important topic in computer graphics for decades, especially due to its high demand in modeling and rendering applications. An important way to solve this problem is to establish a local geometry that fits the local curve. However, previous methods build either a local plane or a polynomial curve. A local plane loses sharp features and introduces boundary artefacts on open surfaces, while a polynomial curve is hard to combine with neural networks because of the local-coordinate consistency problem. To address this, we propose a novel polyhedral surface to represent the local surface. This representation is more flexible for capturing sharp features and surface boundaries on open surfaces, and it does not require any local coordinate system, which is important when introducing neural networks. Specifically, we use normals to construct the polyhedral surface, including both dihedral and trihedral surfaces built from 2 and 3 normals, respectively. Our method achieves state-of-the-art results on three commonly used datasets (ShapeNetCore, ABC, and ScanNet). Code will be released upon acceptance.
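The dihedral case described in this abstract can be sketched in a few lines. This is an illustrative reading only, not the paper's released code: the function name and the projection rule (snap each neighbouring point onto the nearer of the two planes defined by the two normals) are assumptions made here for illustration.

```python
import numpy as np

def dihedral_project(points, apex, n1, n2):
    """Project neighbourhood points onto a hypothetical dihedral surface:
    two planes through `apex` with unit normals n1 and n2. Each point is
    snapped onto whichever of the two planes it lies closer to."""
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    d = points - apex                    # offsets from the apex point
    s1 = d @ n1                          # signed distance to plane 1
    s2 = d @ n2                          # signed distance to plane 2
    use1 = np.abs(s1) <= np.abs(s2)      # pick the nearer plane per point
    return np.where(use1[:, None],
                    points - s1[:, None] * n1,
                    points - s2[:, None] * n2)
```

With two normals the local fit can keep a crease (the plane intersection) that a single least-squares plane would smooth away, which matches the abstract's motivation for sharp features.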
3D freeform surfaces from planar sketches using neural networks
A novel intelligent approach to 3D freeform surface reconstruction from planar sketches is proposed. A multilayer perceptron (MLP) neural network is employed to induce 3D freeform surfaces from planar freehand curves. Planar curves were used to represent the boundaries of a freeform surface patch. The curves were varied iteratively and sampled to produce training and test data for the neural network. The results demonstrate that the network successfully learned the inverse-projection map and correctly inferred the corresponding surfaces from unseen curves.
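The abstract does not give the exact sampling procedure, but one classical way to turn four boundary curves into a surface patch, and hence to generate (boundary, surface) training pairs, is a bilinear Coons patch. Everything below is a textbook construction used for illustration, not the paper's method:

```python
import numpy as np

def coons_patch(c0, c1, d0, d1):
    """Bilinear Coons patch from four sampled boundary curves.
    c0, c1: (m, 3) bottom/top boundaries, parameterised by u.
    d0, d1: (n, 3) left/right boundaries, parameterised by v.
    Corner samples must agree, e.g. c0[0] == d0[0], c0[-1] == d1[0]."""
    m, n = len(c0), len(d0)
    u = np.linspace(0, 1, m)[:, None, None]     # shape (m, 1, 1)
    v = np.linspace(0, 1, n)[None, :, None]     # shape (1, n, 1)
    ruled_u = (1 - v) * c0[:, None, :] + v * c1[:, None, :]
    ruled_v = (1 - u) * d0[None, :, :] + u * d1[None, :, :]
    corners = ((1 - u) * (1 - v) * c0[0] + u * (1 - v) * c0[-1]
               + (1 - u) * v * c1[0] + u * v * c1[-1])
    return ruled_u + ruled_v - corners          # (m, n, 3) surface grid
```

Sampling many perturbed boundary sets and their resulting patches yields input/output pairs of the kind an MLP could be trained on.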
Neural networks based recognition of 3D freeform surface from 2D sketch
In this paper, a Back Propagation (BP) network and a Radial Basis Function (RBF) neural network are employed to recognize and reconstruct 3D freeform surfaces from 2D freehand sketches. Tests and comparison experiments were carried out on simulation data to evaluate the reconstruction performance of both networks. The experimental results show that both the BP-based and RBF-based freeform surface reconstruction methods are feasible, and that the RBF network performs better: its average point error between the reconstructed and desired 3D surface data is less than 0.05 across all 75 test samples.
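For reference, the RBF machinery involved can be sketched as a minimal exact-interpolation network with Gaussian kernels centred at the training inputs. The paper's actual architecture, kernel width, and training procedure are not specified here, so this is a generic textbook sketch, not the authors' model:

```python
import numpy as np

def rbf_fit(X, Y, sigma=0.3):
    """Solve for the output weights of an RBF network whose Gaussian
    kernels are centred at the training inputs X (exact interpolation,
    with a tiny ridge term for numerical stability)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))        # kernel (design) matrix
    return np.linalg.solve(Phi + 1e-9 * np.eye(len(X)), Y)

def rbf_predict(X_train, W, X_new, sigma=0.3):
    """Evaluate the fitted RBF network at new inputs."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ W
```

Because the kernel matrix is solved exactly rather than trained by gradient descent, such RBF fits are often faster and more stable than BP training on small datasets, which is consistent with the comparison reported in the abstract.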
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently
instigated wide research interest. However, it is still an ill-posed problem
and most methods rely on prior models hence undermining the accuracy of the
recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI)
obtained from light field cameras and learn CNN models that recover horizontal
and vertical 3D facial curves from the respective horizontal and vertical EPIs.
Our 3D face reconstruction network (FaceLFnet) comprises a densely connected
architecture to learn accurate 3D facial curves from low resolution EPIs. To
train the proposed FaceLFnets from scratch, we synthesize photo-realistic light
field images from 3D facial scans. The curve by curve 3D face estimation
approach allows the networks to learn from only 14K images of 80 identities,
which still comprises over 11 million EPIs/curves. The estimated facial curves
are merged into a single point cloud, to which a surface is fitted to obtain
the final 3D face. Our method is model-free, requires only a few training samples
to learn FaceLFnet and can reconstruct 3D faces with high accuracy from single
light field images under varying poses, expressions and lighting conditions.
Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces
reconstruction errors by over 20% compared to the recent state of the art.
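The merge-then-fit step can be illustrated with a deliberately simple stand-in: stack the per-EPI curves into one point cloud and average depth per (x, y) grid cell. The paper's actual surface-fitting method is not given in the abstract; `curves_to_depth_grid` and its gridding scheme are hypothetical:

```python
import numpy as np

def curves_to_depth_grid(curves, grid_res=32):
    """Merge a list of (k, 3) curve point arrays into one point cloud,
    then fit a coarse depth map z = f(x, y) by averaging z within each
    grid cell. Cells containing no points come back as NaN."""
    pts = np.concatenate(curves, axis=0)            # (N, 3) point cloud
    x, y, z = pts.T
    ix = np.clip(((x - x.min()) / (np.ptp(x) + 1e-12)
                  * grid_res).astype(int), 0, grid_res - 1)
    iy = np.clip(((y - y.min()) / (np.ptp(y) + 1e-12)
                  * grid_res).astype(int), 0, grid_res - 1)
    zsum = np.zeros((grid_res, grid_res))
    cnt = np.zeros((grid_res, grid_res))
    np.add.at(zsum, (iy, ix), z)                    # unbuffered accumulate
    np.add.at(cnt, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        return zsum / cnt                           # NaN where cnt == 0
```

Averaging overlapping horizontal and vertical curve estimates in each cell also damps per-curve noise, which is one reason a curve-by-curve pipeline can still produce a smooth final surface.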