We propose a method to generate 3D shapes using point clouds. Given a point-cloud representation of a 3D shape, our method builds a kd-tree to spatially partition the points. This orders them consistently across shapes, yielding reasonably good correspondences between them. We then apply principal component analysis (PCA) to derive a linear shape basis over the spatially partitioned points, and optimize the point ordering by iteratively minimizing the PCA reconstruction error.
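As a rough illustration (not the paper's code), the two ingredients can be sketched as follows: a recursive median split that imposes a canonical point ordering, and a PCA fit over the stacked, ordered clouds. The function names `kdtree_order` and `pca_basis` and the alternating-axis split are assumptions for the sketch.

```python
import numpy as np

def kdtree_order(points, depth=0):
    """Recursively median-split the cloud along alternating axes so that
    every shape's points end up in the same canonical order (a sketch)."""
    if len(points) <= 1:
        return points
    axis = depth % points.shape[1]                 # cycle through x, y, z
    points = points[np.argsort(points[:, axis])]   # sort along the split axis
    mid = len(points) // 2
    left = kdtree_order(points[:mid], depth + 1)
    right = kdtree_order(points[mid:], depth + 1)
    return np.vstack([left, right])

def pca_basis(ordered_clouds, k):
    """Fit a k-dimensional linear shape basis over the ordered clouds."""
    X = np.stack([c.ravel() for c in ordered_clouds])  # (num_shapes, 3N)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                        # top-k principal directions
    coeffs = (X - mean) @ basis.T         # per-shape shape coefficients
    return mean, basis, coeffs
```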
Even with the spatial sorting, the point clouds are inherently noisy, and the resulting distribution over the shape coefficients can be highly multi-modal. We therefore use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework.
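A minimal sketch of that idea, assuming small PyTorch MLPs (the class names `CoeffGenerator` / `CoeffDiscriminator` and the layer sizes are illustrative, not the paper's architecture): the generator maps noise to shape coefficients, the discriminator scores coefficient vectors, and a sampled coefficient vector decodes back to a point cloud through the linear basis.

```python
import torch
import torch.nn as nn

class CoeffGenerator(nn.Module):
    """Maps a noise vector to a vector of PCA shape coefficients."""
    def __init__(self, noise_dim=32, coeff_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, coeff_dim))

    def forward(self, z):
        return self.net(z)

class CoeffDiscriminator(nn.Module):
    """Scores a coefficient vector as real (from data) or generated."""
    def __init__(self, coeff_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coeff_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))

    def forward(self, c):
        return self.net(c)

# A sampled coefficient vector decodes to a point cloud through the basis:
#   points = (mean + c @ basis).reshape(-1, 3)
```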
Compared to 3D shape generative models trained on voxel representations, our point-based method is considerably more lightweight and scalable, with little loss of quality. It also outperforms simpler linear factor models such as Probabilistic PCA, both qualitatively and quantitatively, on a number of categories from the ShapeNet dataset.
Furthermore, our method can easily incorporate other point attributes such as
normal and color information, an additional advantage over voxel-based
representations.

Comment: To appear at BMVC 201