Composite Shape Modeling via Latent Space Factorization
We present a novel neural network architecture, termed Decomposer-Composer,
for semantic structure-aware 3D shape modeling. Our method uses an
auto-encoder-based pipeline and produces a novel factorized shape embedding
space, where the semantic structure of the shape collection translates into a
data-dependent sub-space factorization, and where shape composition and
decomposition become simple linear operations on the embedding coordinates. We
further propose to model shape assembly using an explicit learned part
deformation module, which utilizes a 3D spatial transformer network to perform
an in-network volumetric grid deformation, and which allows us to train the
whole system end-to-end. The resulting network allows us to perform part-level
shape manipulation, which is unattainable by existing approaches. Our
extensive ablation study, comparisons to baseline methods, and qualitative
analysis demonstrate the improved performance of the proposed method.
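As a reading aid, the following is a minimal sketch of the core idea of a factorized embedding space in which decomposition and composition reduce to linear operations on the codes. All names (projections, decompose, compose) and the choice of one learned projection matrix per part are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch: a factorized shape embedding in which part
# decomposition and composition are linear maps on embedding coordinates.
import torch

embed_dim, num_parts = 256, 4

# One learned projection per semantic part sub-space. In the paper the
# factorization is data-dependent and learned; random init is a stand-in.
projections = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.randn(embed_dim, embed_dim) / embed_dim ** 0.5)
     for _ in range(num_parts)]
)

def decompose(z):
    """Decomposition: project a whole-shape code onto each part sub-space."""
    return [z @ P for P in projections]

def compose(part_codes):
    """Composition: a simple sum of part codes in the shared space."""
    return torch.stack(part_codes).sum(dim=0)

# Swap one part between two shapes, then re-compose a hybrid code that a
# decoder (not shown here) would turn back into geometry.
z_a, z_b = torch.randn(embed_dim), torch.randn(embed_dim)
parts_a, parts_b = decompose(z_a), decompose(z_b)
z_hybrid = compose([parts_b[0]] + parts_a[1:])
```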
Roof-GAN: Learning to Generate Roof Geometry and Relations for Residential Houses
This paper presents Roof-GAN, a novel generative adversarial network that
generates the structured geometry of residential roofs as a set of roof
primitives and their relationships. Given the number of primitives, the
generator produces a structured roof model as a graph, which consists of 1)
primitive geometry as raster images at each node, encoding facet segmentation
and angles; 2) inter-primitive colinear/coplanar relationships at each edge;
and 3) primitive geometry in a vector format at each node, generated by a novel
differentiable vectorizer while enforcing the relationships. The discriminator
is trained to assess the primitive raster geometry, the primitive
relationships, and the primitive vector geometry in a fully end-to-end
architecture. Qualitative and quantitative evaluations demonstrate the
effectiveness of our approach in generating diverse and realistic roof models
compared to competing methods, as measured by a novel metric proposed in this
paper for the task of structured geometry generation. We will share our code
and data.
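To make the graph representation concrete, here is a minimal sketch of the structured roof output the abstract describes: raster primitive geometry at nodes, relationship scores at edges, and vector geometry produced downstream. All field names and tensor shapes are assumptions for illustration; the paper's generator emits these tensors rather than hand-building them.

```python
# Hypothetical sketch of the Roof-GAN output structure: per-node raster
# geometry, per-edge colinear/coplanar relations, per-node vector geometry.
from dataclasses import dataclass
import torch

@dataclass
class RoofGraph:
    # (num_primitives, channels, H, W): channels encode facet segmentation
    # masks and facet angles for each roof primitive (one node each).
    raster: torch.Tensor
    # (num_primitives, num_primitives, 2): pairwise scores for the two
    # relation types, colinear and coplanar (one edge per primitive pair).
    relations: torch.Tensor
    # Per-node polygon corners from a differentiable vectorizer that
    # enforces the relations; None until vectorization has run.
    vector: torch.Tensor = None

num_prims, H, W = 3, 32, 32
graph = RoofGraph(
    raster=torch.rand(num_prims, 2, H, W),
    relations=torch.rand(num_prims, num_prims, 2),
)
```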
Learning Mesh Representations via Binary Space Partitioning Tree Networks
Polygonal meshes are ubiquitous, but have only played a relatively minor role
in the deep learning revolution. State-of-the-art neural generative models for
3D shapes learn implicit functions and generate meshes via expensive
iso-surfacing. We overcome these challenges by employing a classical spatial
data structure from computer graphics, Binary Space Partitioning (BSP), to
facilitate 3D learning. The core operation of BSP involves recursive
subdivision of 3D space to obtain convex sets. By exploiting this property, we
devise BSP-Net, a network that learns to represent a 3D shape via convex
decomposition without supervision. The network is trained to reconstruct a
shape using a set of convexes obtained from a BSP-tree built over a set of
planes, where the planes and convexes are both defined by learned network
weights. BSP-Net directly outputs polygonal meshes from the inferred convexes.
The generated meshes are watertight, compact (i.e., low-poly), and well suited
to represent sharp geometry. We show that the reconstruction quality of
BSP-Net is competitive with that of state-of-the-art methods while using far
fewer primitives. We also explore variations of BSP-Net, including a more
generic decoder for reconstruction, more general primitives than planes, and
training a generative model with variational auto-encoders. Code is
available at https://github.com/czq142857/BSP-NET-original.

Comment: Accepted to TPAMI. This is the extended journal version of BSP-Net
(arXiv:1911.06971) from CVPR 2020.
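To make the convex-decomposition idea concrete, here is a minimal occupancy-test sketch in the spirit of BSP-Net: a shape as a union (min) of convexes, each convex an intersection of learned half-spaces. The random plane parameters and the binary plane-to-convex grouping matrix below stand in for the "learned network weights" mentioned in the abstract; this illustrates the representation, not the trained model.

```python
# Sketch of the BSP-Net representation: point-wise occupancy from learned
# planes grouped into convexes, with the shape as the union of the convexes.
import torch

num_planes, num_convexes = 64, 8
planes = torch.randn(num_planes, 4)  # each row (a, b, c, d): ax + by + cz + d
# Binary membership: which planes bound which convex (learned in BSP-Net).
grouping = (torch.rand(num_planes, num_convexes) > 0.7).float()

def occupancy(points):
    # points: (N, 3). Append 1 for the homogeneous plane test.
    xyz1 = torch.cat([points, torch.ones(points.shape[0], 1)], dim=1)  # (N, 4)
    # relu(.) is exactly 0 on the inside of a half-space (ax+by+cz+d <= 0).
    dist = torch.relu(xyz1 @ planes.T)  # (N, num_planes)
    # A point is inside a convex iff it is inside all of that convex's
    # planes, i.e. the grouped sum of clipped distances is 0.
    conv = dist @ grouping              # (N, num_convexes)
    # Union of convexes: inside the shape iff inside any convex (min == 0).
    return conv.min(dim=1).values       # (N,)

pts = torch.rand(1024, 3) * 2 - 1      # query points in [-1, 1]^3
inside = occupancy(pts) == 0
```

Because each convex is an explicit intersection of planes, a polygonal mesh can be read off the convexes' faces directly, which is why no iso-surfacing step is needed.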