Neural Wireframe Renderer: Learning Wireframe to Image Translations
In architecture and computer-aided design, wireframes (i.e., line-based
models) are widely used as basic 3D models for design evaluation and fast
design iterations. However, unlike a full design file, a wireframe model lacks
critical information, such as detailed shape, texture, and materials, needed by
a conventional renderer to produce 2D renderings of the objects or scenes. In
this paper, we bridge the information gap by generating photo-realistic
rendering of indoor scenes from wireframe models in an image translation
framework. While existing image synthesis methods can generate visually
pleasing images for common objects such as faces and birds, these methods do
not explicitly model and preserve essential structural constraints in a
wireframe model, such as junctions, parallel lines, and planar surfaces. To
this end, we propose a novel model based on a structure-appearance joint
representation learned from both images and wireframes. In our model,
structural constraints are explicitly enforced by learning a joint
representation in a shared encoder network that must support the generation of
both images and wireframes. Experiments on a wireframe-scene dataset show that
our wireframe-to-image translation model significantly outperforms the
state-of-the-art methods in both visual quality and structural integrity of
generated images.
Comment: ECCV 2020
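As a rough illustration of the shared-encoder idea described in this abstract, the following PyTorch-style sketch pairs one encoder with two decoder heads, so that a single latent representation must reproduce both the rendered image and the input wireframe. The class name `JointWireframeRenderer`, layer sizes, and channel counts are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# one shared encoder maps a rasterized wireframe to a joint representation,
# and two decoder heads must reconstruct both the photo-realistic image and
# the wireframe, which is what couples appearance to structural constraints.
import torch
import torch.nn as nn

class JointWireframeRenderer(nn.Module):
    def __init__(self, latent_channels: int = 256):
        super().__init__()
        # Shared encoder: wireframe raster (1 channel) -> joint representation
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, latent_channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Appearance head: joint representation -> RGB image
        self.image_decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
        # Structure head: the same representation must also reproduce the
        # wireframe (junctions and lines), enforcing structural consistency.
        self.wireframe_decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, wireframe: torch.Tensor):
        z = self.encoder(wireframe)
        return self.image_decoder(z), self.wireframe_decoder(z)

# Example usage on a dummy 256x256 wireframe raster
model = JointWireframeRenderer()
fake_image, recon_wireframe = model(torch.zeros(1, 1, 256, 256))
```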
Learning to Construct 3D Building Wireframes from 3D Line Clouds
Line clouds, though under-investigated in the previous work, potentially
encode more compact structural information of buildings than point clouds
extracted from multi-view images. In this work, we propose the first network to
process line clouds for building wireframe abstraction. The network takes a
line cloud as input, i.e., an unstructured and unordered set of 3D line
segments extracted from multi-view images, and outputs a 3D wireframe of the
underlying building, which consists of a sparse set of 3D junctions connected
by line segments. We observe that a line patch, i.e., a group of neighboring
line segments, encodes sufficient contour information to predict the existence
and even the 3D position of a potential junction, as well as the likelihood of
connectivity between two query junctions. We therefore introduce a two-layer
Line-Patch Transformer to extract junctions and connectivities from sampled
line patches to form a 3D building wireframe model. We also introduce a
synthetic dataset of multi-view images with ground-truth 3D wireframes. We
show through extensive experiments that our reconstructed 3D wireframe models
significantly improve upon multiple baseline building reconstruction methods.
The code and data can be found at https://github.com/Luo1Cheng/LC2WF.
Comment: 10 pages, 6 figures
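The following is a minimal PyTorch-style sketch of how a line-patch model of this kind could be organized: a small Transformer encodes a patch of neighboring 3D line segments, a junction head predicts the existence and 3D position of a junction, and a connectivity head scores a pair of junction features. The class name `LinePatchTransformer`, layer sizes, and head designs are assumptions for illustration; the authors' actual model is available in the linked LC2WF repository.

```python
# Illustrative sketch under assumed design choices, not the paper's code:
# each 3D line segment is a 6-vector (two endpoints), a patch of segments is
# encoded by a small Transformer, and two heads predict junctions and the
# connectivity between pairs of junction features.
import torch
import torch.nn as nn

class LinePatchTransformer(nn.Module):
    def __init__(self, d_model: int = 128, num_layers: int = 2):
        super().__init__()
        self.segment_embed = nn.Linear(6, d_model)  # embed one line segment
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Junction head: one existence logit plus a 3D position per patch.
        self.junction_head = nn.Linear(d_model, 1 + 3)
        # Connectivity head: score for a pair of junction features.
        self.connect_head = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )

    def forward(self, patches: torch.Tensor):
        # patches: (num_patches, segments_per_patch, 6)
        tokens = self.segment_embed(patches)
        feats = self.encoder(tokens).mean(dim=1)  # one feature per patch
        exist_logit, position = self.junction_head(feats).split([1, 3], dim=-1)
        return feats, exist_logit, position

    def connectivity(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # Likelihood (logit) that two query junctions are connected by a line.
        return self.connect_head(torch.cat([feat_a, feat_b], dim=-1))

# Example usage: 4 sampled patches, each containing 16 line segments
model = LinePatchTransformer()
feats, exist, pos = model(torch.zeros(4, 16, 6))
score = model.connectivity(feats[:1], feats[1:2])
```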