Semantically Informed Multiview Surface Refinement
We present a method to jointly refine the geometry and semantic segmentation
of 3D surface meshes. Our method alternates between updating the shape and the
semantic labels. In the geometry refinement step, the mesh is deformed with
variational energy minimization, such that it simultaneously maximizes
photo-consistency and the compatibility of the semantic segmentations across a
set of calibrated images. Label-specific shape priors account for interactions
between the geometry and the semantic labels in 3D. In the semantic
segmentation step, the labels on the mesh are updated with MRF inference, such
that they are compatible with the semantic segmentations in the input images.
Also, this step includes prior assumptions about the surface shape of different
semantic classes. The priors induce a tight coupling, where semantic
information influences the shape update and vice versa. Specifically, we
introduce priors that favor (i) adaptive smoothing, depending on the class
label; (ii) straightness of class boundaries; and (iii) semantic labels that
are consistent with the surface orientation. The novel mesh-based
reconstruction is evaluated in a series of experiments with real and synthetic
data. We compare both to state-of-the-art, voxel-based semantic 3D
reconstruction, and to purely geometric mesh refinement, and demonstrate that
the proposed scheme yields improved 3D geometry as well as an improved semantic
segmentation.
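A minimal sketch of the alternating scheme this abstract describes, assuming placeholder energy gradients and a simple ICM solver; photo_grad, semantic_grad, and all cost values are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

def photo_grad(vertices, faces, images):
    """Placeholder: per-vertex gradient of the photo-consistency energy."""
    return np.zeros_like(vertices)

def semantic_grad(vertices, faces, face_labels, segmentations):
    """Placeholder: per-vertex gradient of the semantic-compatibility energy."""
    return np.zeros_like(vertices)

def mrf_label_sweep(face_labels, unaries, adjacency, potts_weight=0.5):
    """One sweep of iterated conditional modes (ICM) over the mesh faces:
    each face takes the label minimizing its unary cost plus a Potts
    smoothness penalty against its current neighbors."""
    num_classes = unaries.shape[1]
    for f in range(len(face_labels)):
        costs = unaries[f].copy()
        for g in adjacency[f]:
            costs += potts_weight * (np.arange(num_classes) != face_labels[g])
        face_labels[f] = int(np.argmin(costs))
    return face_labels

def refine(vertices, faces, face_labels, images, segmentations,
           unaries, adjacency, outer_iters=5, step=0.01):
    """Alternate a geometry step (gradient descent on the composite
    variational energy) with a semantic step (MRF inference on face labels)."""
    for _ in range(outer_iters):
        grad = (photo_grad(vertices, faces, images)
                + semantic_grad(vertices, faces, face_labels, segmentations))
        vertices = vertices - step * grad
        face_labels = mrf_label_sweep(face_labels, unaries, adjacency)
    return vertices, face_labels
```

The tight coupling comes from the interleaving: the geometry step sees the current labels through its semantic term, and the label step sees the deformed mesh through its unaries.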
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Multi-View Stereo with Single-View Semantic Mesh Refinement
While 3D reconstruction is a well-established and widely explored research
topic, semantic 3D reconstruction has only recently witnessed an increasing
share of attention from the Computer Vision community. Semantic annotations
in fact make it possible to enforce strong class-dependent priors, such as
planarity for ground and walls, which can be exploited to refine the
reconstruction, often resulting in non-trivial performance improvements.
State-of-the-art methods propose volumetric approaches to fuse RGB image data
with semantic labels; even if successful, they do not scale well and fail to
output high-resolution meshes.
In this paper we propose a novel method to refine both the geometry and the
semantic labeling of a given mesh. We refine the mesh geometry by applying a
variational method that optimizes a composite energy made of a state-of-the-art
pairwise photometric term and a single-view term that models the semantic
consistency between the labels of the 3D mesh and those of the segmented
images. We also update the semantic labeling through a novel Markov Random
Field (MRF) formulation that, together with the classical data and smoothness
terms, takes into account class-specific priors estimated directly from the
annotated mesh. This is in contrast to state-of-the-art methods that are
typically based on handcrafted or learned priors. We are the first, jointly
with the very recent and seminal work of [M. Blaha et al., arXiv:1706.08336,
2017], to propose the use of semantics inside a mesh refinement framework.
Unlike [M. Blaha et al., arXiv:1706.08336, 2017], which adopts a more
classical pairwise comparison to estimate the flow of the mesh, we apply a
single-view comparison between the semantically annotated image and the current
3D mesh labels; this improves robustness in the case of noisy segmentations.
Comment: 3D Reconstruction Meets Semantics, ICCV workshop
Predicting the Next Best View for 3D Mesh Refinement
3D reconstruction is a core task in many applications such as robot
navigation or site inspection. Finding the best poses from which to capture
part of the scene is one of the most challenging topics, known as the Next
Best View problem. Recently, many volumetric methods have been proposed; they
choose the Next Best View by reasoning over a 3D voxelized space and finding
which pose minimizes the uncertainty encoded in the voxels. Such methods are
effective, but they do not scale well, since the underlying representation
requires a huge amount of memory. In this paper we propose a novel mesh-based
approach which focuses on the worst reconstructed region of the environment
mesh. We define a photo-consistency index to evaluate the accuracy of the 3D
mesh, and an energy function over the worst regions of the mesh which takes
into account the mutual parallax with respect to the previous cameras, the
angle of incidence of the viewing ray to the surface, and the visibility of
the region. We test our approach on a well-known dataset and achieve
state-of-the-art
results.
Comment: 13 pages, 5 figures, to be published in IAS-15
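A minimal sketch of how such a pose score might combine the three cues named above (parallax, incidence angle, visibility); the helper names, weights, and the sine/cosine rewards are illustrative assumptions, not the paper's energy:

```python
import numpy as np

def angle_between(u, v):
    """Angle in radians between two 3D vectors."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.arccos(np.clip(u @ v, -1.0, 1.0))

def score_pose(candidate_pos, worst_regions, prev_cam_positions,
               w_parallax=1.0, w_incidence=1.0):
    """Score a candidate camera position against the worst-reconstructed
    regions. Each region is a dict with 'center' and 'normal' (3-vectors)
    and a 'visible' flag for this candidate pose."""
    score = 0.0
    for r in worst_regions:
        if not r["visible"]:            # occluded regions contribute nothing
            continue
        ray = r["center"] - candidate_pos
        # Parallax: largest triangulation angle w.r.t. any previous camera.
        parallax = max(angle_between(ray, r["center"] - c)
                       for c in prev_cam_positions)
        # Incidence: reward viewing rays that hit the surface head-on.
        incidence = np.cos(angle_between(-ray, r["normal"]))
        score += w_parallax * np.sin(parallax) + w_incidence * max(incidence, 0.0)
    return score

# The next best view is then the candidate maximizing this score:
# best = max(candidates, key=lambda p: score_pose(p, worst_regions, prev_cams))
```

Operating only on the worst regions of the mesh is what keeps the memory footprint small compared to scoring an entire voxel grid.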
Learning to Construct 3D Building Wireframes from 3D Line Clouds
Line clouds, though under-investigated in previous work, potentially
encode more compact structural information of buildings than point clouds
extracted from multi-view images. In this work, we propose the first network to
process line clouds for building wireframe abstraction. The network takes a
line cloud as input, i.e., an unstructured and unordered set of 3D line
segments extracted from multi-view images, and outputs a 3D wireframe of the
underlying building, which consists of a sparse set of 3D junctions connected
by line segments. We observe that a line patch, i.e., a group of neighboring
line segments, encodes sufficient contour information to predict the existence
and even the 3D position of a potential junction, as well as the likelihood of
connectivity between two query junctions. We therefore introduce a two-layer
Line-Patch Transformer to extract junctions and connectivities from sampled
line patches to form a 3D building wireframe model. We also introduce a
synthetic dataset of multi-view images with ground-truth 3D wireframes. We
show through extensive experiments that our reconstructed 3D wireframe models
significantly improve upon those produced by multiple baseline building
reconstruction methods. The code and data can be found at
https://github.com/Luo1Cheng/LC2WF.
Comment: 10 pages, 6 figures
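As a rough illustration of the line-patch idea (not the LC2WF code; see the linked repository for the real implementation), a patch can be formed by grouping each 3D segment with its nearest neighbors by midpoint distance:

```python
import numpy as np

def line_patches(segments, k=8):
    """segments: (N, 6) array of 3D endpoints (x1, y1, z1, x2, y2, z2).
    Returns an (N, k) index array giving, for each segment, its k nearest
    neighbors by midpoint distance; a patch is the segment plus its
    neighbors."""
    midpoints = 0.5 * (segments[:, :3] + segments[:, 3:])           # (N, 3)
    dists = np.linalg.norm(midpoints[:, None, :] - midpoints[None, :, :],
                           axis=-1)                                  # (N, N)
    return np.argsort(dists, axis=1)[:, 1:k + 1]    # drop self at index 0
```

Each such patch (the segment plus its k neighbors) would then be fed to the two-layer Line-Patch Transformer to predict junction existence and position, and to score the connectivity between pairs of predicted junctions.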