1,472 research outputs found
Variational Uncalibrated Photometric Stereo under General Lighting
Photometric stereo (PS) techniques remain largely constrained to an ideal
laboratory setup in which the lighting can be modeled and calibrated. To
eliminate such restrictions, we propose an efficient, principled variational
approach to uncalibrated PS under general illumination. To this end, the
Lambertian reflectance model is approximated through a spherical harmonic
expansion, which preserves the spatial invariance of the lighting. The joint
recovery of shape, reflectance and illumination is then formulated as a single
variational problem, in which shape estimation is carried out directly in
terms of the underlying perspective depth map, thus implicitly ensuring
integrability and bypassing the need for a subsequent normal integration. To
tackle the resulting nonconvex problem numerically, we undertake a two-phase
procedure to initialize a balloon-like perspective depth map, followed by a
"lagged" block coordinate descent scheme. The experiments validate efficiency
and robustness of this approach. Across a variety of evaluations, we are able
to reduce the mean angular error consistently by a factor of 2-3 compared to
the state-of-the-art.Comment: Haefner and Ye contributed equall
Semantically Informed Multiview Surface Refinement
We present a method to jointly refine the geometry and semantic segmentation
of 3D surface meshes. Our method alternates between updating the shape and the
semantic labels. In the geometry refinement step, the mesh is deformed with
variational energy minimization, such that it simultaneously maximizes
photo-consistency and the compatibility of the semantic segmentations across a
set of calibrated images. Label-specific shape priors account for interactions
between the geometry and the semantic labels in 3D. In the semantic
segmentation step, the labels on the mesh are updated with MRF inference, such
that they are compatible with the semantic segmentations in the input images.
Also, this step includes prior assumptions about the surface shape of different
semantic classes. The priors induce a tight coupling, where semantic
information influences the shape update and vice versa. Specifically, we
introduce priors that favor (i) adaptive smoothing, depending on the class
label; (ii) straightness of class boundaries; and (iii) semantic labels that
are consistent with the surface orientation. The novel mesh-based
reconstruction is evaluated in a series of experiments with real and synthetic
data. We compare both to state-of-the-art, voxel-based semantic 3D
reconstruction, and to purely geometric mesh refinement, and demonstrate that
the proposed scheme yields improved 3D geometry as well as an improved semantic
segmentation.
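The label-update step described above (MRF inference on the mesh, with data terms from the input segmentations and a smoothness prior) can be sketched with a toy solver. This is a hedged illustration, not the authors' method: it uses iterated conditional modes on a simple Potts model; all names, the Potts penalty, and the toy graph are assumptions.

```python
import numpy as np

def icm_labels(unary, edges, pairwise_w, labels, n_iters=10):
    """Iterated conditional modes for an MRF label update on a mesh.

    unary:      (V, K) per-vertex data costs from image segmentations
    edges:      list of (i, j) mesh adjacencies
    pairwise_w: Potts smoothness weight (penalty for disagreeing neighbors)
    labels:     (V,) initial integer labels, updated in place
    """
    V, K = unary.shape
    nbrs = [[] for _ in range(V)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iters):
        changed = False
        for v in range(V):
            # Cost of each label = data term + Potts disagreement with neighbors.
            costs = unary[v].copy()
            for u in nbrs[v]:
                costs += pairwise_w * (np.arange(K) != labels[u])
            best = int(np.argmin(costs))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break  # converged: no vertex wants to switch
    return labels
```

On a 3-vertex chain where the middle vertex has a weakly noisy data term, the smoothness prior flips it to agree with its neighbors, mirroring how neighborhood priors regularize noisy per-image segmentations.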
Multi-View Stereo with Single-View Semantic Mesh Refinement
While 3D reconstruction is a well-established and widely explored research
topic, semantic 3D reconstruction has only recently witnessed an increasing
share of attention from the Computer Vision community. Semantic annotations
in fact make it possible to enforce strong class-dependent priors, such as
planarity for ground and walls, which can be exploited to refine the
reconstruction, often yielding non-trivial performance improvements.
State-of-the-art methods propose
volumetric approaches to fuse RGB image data with semantic labels; even if
successful, they do not scale well and fail to output high resolution meshes.
In this paper we propose a novel method to refine both the geometry and the
semantic labeling of a given mesh. We refine the mesh geometry by applying a
variational method that optimizes a composite energy made of a state-of-the-art
pairwise photometric term and a single-view term that models the semantic
consistency between the labels of the 3D mesh and those of the segmented
images. We also update the semantic labeling through a novel Markov Random
Field (MRF) formulation that, together with the classical data and smoothness
terms, takes into account class-specific priors estimated directly from the
annotated mesh. This is in contrast to state-of-the-art methods, which are
typically based on handcrafted or learned priors. Jointly with the very recent
and seminal work of [M. Blaha et al. arXiv:1706.08336, 2017], we are the first
to propose the use of semantics inside a mesh refinement framework.
Unlike [M. Blaha et al. arXiv:1706.08336, 2017], which adopts a more classical
pairwise comparison to estimate the flow of the mesh, we apply a single-view
comparison between the semantically annotated image and the current 3D mesh
labels; this improves robustness in the case of noisy segmentations.

Comment: 3D Reconstruction Meets Semantic, ICCV workshop
Mesh-based 3D Textured Urban Mapping
In the era of autonomous driving, urban mapping is a core step in enabling
vehicles to interact with the urban context. Successful mapping algorithms
proposed over the last decade typically build the map from the data of a
single sensor. The focus of the system presented in this paper is twofold: the
joint estimation of a 3D map from lidar data and images, based on a 3D mesh,
and its texturing. Indeed, even though most surveying vehicles for mapping are
equipped with both cameras and lidar, existing mapping algorithms usually rely
on either images or lidar data; moreover, both image-based and lidar-based
systems often represent the map as a point cloud, whereas a continuous textured
mesh representation would be more useful for visualization and navigation. In
the proposed framework, we combine the accuracy of the 3D lidar data with the
dense appearance information carried by the images: we estimate a
visibility-consistent map from the lidar measurements and refine it
photometrically using the acquired images. We evaluate the proposed framework
on the KITTI dataset and show the performance improvement with respect to two
state-of-the-art urban mapping algorithms and two widely used surface
reconstruction algorithms from Computer Graphics.

Comment: accepted at IROS 201
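The photometric refinement step mentioned above is typically driven by a photo-consistency score between image patches that observe the same surface patch. The snippet below sketches the standard zero-mean normalized cross-correlation (ZNCC) measure as a hedged illustration; it is a generic building block, not the paper's specific implementation.

```python
import numpy as np

def zncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two image patches.

    A common photo-consistency measure for variational mesh refinement:
    1 means identical up to an affine intensity change, -1 means inverted.
    """
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

Invariance to gain and offset is the reason ZNCC-style scores are preferred over raw intensity differences when refining a mesh against images taken under varying exposure.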