Coplanar Repeats by Energy Minimization
This paper proposes an automated method to detect, group and rectify
arbitrarily-arranged coplanar repeated elements via energy minimization. The
proposed energy functional combines several features that model how planes with
coplanar repeats are projected into images and captures global interactions
between different coplanar repeat groups and scene planes. An inference
framework based on a recent variant of α-expansion is described, and fast
convergence is demonstrated. We compare the proposed method to two widely-used
geometric multi-model fitting methods using a new dataset of annotated images
containing multiple scene planes with coplanar repeats in varied arrangements.
The evaluation shows a significant improvement in the accuracy of
rectifications computed from coplanar repeats detected with the proposed method
versus those detected with the baseline methods.
Comment: 14 pages with supplemental materials attached
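The formulation above pairs per-element data costs with pairwise interactions between repeat groups and scene planes. As a rough, hypothetical illustration of this class of multi-label energies (not the paper's actual functional, and using simple iterated conditional modes as a stand-in for the α-expansion variant the paper describes):

```python
# Illustrative multi-label energy of the kind minimized by move-making
# algorithms such as alpha-expansion. The data/pairwise terms and the
# ICM solver below are hypothetical stand-ins for exposition only.

def energy(labels, data_cost, neighbors, pairwise_weight=1.0):
    """Data term plus a Potts smoothness term over neighbor pairs."""
    e = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    for i, j in neighbors:
        if labels[i] != labels[j]:
            e += pairwise_weight
    return e

def icm(labels, data_cost, neighbors, num_labels, sweeps=10):
    """Iterated conditional modes: greedy single-site label moves
    (a much weaker optimizer than alpha-expansion, used for brevity)."""
    labels = list(labels)
    for _ in range(sweeps):
        changed = False
        for i in range(len(labels)):
            best = min(range(num_labels),
                       key=lambda a: energy(labels[:i] + [a] + labels[i+1:],
                                            data_cost, neighbors))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels
```

Move-making solvers of this kind only guarantee a local minimum; α-expansion variants explore much larger moves per step, which is what makes the fast convergence reported above possible on global energies.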
A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of Urban Facades from Heterogeneous Cartographic Data
In this paper we present a practical approach for generating an
occlusion-free textured 3D map of urban facades by the synergistic use of
terrestrial images, 3D point clouds and area-based information. Particularly in
dense urban environments, the high presence of urban objects in front of the
facades causes significant difficulties for several stages in computational
building modeling. Major challenges lie on the one hand in extracting complete
3D facade quadrilateral delimitations and on the other hand in generating
occlusion-free facade textures. For these reasons, we describe a
straightforward approach for completing and recovering facade geometry and
textures by exploiting the data complementarity of terrestrial multi-source
imagery and area-based information
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion methods has been one of the most fruitful outcomes of
computer vision. These advances combined with the growing popularity of Micro
Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools
ubiquitous for a large number of Architecture, Engineering and Construction
applications among audiences mostly unskilled in computer vision. However, to
obtain high-resolution and accurate reconstructions from a large-scale object
using SfM, there are many critical constraints on the quality of image data,
which often become sources of inaccuracy, as current 3D reconstruction
pipelines do not help users assess the fidelity of input data during image
acquisition. In this paper, we present and advocate a
closed-loop interactive approach that performs incremental reconstruction in
real-time and gives users online feedback about quality parameters such as
Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against the state-of-the-art methods.
Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
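One of the feedback quantities mentioned above, Ground Sampling Distance, follows from the standard pinhole relation between pixel pitch, focal length and distance to the surface. A minimal sketch (the camera parameters in the usage below are invented for illustration):

```python
def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_length_mm, altitude_m):
    """GSD in metres per pixel: sensor pixel pitch scaled by the
    ratio of distance-to-surface over focal length."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * altitude_m / focal_length_mm

# Hypothetical MAV camera: 10 mm sensor, 1000 px wide, 10 mm focal
# length, flying 100 m from the facade -> 0.1 m per pixel.
gsd = ground_sampling_distance(10.0, 1000, 10.0, 100.0)
```

In an online-feedback pipeline of the kind advocated above, this value would be computed per mesh face from the current camera pose and painted onto the surface mesh so the operator can see where resolution is insufficient.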
AUTOMATIC FAÇADE SEGMENTATION FOR THERMAL RETROFIT
Abstract. In this paper we present an automated method to derive highly detailed 3D vector models of modern building facades from terrestrial laser scanning data. The developed procedure can be divided into two main steps: first, the main elements constituting the facade are identified by means of a segmentation process; then, the 3D vector model is generated, incorporating some priors on architectural scenes. The identification of the main facade elements is based on random sampling and detection of planar elements, including topology information in the process to reduce under- and over-segmentation problems. Finally, the prevalence of straight lines and orthogonal intersections is exploited in the vector model generation phase to set additional constraints that enforce automated modeling. Concurrently, a further classification is performed, enriching the data with semantics by means of a classification tree. The main application field for these vector models is the design of external insulation thermal retrofits. In particular, in this paper we present a possible application for the energy efficiency evaluation of buildings by means of Infrared Thermography data overlaid on the facade model.
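The random-sampling plane detection step described above is commonly realized with a RANSAC-style loop: fit a plane to three random points, count the points within a distance tolerance, and keep the best model. A simplified, hypothetical sketch (the tolerance and toy point cloud are invented; the paper's method additionally exploits topology information, omitted here):

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n . x = d through 3 points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Return ((normal, offset), inlier indices) of the dominant plane."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[k] * p[k] for k in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, model
    return best_model, best_inliers
```

After extracting the dominant plane, the same loop would be rerun on the remaining points to peel off further planar elements, with the topology checks mentioned above guarding against under- and over-segmentation.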
AUTOMATIC IMAGE TO MODEL ALIGNMENT FOR PHOTO-REALISTIC URBAN MODEL RECONSTRUCTION
We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters, and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping.
Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing point based calibration refinement and video stream based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post-processing steps such as automatic model enhancement and fly-through model visualization.
Traditionally, photo-realistic urban reconstruction has been approached from purely image-based or model-based approaches. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models
Coarse architectural models are often generated at scales ranging from
individual buildings to scenes for downstream applications such as Digital Twin
City, Metaverse, LODs, etc. Such piece-wise planar models can be abstracted as
twins from 3D dense reconstructions. However, these models typically lack
realistic texture relative to the real building or scene, making them
unsuitable for vivid display or direct reference. In this paper, we present
TwinTex, the first automatic texture mapping framework to generate a
photo-realistic texture for a piece-wise planar proxy. Our method addresses
most challenges occurring in such twin texture generation. Specifically, for
each primitive plane, we first select a small set of photos with greedy
heuristics considering photometric quality, perspective quality and facade
texture completeness. Then, different levels of line features (LoLs) are
extracted from the set of selected photos to generate guidance for later steps.
With LoLs, we employ optimization algorithms to align texture with geometry
from local to global. Finally, we fine-tune a diffusion model with a multi-mask
initialization component and a new dataset to inpaint the missing region.
Experimental results on many buildings, indoor scenes and man-made objects of
varying complexity demonstrate the generalization ability of our algorithm. Our
approach surpasses state-of-the-art texture mapping methods in terms of
high-fidelity quality and reaches a human-expert production level with much
less effort. Project page: https://vcc.tech/research/2023/TwinTex.
Comment: Accepted to SIGGRAPH ASIA 2023
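The per-plane photo selection with greedy heuristics described above can be illustrated by a toy greedy set-cover loop: each step picks the photo whose quality-weighted new coverage of the plane is largest, and stops once nothing new is gained. The scoring and data layout below are invented stand-ins, not TwinTex's actual heuristics:

```python
# Hypothetical greedy photo selection for one primitive plane.
# Each photo is (name, quality score, set of texture-grid cells it
# covers on the plane); both the score and the cell sets are toy data.

def select_photos(photos, max_photos=3):
    """Greedily pick photos maximizing quality * newly covered cells."""
    chosen, covered = [], set()
    for _ in range(max_photos):
        def gain(p):
            return p[1] * len(p[2] - covered)   # quality * new coverage
        best = max((p for p in photos if p[0] not in chosen),
                   key=gain, default=None)
        if best is None or gain(best) == 0:
            break                               # nothing new to cover
        chosen.append(best[0])
        covered |= best[2]
    return chosen, covered

photos = [("a", 0.9, {1, 2, 3, 4}),
          ("b", 0.5, {3, 4, 5}),
          ("c", 0.8, {5, 6})]
chosen, covered = select_photos(photos)   # picks "a", then "c"; "b" adds nothing
```

In the pipeline sketched in the abstract, the quality term would fold in the photometric and perspective criteria mentioned above, while the coverage term enforces facade texture completeness.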