An Octree-Based Approach towards Efficient Variational Range Data Fusion
Volume-based reconstruction is usually expensive both in terms of memory
consumption and runtime. Especially for sparse geometric structures, volumetric
representations produce a huge computational overhead. We present an efficient
way to fuse range data via a variational Octree-based minimization approach by
taking the actual range data geometry into account. We transform the data into
Octree-based truncated signed distance fields and show how the optimization can
be conducted on the newly created structures. The main challenge is to uphold
speed and a low memory footprint without sacrificing the solutions' accuracy
during optimization. We explain how to dynamically adjust the optimizer's
geometric structure via joining/splitting of Octree nodes and how to define the
operators. We evaluate on various datasets and outline the suitability in terms
of performance and geometric accuracy. Comment: BMVC 201
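The per-node fusion and the split/join decision described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the truncation band, the split tolerance, and the disagreement criterion are all assumed values chosen for the example.

```python
def fuse_tsdf(d_old, w_old, d_new, w_new=1.0, trunc=0.1):
    """Running weighted average of truncated signed distances at one node.

    trunc is an assumed truncation band; real systems tie it to voxel size.
    Returns the fused distance and the accumulated weight.
    """
    d_new = max(-trunc, min(trunc, d_new))         # truncate the new sample
    w = w_old + w_new
    return (d_old * w_old + d_new * w_new) / w, w

def should_split(child_distances, tol=0.02):
    """Split an Octree node when its children's fused distances disagree by
    more than a tolerance; when they agree, the node can be joined instead."""
    return max(child_distances) - min(child_distances) > tol
```

Keeping the weight alongside the distance lets joined nodes re-split later without losing the evidence accumulated so far.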
3DMADMAC|AUTOMATED: synergistic hardware and software solution for automated 3D digitization of cultural heritage objects
This article presents a fully automated 3D shape measurement system and its data processing algorithms. The main purpose of the system is to digitize an object's whole surface automatically (without any user intervention) and rapidly (at least ten times faster than manual measurement), subject to some limitations on the object's properties: the maximum measurement volume is a cylinder of 2.8 m height and 0.6 m radius, and the maximum object weight is 2 tons. The measurement head is calibrated automatically by the system for the chosen working volume (from 120 mm x 80 mm x 60 mm up to 1.2 m x 0.8 m x 0.6 m). Positioning of the measurement head relative to the measured object is realized by a computer-controlled manipulator. The system is equipped with two independent collision detection modules to prevent the moving sensor head from damaging the measured object. The measurement process is divided into three steps. The first step locates any part of the object's surface within the assumed measurement volume. The second step calculates the "next best view" position of the measurement head on the basis of the existing 3D scans. Finally, small holes in the measured 3D surface are detected and measured. All 3D data processing (filtering, ICP-based fitting and final view integration) is performed automatically. The final 3D model is created on the basis of user-specified parameters such as the accuracy of surface representation and/or the density of surface sampling. In the last section of the paper, exemplary measurement results for two objects are presented: a biscuit (from the collection of the Museum Palace at Wilanów) and a Roman votive altar (Lower Moesia, 2nd-3rd century AD).
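The "next best view" step of the pipeline can be illustrated with a simple greedy planner. This is only a sketch of the general NBV idea under assumed data structures (candidate poses with predicted patch coverage as Python sets); the paper's actual planner works on real 3D scans.

```python
def next_best_view(candidates, seen):
    """Greedy NBV: choose the candidate head pose whose predicted coverage
    adds the most surface patches not yet scanned.

    candidates: list of (pose_id, set_of_patch_ids); seen: set of patch_ids.
    """
    return max(candidates, key=lambda c: len(c[1] - seen))

def plan_scan(candidates, seen=(), min_gain=1):
    """Repeat NBV selection until no remaining view adds min_gain new patches."""
    order, seen = [], set(seen)
    while True:
        pose, cover = next_best_view(candidates, seen)
        if len(cover - seen) < min_gain:
            break
        order.append(pose)
        seen |= cover
    return order, seen
```

The loop terminates once every candidate's marginal gain drops below the threshold, which mirrors the transition from step two to the final hole-filling step.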
TVL1 Planarity Regularization for 3D Shape Approximation
The modern emergence of automation in many industries has given impetus to extensive research into mobile robotics. Novel perception technologies now enable cars to drive autonomously, tractors to till a field automatically and underwater robots to construct pipelines. An essential requirement to facilitate both perception and autonomous navigation is the analysis of the 3D environment using sensors like laser scanners or stereo cameras. 3D sensors generate a very large number of 3D data points when sampling object shapes within an environment, but crucially do not provide any intrinsic information about the environment which the robots operate within.
This work focuses on the fundamental task of 3D shape reconstruction and modelling from 3D point clouds. The novelty lies in representing surfaces by algebraic functions with limited support, which enables the extraction of smooth, consistent implicit shapes from noisy samples of heterogeneous density. Minimizing the second-order total variation makes it possible to enforce planar surfaces, which often occur in man-made environments. With the new technique, less accurate, low-cost 3D sensors can be employed without sacrificing 3D shape reconstruction accuracy.
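Why second-order total variation favours planar shapes can be seen in one dimension: the penalty is built from second differences, which vanish exactly on affine signals (the 1D analogue of a plane). A minimal sketch, using an assumed discretisation:

```python
def tv2(u, lam=1.0):
    """Discrete second-order total variation of a 1D signal:
    lam * sum_i |u[i-1] - 2*u[i] + u[i+1]|.
    Affine signals incur zero cost, so minimizing this term drives the
    reconstruction toward piecewise-planar solutions."""
    return lam * sum(abs(u[i - 1] - 2.0 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1))
```

A ramp `[0, 1, 2, 3]` costs nothing, while a kink such as `[0, 1, 0]` is penalized, which is the behaviour the planarity regularizer exploits.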
Semantic 3D Reconstruction with Finite Element Bases
We propose a novel framework for the discretisation of multi-label problems
on arbitrary, continuous domains. Our work bridges the gap between general FEM
discretisations, and labeling problems that arise in a variety of computer
vision tasks, including for instance those derived from the generalised Potts
model. Starting from the popular formulation of labeling as a convex relaxation
by functional lifting, we show that FEM discretisation is valid for the most
general case, where the regulariser is anisotropic and non-metric. While our
findings are generic and applicable to different vision problems, we
demonstrate their practical implementation in the context of semantic 3D
reconstruction, where such regularisers have proved particularly beneficial.
The proposed FEM approach leads to a smaller memory footprint as well as faster
computation, and it constitutes a very simple way to enable variable, adaptive
resolution within the same model.
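A building block common to functional-lifting relaxations of labeling problems, regardless of the discretisation, is keeping the lifted indicator variables on the probability simplex. The sketch below shows the standard sort-based Euclidean projection; it is a generic ingredient, not the paper's FEM scheme itself.

```python
def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex
    {x : x_i >= 0, sum_i x_i = 1}, as used on lifted label indicators
    after each optimization step (sort-based algorithm)."""
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cum += ui
        t = (cum - 1.0) / i
        if ui - t > 0:      # index still active in the support
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

After projection, each point carries a valid soft assignment over labels, which the relaxation then rounds to a discrete labeling.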
Confidence driven TGV fusion
We introduce a novel model for spatially varying variational data fusion,
driven by point-wise confidence values. The proposed model allows for the joint
estimation of the data and the confidence values based on the spatial coherence
of the data. We discuss the main properties of the introduced model as well as
suitable algorithms for estimating the solution of the corresponding biconvex
minimization problem and their convergence. The performance of the proposed
model is evaluated considering the problem of depth image fusion by using both
synthetic and real data from publicly available datasets.
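The data term of such a fusion can be illustrated with a point-wise confidence-weighted mean. This is a stand-in for the data fidelity part only, with made-up inputs; the paper's model additionally estimates the confidences jointly with the data and couples them to spatial coherence through the TGV regularizer.

```python
def fuse_confidence_weighted(values, confidences):
    """Point-wise fusion of depth hypotheses as a confidence-weighted mean.
    Hypotheses with higher confidence pull the fused value toward them."""
    total = sum(confidences)
    if total <= 0.0:
        raise ValueError("at least one positive confidence is required")
    return sum(v * c for v, c in zip(values, confidences)) / total
```

With equal confidences this reduces to a plain average; skewing the confidences shifts the estimate toward the trusted measurement, which is the effect the confidence-driven model generalizes.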
Simultaneous super-resolution, tracking and mapping
This paper proposes a new visual SLAM technique that not only integrates 6-DOF pose and dense structure but also simultaneously integrates the color information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low-resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only accounts for the full range of image deformations but also allows a novel criterion for combining the low-resolution images based on the difference in resolution between images in 6D space. Several results show that this technique runs in real time (30 Hz) and is able to map large-scale environments in high resolution whilst simultaneously improving the accuracy and robustness of the tracking.
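The core accumulation step of super-resolution mapping can be sketched in one dimension: sub-pixel-registered low-resolution samples are splatted into a finer grid with a running mean per fine cell. This is an illustrative simplification with assumed inputs; in the paper the registration comes from the full 6-DOF dense tracking, not from a given offset.

```python
def super_resolve_1d(samples, scale, width):
    """Accumulate registered low-resolution samples (x, intensity), with
    x in [0, width), into a grid refined by `scale`, keeping a running
    mean per fine cell. Registration is assumed to be done already."""
    n = width * scale
    mean = [0.0] * n
    count = [0] * n
    for x, val in samples:
        i = min(int(x * scale), n - 1)   # fine cell hit by this sample
        count[i] += 1
        mean[i] += (val - mean[i]) / count[i]  # incremental average
    return mean
```

Because distinct camera poses sample the surface at slightly different sub-pixel positions, repeated observations populate different fine cells and the map's resolution exceeds that of any single input image.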