10 research outputs found
Self-correction of 3D reconstruction from multi-view stereo images
We present a self-correction approach to improving the
3D reconstruction produced by a multi-view photogrammetry system.
The self-correction approach repairs
reconstructed 3D surfaces damaged by depth discontinuities.
Because of self-occlusion, multi-view range images
have to be acquired and integrated into a watertight, non-redundant
mesh model in order to cover the full surface
of an imaged object. The integrated surface often suffers
from “dent” artifacts produced by depth discontinuities
in the multi-view range images. In this paper we propose
a novel approach to correcting the integrated 3D surface
so that these dent artifacts can be repaired automatically.
We show examples of 3D reconstruction that demonstrate the
improvement achieved by the self-correction
approach, which can also be extended
to integrate range images obtained from other range
capture devices.
An evaluation method for multiview surface reconstruction algorithms
We propose a new method...
Saliency-guided integration of multiple scans
We present a novel method...
3DMADMAC|AUTOMATED: synergistic hardware and software solution for automated 3D digitization of cultural heritage objects
In this article a fully automated 3D shape measurement system and its data processing algorithms are presented. The main purpose of the system is to digitize the whole surface of an object automatically (without any user intervention) and rapidly (at least ten times faster than manual measurement), within certain limits on the object's properties: the maximum measurement volume is a cylinder 2.8 m high with a 0.6 m radius, and the maximum object weight is 2 tons. The measurement head is calibrated automatically by the system for the chosen working volume (from 120 mm x 80 mm x 60 mm up to 1.2 m x 0.8 m x 0.6 m). Positioning of the measurement head relative to the measured object is realized by a computer-controlled manipulator. The system is equipped with two independent collision detection modules to prevent the moving sensor head from damaging the measured object. The measurement process is divided into three steps. The first step locates any part of the object's surface in the assumed measurement volume. The second step calculates the "next best view" position of the measurement head on the basis of the existing 3D scans. Finally, small holes in the measured 3D surface are detected and measured. All 3D data processing (filtering, ICP-based fitting and final integration of views) is performed automatically. The final 3D model is created according to user-specified parameters such as the accuracy of the surface representation and/or the density of surface sampling. In the last section of the paper, exemplary measurement results for two objects are presented: a biscuit (from the collection of the Museum Palace at Wilanów) and a Roman votive altar (Lower Moesia, 2nd-3rd century AD).
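The "next best view" step described above can be sketched as a greedy loop: repeatedly pick the candidate head position that would reveal the most surface not yet covered by the existing scans. The following is a minimal illustration under simplifying assumptions, not the 3DMADMAC implementation; `coverage` and the surface-patch identifiers are hypothetical stand-ins.

```python
def next_best_view(candidate_views, scanned, coverage):
    """Greedy choice: the candidate view whose coverage adds the most
    surface patches not yet in `scanned`."""
    return max(candidate_views, key=lambda v: len(coverage(v) - scanned))

def scan_object(candidate_views, coverage, total, target=0.95):
    """Repeat next-best-view scans until `target` fraction of the `total`
    surface patches is covered, no view adds anything, or views run out."""
    scanned, order = set(), []
    views = list(candidate_views)
    while views and len(scanned) < target * total:
        v = next_best_view(views, scanned, coverage)
        gain = coverage(v) - scanned
        if not gain:          # no remaining view sees new surface: stop
            break
        scanned |= gain       # integrate the new scan's coverage
        views.remove(v)
        order.append(v)
    return order, scanned
```

A real system would score candidate poses against frontier regions of the partial mesh and check reachability and collisions; the greedy structure, however, is the same.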
Accurate Integration of Multi-view Range Images Using K-Means Clustering
3D modelling finds a wide range of applications in industry. However, due to surface scanning noise, accumulated registration errors, and improper data fusion, object surfaces reconstructed from range images captured at multiple viewpoints are often distorted with thick patches, false connections, blurred features and artefacts. Moreover, the existing integration methods are often expensive in both computational time and data storage. These shortcomings limit the wide application of 3D modelling with the latest laser scanning systems. In this paper, the k-means clustering approach (from the pattern recognition and machine learning literature) is employed to minimize the integration error and to optimize the fused point locations. To initialize the clustering, an automatic method is developed that shifts points in the overlapping areas between neighbouring views towards each other, so that the initialized cluster centroids lie in between the two overlapping surfaces. This results in more efficient and effective integration of the data. While the overlapping areas are initially detected using a single distance threshold, they are then refined using the k-means clustering method. For more accurate integration results, a weighting scheme reflecting the imaging principle is developed to integrate the corresponding points in the overlapping areas. The fused point set is finally triangulated using an improved Delaunay method, guaranteeing a watertight surface. A comparative study based on real images shows that the proposed algorithm is efficient in both running time and memory usage and significantly reduces the integration error, while desirably retaining the geometric details of the 3D object surfaces of interest.
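The initialization idea in this abstract, shifting overlapping points towards each other so cluster centroids start between the two surfaces, can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: it assumes brute-force nearest-neighbour correspondences and omits the distance threshold, imaging-principle weighting, and Delaunay triangulation.

```python
import numpy as np

def fuse_overlap(points_a, points_b, n_iter=10):
    """Fuse two overlapping point sets with k-means: one cluster per
    correspondence pair; the fused surface is the set of centroids."""
    # Nearest neighbour in B for each point in A (brute force for clarity).
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    # Initialise each centroid midway between the paired points, i.e. the
    # points are "shifted towards each other" before clustering starts.
    centroids = 0.5 * (points_a + points_b[nn])
    data = np.vstack([points_a, points_b])
    for _ in range(n_iter):
        # Assign every point to its nearest centroid ...
        assign = np.linalg.norm(
            data[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
        # ... then move each non-empty centroid to its cluster mean.
        for k in range(len(centroids)):
            members = data[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids
```

Because each centroid averages measurements of the same surface patch from both views, independent scanning noise partially cancels in the fused points.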
Building models from multiple point sets with kernel density estimation
One of the fundamental problems in computer vision is point set registration. Point
set registration finds use in many important applications and, in particular, can be considered
one of the crucial stages in the reconstruction of models of physical
objects and environments from depth sensor data. The problem of globally aligning
multiple point sets, representing spatial shape measurements from varying sensor viewpoints,
into a common frame of reference is a complex task, and an imperative one given
the large number of critical functions that depend on accurate and reliable model
reconstructions.
In this thesis we focus on improving the quality and feasibility of model and environment
reconstruction through the enhancement of multi-view point set registration
techniques. The thesis makes the following contributions: First, we demonstrate that
employing kernel density estimation to reason about the unknown generating surfaces
that range sensors measure allows us to express measurement variability, uncertainty
and also to separate the problems of model design and viewpoint alignment optimisation.
Our surface estimates define novel view alignment objective functions that inform
the registration process. Our surfaces can be estimated from point clouds in a
data-driven fashion. Through experiments on a variety of datasets we demonstrate that we
have developed a novel and effective solution to the simultaneous multi-view registration
problem.
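The core idea of scoring an alignment against a kernel density estimate of the measured surface can be sketched as follows. This is a minimal illustration of the general KDE-scoring principle, not the thesis's objective function; the isotropic Gaussian kernel and the fixed `bandwidth` parameter are assumptions.

```python
import numpy as np

def kde_score(reference, query, bandwidth=0.05):
    """Mean Gaussian-kernel density of `query` points under a KDE built
    from `reference` points; a higher score indicates better alignment."""
    # Squared distances between every query point and every reference point.
    d2 = ((query[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-0.5 * d2 / bandwidth ** 2)
    # Normalising constant of an isotropic 3D Gaussian kernel.
    norm = (2 * np.pi * bandwidth ** 2) ** 1.5
    return (k.sum(axis=1) / (len(reference) * norm)).mean()
```

A registration optimiser would maximise such a score over the rigid-transform parameters of each view; because the density is smooth in the point positions, the objective is differentiable and amenable to gradient-based optimisation.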
We then focus on constructing a distributed computation framework capable of solving
generic high-throughput computational problems. We present a novel task-farming
model that we call Semi-Synchronised Task Farming (SSTF), capable of modelling and
subsequently solving computationally distributable problems that benefit from both
independent and dependent distributed components and a level of communication between
process elements. We demonstrate that this framework is a novel schema for
parallel computer vision algorithms and evaluate the performance to establish computational
gains over serial implementations. We couple this framework with an accurate
computation-time prediction model to contribute a novel structure appropriate for
addressing expensive real-world algorithms with substantial parallel performance and
predictable time savings.
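The combination of independent farmed tasks with synchronisation points between dependent stages can be illustrated with a short stdlib sketch. This is a toy rendering of the general pattern, heavily simplified and not the thesis's SSTF framework; `run_stages`, `task_fn` and `reduce_fn` are hypothetical names.

```python
from concurrent.futures import ThreadPoolExecutor

def run_stages(stages, initial, workers=4):
    """`stages` is a list of (task_fn, reduce_fn) pairs. Each task_fn is
    mapped over the current inputs in parallel (the independent farm);
    reduce_fn then merges the results into the next stage's inputs,
    acting as the synchronisation barrier between dependent stages."""
    data = initial
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for task_fn, reduce_fn in stages:
            results = list(pool.map(task_fn, data))  # farm independent tasks
            data = reduce_fn(results)                # barrier + dependency
    return data
```

In a computer-vision setting the farmed tasks might be per-view feature extraction or pairwise alignments, with the reduction step combining them before a globally dependent refinement stage.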
Finally, we focus on a timely instance of the multi-view registration problem: modern
range sensors provide large numbers of viewpoint samples that result in an abundance
of depth data information. The ability to utilise this abundance of depth data in a
feasible and principled fashion is of importance to many emerging application areas
making use of spatial information. We develop novel methodology for the registration
of depth measurements acquired from many viewpoints capturing physical object
surfaces. By defining registration and alignment quality metrics based on our density
estimation framework we construct an optimisation methodology that implicitly considers
all viewpoints simultaneously. We use a non-parametric data-driven approach
to consider varying object complexity and guide large view-set spatial transform optimisations.
By aligning large numbers of partial, arbitrary-pose views we evaluate this
strategy quantitatively on large view-set range sensor data, where we find that we can
improve registration accuracy over existing methods and increase registration
robustness to the magnitude of the coarse seed alignment. This allows large-scale
registration on problem instances exhibiting varying object complexity, with the added
advantage of massive parallel efficiency.