Efficient moving point handling for incremental 3D manifold reconstruction
As incremental Structure from Motion algorithms become effective, a good
sparse point cloud representing the map of the scene becomes available
frame-by-frame. From the 3D Delaunay triangulation of these points,
state-of-the-art algorithms build a rough manifold model of the scene. These
algorithms incrementally integrate new points into the 3D reconstruction only if
their position estimates do not change. Indeed, whenever a point moves in a 3D
Delaunay triangulation, for instance because its estimate gets refined, a set
of tetrahedra has to be removed and replaced with new ones to maintain the
Delaunay property; managing the manifold reconstruction thus becomes
complex and entails a potentially large overhead. In this paper we investigate
different approaches and we propose an efficient policy to deal with moving
points in the manifold estimation process. We tested our approach on four
sequences of the KITTI dataset and show the effectiveness of our proposal in
comparison with state-of-the-art approaches.
Comment: Accepted at the International Conference on Image Analysis and Processing (ICIAP 2015).
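The core idea above, deferring expensive re-triangulation until a point's estimate has actually moved, can be sketched as a simple displacement-threshold policy. This is a hypothetical illustration, not the paper's algorithm: the class name, threshold value, and bookkeeping are assumptions, and the actual tetrahedron removal/re-insertion is only indicated by a comment.

```python
# Hypothetical sketch of a moving-point policy for incremental reconstruction.
# A point triggers a (costly) local re-triangulation only when its refined
# estimate has moved beyond a threshold; smaller refinements are deferred.

class MovingPointPolicy:
    def __init__(self, stability_threshold=0.01):
        self.threshold = stability_threshold
        self.positions = {}   # point id -> last committed position
        self.pending = {}     # point id -> latest deferred estimate

    def update(self, pid, new_pos):
        """Record a refined estimate; return True if a re-triangulation
        of the point's neighborhood would be triggered."""
        if pid not in self.positions:
            self.positions[pid] = new_pos      # first insertion
            return True
        old = self.positions[pid]
        disp = sum((a - b) ** 2 for a, b in zip(old, new_pos)) ** 0.5
        if disp > self.threshold:
            self.positions[pid] = new_pos      # commit: remove + rebuild tetrahedra
            self.pending.pop(pid, None)
            return True
        self.pending[pid] = new_pos            # defer: keep existing tetrahedra
        return False

policy = MovingPointPolicy(stability_threshold=0.05)
print(policy.update(0, (0.0, 0.0, 0.0)))   # first insertion -> True
print(policy.update(0, (0.01, 0.0, 0.0)))  # tiny refinement -> False
print(policy.update(0, (1.0, 0.0, 0.0)))   # large move -> True
```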
Topological evaluation of volume reconstructions by voxel carving
Space or voxel carving [1, 4, 10, 15] is a technique for creating a three-dimensional reconstruction of an object from a series of two-dimensional images captured from cameras placed around the object at different viewing angles. However, little work has been done to date on evaluating the quality of space carving results. This paper extends the work reported in [8], where application of persistent homology was initially proposed as a tool for providing a topological analysis of the carving process along the sequence of 3D reconstructions with an increasing number of cameras. We now give a more extensive treatment by: (1) developing the formal framework by which persistent homology can be applied in this context; (2) computing persistent homology of the 3D reconstructions of 66 new frames, including different poses, resolutions and camera orders; (3) studying what information about stability, topological correctness and influence of the camera orders in the carving performance can be drawn from the computed barcodes.
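The barcodes mentioned above track when topological features are born and die along a filtration. The simplest case, dimension 0 (connected components), can be computed with a union-find structure and the elder rule. The toy graph and filtration values below are assumptions for illustration; the paper works with cubical complexes built from voxel reconstructions.

```python
# Minimal sketch of 0-dimensional persistent homology (connected components)
# over a filtered graph, computed with union-find and the elder rule:
# when two components merge, the younger one dies.

def h0_barcode(vertex_birth, edges):
    """vertex_birth: list of birth times (index = vertex id).
    edges: iterable of (u, v, t) with filtration value t.
    Returns sorted (birth, death) bars; death=inf for survivors."""
    parent = list(range(len(vertex_birth)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bars = []
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if vertex_birth[ru] < vertex_birth[rv]:
            ru, rv = rv, ru                # ru is now the younger component
        bars.append((vertex_birth[ru], t)) # younger component dies at t
        parent[ru] = rv
    roots = {find(i) for i in range(len(vertex_birth))}
    bars.extend((vertex_birth[r], float("inf")) for r in roots)
    return sorted(bars)

# three components born at t=0,1,2; merges happen at t=3 and t=5
print(h0_barcode([0, 1, 2], [(0, 1, 3), (1, 2, 5)]))
```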
Surface Modeling and Analysis Using Range Images: Smoothing, Registration, Integration, and Segmentation
This dissertation presents a framework for 3D reconstruction and scene analysis, using a set of range images. The motivation for developing this framework came from the need to reconstruct the surfaces of small mechanical parts in reverse engineering tasks, build virtual environments of indoor and outdoor scenes, and understand 3D images.
The input of the framework is a set of range images of an object or a scene captured by range scanners. The output is a triangulated surface that can be segmented into meaningful parts. A textured surface can be reconstructed if color images are provided. The framework consists of surface smoothing, registration, integration, and segmentation.
Surface smoothing eliminates the noise present in raw measurements from range scanners. This research proposes an area-decreasing flow that is theoretically identical to the mean curvature flow. With area-decreasing flow, there is no need to estimate the curvature value, and an optimal step size of the flow can be obtained. Crease edges and sharp corners are preserved by an adaptive scheme.
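As a rough illustration of curvature-style smoothing, the step below uses the uniform Laplacian ("umbrella") operator, which moves each vertex toward the centroid of its neighbors. This is a stand-in, not the dissertation's area-decreasing flow: the fixed step size and toy mesh are assumptions, whereas the actual flow derives its step from the surface geometry.

```python
# Toy stand-in for curvature-based mesh smoothing: one step of uniform
# Laplacian smoothing.  Each vertex moves a fraction `step` of the way
# toward the centroid of its 1-ring neighbors.

def smooth_step(vertices, neighbors, step=0.5):
    """vertices: {vid: (x, y, z)}; neighbors: {vid: [neighbor vids]}."""
    out = {}
    for vid, pos in vertices.items():
        nbs = neighbors.get(vid, [])
        if not nbs:
            out[vid] = pos                 # boundary/isolated vertex: keep fixed
            continue
        centroid = [sum(vertices[n][i] for n in nbs) / len(nbs) for i in range(3)]
        out[vid] = tuple(p + step * (c - p) for p, c in zip(pos, centroid))
    return out

# a noisy vertex (id 1) between two neighbors on the x-axis
v = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.4, 0.0), 2: (2.0, 0.0, 0.0)}
nb = {1: [0, 2]}
print(smooth_step(v, nb)[1])  # vertex 1 moves halfway toward (1.0, 0.0, 0.0)
```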
Surface registration aligns measurements from different viewpoints in a common coordinate system. This research proposes a new surface representation scheme named point fingerprint. Surfaces are registered by finding corresponding point pairs in an overlapping region based on fingerprint comparison.
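The correspondence search behind descriptor-based registration can be sketched as nearest-neighbor matching in descriptor space with a mutual-consistency check. The "fingerprints" below are made-up toy vectors, not the paper's point-fingerprint contours, and the squared-distance comparison is an assumption.

```python
# Hypothetical sketch of descriptor-based correspondence search: each point
# carries a fingerprint vector, and a pair is accepted only if the two
# points are each other's nearest neighbor in descriptor space.

def match(fp_a, fp_b):
    """fp_a, fp_b: {point id: descriptor tuple}. Returns mutual matches."""
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y))

    def nearest(src, dst):
        return {i: min(dst, key=lambda j: dist(src[i], dst[j])) for i in src}

    a2b, b2a = nearest(fp_a, fp_b), nearest(fp_b, fp_a)
    return [(i, j) for i, j in a2b.items() if b2a[j] == i]

a = {0: (0.0, 1.0), 1: (5.0, 5.0)}
b = {7: (5.1, 4.9), 8: (0.1, 1.1)}
print(match(a, b))  # -> [(0, 8), (1, 7)]
```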
Surface integration merges registered surface patches into a whole surface. This research employs an implicit-surface-based integration technique. The proposed algorithm can generate watertight models by space carving or by filling the holes based on volumetric interpolation. Textures from different views are integrated inside a volumetric grid.
Surface segmentation is useful for decomposing CAD models in reverse engineering tasks and helps object recognition in a 3D scene. This research proposes a watershed-based surface mesh segmentation approach. The new algorithm accurately segments the plateaus by geodesic erosion using the fast marching method.
The performance of the framework is presented using both synthetic and real-world data from different range scanners. The dissertation concludes by summarizing the development of the framework and then suggests future research topics.
Designing a topological algorithm for 3D activity recognition
Voxel carving is a non-invasive and low-cost technique that is used for the reconstruction of a 3D volume from images captured by a set of cameras placed around the object of interest. In this paper we propose a method to topologically analyze a video sequence of 3D reconstructions representing a tennis player performing different forehand and backhand strokes, with the aim of providing an approach that could be useful in other sport activities.
3D Object Reconstruction using Multi-View Calibrated Images
In this study, two models are proposed: a visual hull model and a 3D object reconstruction model. The proposed visual hull model, which is based on a bounding-edge representation, achieves high time performance, making it one of the best methods in this respect. The main contribution of the proposed visual hull model is to provide bounding surfaces over the bounding edges, which yields a complete triangular surface mesh. Moreover, the proposed visual hull model can be computed in a distributed fashion over a camera network. The second model is a depth-map-based 3D object reconstruction model that produces a watertight triangular surface mesh. The proposed model achieves acceptable accuracy as well as high completeness using only stereo matching and triangulation. The contribution of this model is to select the most reliable 3D points and fit a surface over them.
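The carving principle underlying visual hull methods can be illustrated in miniature: a voxel survives only if it projects inside the silhouette in every view. The sketch below is a toy with two axis-aligned orthographic "views" and hand-made boolean silhouettes; real systems use calibrated perspective cameras, which this deliberately omits.

```python
# Toy sketch of silhouette-based carving with two orthographic views:
# a voxel (x, y, z) is kept only if its projection falls inside the
# silhouette of every view.  Projections here are simple axis drops.

def carve(grid_size, silhouette_xy, silhouette_xz):
    """Keep voxel (x, y, z) iff both projections lie in the silhouettes."""
    kept = []
    for x in range(grid_size):
        for y in range(grid_size):
            for z in range(grid_size):
                if silhouette_xy[x][y] and silhouette_xz[x][z]:
                    kept.append((x, y, z))
    return kept

# 2x2x2 grid: top view sees only the column x=0, side view sees only z=1
top  = [[True, True], [False, False]]   # indexed [x][y]
side = [[False, True], [False, True]]   # indexed [x][z]
print(carve(2, top, side))  # -> [(0, 0, 1), (0, 1, 1)]
```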
Structure from Motion with Higher-level Environment Representations
Computer vision is an important area focusing on understanding, extracting, and using the information from vision-based sensors. It has many applications, such as vision-based 3D reconstruction, simultaneous localization and mapping (SLAM), and data-driven understanding of the real world. Vision is a fundamental sensing modality in many different fields of application.
While traditional structure from motion mostly uses sparse point-based features, this thesis aims to explore the possibility of using higher-order feature representations. It starts with joint work that uses straight lines as the feature representation and performs bundle adjustment with a straight-line parameterization. We then try an even higher-order representation, using Bézier splines for the parameterization. We start with a simple case in which all contours lie on a plane, use Bézier splines to parametrize the curves in the background, and optimize over both the camera positions and the Bézier splines. As an application, we present a complete end-to-end pipeline that produces meaningful dense 3D models from natural data of a 3D object: the target object is placed on a structured but unknown planar background that is modeled with splines. The data is captured using only a hand-held monocular camera.
However, this application is limited to a planar scenario, and we therefore push the parameterizations into real 3D. Following the potential of this idea, we introduce a more flexible higher-order extension of points that provides a general model for structural edges in the environment, whether straight or curved. Our model relies on linked Bézier curves, whose geometric intuition proves of great benefit during parameter initialization and regularization. We present the first fully automatic pipeline that is able to generate spline-based representations without any human supervision. Besides a full graphical formulation of the problem, we introduce both geometric and photometric cues as well as higher-level concepts such as overall curve visibility and viewing-angle restrictions to automatically manage the correspondences in the graph. Results show that curve-based structure from motion with splines is able to outperform state-of-the-art sparse feature-based methods, as well as to model curved edges in the environment.
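The curve primitive behind such spline-based representations is the Bézier curve, which is standardly evaluated with de Casteljau's algorithm (repeated linear interpolation of the control polygon). The control points below are fixed toy values; in the thesis they would be optimization variables inside bundle adjustment.

```python
# De Casteljau evaluation of a Bezier curve: repeatedly interpolate
# between consecutive control points with parameter t until one point
# remains.  Works for any curve degree and point dimension.

def de_casteljau(ctrl, t):
    """Evaluate the Bezier curve with control points ctrl at parameter t."""
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.0))  # -> (0.0, 0.0), the first control point
print(de_casteljau(ctrl, 1.0))  # -> (4.0, 0.0), the last control point
print(de_casteljau(ctrl, 0.5))  # curve midpoint
```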
A Distributed Approach for Real Time 3D Modeling
This paper addresses the problem of real-time 3D modeling from images with multiple cameras. Environments where multiple cameras and PCs are present are becoming usual, mainly due to new camera technologies and the high computing power of modern PCs. However, most applications in computer vision are based on a single PC, or a few PCs, for computations and do not scale. Our motivation in this paper is therefore to propose a distributed framework that allows precise 3D models to be computed in real time with a variable number of cameras, through an optimal use of the several PCs which are generally present. We focus in this paper on silhouette-based modeling approaches and investigate how to efficiently partition the associated tasks over a set of PCs. Our contribution is a distribution scheme that applies to the different types of approaches in this field and allows for real-time applications. Such a scheme relies on different accessible levels of parallelization, from individual task partitions to concurrent executions, yielding in turn control over both the latency and the frame rate of the modeling system. We report on the application of the presented framework to visual hull modeling applications. In particular, we show that precise surface models can be computed in real time with standard components. Results with synthetic data and preliminary results in real contexts are presented.
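The task-partitioning idea above can be sketched by splitting a voxel grid into interleaved slabs and carving each slab on a separate worker. This is a toy illustration, not the paper's pipeline: the worker count, grid size, and placeholder occupancy test are all assumptions, and the real system also distributes per-camera silhouette extraction.

```python
# Toy sketch of partitioning a carving workload over workers: the voxel
# grid is split into interleaved x-slabs, each carved concurrently, and
# the partial results are merged.  inside() is a placeholder occupancy test.

from concurrent.futures import ThreadPoolExecutor

def inside(x, y, z):
    # placeholder: a small ball centered in an 8^3 grid
    return (x - 4) ** 2 + (y - 4) ** 2 + (z - 4) ** 2 <= 4

def carve_slab(x_range, size=8):
    return [(x, y, z) for x in x_range
            for y in range(size) for z in range(size) if inside(x, y, z)]

def parallel_carve(size=8, workers=4):
    slabs = [range(i, size, workers) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(carve_slab, slabs))
    return sorted(v for part in parts for v in part)

# the parallel result matches a single sequential pass over the grid
print(len(parallel_carve()))
```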