348 research outputs found

    Self-correction of 3D reconstruction from multi-view stereo images

    We present a self-correction approach to improving the 3D reconstruction of a multi-view 3D photogrammetry system. The approach repairs reconstructed 3D surfaces damaged by depth discontinuities. Due to self-occlusion, multi-view range images have to be acquired and integrated into a watertight, non-redundant mesh model in order to cover the extended surface of an imaged object. The integrated surface often suffers from “dent” artifacts produced by depth discontinuities in the multi-view range images. In this paper we propose a novel approach to correcting the integrated 3D surface such that the dent artifacts are repaired automatically. We show examples of 3D reconstruction to demonstrate the improvement achieved by the self-correction approach, which can be extended to integrate range images obtained from alternative range capture devices.

    Multiple depth maps integration for 3D reconstruction using geodesic graph cuts

    Depth images, in particular depth maps estimated from stereo vision, may contain a substantial number of outliers, leading to inaccurate 3D modelling and reconstruction. To address this challenging issue, in this paper a graph-cut based multiple depth maps integration approach is proposed to obtain smooth and watertight surfaces. First, confidence maps for the depth images are estimated to suppress noise, based on which reliable patches covering the object surface are determined. These patches are then exploited to estimate the path weight for 3D geodesic distance computation, where an adaptive regional term is introduced to deal with the “shorter-cuts” problem caused by the effect of the minimal surface bias. Finally, the adaptive regional term and the boundary term constructed using patches are combined in the graph-cut framework for more accurate and smoother 3D modelling. We demonstrate the superior performance of our algorithm on the well-known Middlebury multi-view database and additionally on real-world multiple depth images captured by Kinect. The experimental results have shown that our method is able to preserve object protrusions and details while maintaining surface smoothness.
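The 3D geodesic distances that drive the path weights above can be sketched with a standard Dijkstra traversal over a weighted adjacency graph. This is a generic illustration, not the authors' implementation; the toy graph and its weights are invented for the example.

```python
import heapq

def geodesic_distances(adjacency, source):
    """Dijkstra shortest paths over a weighted graph.

    `adjacency` maps node -> list of (neighbor, weight); in the paper the
    weights would come from the confidence-based surface patches, which are
    not reproduced here.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adjacency[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# toy symmetric graph: 4 nodes, weighted edges
adj = {
    0: [(1, 1.0), (2, 4.0)],
    1: [(0, 1.0), (2, 1.5), (3, 5.0)],
    2: [(0, 4.0), (1, 1.5), (3, 1.0)],
    3: [(1, 5.0), (2, 1.0)],
}
print(geodesic_distances(adj, 0))  # node 2 is cheaper via node 1 than directly
```

The same traversal works regardless of how the edge weights are derived, which is why the patch-based weighting can be swapped in without changing the distance computation.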

    Surface Reconstruction from Noisy and Sparse Data

    We introduce a set of algorithms for registering, filtering and measuring the similarity of unorganized 3D point clouds, usually obtained from multiple views. We contribute a method for computing the similarity between point clouds that represent closed surfaces, specifically segmented tumors from CT scans. We obtain watertight surfaces and utilize volumetric overlap to determine similarity in a volumetric way. This similarity measure is used to quantify treatment variability based on target volume segmentation both prior to and following radiotherapy planning stages. We also contribute an algorithm for the drift-free registration of thin, non-rigid scans, where drift is the build-up of error caused by sequential pairwise registration, i.e., the alignment of each scan to its neighbor. We construct an average scan using mutual nearest neighbors; each scan is registered to this average scan, after which we update the average scan and repeat until convergence. The use cases herein are merging scans of plants from multiple views and registering vascular scans together. Our final contribution is a method for filtering noisy point clouds, specifically those constructed from merged depth maps as obtained from a range scanner or multiple view stereo (MVS), applying techniques that have been utilized for finding outliers in clustered data, but not in MVS. We utilize kernel density estimation to obtain a probability density function over the space of observed points, with variable bandwidths chosen from the nature of the neighboring points via Mahalanobis and reachability distances, which is more discriminative than a classical Mahalanobis distance-based metric.
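The density-based filtering idea can be sketched with a minimal variable-bandwidth kernel density estimate: each sample point's bandwidth is its distance to its k-th nearest neighbour (a reachability-style width), and points with the lowest estimated density are flagged as outliers. This is illustrative only; the thesis's exact estimator and Mahalanobis-based distances are not reproduced here.

```python
import math

def kde_scores(points, k=2):
    """Per-point density score from a Gaussian sample-point KDE.

    Bandwidth of each sample = distance to its k-th nearest neighbour,
    so sparse regions get wide kernels and dense regions narrow ones.
    """
    def bandwidth(i):
        ds = sorted(math.dist(points[i], q)
                    for j, q in enumerate(points) if j != i)
        return ds[k - 1]

    hs = [bandwidth(i) for i in range(len(points))]
    scores = []
    for i, p in enumerate(points):
        s = 0.0
        for j, q in enumerate(points):
            if i == j:
                continue
            h = hs[j]  # kernel width attached to the sample point
            s += math.exp(-math.dist(p, q) ** 2 / (2 * h * h)) / h
        scores.append(s / (len(points) - 1))
    return scores

# four clustered points and one isolated point
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = kde_scores(pts)
print(min(range(len(pts)), key=scores.__getitem__))  # the isolated point, index 4
```

Thresholding these scores (rather than taking the minimum) is what a filtering pass would do in practice.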

    Multi-view reconstruction using photo-consistency and exact silhouette constraints: a maximum-flow formulation


    Visual-Guided Mesh Repair

    Mesh repair is a long-standing challenge in computer graphics and related fields. Converting defective meshes into watertight manifold meshes can greatly benefit downstream applications such as geometric processing, simulation, fabrication, learning, and synthesis. In this work, we first introduce three visual measures for visibility, orientation, and openness, based on ray-tracing. We then present a novel mesh repair framework that incorporates the visual measures with several critical steps, i.e., open surface closing, face reorientation, and global optimization, to effectively repair defective meshes, including gaps, holes, self-intersections, degenerate elements, and inconsistent orientations. Our method reduces unnecessary mesh complexity without compromising geometric accuracy or visual quality while preserving input attributes such as UV coordinates for rendering. We evaluate our approach on hundreds of models randomly selected from ShapeNet and Thingi10K, demonstrating its effectiveness and robustness compared to existing approaches.

    Progressive 3D reconstruction of unknown objects using one eye-in-hand camera

    Proceedings of: 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO 2009), December 19-23, 2009, Guilin, China. This paper presents a complete 3D-reconstruction method optimized for online object modeling in the context of object grasping by a robot hand. The proposed solution is based on images captured by an eye-in-hand camera mounted on the robot arm and is an original combination of classical but simplified reconstruction methods. The different techniques used form a process that offers fast, progressive and reactive reconstruction of the object. The research leading to these results has been partially supported by the HANDLE project, which has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement ICT 23164.

    Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity

    In this paper we present a scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps in point density (in our experiments, three orders of magnitude). The backbone of our approach is a combination of octree data partitioning, local Delaunay tetrahedralization and graph cut optimization. Graph cut optimization is used twice: once to extract surface hypotheses from local Delaunay tetrahedralizations, and once to merge overlapping surface hypotheses even when the local tetrahedralizations do not share the same topology. This formulation allows us to obtain a constant memory consumption per sub-problem while retaining the density-independent interpolation properties of the Delaunay-based optimization. On multiple public datasets, we demonstrate that our approach is highly competitive with the state of the art in terms of accuracy, completeness and outlier resilience. Further, we demonstrate the multi-scale potential of our approach by processing a newly recorded dataset with 2 billion points and a point density variation of more than four orders of magnitude, requiring less than 9 GB of RAM per process. Comment: This paper was accepted to the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. The copyright was transferred to IEEE (ieee.org). The official version of the paper will be made available on IEEE Xplore (R) (ieeexplore.ieee.org). This version of the paper also contains the supplementary material, which will not appear on IEEE Xplore (R).
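The density-adaptive partitioning idea can be sketched with a plain point-count octree: cells split until each leaf holds at most a fixed number of points, so dense regions subdivide deeply while sparse regions stay coarse and every sub-problem stays bounded. This is a toy version of the partitioning step only; the paper's Delaunay and graph-cut stages are not reproduced.

```python
import random

def build_octree(points, bounds, leaf_size=4, depth=0, max_depth=8):
    """Split a cubic cell recursively until leaves hold <= leaf_size points.

    `bounds` is ((x0, y0, z0), (x1, y1, z1)). Bounding memory per leaf is the
    toy analogue of the paper's constant memory per sub-problem.
    """
    if len(points) <= leaf_size or depth >= max_depth:
        return {"bounds": bounds, "points": points, "children": None}
    lo, hi = bounds
    c = tuple((lo[a] + hi[a]) / 2 for a in range(3))
    buckets = {}
    for p in points:
        key = tuple(int(p[a] >= c[a]) for a in range(3))  # octant index
        buckets.setdefault(key, []).append(p)
    children = [
        build_octree(
            pts,
            (tuple(c[a] if key[a] else lo[a] for a in range(3)),
             tuple(hi[a] if key[a] else c[a] for a in range(3))),
            leaf_size, depth + 1, max_depth)
        for key, pts in buckets.items()
    ]
    return {"bounds": bounds, "points": None, "children": children}

def leaves(node):
    return [node] if node["children"] is None else [
        l for ch in node["children"] for l in leaves(ch)]

random.seed(0)
cloud = [(random.random(), random.random(), random.random()) for _ in range(20)]
tree = build_octree(cloud, ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
print(len(leaves(tree)))
```

Each leaf could then be meshed independently, with a second pass merging the overlapping per-leaf surfaces, as the abstract describes.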

    Hierarchical Surface Prediction for 3D Object Reconstruction

    Recently, convolutional neural networks have shown promising results for 3D geometry prediction. They can make predictions from very little input data, such as a single color image. A major limitation of such approaches is that they only predict a coarse-resolution voxel grid, which does not capture the surface of the objects well. We propose a general framework, called hierarchical surface prediction (HSP), which facilitates prediction of high-resolution voxel grids. The main insight is that it is sufficient to predict high-resolution voxels around the predicted surfaces; the exterior and interior of the objects can be represented with coarse-resolution voxels. Our approach is not dependent on a specific input type. We show results for geometry prediction from color images, depth images and shape completion from partial voxel grids. Our analysis shows that our high-resolution predictions are more accurate than low-resolution predictions. Comment: 3DV 201
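The core insight, refining resolution only near the surface, can be sketched with a coarse-to-fine occupancy grid: a cell is subdivided only when its corners disagree about inside/outside, i.e. when the surface crosses it. The sphere occupancy below is a hypothetical stand-in for the network's prediction; the real HSP refines learned feature blocks, not an analytic function.

```python
def inside(p, r=0.6):
    # stand-in for a predicted occupancy: points inside a sphere of radius r
    return p[0] * p[0] + p[1] * p[1] + p[2] * p[2] <= r * r

def refine(origin, size, depth, max_depth, out):
    """Subdivide only 'mixed' cells (corners disagree -> surface crosses);
    cells fully inside or outside stay at their current resolution."""
    x, y, z = origin
    corner_vals = {inside((x + dx * size, y + dy * size, z + dz * size))
                   for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)}
    if len(corner_vals) == 1 or depth == max_depth:
        out.append((origin, size))
        return
    h = size / 2
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                refine((x + dx * h, y + dy * h, z + dz * h),
                       h, depth + 1, max_depth, out)

cells = []
# coarse 4x4x4 grid over [-1, 1]^3, refined at most twice near the surface
for i in range(4):
    for j in range(4):
        for k in range(4):
            refine((-1 + i * 0.5, -1 + j * 0.5, -1 + k * 0.5),
                   0.5, 0, 2, cells)
print(len(cells))  # far fewer cells than the 16^3 = 4096 of a uniform fine grid
```

The savings grow with resolution because the surface occupies only O(n^2) of the O(n^3) voxels.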

    Visualizing and Modeling Interior Spaces of Dangerous Structures using Lidar

    Light Detection and Ranging (LIDAR) scanning can be used to safely and remotely provide intelligence on the interior of dangerous structures for use by first responders who need to enter them. By scanning into structures through windows and other openings, or by moving the LIDAR scanner into the structure, in both cases carried by a remote-controlled robotic crawler, the presence of dangerous items or personnel can be confirmed or denied. Entry and egress pathways can be determined in advance, and potential hiding/ambush locations identified. This paper describes an integrated system of a robotic crawler and LIDAR scanner. Both the scanner and the robot are wirelessly remote controlled from a single laptop computer. This includes navigation of the crawler with real-time video, self-leveling of the LIDAR platform, and the ability to raise the scanner to heights of 2.5 m. Multiple scans can be taken from different angles to fill in detail and provide more complete coverage. These scans can quickly be registered to each other using user-defined 'pick points', creating a single point cloud from multiple scans. Software has been developed to deconstruct the point clouds and identify specific objects in the interior of the structure from the point cloud. Software has also been developed to interactively visualize and walk through the modeled structures. Floor plans are automatically generated and a data export facility has been developed. Tests have been conducted on multiple structures, simulating many of the contingencies that a first responder would face.
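Registering scans from user-picked correspondences reduces to a small least-squares rigid alignment. The sketch below solves the 2D analogue in closed form with matched point pairs standing in for the pick points; the system described above works in 3D, and its software is not reproduced here.

```python
import math

def register_rigid_2d(src, dst):
    """Closed-form rigid alignment (rotation + translation) of matched
    2D point pairs: center both sets, recover the angle from the summed
    cross and dot products, then solve for the translation."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        num += ax * by - ay * bx  # cross products -> sine component
        den += ax * bx + ay * by  # dot products   -> cosine component
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    t = (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))
    return theta, t

def transform(p, theta, t):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

# simulate pick points related by a known rotation and translation
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 1.5)]
dst = [transform(p, 0.3, (1.0, 2.0)) for p in src]
theta, t = register_rigid_2d(src, dst)
print(theta, t)  # recovers 0.3 and (1.0, 2.0) up to floating-point error
```

With only a handful of pick points per scan, this kind of closed-form solve is essentially instantaneous, which matches the "quickly registered" claim in the abstract.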

    Dynamic shape capture using multi-view photometric stereo
