
    Multi-Scale Surface Reconstruction from Images

    Many surface reconstruction algorithms have been developed to process point data originating from laser scans. Because laser scanning is an expensive technique that is not available to everyone, 3D reconstruction from images (using, e.g., multi-view stereo) is a promising alternative. In recent years, much progress has been made in the computer vision domain, and algorithms are now capable of reconstructing large 3D scenes from consumer photographs. Whereas laser scans are highly controlled and typically only a few scans are taken, images may be subject to more uncontrolled variation. Standard multi-view stereo algorithms give rise to multi-scale data points due to different camera resolutions, focal lengths, or varying distances to the object. When reconstructing a surface from such data, the multi-scale property has to be taken into account, because the assumption that the points are samples from the true surface may be violated. This thesis presents two surface reconstruction algorithms that take resolution and scale differences into account. In the first approach, we model the uncertainty of each sample point according to its footprint, the surface area that was taken into account during multi-view stereo. With an adaptive volumetric resolution, also steered by the footprints of the sample points, we achieve detailed reconstructions even for large-scale scenes. Then, a general wavelet-based surface reconstruction framework is presented: the multi-scale sample points are characterized by a convolution kernel, and the points are fused in frequency space while preserving locality. We propose a specific implementation for 2.5D surfaces that incorporates our theoretical findings about sample points originating from multi-view stereo and shows promising results on real-world data sets.
    The other part of the thesis analyzes the scale characteristics of patch-based depth reconstruction as used in many (multi-view) stereo techniques. It is driven by the question of how the reconstruction preserves surface details, i.e., high frequencies. We introduce an intuitive model of the reconstruction process, prove that it yields a linear system, and determine its modulation transfer function. This allows us to predict the amplitude loss of high frequencies as a function of the patch size used and the internal and external camera parameters. Experiments on synthetic and real-world data demonstrate the accuracy of our model but also show its limitations. Finally, we propose a generalization of the model that allows for weighted patch fitting. The reconstructed points can then be described by a convolution of the original surface, and we show how weighting the pixels during photo-consistency optimization affects the smoothing kernel. In this way we are able to connect a standard notion of smoothing to multi-view stereo reconstruction. In summary, this thesis provides a profound analysis of patch-based (multi-view) stereo reconstruction and introduces new concepts for surface reconstruction from the resulting multi-scale sample points.
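The modulation-transfer-function view can be illustrated with a deliberately simplified assumption of ours (not the thesis's exact model): if patch-based depth estimation behaves like averaging the surface over the patch, the process is a convolution with a box kernel, whose MTF is a sinc, so larger patches attenuate high frequencies more strongly.

```python
import numpy as np

def box_mtf(freq, patch_width):
    """MTF of a box (patch-average) kernel: |sinc(freq * width)|.

    np.sinc(x) computes sin(pi*x)/(pi*x), i.e., the Fourier transform
    magnitude of a unit-area box of the given width.
    """
    return np.abs(np.sinc(freq * patch_width))

# A surface detail at 0.4 cycles per unit length loses far more amplitude
# with a patch of width 2 than with a patch of width 1:
survives_small_patch = box_mtf(0.4, 1.0)  # ~0.76 of the amplitude survives
survives_large_patch = box_mtf(0.4, 2.0)  # ~0.23 survives
```

This simple picture already captures the qualitative prediction: amplitude loss grows with patch size, and the exact curve depends on the kernel shape, which is where weighted patch fitting changes the outcome.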

    Dense and Globally Consistent Multi-View Stereo

    Multi-View Stereo (MVS) aims at reconstructing the dense geometry of a scene from a set of overlapping images captured from different viewing angles. This thesis addresses the MVS problem by estimating depth maps, since 2D image-space operations are trivially parallelizable in contrast to 3D volumetric techniques. The typical setup of depth-map-based MVS approaches consists of per-view computation followed by multi-view merging. Most solutions aim primarily at the most precise and complete surfaces for individual views while relaxing global geometric consistency. The resulting inconsistent estimates lead to a heavy processing workload in the merging stage and diminish the final reconstruction. Another issue is textureless areas, where the photo-consistency constraint cannot discriminate between different depths. These matching ambiguities are normally handled by incorporating plane features or a smoothness assumption, which may produce segmentation artifacts or depend on the accuracy and completeness of the computed object edges. This thesis deals with two kinds of input data, photo collections and high-frame-rate videos, and develops distinct MVS algorithms based on their characteristics. For sparsely sampled photos, we propose an advanced PatchMatch system that alternates between patch-based correlation maximization and pixel-based optimization of cross-view consistency, yielding a good trade-off between the photometric and geometric constraints. Moreover, our method achieves high efficiency by combining local pixel traversal with a hierarchical framework for fast depth propagation. For densely sampled videos, we mainly focus on recovering homogeneous surfaces, because the redundant scene information enables ray-level correlation, which can generate sharp depth discontinuities.
    Our approach infers smooth surfaces for the enclosed areas using perspective depth interpolation, and subsequently tackles occlusion errors connecting the fore- and background edges. In addition, our edge depth estimation is made more robust by accounting for unstructured camera trajectories. Exhaustively calculating depth maps is infeasible when modeling large scenes from videos, so this thesis further improves reconstruction scalability using an incremental scheme with content-aware view selection and clustering. Our goal is to gradually eliminate visibility conflicts and increase surface coverage while processing a minimal subset of views. Constructing view clusters allows us to store merged and locally consistent points at the highest resolution, thus reducing memory requirements. None of the approaches presented in this thesis rely on high-level techniques, so they can be easily parallelized. Evaluations on various datasets and comparisons with existing algorithms demonstrate the superiority of our methods.
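The propagate-then-refine alternation behind PatchMatch-style depth estimation can be caricatured in one dimension. This is a hedged toy sketch under our own assumptions (invented names, a distance-to-ground-truth stand-in for photo-consistency), not the thesis's algorithm: each pixel keeps a depth hypothesis, and a sweep repeatedly tests the neighbor's depth (propagation) and a small random perturbation (refinement), keeping whichever candidate scores better.

```python
import random

def patchmatch_1d(depths, cost, iters=3):
    """Sweep left to right, testing propagation and random-refinement candidates."""
    random.seed(0)  # deterministic for the toy example
    for _ in range(iters):
        for i in range(1, len(depths)):
            # Candidate 1: propagate the left neighbor's depth.
            # Candidate 2: randomly perturb the current hypothesis.
            for cand in (depths[i - 1], depths[i] + random.uniform(-0.1, 0.1)):
                if cost(i, cand) < cost(i, depths[i]):
                    depths[i] = cand  # keep the strictly better hypothesis
    return depths

# Toy cost: the "true" depth is a ramp; cost is distance to it
# (a stand-in for a real photo-consistency measure).
true_depth = [0.1 * i for i in range(10)]
cost = lambda i, d: abs(d - true_depth[i])
result = patchmatch_1d([0.0] * 10, cost)
```

Because a hypothesis is only replaced when its cost strictly decreases, the per-pixel error is monotonically non-increasing over the sweeps; real systems extend this with slanted patch planes and cross-view geometric checks.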
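Incremental, coverage-driven view selection can be pictured as a greedy set-cover loop. The sketch below is illustrative and uses our own simplification (a view "covers" a set of surface-sample ids), not the thesis's actual content-aware criterion: repeatedly add the candidate view that covers the most still-uncovered samples until coverage stops improving.

```python
def select_views(coverage):
    """Greedy set cover: coverage maps view_id -> set of surface-sample ids it sees."""
    selected, covered = [], set()
    while True:
        # Pick the view contributing the most not-yet-covered samples.
        best = max(coverage, key=lambda v: len(coverage[v] - covered), default=None)
        if best is None or not (coverage[best] - covered):
            break  # no view adds new coverage; stop with a minimal-ish subset
        selected.append(best)
        covered |= coverage[best]
    return selected, covered

# Four hypothetical views with overlapping sample coverage:
views = {"v0": {1, 2, 3}, "v1": {3, 4}, "v2": {4, 5, 6}, "v3": {2}}
order, covered = select_views(views)  # two views suffice to cover all six samples
```

Greedy set cover is not optimal in general, but it matches the stated goal of increasing surface coverage while processing only a small subset of views.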

    Surface reconstruction from multi-resolution sample points

    Robust surface reconstruction from sample points is a challenging problem, especially for real-world input data. We significantly improve on a recent method by Hornung and Kobbelt [HK06b] through three major extensions. First, we exploit the footprint information inherent to each sample point, which describes the underlying surface region represented by that sample. We interpret each sample as a vote for a region in space, where the size of the region depends on the footprint size. In our method, sample points with large footprints do not destroy the fine detail captured by sample points with small footprints. Second, we propose a new crust computation that makes the method applicable to a substantially broader range of input data. This includes data from objects that were only partially sampled, a common case for data generated by multi-view stereo applied to Internet images. Third, we adapt the volumetric resolution locally to the footprint size of the sample points, which allows us to extract fine detail even in large-scale scenes. The effectiveness of our extensions is shown on challenging outdoor data sets as well as on a standard benchmark.
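The footprint-dependent voting can be sketched in one dimension. The variable names and the 1/footprint weighting below are illustrative assumptions of ours, not the paper's exact formulation: each sample votes for the cells inside its footprint radius, and samples with small footprints cast stronger, more local votes, so coarse samples cannot wash out fine detail.

```python
import numpy as np

def accumulate_votes(samples, n_cells, cell_size):
    """samples: (position, footprint) pairs in world units; returns per-cell vote mass."""
    votes = np.zeros(n_cells)
    for pos, footprint in samples:
        radius = max(1, int(footprint / cell_size))  # vote region grows with footprint
        center = int(pos / cell_size)
        lo, hi = max(0, center - radius), min(n_cells, center + radius + 1)
        votes[lo:hi] += 1.0 / footprint  # small footprint -> stronger, more local vote
    return votes

# A detailed sample (footprint 0.5) and a coarse sample (footprint 4.0), both at x = 5:
votes = accumulate_votes([(5.0, 0.5), (5.0, 4.0)], n_cells=20, cell_size=1.0)
```

In this toy grid, the cells near x = 5 are dominated by the detailed sample's concentrated vote, while the coarse sample spreads a weak vote over a wide band; a full method would additionally adapt the grid resolution itself to the local footprints.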