
    Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization

    This paper presents a method for the 3D reconstruction of a piecewise-planar surface from range images, typically laser scans with millions of points. The reconstructed surface is a watertight polygonal mesh that conforms to observations at a given scale in the visible planar parts of the scene, and that is plausible in hidden parts. We formulate surface reconstruction as a discrete optimization problem based on detected and hypothesized planes. One of our major contributions, besides a treatment of data anisotropy and novel surface hypotheses, is a regularization of the reconstructed surface w.r.t. the length of edges and the number of corners. Compared to classical area-based regularization, it better captures surface complexity and is therefore better suited for man-made environments, such as buildings. To handle the underlying higher-order potentials, which are problematic for MRF optimizers, we formulate minimization as a sparse mixed-integer linear programming problem and obtain an approximate solution using a simple relaxation. Experiments show that it is fast and reaches near-optimal solutions.
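The relax-and-round idea behind the paper's approximate MILP solution can be illustrated on a toy binary selection problem (the costs, constraint, and variable names below are invented for illustration and are not the paper's actual energy terms):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem in the spirit of plane-hypothesis selection: choose
# hypotheses x_i in {0, 1} minimizing total cost c.x, subject to a
# coverage constraint sum(x) >= 2 (all values are made up).
c = np.array([1.0, 2.0, 3.0, 0.5])           # per-hypothesis costs
A_ub = np.array([[-1.0, -1.0, -1.0, -1.0]])  # -sum(x) <= -2, i.e. sum(x) >= 2
b_ub = np.array([-2.0])

# LP relaxation: replace x in {0, 1} by the box 0 <= x <= 1 ...
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4)

# ... then round the fractional optimum to recover an integer solution.
x = np.round(res.x).astype(int)
print(x.tolist())  # [1, 0, 0, 1]: the two cheapest hypotheses are kept
```

Here the relaxation happens to be tight, so rounding is exact; in general the rounded solution is only an approximation of the MILP optimum.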

    GASP : Geometric Association with Surface Patches

    A fundamental challenge to sensory processing tasks in perception and robotics is the problem of obtaining data associations across views. We present a robust solution for ascertaining potentially dense surface patch (superpixel) associations, requiring only range information. Our approach decomposes a view into regularized surface patches. We represent them as sequences expressing geometry invariantly over their superpixel neighborhoods, as uniquely consistent partial orderings. We match these representations through an optimal sequence comparison metric based on the Damerau-Levenshtein distance, enabling robust association with quadratic complexity (in contrast to hitherto employed joint matching formulations, which are NP-complete). The approach performs under wide baselines, heavy rotations, partial overlaps, significant occlusions and sensor noise. The technique does not require any priors, motion or otherwise, and does not make restrictive assumptions about scene structure or sensor movement. It does not require appearance, and is hence more widely applicable than appearance-reliant methods and invulnerable to related ambiguities such as textureless or aliased content. We present promising qualitative and quantitative results under diverse settings, along with comparisons against popular approaches based on range as well as RGB-D data.
    Comment: International Conference on 3D Vision, 201
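The quadratic-complexity metric the abstract refers to can be sketched as the restricted Damerau-Levenshtein (optimal string alignment) distance; the dynamic program below is a standard textbook implementation, not code from the paper:

```python
def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance:
    insertions, deletions, substitutions, and adjacent transpositions,
    computed in O(len(a) * len(b)) time and space."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                       # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

print(damerau_levenshtein("abcd", "acbd"))  # 1: one adjacent transposition
```

In the paper's setting the "characters" would be elements of the patch sequences rather than letters; the recurrence is identical.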

    Assessment of a photogrammetric approach for urban DSM extraction from tri-stereoscopic satellite imagery

    Built-up environments are extremely complex for 3D surface modelling purposes. The main distortions that hamper 3D reconstruction from 2D imagery are image dissimilarities, concealed areas, shadows, height discontinuities and discrepancies between smooth terrain and man-made features. A methodology is proposed to improve automatic photogrammetric extraction of an urban surface model from high-resolution satellite imagery, with emphasis on strategies that reduce the effects of the cited distortions and make image matching more robust. Instead of a standard stereoscopic approach, a digital surface model is derived from tri-stereoscopic satellite imagery. This is based on an extensive multi-image matching strategy that fully benefits from the geometric and radiometric information contained in the three images. The bundled triplet consists of an IKONOS along-track pair and an additional near-nadir IKONOS image. For the tri-stereoscopic study a densely built-up area, extending from the centre of Istanbul to the urban fringe, is selected. The accuracy of the model extracted from the IKONOS triplet, as well as that of the model extracted from only the along-track stereopair, is assessed by comparison with 3D check points and 3D building vector data.

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making process for maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment that represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time. Accordingly, their virtual representations need to be updated regularly and in a timely manner to allow for accurate analysis and the simulated results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. This thesis therefore proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is the implementation of an implicit regularization method that imposes data-driven building regularity on the noisy boundaries of roof planes in order to reconstruct 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models.
    The refinement process is conducted in the framework of MDL combined with HAT. Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, not only by reconstructing 3D rooftop models accurately, but also by updating the models using multi-sensor data.
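The MDL principle invoked above can be illustrated with a deliberately simple two-part code (the signal, the segment-mean model family, and the cost constants are all invented; the thesis's actual cost function is more elaborate):

```python
import numpy as np

# Toy MDL model selection: pick between a 1-segment and a 2-segment
# approximation of a step-like "roof boundary" signal. Description
# length trades parameter cost against residual coding cost.
n = 40
# Step from 0 to 5 with a small alternating perturbation (deterministic).
y = np.where(np.arange(n) < n // 2, 0.0, 5.0) + 0.1 * (-1.0) ** np.arange(n)

def description_length(y, k):
    # Model: split into k equal pieces, encode each piece by its mean.
    rss = sum(((p - p.mean()) ** 2).sum() for p in np.array_split(y, k))
    # Two-part code: (k/2) log n for the parameters, (n/2) log(RSS/n) for data.
    return 0.5 * k * np.log(len(y)) + 0.5 * len(y) * np.log(rss / len(y))

best_k = min((1, 2), key=lambda k: description_length(y, k))
print(best_k)  # 2: the step justifies paying for the extra parameter
```

A hypothesize-and-test loop in the spirit of HAT would generate candidate models (here, values of `k`) and keep the one with the smallest description length.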

    Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans

    We propose an unsupervised method for parsing large 3D scans of real-world scenes into interpretable parts. Our goal is to provide a practical tool for analyzing 3D scenes with unique characteristics in the context of aerial surveying and mapping, without relying on application-specific user annotations. Our approach is based on a probabilistic reconstruction model that decomposes an input 3D point cloud into a small set of learned prototypical shapes. Our model provides an interpretable reconstruction of complex scenes and leads to relevant instance and semantic segmentations. To demonstrate the usefulness of our results, we introduce a novel dataset of seven diverse aerial LiDAR scans. We show that our method outperforms state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. Our method offers a significant advantage over existing approaches, as it does not require any manual annotations, making it a practical and efficient tool for 3D scene analysis. Our code and dataset are available at https://imagine.enpc.fr/~loiseaur/learnable-earth-parse

    3D Reconstruction of Building Rooftop and Power Line Models in Right-of-Ways Using Airborne LiDAR Data

    The research objective of this thesis is to develop methods for reconstructing models of the building and power line (PL) objects of interest in a PL corridor area from airborne LiDAR data. The work is chiefly concerned with the model selection problem of deciding which model best represents a given data set. That is, the parametric relations and geometry of object shapes are unknown and are determined optimally by verifying hypothetical models. The proposed method therefore achieves high adaptability to the complex geometric forms of building and PL objects. For building modeling, an implicit geometric regularization method is proposed to rectify building outline vectors that are corrupted by noisy data. A cost function for the regularization process is designed based on Minimum Description Length (MDL) theory, which favours smaller deviations between model and observation as well as orthogonal and parallel relations between polylines. Next, a new approach, called Piecewise Model Growing (PMG), is proposed for 3D PL model reconstruction using a catenary curve model. It grows piecewise to capture all PL points of interest and thus produces a full 3D PL model. However, the proposed method is limited by the complexity of the PL scene, which causes PL modeling errors such as partial, under- and over-modeling errors. To correct these incomplete PL models, inner-span and across-span analyses are carried out, replacing erroneous PL segments with precise PL models. The inner-span analysis is performed based on MDL theory to correct under- and over-modeling errors. The across-span analysis is subsequently carried out to correct partial-modeling errors by finding the start and end positions of PLs, which denote the points of attachment (POA). As a result, this thesis addresses not only geometrically describing building and PL objects but also dealing with the noisy data that cause incomplete models.
    In practical terms, the results of building and PL modeling should be essential for effectively analyzing a PL scene and quickly alleviating potentially hazardous scenarios jeopardizing the PL system.
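The catenary curve underlying the PMG approach has a simple closed form; the sketch below evaluates one span with invented parameters (span length, shape parameter, and attachment heights are not taken from the thesis):

```python
import numpy as np

# Hypothetical parameters for a single conductor span (all values invented):
a, x0, c = 80.0, 50.0, 12.0  # shape parameter, lowest-point x (m), lowest height (m)

def catenary(x):
    # Conductor height: y(x) = c + a * (cosh((x - x0) / a) - 1),
    # the standard catenary with its minimum at (x0, c).
    return c + a * (np.cosh((x - x0) / a) - 1.0)

x = np.linspace(0.0, 100.0, 101)  # positions along the span (m)
y = catenary(x)

sag = y[0] - y.min()  # sag of the conductor below the attachment height
print(round(float(sag), 2))
```

Fitting such a model to LiDAR points would typically estimate `a`, `x0`, and `c` per segment by nonlinear least squares; here we only evaluate the curve.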

    3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction

    Reconstructing urban areas in 3D from satellite raster images has been a long-standing and challenging goal of both academic and industrial research. The rare methods that today achieve this objective at Level of Detail (LOD) 2 rely on procedural approaches based on geometry, and need stereo images and/or LiDAR data as input. We here propose a method for urban 3D reconstruction named KIBS (Keypoints Inference By Segmentation), which comprises two novel features: i) a fully deep-learning approach for the 3D detection of roof sections, and ii) only one single (non-orthogonal) satellite raster image as model input. This is achieved in two steps: i) a Mask R-CNN model performs a 2D segmentation of the buildings' roof sections, and, after blending the segmented pixels within the RGB satellite raster image, ii) another identical Mask R-CNN model infers the heights-to-ground of the roof sections' corners via panoptic segmentation, up to full 3D reconstruction of the buildings and city. We demonstrate the potential of the KIBS method by reconstructing different urban areas in a few minutes, with a Jaccard index for the 2D segmentation of individual roof sections of 88.55% and 75.21% on our two data sets respectively, and a mean height error of the correctly segmented pixels for the 3D reconstruction of 1.60 m and 2.06 m on our two data sets respectively, hence within the LOD2 precision range.
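The Jaccard index reported above is plain intersection-over-union of segmentation masks; a minimal version on toy masks (the 4x4 masks are made up for illustration) looks like this:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two boolean masks,
    the segmentation metric reported in the abstract."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Two toy 4x4 "roof section" masks overlapping on one row of 4 pixels:
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # rows 0-1
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # rows 1-2
print(jaccard(a, b))  # 4 / 12 = 0.333...
```

In practice the index is computed per predicted roof section against its ground-truth mask, then aggregated over the data set.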