7,395 research outputs found

    Automatic normal orientation in point clouds of building interiors

    Orienting surface normals correctly and consistently is a fundamental problem in geometry processing. Applications such as visualization, feature detection, and geometry reconstruction often rely on the availability of correctly oriented normals. Many existing approaches for automatically orienting normals on meshes or point clouds make strong assumptions about the input data or the topology of the underlying object that do not hold for real-world measurements of urban scenes. In contrast, our approach is specifically tailored to the challenging case of unstructured indoor point cloud scans of multi-story, multi-room buildings. We evaluate the correctness and speed of our approach on multiple real-world point cloud datasets.
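The consistent-orientation problem this abstract describes is often bootstrapped with the classic propagation heuristic of Hoppe et al. (1992): pick a seed point, then flip each neighbour's normal so it agrees with an already-oriented one while traversing a neighbourhood graph. A minimal sketch of that baseline idea (not the paper's building-interior method; the brute-force neighbour search, single seed, and assumption of a connected graph are simplifications):

```python
import numpy as np
from collections import deque

def orient_normals(points, normals, k=4):
    """Greedy normal orientation: propagate a consistent sign along a
    k-nearest-neighbour graph (after Hoppe et al. 1992). Assumes the
    kNN graph is connected; brute-force O(n^2) neighbour search."""
    n = len(points)
    # pairwise distances -> k nearest neighbours per point (excluding self)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]
    out = normals.copy()
    seen = np.zeros(n, dtype=bool)
    queue = deque([0])          # seed: first point keeps its orientation
    seen[0] = True
    while queue:
        i = queue.popleft()
        for j in nbrs[i]:
            if not seen[j]:
                # flip the neighbour if it disagrees with the current normal
                if np.dot(out[i], out[j]) < 0:
                    out[j] = -out[j]
                seen[j] = True
                queue.append(j)
    return out
```

Real indoor scans violate the smoothness assumption behind this heuristic at sharp creases and across rooms, which is precisely the gap the paper targets.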

    Can building footprint extraction from LiDAR be used productively in a topographic mapping context?

    Chapter 3. Light Detection and Ranging (LiDAR) is a quick and economical method for obtaining point-cloud data that can be used in many disciplines and a diversity of applications. LiDAR is a laser-based technique: it measures the two-way travel time of a laser pulse between the sensor and the ground and converts that time into distance (Shan & Sampath, 2005). National Mapping Agencies (NMAs) have traditionally relied on manual methods, such as photogrammetric capture, to collect topographic detail. These methods are labour-intensive, slow and hence costly. In addition, because photogrammetric capture is often time-consuming, by the time the capture has been carried out, the information source, i.e. the aerial photography, is out of date (Jensen and Cowen, 1999). Hence NMAs aspire to exploit methods of data capture that are efficient, quick and cost-effective while producing high-quality outputs, which is why the application of LiDAR within NMAs has been increasing. One application that has seen significant advances in the last decade is building footprint extraction (Shirowzhan and Lim, 2013). The buildings layer is a key reference dataset, and having current and complete building information is of paramount importance, as witnessed by government agencies and the private sector spending millions each year on aerial photography as a source of building footprint information (Jensen and Cowen, 1999). In the last decade, automatic extraction of building footprints from LiDAR data has improved sufficiently to reach an accuracy acceptable for urban planning (Shirowzhan and Lim, 2013).
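The two-way travel-time principle converts directly into a range measurement: the pulse covers the sensor-to-ground distance twice, so the one-way distance is half the travel time multiplied by the speed of light:

```python
# Range from two-way travel time: the pulse covers the sensor-to-ground
# distance twice, so R = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(two_way_time_s):
    """One-way sensor-to-target distance in metres."""
    return C * two_way_time_s / 2.0
```

For example, a pulse returning after 10 microseconds corresponds to a target roughly 1.5 km away, which is a typical flying height for airborne surveys.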

    A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching: the 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches, and fast 3D model rectification and updating can take advantage of it. Several results and performance evaluations on real and synthetic images show the feasibility and robustness of the proposed approach.
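Differential Evolution, the optimizer the abstract names, perturbs each population member with the scaled difference of two others, crosses the mutant with the current member, and keeps the trial only if it scores better. A minimal rand/1/bin sketch (the quadratic objective in the test is a toy stand-in for the paper's image-based dissimilarity; parameter values F=0.8, CR=0.9 are common defaults, not taken from the paper):

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal rand/1/bin Differential Evolution minimiser (toy sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))      # initial population
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # pick three distinct members other than i
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3,
                                 replace=False)
            # mutation: base vector plus scaled difference vector
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            # binomial crossover; force at least one mutant component
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            # greedy selection
            ft = f(trial)
            if ft < fX[i]:
                X[i], fX[i] = trial, ft
    best = np.argmin(fX)
    return X[best], fX[best]
```

Because DE needs only objective values, not gradients, it suits the paper's setting where the objective is a non-differentiable image dissimilarity score.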

    Mesh-based 3D Textured Urban Mapping

    In the era of autonomous driving, urban mapping is a core step in letting vehicles interact with the urban context. Successful mapping algorithms proposed in the last decade build the map from a single sensor's data. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even though most surveying vehicles for mapping are equipped with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data and the dense information and appearance carried by the images by estimating a visibility-consistent map upon the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms in Computer Graphics.
    Comment: accepted at IROS 201
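The texturing step ultimately relies on projecting mesh vertices into the calibrated camera images to sample colour. A pinhole-projection sketch (K, R, t are assumed known from calibration; occlusion/visibility handling, which the paper's framework addresses, is omitted here):

```python
import numpy as np

def project_vertex(K, R, t, X):
    """Project a 3D mesh vertex X into an image to sample its texture
    colour. K: 3x3 intrinsics, (R, t): world-to-camera pose. Returns
    pixel coordinates (u, v), or None if the point is behind the camera."""
    x_cam = R @ X + t          # transform into the camera frame
    if x_cam[2] <= 0:          # behind the camera: not visible
        return None
    u, v, w = K @ x_cam        # homogeneous image coordinates
    return np.array([u / w, v / w])
```

A full pipeline would additionally test each vertex against the mesh itself (e.g. by depth buffering) so that occluded faces are not textured with foreground colours.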

    Data Fusion in a Hierarchical Segmentation Context: The Case of Building Roof Description

    Automatic mapping of urban areas from aerial images is a challenging task for scientists and …

    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time consuming and costly, automated methods are required for efficient large area mapping. It is challenging to extract building information from remotely sensed data, considering the complex nature of urban environments and their associated intricate building structures. Most 2D evaluation methods are focused on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed. The proposed system consists of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective, in comparison with traditional accuracy assessment metrics. Building height is critical for building 3D structure extraction. As data sources for height estimation, digital surface models (DSMs) that are derived from stereo images using existing software typically provide low accuracy results in terms of rooftop elevations. Therefore, a new image matching method is proposed by adding building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated by the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. 
Therefore, a “building-ground elevation difference model” (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values at bare ground. Experiments with this approach report estimated building heights with a 1.5 m residual, out-performing conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Existing 3D evaluation methods do not capture the difference between 2D and 3D evaluation well; in particular, wall accuracy is traditionally ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. The resultant multi-criteria system provides an improved evaluation method for building reconstruction.
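The inside/outside elevation contrast behind the height estimate can be sketched directly on a DSM grid (a toy version; the thesis's EDM additionally models the trend of the elevation difference so it can recover bare-ground values despite occlusions and shadows):

```python
import numpy as np

def building_height(dsm, footprint_mask, ground_mask):
    """Estimate building height as the elevation contrast between the
    footprint interior and nearby bare ground. dsm: 2D elevation grid;
    footprint_mask: boolean mask of roof cells; ground_mask: boolean
    mask of bare-ground cells around the footprint. Medians make the
    estimate robust to a few outlier cells."""
    roof = np.median(dsm[footprint_mask])
    ground = np.median(dsm[ground_mask])
    return roof - ground
```

In practice the hard part is choosing `ground_mask`: pixels adjacent to a building are often contaminated by the building's own occlusion and shadow artefacts, which is exactly what motivates the EDM.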

    Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization

    This paper presents a method for the 3D reconstruction of a piecewise-planar surface from range images, typically laser scans with millions of points. The reconstructed surface is a watertight polygonal mesh that conforms to observations at a given scale in the visible planar parts of the scene, and that is plausible in hidden parts. We formulate surface reconstruction as a discrete optimization problem based on detected and hypothesized planes. One of our major contributions, besides a treatment of data anisotropy and novel surface hypotheses, is a regularization of the reconstructed surface w.r.t. the length of edges and the number of corners. Compared to classical area-based regularization, it better captures surface complexity and is therefore better suited for man-made environments, such as buildings. To handle the underlying higher-order potentials, which are problematic for MRF optimizers, we formulate minimization as a sparse mixed-integer linear programming problem and obtain an approximate solution using a simple relaxation. Experiments show that it is fast and reaches near-optimal solutions.
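The relax-and-round strategy behind the paper's mixed-integer formulation can be illustrated on a toy set-cover-style plane-selection instance, assuming SciPy's HiGHS-backed `linprog` is available (the variables, costs, and coverage matrix here are invented for illustration; the paper's actual potentials and relaxation are more involved, and in general rounding needs more care than a 0.5 threshold):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: pick a minimal set of candidate planes (binary variables)
# so that every data constraint is covered at least once.
A = np.array([[1, 1, 0],    # constraint 1 is covered by planes 0 and 1
              [0, 1, 1]])   # constraint 2 is covered by planes 1 and 2

# LP relaxation: drop integrality, keep x in [0, 1].
res = linprog(c=np.ones(3),               # minimise the number of planes used
              A_ub=-A, b_ub=-np.ones(2),  # A @ x >= 1  (coverage)
              bounds=[(0, 1)] * 3, method="highs")

# Simple rounding of the fractional solution back to a binary one.
x = (res.x >= 0.5).astype(int)
```

Here the relaxation already has an integral optimum (select only the middle plane), so rounding is exact; when the LP returns genuinely fractional values, the rounded solution must be re-checked against the constraints.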

    Automatic Roof Plane Detection and Analysis in Airborne Lidar Point Clouds for Solar Potential Assessment

    A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature for decomposing the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification; it results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, a solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result, the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method fully automatically detected 809 out of 1,071 roof planes for which the arithmetic mean of the annual incoming solar radiation is more than 700 kWh/m².
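The per-point horizon computation used for shadowing reduces, along each azimuth, to the maximum elevation angle of any terrain cell between the point and the edge of the data. A single-direction DSM-grid sketch (real implementations sweep many azimuths and work on the raw point cloud, as the abstract describes):

```python
import numpy as np

def horizon_angle(dsm, cell_size, row, col, drow, dcol):
    """Elevation angle (degrees) of the horizon seen from cell (row, col)
    along one grid direction (drow, dcol). Direct sun from this azimuth
    only reaches the point when the solar elevation exceeds this angle."""
    z0 = dsm[row, col]
    best = 0.0
    r, c, steps = row + drow, col + dcol, 1
    while 0 <= r < dsm.shape[0] and 0 <= c < dsm.shape[1]:
        dist = steps * cell_size * np.hypot(drow, dcol)   # horizontal distance
        best = max(best, np.degrees(np.arctan2(dsm[r, c] - z0, dist)))
        r, c, steps = r + drow, c + dcol, steps + 1
    return best
```

Comparing the sun's elevation at a given time against the horizon angle for the sun's azimuth then decides whether the point receives direct radiation or only the diffuse component.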