
    The Application of LiDAR to Assessment of Rooftop Solar Photovoltaic Deployment Potential in a Municipal District Unit

    A methodology is provided for the application of Light Detection and Ranging (LiDAR) to automated solar photovoltaic (PV) deployment analysis on the regional scale. Challenges in urban information extraction and management for solar PV deployment assessment are identified and quantitative solutions are offered. This paper provides the following contributions: (i) a methodology that is consistent with recommendations from the existing literature advocating the integration of cross-disciplinary competences in remote sensing (RS), GIS, computer vision and urban environmental studies; (ii) a robust methodology that can work with low-resolution, incomplete data and reconstruct vegetation and buildings separately, but concurrently; (iii) recommendations for future generations of software. A case study is presented as an example of the methodology. Experiences from the case study, such as the trade-off between time consumption and data quality, are discussed to highlight the need for connectivity between demographic information, electrical engineering schemes and GIS, and a typical fraction of solar-useful roofs extracted per method is reported. Finally, conclusions are developed into a final methodology for extracting the most useful information from the lowest-resolution and least comprehensive data to provide solar electric assessments over large areas, which can be adapted anywhere in the world.
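    A first-order yield calculation can make the abstract's output concrete. The sketch below uses the generic PV yield relation E = A × η × H × PR rather than the paper's own workflow; the usable-area fraction, module efficiency, annual irradiation and performance ratio are illustrative assumptions.

```python
# Minimal sketch of a rooftop PV yield estimate per roof plane.
# E = A * eta * H * PR is the standard first-order yield model, not the
# paper's specific methodology; all numeric values are illustrative.

def annual_pv_yield_kwh(roof_area_m2: float,
                        usable_fraction: float = 0.7,    # assumed share of the roof suitable for panels
                        panel_efficiency: float = 0.20,  # assumed module efficiency
                        annual_irradiation_kwh_m2: float = 1200.0,  # e.g. from LiDAR solar analysis
                        performance_ratio: float = 0.8) -> float:
    """First-order annual energy estimate for one roof plane."""
    usable_area = roof_area_m2 * usable_fraction
    return usable_area * panel_efficiency * annual_irradiation_kwh_m2 * performance_ratio

if __name__ == "__main__":
    # Example: a 120 m2 roof plane receiving 1,200 kWh/m2 per year
    print(f"{annual_pv_yield_kwh(120.0):.0f} kWh/year")
```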

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making process for maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time. Accordingly, their virtual representations need to be updated regularly and in a timely manner to allow for accurate analysis and the simulated results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. Therefore, this thesis proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is on the implementation of an implicit regularization method that imposes data-driven building regularity on the noisy boundaries of roof planes for reconstructing 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine modeling errors frequently observed in LiDAR-driven building models. The refinement process is conducted in the framework of MDL combined with HAT. Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, not only by reconstructing 3D rooftop models accurately, but also by updating the models using multi-sensor data.
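    As a rough illustration of the hypothesize-and-test loop described above, the sketch below scores candidate roof-plane hypotheses with a toy Minimum Description Length criterion (a model-complexity term plus a residual-fit term). The cost terms, weight and function names are assumptions for illustration, not the thesis's actual formulation.

```python
# Minimal sketch of MDL-based hypothesize-and-test model selection:
# among candidate roof-plane models, pick the one minimizing
# model complexity + data-fit cost. Terms and weights are illustrative.
import numpy as np

def mdl_score(points_xyz: np.ndarray, plane: tuple, n_vertices: int,
              lambda_model: float = 5.0) -> float:
    """Description length = model term + residual (data) term."""
    a, b, c, d = plane                      # plane ax + by + cz + d = 0 with unit normal
    residuals = points_xyz @ np.array([a, b, c]) + d
    data_term = 0.5 * np.log2(1.0 + np.sum(residuals ** 2))   # crude encoding cost of residuals
    model_term = lambda_model * n_vertices                     # more vertices = longer description
    return model_term + data_term

def hypothesize_and_test(points_xyz, candidates):
    """candidates: list of (plane_params, n_vertices); return the lowest-MDL hypothesis."""
    return min(candidates, key=lambda c: mdl_score(points_xyz, c[0], c[1]))
```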

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. (Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine)

    Investigation on roof segmentation for 3D building reconstruction from aerial LIDAR point clouds

    Three-dimensional (3D) reconstruction techniques are increasingly used to obtain 3D representations of buildings due to the broad range of applications for 3D city models related to sustainability, efficiency and resilience (e.g., energy demand estimation, estimation of the propagation of noise in an urban environment, routing and accessibility, flood or seismic damage assessment). With advancements in airborne laser scanning (ALS), 3D modeling of urban topography has increased its potential to automate the extraction of the characteristics of individual buildings. In 3D building modeling from light detection and ranging (LIDAR) point clouds, one major challenge is how to efficiently and accurately segment building regions and extract rooftop features. This study presents an investigation and critical comparison of two fully automatic roof segmentation approaches for 3D building reconstruction. In particular, the paper presents and compares a cluster-based roof segmentation approach that uses (a) a fuzzy c-means clustering method refined through density clustering and connectivity analysis, and (b) a region growing segmentation approach combined with the random sample consensus (RANSAC) method. In addition, a robust 2.5D dual contouring method is utilized to deliver watertight 3D building models from the results of each proposed segmentation approach. The benchmark LIDAR point clouds and related reference data (generated by stereo plotting) of 58 buildings over downtown Toronto (Canada), made available to the scientific community by the International Society for Photogrammetry and Remote Sensing (ISPRS), have been used to evaluate the quality of the two proposed segmentation approaches by analysing the geometric accuracy of the roof polygons. Moreover, the results of both approaches have been evaluated under different operating conditions against the real measurements (based on archive documentation and celerimetric surveys realized by a total station system) of a complex building located in the historical center of Matera (UNESCO world heritage site in southern Italy) that has been manually reconstructed in 3D via a traditional Building Information Modeling (BIM) technique. The results demonstrate that both methods reach good performance metrics in terms of geometric accuracy. However, approach (b), based on region growing segmentation, exhibited slightly better performance but required greater computational time than the clustering-based approach.
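    To make approach (b) concrete, the following sketch shows a plain NumPy RANSAC plane fit of the kind used to extract roof planes from a point cloud; the iteration count and distance threshold are illustrative assumptions, and the region-growing seed selection is omitted.

```python
# Minimal RANSAC plane-fitting sketch on a LiDAR point cloud (pure NumPy),
# illustrating the plane-consensus step; thresholds are not the paper's values.
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 500, dist_thresh: float = 0.15,
                 rng=np.random.default_rng(0)):
    """Return (unit normal n, offset d, inlier mask) for the best plane n.x + d = 0."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```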

    Automatic Roof Plane Detection and Analysis in Airborne Lidar Point Clouds for Solar Potential Assessment

    A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature for decomposing the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification. It results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result, the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method fully automatically detected 809 of 1,071 roof planes for which the arithmetic mean of the annual incoming solar radiation exceeds 700 kWh/m².
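    The per-point normal vectors that drive the planar segmentation can be estimated by a local PCA, as in the minimal sketch below; the neighbourhood size k and the upward orientation rule are illustrative assumptions rather than the paper's parameters.

```python
# Minimal sketch of per-point normal estimation by PCA over k nearest neighbours,
# the kind of feature used to group points into planar roof patches.
import numpy as np
from scipy.spatial import cKDTree

def point_normals(points: np.ndarray, k: int = 12) -> np.ndarray:
    """Estimate a unit normal for every 3D point from its k-neighbourhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
        # The right-singular vector with the smallest singular value is the local normal.
        _, _, vt = np.linalg.svd(nbr_pts, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n   # orient normals upward for roofs
    return normals
```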

    Object-based Urban Building Footprint Extraction and 3D Building Reconstruction from Airborne LiDAR Data

    Buildings play an essential role in urban infrastructure, urban planning, climate studies and disaster management. Precise knowledge of buildings not only serves as a primary source for interpreting complex urban characteristics, but also provides decision makers with more realistic and multidimensional scenarios for urban management. In this thesis, 2D extraction and 3D reconstruction methods are proposed to map and visualize urban buildings. Chapter 2 presents an object-based method for extracting building footprints using LiDAR-derived NDTI (Normalized Difference Tree Index) and intensity data. An overall accuracy of 94.0% and a commission error of 6.3% are achieved in building extraction, with a Kappa of 0.84. Chapter 3 presents a GIS-based 3D building reconstruction method. The results indicate that the method is effective for generating 3D building models. A completeness of 91.4% is achieved for roof plane identification, and the overall accuracy of the flat and pitched roof plane classification is 88.81%, with user's accuracies of 97.75% for flat roof planes and 100% for pitched roof planes.
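    The reported figures (overall accuracy, commission error, Kappa) all derive from a confusion matrix; the sketch below shows the standard calculation on an illustrative 2×2 matrix, not the thesis's actual counts.

```python
# Minimal sketch of overall accuracy, commission error and Cohen's Kappa
# from a 2x2 building / non-building confusion matrix; values are illustrative.
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """cm[i, j] = samples of true class i labelled as class j (class 0 = building)."""
    total = cm.sum()
    overall = np.trace(cm) / total
    commission_building = cm[1, 0] / cm[:, 0].sum()              # non-building labelled building
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (overall - p_e) / (1.0 - p_e)
    return overall, commission_building, kappa

if __name__ == "__main__":
    cm = np.array([[940,  60],   # true building: correct vs. missed
                   [ 63, 937]])  # true non-building: falsely labelled building vs. correct
    oa, ce, k = accuracy_metrics(cm)
    print(f"overall {oa:.3f}, commission {ce:.3f}, kappa {k:.3f}")
```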

    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time consuming and costly, automated methods are required for efficient large-area mapping. It is challenging to extract building information from remotely sensed data, considering the complex nature of urban environments and their associated intricate building structures. Most 2D evaluation methods focus on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed, consisting of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective than traditional accuracy assessment metrics. Building height is critical for 3D building structure extraction. As data sources for height estimation, digital surface models (DSMs) derived from stereo images using existing software typically provide low-accuracy results in terms of rooftop elevations. Therefore, a new image matching method is proposed that adds building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated from the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. Therefore, a "building-ground elevation difference model" (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values at bare ground. Experiments with this novel approach report estimated building heights with a 1.5 m residual, which outperforms conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Current 3D evaluation methods do not capture the difference between 2D and 3D evaluation well; traditionally, wall accuracy is ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. The resultant multi-criteria system provides an improved evaluation method for building reconstruction.
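    The "elevation contrast inside and outside a building footprint" mentioned above can be sketched as follows; this is the simple baseline that the EDM refines, not the EDM itself, and the buffer width is an illustrative assumption.

```python
# Minimal sketch of footprint-constrained height estimation from a DSM:
# roof elevation from cells inside the footprint, ground elevation from a
# surrounding ring, height = difference of robust medians.
import numpy as np
from scipy.ndimage import binary_dilation

def building_height(dsm: np.ndarray, footprint: np.ndarray, ring_width: int = 5) -> float:
    """dsm: 2D elevation grid; footprint: boolean mask of the building cells."""
    ring = binary_dilation(footprint, iterations=ring_width) & ~footprint
    roof_elev = np.nanmedian(dsm[footprint])     # robust rooftop elevation
    ground_elev = np.nanmedian(dsm[ring])        # robust bare-ground elevation nearby
    return float(roof_elev - ground_elev)
```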