68 research outputs found

    Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery

    In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support the decision-making processes involved in maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time, so their virtual representations need to be updated regularly and in a timely manner to allow for the accurate analyses and simulation results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. However, developing a universal intelligent machine enabling continuous modeling remains a challenging task. This thesis therefore proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is the implementation of an implicit regularization method that imposes a data-driven building regularity on the noisy boundaries of roof planes to reconstruct 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching results. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine the modeling errors frequently observed in LiDAR-driven building models. 
The refinement process is conducted in the framework of MDL combined with HAT. Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, not only by reconstructing 3D rooftop models accurately, but also by updating the models using multi-sensor data
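The MDL-based hypothesize-and-test selection driven by MCMC with simulated annealing can be illustrated with a minimal sketch. The candidate models, the `mdl_cost` weighting, and all parameters below are illustrative assumptions, not values from the thesis:

```python
import math
import random

def simulated_annealing(candidates, cost, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
    """Pick the candidate hypothesis with (near-)minimal cost via
    Metropolis sampling with a geometric cooling schedule."""
    rng = random.Random(seed)
    current = rng.choice(candidates)
    best = current
    t = t0
    for _ in range(n_iter):
        proposal = rng.choice(candidates)       # hypothesize a new model
        delta = cost(proposal) - cost(current)  # test it against the current one
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = proposal
        if cost(current) < cost(best):
            best = current
        t *= cooling                            # cool the temperature
    return best

# Toy MDL-style cost: residual fit error plus a penalty per model parameter,
# so over-complex rooftop hypotheses are rejected even if they fit slightly better.
def mdl_cost(model):
    fit_error, n_params = model
    return fit_error + 2.0 * n_params

models = [(10.0, 1), (3.0, 2), (2.9, 6), (1.0, 12)]
print(simulated_annealing(models, mdl_cost))  # the 2-parameter model wins
```

The parameter penalty is what makes this "description length"-like: a model is only accepted if its extra parameters buy enough reduction in residual error.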

    Relating Multimodal Imagery Data in 3D

    This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. Therefore, effects such as occlusion, parallax, shadowing, and terrain/building elevation can often be mitigated with even a modest amount of 3D target modeling. Additionally, the imaged scene may appear radically different based on the sensed modality of interest; this is evident from the differences in visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. In order to accomplish this feat, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LIDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that utilize the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often neglected final phase of mapping localized image registration results back to the world coordinate system model for final data archival is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration. 
If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other datasets for data fusion and analysis
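Orienting the 3D model to the same viewing perspective as a captured image amounts to projecting model vertices through a camera model. A minimal pinhole-projection sketch follows; the intrinsics `K`, pose `R`, `t`, and the model vertex are all made-up values for illustration, not from the thesis:

```python
import numpy as np

def project_points(X_world, K, R, t):
    """Project Nx3 world-frame model vertices into a captured image using
    a pinhole camera: x = K (R X + t), then normalize by depth."""
    X_cam = R @ X_world.T + t.reshape(3, 1)     # world -> camera frame
    x = K @ X_cam                               # camera -> homogeneous pixels
    return (x[:2] / x[2]).T                     # perspective divide

K = np.array([[1000.0, 0.0, 320.0],             # focal length 1000 px,
              [0.0, 1000.0, 240.0],             # principal point (320, 240)
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                   # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])
corner = np.array([[2.0, 1.0, 10.0]])           # one model vertex, 10 m ahead
print(project_points(corner, K, R, t))          # -> [[520. 340.]]
```

Once the model is rendered from this estimated pose, image-to-model correspondences can be established in 2D and the captured image draped back onto the geometry as a texture map.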

    IM2ELEVATION: Building Height Estimation from Single-View Aerial Imagery

    Estimation of the Digital Surface Model (DSM) and building heights from single-view aerial imagery is a challenging, inherently ill-posed problem that we address in this paper by resorting to machine learning. We propose an end-to-end trainable convolutional-deconvolutional deep neural network architecture that learns a mapping from a single aerial image to a DSM for analysis of urban scenes. We perform multisensor fusion of aerial optical and aerial light detection and ranging (Lidar) data to prepare the training data for our pipeline. The dataset quality is key to successful estimation performance. Typically, a substantial number of misregistration artifacts are present due to georeferencing/projection errors, sensor calibration inaccuracies, and scene changes between acquisitions. To overcome these issues, we propose a registration procedure that improves Lidar and optical data alignment using Mutual Information, followed by a Hough transform-based validation step to adjust misregistered image patches. We validate our building height estimation model on a high-resolution dataset captured over central Dublin, Ireland: a Lidar point cloud from 2015 and optical aerial images from 2017. These data allow us to validate the proposed registration procedure and perform 3D model reconstruction from single-view aerial imagery. We also report state-of-the-art performance of our proposed architecture on several popular DSM estimation datasets
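The Mutual Information criterion used for Lidar/optical alignment can be estimated from a joint intensity histogram of the two co-registered patches. A minimal sketch (the bin count and the synthetic test images are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two same-size image patches, estimated
    from their joint intensity histogram: sum p(x,y) log(p(x,y)/(p(x)p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=1)                # a misregistered copy

# A perfectly aligned pair shares more information than a shifted pair,
# which is why maximizing MI drives the registration search.
assert mutual_information(img, img) > mutual_information(img, shifted)
```

Because MI compares distributions rather than raw intensities, it remains usable across modalities (Lidar intensity/height vs. optical reflectance) where correlation-based measures fail.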

    Semi-Automated DIRSIG scene modeling from 3D lidar and passive imagery

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. 
The data used include multiple-return point information provided by an Optech lidar linescanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy
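The ground/object separation exploited from the lidar data can be sketched with a simple grid-minimum filter: each point is compared against the lowest elevation in its grid cell. The cell size and height threshold below are illustrative assumptions, not the actual algorithm used in this work:

```python
import numpy as np

def ground_and_objects(points, cell=1.0, height_thresh=0.5):
    """Split an Nx3 lidar point cloud into ground and above-ground points
    by comparing each point to the minimum elevation in its grid cell."""
    ij = np.floor(points[:, :2] / cell).astype(int)   # cell index per point
    cell_min = {}
    for k, key in enumerate(map(tuple, ij)):
        z = points[k, 2]
        cell_min[key] = min(cell_min.get(key, z), z)  # lowest return per cell
    ground_z = np.array([cell_min[tuple(c)] for c in ij])
    is_object = points[:, 2] - ground_z > height_thresh
    return points[~is_object], points[is_object]

pts = np.array([[0.2, 0.3, 100.0],    # ground return
                [0.4, 0.6, 108.0],    # tree canopy in the same cell
                [3.1, 2.2, 100.2]])   # ground in another cell
ground, objects = ground_and_objects(pts)
print(len(ground), len(objects))      # -> 2 1
```

Points flagged as above-ground would then seed the tree-location and building-parameter estimates that the frame-array imagery subsequently refines.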

    AUTOMATIC EXTRACTION OF CONTROL POINTS FROM 3D LIDAR MOBILE MAPPING AND UAV IMAGERY FOR AERIAL TRIANGULATION

    Installing targets and measuring them as ground control points (GCPs) are time-consuming and cost-inefficient tasks in a UAV photogrammetry project. This research aims to automatically extract GCPs from 3D LiDAR mobile mapping system (L-MMS) measurements and UAV imagery to perform aerial triangulation in a UAV photogrammetric network. The L-MMS acquires 3D point clouds of an urban environment, including the floors and facades of buildings, with an accuracy of a few centimetres. Integrating UAV imagery as complementary information reduces the acquisition time of the measurements and increases the level of automation in a production line, yielding higher-quality measurements and more diverse products. This research hypothesises that the spatial accuracy of the L-MMS is higher than that of the UAV photogrammetric point clouds. Tie points are extracted from the UAV imagery with the well-known SIFT method and then matched. The structure from motion (SfM) algorithm is applied to estimate the 3D object coordinates of the matched tie points. Rigid registration is carried out between the point clouds obtained from the L-MMS and the SfM. For each tie point extracted from the SfM point cloud, the neighbouring points are selected from the L-MMS point cloud, a plane is fitted to them, and the tie point is projected onto that plane; this yields the LiDAR-based control points (LCPs). The re-projection error of the analyses carried out on a test dataset of the Glian area in Iran shows an accuracy of half a pixel, corresponding to a range accuracy of a few centimetres. Finally, a significant speed-up in survey operations is achieved, along with improved spatial accuracy of the extracted LCPs
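The LCP construction described above — fit a plane to the neighbouring L-MMS points, then project the SfM tie point onto it — can be sketched as follows. The facade patch and tie point are toy values for illustration:

```python
import numpy as np

def fit_plane(neighbours):
    """Total-least-squares plane through neighbouring L-MMS points:
    the centroid plus the singular vector of the smallest singular value."""
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    return centroid, vt[-1]                     # point on plane, unit normal

def project_onto_plane(p, centroid, normal):
    """Snap an SfM tie point onto the fitted plane, yielding a
    LiDAR-based control point (LCP)."""
    return p - np.dot(p - centroid, normal) * normal

# Four L-MMS returns on a flat z = 0 patch, and a tie point 0.3 m off-plane
# (SfM scale/drift error that the LiDAR plane corrects).
neighbours = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.0]])
c, n = fit_plane(neighbours)
lcp = project_onto_plane(np.array([0.5, 0.5, 0.3]), c, n)
print(lcp)  # -> [0.5 0.5 0. ]
```

Using the plane rather than the single nearest LiDAR point averages out per-point noise, which is why the LCP accuracy can approach the few-centimetre accuracy of the L-MMS itself.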

    An investigation into semi-automated 3D city modelling

    Creating three-dimensional digital representations of urban areas, also known as 3D city modelling, is essential in many applications, such as urban planning, radio frequency signal propagation, flight simulation and vehicle navigation, which are of increasing importance in modern urban centres. The main aim of the thesis is the development of a semi-automated, innovative workflow for creating 3D city models using aerial photographs and LiDAR data collected from various airborne sensors. The complexity of this aim necessitates the development of an efficient and reliable way to progress from manually intensive operations to an increased level of automation. The proposed methodology exploits the combination of different datasets, also known as data fusion, to achieve reliable results in different study areas. Data fusion techniques are used to combine linear features, extracted from aerial photographs, with either LiDAR data or any other source available, including Very Dense Digital Surface Models (VDDSMs). The research proposes a method which employs a semi-automated technique for 3D city modelling by fusing LiDAR data, if available, or VDDSMs with 3D linear features extracted from stereo pairs of photographs. Building detection and the generation of the building footprint are performed with a plane-fitting algorithm applied to the LiDAR data or VDDSMs, using conditions based on the slope of the roofs and the minimum size of the buildings. The initial building footprint is subsequently generalized using a simplification algorithm that enhances the orthogonality between the individual linear segments within a defined tolerance. The final refinement of the building outline is performed for each linear segment using the filtered stereo-matched points with a least-squares estimation. 
The digital reconstruction of the roof shapes is performed by implementing a least-squares plane-fitting algorithm on the classified VDDSMs, which is restricted by the building outlines, the minimum size of the planes and the maximum height tolerance between adjacent 3D points. Subsequently, neighbouring planes are merged using Boolean operations to generate solid features. The results indicate very detailed building models. Various roof details such as dormers and chimneys are successfully reconstructed in most cases
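The orthogonality-enhancing simplification of the footprint can be sketched as snapping each segment's orientation to the dominant building direction, or its perpendicular, when it lies within a defined tolerance. This is an illustrative stand-in for the thesis's simplification algorithm, with made-up angles and tolerance:

```python
import numpy as np

def regularize_angles(angles_deg, dominant_deg, tol_deg=15.0):
    """Snap footprint segment orientations (degrees, mod 180) to the
    dominant building direction or its perpendicular within a tolerance."""
    out = []
    for a in angles_deg:
        for target in (dominant_deg, dominant_deg + 90.0):
            d = (a - target + 90.0) % 180.0 - 90.0   # signed angular difference
            if abs(d) <= tol_deg:
                a = a - d                             # snap onto the target
                break
        out.append(a % 180.0)
    return out

# Segments roughly parallel/perpendicular to a 30-degree dominant edge:
print(regularize_angles([28.0, 33.5, 118.0, 75.0], dominant_deg=30.0))
# -> [30.0, 30.0, 120.0, 75.0]  (75 degrees is outside tolerance, kept as-is)
```

Keeping genuinely oblique segments unchanged (the 75-degree case) is what distinguishes tolerance-based regularization from forcing every building into a rectangle.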

    Forest structure from terrestrial laser scanning – in support of remote sensing calibration/validation and operational inventory

    Forests are an important part of the natural ecosystem, providing resources such as timber and fuel, performing services such as energy exchange and carbon storage, and presenting risks, such as fire damage and invasive species impacts. Improved characterization of forest structural attributes is desirable, as it could improve our understanding and management of these natural resources. However, the traditional, systematic collection of forest information – dubbed “forest inventory” – is time-consuming, expensive, and coarse when compared to novel 3D measurement technologies. Remote sensing estimates, on the other hand, provide synoptic coverage, but often fail to capture the fine-scale structural variation of the forest environment. Terrestrial laser scanning (TLS) has demonstrated the potential to address these limitations, but its operational use has remained limited by the trade-off between its performance characteristics and the budgetary constraints of many end-users. To address this gap, my dissertation advanced affordable mobile laser scanning capabilities for operational forest structure assessment. We developed geometric reconstruction of forest structure from rapid-scan, low-resolution point cloud data, providing for automatic extraction of standard forest inventory metrics. To augment these results over larger areas, we designed a view-invariant feature descriptor to enable marker-free registration of TLS data pairs, without knowledge of the initial sensor pose. Finally, a graph-theory framework was integrated to perform multi-view registration between a network of disconnected scans, which provided improved assessment of forest inventory variables. 
This work addresses a major limitation related to the inability of TLS to assess forest structure at an operational scale, and may facilitate improved understanding of the phenomenology of airborne sensing systems by providing fine-scale reference data with which to interpret the active or passive electromagnetic radiation interactions with forest structure. Outputs are being utilized to provide antecedent science data for NASA’s HyspIRI mission and to support the National Ecological Observatory Network’s (NEON) long-term environmental monitoring initiatives
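The graph-theory multi-view registration step — expressing every scan in one reference frame by composing pairwise transforms along the scan connectivity graph — can be sketched with a BFS spanning tree. The 4x4 pairwise transforms below are toy translations, not real scan registrations:

```python
import numpy as np
from collections import deque

def chain_transforms(pairwise, n_scans, ref=0):
    """Compose pairwise scan-to-scan transforms along a BFS tree of the
    connectivity graph so every scan is expressed in the reference frame.
    pairwise[(i, j)] maps scan j's coordinates into scan i's frame."""
    adj = {i: [] for i in range(n_scans)}
    for (i, j), T in pairwise.items():
        adj[i].append((j, T))
        adj[j].append((i, np.linalg.inv(T)))     # edges are traversable both ways
    global_T = {ref: np.eye(4)}
    queue = deque([ref])
    while queue:
        i = queue.popleft()
        for j, T_ij in adj[i]:
            if j not in global_T:
                global_T[j] = global_T[i] @ T_ij  # chain along the tree
                queue.append(j)
    return global_T

def translation(tx, ty, tz):
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]; return T

# Scans 0-1 and 1-2 overlap; 0-2 was never matched directly,
# yet scan 2 still reaches the reference frame through scan 1.
pairwise = {(0, 1): translation(5, 0, 0), (1, 2): translation(0, 3, 0)}
T = chain_transforms(pairwise, 3)
print(T[2][:3, 3])  # -> [5. 3. 0.]
```

In practice the pairwise edges would come from the marker-free descriptor matching, and a spanning tree (or pose-graph optimization over all edges) distributes the registration across the whole disconnected network of scans.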