
    Can building footprint extraction from LiDAR be used productively in a topographic mapping context?

    Light Detection and Ranging (LiDAR) is a quick and economical method for obtaining point-cloud data that can be used in various disciplines and a diversity of applications. LiDAR is a laser-based technique: it measures the two-way travel time of laser pulses and, from that, the distance between the laser sensor and the ground (Shan & Sampath, 2005). National Mapping Agencies (NMAs) have traditionally relied on manual methods, such as photogrammetric capture, to collect topographic detail. These methods are laborious, time-consuming and hence costly. In addition, because photogrammetric capture is often slow, by the time the capture has been carried out, the information source, that is the aerial photography, is out of date (Jensen & Cowen, 1999). NMAs therefore aspire to exploit methods of data capture that are efficient, quick and cost-effective while producing high-quality outputs, which is why the application of LiDAR within NMAs has been increasing. One application that has seen significant advances in the last decade is building footprint extraction (Shirowzhan & Lim, 2013). The buildings layer is a key reference dataset, and having current and complete building information is of paramount importance, as can be witnessed in government agencies and the private sector spending millions each year on aerial photography as a source for collecting building footprint information (Jensen & Cowen, 1999). In the last decade, automatic extraction of building footprints from LiDAR data has improved sufficiently to be of an acceptable accuracy for urban planning (Shirowzhan & Lim, 2013).
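    The ranging principle just described reduces to a one-line computation: distance is half the round-trip travel time multiplied by the speed of light. The following minimal Python sketch is illustrative only, not code from the cited sources.

```python
# Illustrative sketch of the lidar ranging principle: range is half the
# two-way travel time of the pulse multiplied by the speed of light.
C = 299_792_458.0  # speed of light in a vacuum (m/s)

def lidar_range(two_way_travel_time_s: float) -> float:
    """Return sensor-to-target distance in metres for a given
    round-trip pulse travel time in seconds."""
    return C * two_way_travel_time_s / 2.0

# Example: a pulse returning after ~6.67 microseconds corresponds to ~1000 m.
print(lidar_range(6.67e-6))  # ~999.8 m
```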

    Semi-Automated DIRSIG scene modeling from 3D lidar and passive imagery

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree locations and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. The data used include multiple-return point information provided by an Optech lidar line-scanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy.
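    As a rough illustration of the projective-geometry refinement step mentioned above, the sketch below projects a world point into a frame image with a 3x4 camera matrix. The calibration and pose values are hypothetical, not those of the WASP or WASP-lite sensors.

```python
import numpy as np

# Minimal sketch of the projective operation: map a world point X (e.g.,
# a lidar-derived building corner) into pixel coordinates with a 3x4
# camera matrix P = K [R | t]. All values here are illustrative.
K = np.array([[2000.0,    0.0, 640.0],   # focal length / principal point (px)
              [   0.0, 2000.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                          # camera rotation (world -> camera)
t = np.array([[0.0], [0.0], [100.0]])  # camera 100 m above the scene
P = K @ np.hstack([R, t])

X = np.array([5.0, -3.0, 0.0, 1.0])    # homogeneous world point (m)
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]        # perspective divide -> pixels
print(u, v)                            # (740.0, 420.0) in this example
```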

    Relating Multimodal Imagery Data in 3D

    This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. Therefore, effects such as occlusion, parallax, shadowing, and terrain/building elevation can often be mitigated with even a modest amount of 3D target modeling. Additionally, the imaged scene may appear radically different based on the sensed modality of interest; this is evident from the differences in visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. In order to accomplish this feat, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LIDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that utilize the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often neglected final phase, mapping localized image registration results back to the world coordinate system model for final data archival, is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration. If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other datasets for data fusion and analysis.
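    A minimal sketch of the final archival step, mapping results expressed in a local camera frame back to the world coordinate system with the inverse rigid transform; the pose values are hypothetical, not taken from the thesis.

```python
import numpy as np

# Once an image is registered in a local camera frame, its results can be
# mapped back to the world coordinate system by inverting the rigid
# transform x_cam = R @ x_world + t. Values below are hypothetical.
def camera_to_world(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Invert x_cam = R @ x_world + t for an array of Nx3 points."""
    return (R.T @ (points_cam - t).T).T

R = np.eye(3)                    # camera orientation (world -> camera)
t = np.array([10.0, 20.0, 5.0])  # camera position offset (m)
pts_cam = np.array([[0.0, 0.0, 50.0]])
print(camera_to_world(pts_cam, R, t))  # -> [[-10., -20., 45.]]
```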

    3D Point Clouds in Urban Planning: Developing and Releasing High-end Methodologies based on LiDAR and UAV Data for the Extraction of Building Parameters

    Geographical data play a major role in urban plan development, both as a planning instrument and as a normative document that legally defines public obligations and binds individuals, in a given period of time, regarding the urban aspect of a city or an urban conglomerate, and establishes standards for land use and land cover. The plan is associated with a process, called the planning process, which consists of a set of dynamic and adaptive phases that begin with its development and end with the evaluation of any discrepancies between the provisions of the original document and the accomplished goals and objectives. The plan, the process, and the planning praxis require up-to-date geographical data at all times, both for monitoring actions and for the evaluation phases. One of the crucial aspects of the plan is the quantification of the existing building volume. Another fundamental aspect is managing that volume: both the existing volume and any additional volumes. Indeed, the building volume in built areas has been one of the most sensitive topics in the densification of existing urban spaces and the design of new growing urban areas. Considering the existing theoretical framework, the central topic of this thesis focuses on 3D point cloud modelling obtained from LiDAR and UAV technologies, employed in the development of a plan and in the urban planning process, namely regarding two specific building parameters: building height and volume. The exploration of the central topic of this thesis is twofold: implementation and usability. The implementation level has two goals: i) demonstrating the relevance and pertinence of the extraction, measurement, and 3D geovisualization of building parameters based on the experimentation and implementation of geoprocessing techniques; ii) demonstrating the pertinence of the extracted building parameters considering different urban morphologies. At the usability level, we defined two goals: i) demonstrating the usability of the extracted building parameters by evaluating the error associated with the extraction; ii) demonstrating the usability of these parameters for planning, particularly for high-precision dasymetric mapping. Based on our research, we propose a methodological solution termed 3D Extraction Building Parameters (3DEBP), aimed at extracting areas, façade heights, and building volumes from 3D point clouds. This solution was created with the following set of FOSS tools: PostgreSQL/PostGIS, GRASS, QGIS, and R-stats. We performed several tests in two urban areas with different morphologies: Praia de Faro (irregular morphology) and Amadora (regular morphology). The former (Praia de Faro) used both a LiDAR point cloud and one extracted from a UAV survey, while the latter (an urban neighbourhood of Amadora) only used a UAV point cloud. Both experiments reveal that the quality of the extracted information depends on urban morphology. Finally, we discuss 3D measurement based on data obtained from LiDAR and UAV technology, raise questions on the implementation of FOSS solutions for different phases of the planning process, and argue for the intensive introduction of 3D modelling in the future of urban planning.
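    As a rough sketch of the per-building computation 3DEBP performs, the snippet below estimates façade height and volume from roof-point elevations. The thesis implements this with PostgreSQL/PostGIS, QGIS, GRASS and R-stats, so this numpy version and its parameters are assumptions for illustration only.

```python
import numpy as np

# Simplified illustration of per-building parameter extraction from a
# classified point cloud: facade height and volume from roof elevations,
# a local ground elevation, and a known footprint area. Not 3DEBP's code.
def building_parameters(roof_z: np.ndarray, ground_z: float,
                        footprint_area_m2: float) -> dict:
    """Estimate facade height and building volume."""
    # Use a high percentile rather than the max to resist outlier returns.
    facade_height = np.percentile(roof_z, 95) - ground_z
    # Prism approximation: footprint area times mean height above ground.
    volume = footprint_area_m2 * (np.mean(roof_z) - ground_z)
    return {"facade_height_m": facade_height, "volume_m3": volume}

roof = np.random.default_rng(0).normal(12.0, 0.3, 500)  # synthetic roof points
print(building_parameters(roof, ground_z=3.0, footprint_area_m2=120.0))
```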

    Automated flight planning for roof inspection using a face-based approach

    The rapid proliferation of consumer small unmanned aerial systems (sUASs) has expanded ownership to include amateurs and professionals alike. These platforms, in combination with numerous open-source and proprietary applications tailored to gather aerial imagery and generate 3D point clouds and meshes from it, have made 3D modeling available to anyone who can afford an entry-level sUAS. Most automated flight-planning applications, however, produce constant-altitude, nadir-looking flight plans; these plans force the sensor to remain at greater distances from its targets, resulting in varying spatial resolution across sloped surfaces. The work described here explains the development of a variety of 3D automated flight plans to provide vantage points not achievable by constant-altitude, nadir-looking imagery. Specifically, the issue of roof inspection is addressed in detail. This work generates an automated flight plan that positions the sUAS and orients its sensor such that the focal plane array is parallel to the roof plane, based on a priori knowledge of the roof's geometry, greatly reducing single- or two-point perspective. This a priori knowledge can come from a variety of sources, including databases, a site survey, or data extracted from an existing point cloud. Still images or video from orthogonal flight plans can be used for visual inspection, or for the generation of dense point clouds and meshes; these products are compared to those generated from nadir imagery. This novel flight-planning approach permits the aircraft to fly the orthogonal flight plans from start to finish without intervention from the remote pilot. The work is scalable to similar sUAS-based tasks, including aerial thermography of buildings and infrastructure.
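    The core geometric idea, placing the platform at a standoff distance along the roof-plane normal and pointing the optical axis back along it, can be sketched as follows; the standoff distance and roof geometry are hypothetical, not the paper's parameters.

```python
import numpy as np

# Sketch: place the sUAS at a fixed standoff along the roof-plane normal
# and point the sensor back along that normal, making the focal plane
# parallel to the roof face. Values are illustrative only.
def camera_station(roof_point: np.ndarray, roof_normal: np.ndarray,
                   standoff_m: float = 10.0):
    n = roof_normal / np.linalg.norm(roof_normal)
    if n[2] < 0:                    # ensure the normal points skyward
        n = -n
    position = roof_point + standoff_m * n
    look_direction = -n             # optical axis anti-parallel to normal
    return position, look_direction

# A roof face sloped at ~45 degrees, centre point 6 m above ground:
pos, look = camera_station(np.array([0.0, 0.0, 6.0]),
                           np.array([0.0, 1.0, 1.0]))
print(pos, look)
```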

    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, the effective processing of raw LiDAR data and the generation of an efficient, high-quality DEM remain major challenges. This paper reviews recent advances in airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground points is the most critical and most difficult step in DEM generation from LiDAR data. Commonly used and recently developed LiDAR filtering methods are presented. Interpolation methods and choices of a suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. To reduce data redundancy and increase efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality. Therefore, data reduction should be conducted in such a way that critical elements are retained while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be directly extracted from the LiDAR data themselves. The extraction of breaklines and their integration into DEM generation are presented.
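    To make the filtering and interpolation steps concrete, the sketch below pairs a naive grid-minimum ground filter with inverse-distance-weighted (IDW) interpolation. Production filters such as progressive TIN densification are far more robust, so this is an assumption-laden illustration rather than any method endorsed by the review.

```python
import numpy as np

# Naive grid-minimum ground filter plus IDW interpolation to a DEM point.
def grid_min_ground(points: np.ndarray, cell: float = 2.0) -> np.ndarray:
    """Keep, per grid cell, the lowest return as a crude ground candidate."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    ground = {}
    for key, p in zip(map(tuple, keys), points):
        if key not in ground or p[2] < ground[key][2]:
            ground[key] = p
    return np.array(list(ground.values()))

def idw(ground: np.ndarray, x: float, y: float, power: float = 2.0) -> float:
    """IDW elevation estimate at (x, y) from ground points (x, y, z)."""
    d = np.hypot(ground[:, 0] - x, ground[:, 1] - y)
    d = np.maximum(d, 1e-6)              # avoid division by zero
    w = 1.0 / d ** power
    return float(np.sum(w * ground[:, 2]) / np.sum(w))

pts = np.random.default_rng(1).uniform(0, 20, (1000, 3))  # synthetic cloud
g = grid_min_ground(pts)
print(idw(g, 10.0, 10.0))  # interpolated ground elevation at (10, 10)
```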

    Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning

    The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at the international level. The continuous development of new sensors, data capture methodologies, and multi-resolution 3D representations contributes significantly to the digital 3D documentation, mapping, conservation, and representation of landscapes and heritage, and to the growth of research in this field. This article reviews current optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements, and specifications. Examples of 3D surveying and modeling of heritage sites and objects are also shown throughout the paper.

    Semi-Automated DIRSIG Scene Modeling from 3D LIDAR and Passive Imaging Sources

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (LIDAR) models have also been incorporated into the software, providing an extremely powerful tool for algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, we are initiating a research effort that aims to reduce the man-in-the-loop requirements for several aspects of synthetic hyperspectral scene construction. Through a fusion of 3D LIDAR data with passive imagery, we are working to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks will also realize a shortened implementation time through this application of multi-modal imagery. This paper reports on the progress made thus far in achieving these objectives.

    Toward knowledge-based automatic 3D spatial topological modeling from LiDAR point clouds for urban areas

    The processing of very large sets of LiDAR data is costly and necessitates automatic 3D modeling approaches. In addition, incomplete point clouds caused by occlusion and uneven density, together with the uncertainties inherent in processing LiDAR data, make the automatic creation of semantically enriched 3D models difficult. This research work aims at developing new solutions for the automatic creation of complete 3D geometric models with semantic labels from incomplete point clouds. A framework integrating knowledge about objects in urban scenes into 3D modeling is proposed for improving the completeness of 3D geometric models, using qualitative reasoning based on semantic information about objects and their components and on their geometric and spatial relations. Moreover, we aim at taking advantage of the qualitative knowledge of objects in automatic feature recognition and, further, in the creation of complete 3D geometric models from incomplete point clouds. To achieve this goal, several algorithms are proposed for automatic segmentation, the identification of topological relations between object components, feature recognition, and the creation of complete 3D geometric models. (1) Machine learning solutions have been proposed for automatic semantic segmentation and CAD-like segmentation to segment objects with complex structures. (2) We proposed an algorithm to efficiently identify topological relationships between object components extracted from point clouds in order to assemble a Boundary Representation model. (3) The integration of object knowledge and feature recognition has been developed to automatically obtain semantic labels of objects and their components; to deal with uncertain information, a rule-based automatic uncertain-reasoning solution was developed to recognize building components from uncertain information extracted from point clouds. (4) A heuristic method for creating complete 3D geometric models was designed using building knowledge, geometric and topological relations of building components, and semantic information obtained from feature recognition. Finally, the proposed framework for improving automatic 3D modeling from point clouds of urban areas has been validated by a case study aimed at creating a complete 3D building model. Experiments demonstrate that integrating knowledge into the steps of 3D modeling is effective in creating a complete building model from incomplete point clouds.
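    A minimal sketch of step (2), identifying topological (adjacency) relations between planar components so they can be assembled into a Boundary Representation; approximating adjacency by a closest-point distance test is our simplification, not the algorithm proposed in the thesis.

```python
import numpy as np
from itertools import combinations

# Infer which planar components extracted from a point cloud are adjacent
# (share a boundary) as a precursor to B-Rep assembly. Adjacency is
# approximated by a minimum point-to-point distance test.
def adjacency_graph(segments: list, tol: float = 0.2) -> set:
    """Return pairs (i, j) of segments whose closest points lie within tol."""
    edges = set()
    for (i, a), (j, b) in combinations(enumerate(segments), 2):
        d = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2))
        if d < tol:
            edges.add((i, j))
    return edges

wall = np.array([[0.0, 0.0, z] for z in np.linspace(0, 3, 30)])
roof = np.array([[0.0, y, 3.0] for y in np.linspace(0, 4, 40)])
print(adjacency_graph([wall, roof]))  # {(0, 1)}: wall meets roof at z = 3
```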

    A Method for detection and quantification of building damage using post-disaster LiDAR data

    There is a growing need for rapid and accurate damage assessment following natural disasters, terrorist attacks, and other crisis situations. The use of light detection and ranging (LiDAR) data to detect and quantify building damage following a natural disaster was investigated in this research. Using LiDAR data collected by the Rochester Institute of Technology (RIT) just days after the January 12, 2010, Haiti earthquake, a set of processes was developed for extracting buildings in urban environments and assessing structural damage. Building points were separated from the rest of the point cloud using a combination of point classification techniques involving height, intensity, and multiple-return information, as well as thresholding and morphological filtering operations. Damage was detected by measuring the deviation between building roof points and dominant planes found using a normal vector and height variance approach. The devised algorithms were incorporated into a MATLAB graphical user interface (GUI), which guided the workflow and allowed for user interaction. The semi-autonomous tool ingests a discrete-return LiDAR point cloud of a post-disaster scene and outputs a building damage map highlighting damaged and collapsed buildings. The entire approach was demonstrated on a set of six validation sites, carefully selected from the Haiti LiDAR data. A combined 85.6% of the truth buildings across all of the sites were detected, with a standard deviation of 15.3%. Damage classification results were evaluated against the Global Earth Observation - Catastrophe Assessment Network (GEO-CAN) and Earthquake Engineering Field Investigation Team (EEFIT) truth assessments. The combined overall classification accuracy for all six sites was 68.3%, with a standard deviation of 9.6%. Results were impacted by imperfect validation data, inclusion of non-building points, and very diverse environments, e.g., varying building types, sizes, and densities. Nevertheless, the processes exhibited significant potential for detecting buildings and assessing building-level damage.
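    A rough sketch of the plane-deviation idea described above: fit a dominant plane to a building's roof points by least squares and report the fraction of points deviating beyond a tolerance. The tolerance and synthetic data are illustrative, not the thesis's values.

```python
import numpy as np

# Fit a dominant plane z = a*x + b*y + c to roof points and flag the roof
# as damaged when too many points deviate from it. Illustrative only.
def roof_damage_fraction(pts: np.ndarray, dev_tol: float = 0.3) -> float:
    """Fraction of roof points deviating more than dev_tol metres from
    the best-fit plane z = a*x + b*y + c."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = np.abs(A @ coeffs - pts[:, 2])
    return float(np.mean(residuals > dev_tol))

rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, (500, 2))
z = 0.1 * xy[:, 0] + 5.0 + rng.normal(0, 0.05, 500)  # intact sloped roof
roof = np.column_stack([xy, z])
print(roof_damage_fraction(roof))  # ~0.0 for an intact roof
```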