
    Data Fusion in a Hierarchical Segmentation Context: The Case of Building Roof Description

    Automatic mapping of urban areas from aerial images is a challenging task for scientists an

    Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex, rigorous boundaries and vertical walls that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a divide-and-conquer scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual, independent processing units which represent potential rooftop points. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique and is used to guide the refinement of the shapes and boundaries of the rooftop parts. The boundaries of all of these features are then refined to produce a precise description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by the detected vertices, producing triangulated mesh models. These triangulated mesh models are suitable for many applications, such as 3D mapping, urban planning and augmented reality.
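
    The footprint-extraction step above relies on Euclidean clustering of the point cloud. As a hedged illustration only (not the authors' two-step hierarchical implementation), the sketch below groups points into clusters by chaining neighbours within a fixed radius; the radius and minimum cluster size are made-up values.

    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_clusters(points, radius=1.0, min_size=50):
        """Group points whose neighbour chains stay within `radius` of each other."""
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            frontier, cluster = [seed], [seed]
            while frontier:
                idx = frontier.pop()
                for nbr in tree.query_ball_point(points[idx], r=radius):
                    if nbr in unvisited:
                        unvisited.remove(nbr)
                        frontier.append(nbr)
                        cluster.append(nbr)
            if len(cluster) >= min_size:
                clusters.append(np.array(cluster))
        return clusters

    # Toy usage: two well-separated blobs of points become two clusters.
    pts = np.vstack([np.random.rand(200, 3), np.random.rand(200, 3) + 10.0])
    print([len(c) for c in euclidean_clusters(pts, radius=1.0, min_size=10)])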

    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time consuming and costly, automated methods are required for efficient large-area mapping. It is challenging to extract building information from remotely sensed data, considering the complex nature of urban environments and their associated intricate building structures. Most 2D evaluation methods focus on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed, consisting of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective than traditional accuracy assessment metrics. Building height is critical for 3D building structure extraction. As data sources for height estimation, digital surface models (DSMs) derived from stereo images using existing software typically provide low-accuracy results in terms of rooftop elevations. Therefore, a new image matching method is proposed that adds building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated from the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. Therefore, a “building-ground elevation difference model” (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values at bare ground. Experiments using this novel approach estimate building height with a 1.5 m residual, which outperforms conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Current 3D evaluation methods do not capture the difference between 2D and 3D evaluation well, and wall accuracy is traditionally ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. The resultant multi-criteria system provides an improved evaluation method for building reconstruction.
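
    As a rough illustration of what a multi-criteria footprint comparison can look like, the sketch below scores an extracted polygon against a reference one. It is not the thesis' metric set: IoU, Hausdorff distance and centroid offset stand in for matched rate, shape similarity and positional accuracy, and the shapely package is assumed to be available.

    from shapely.geometry import Polygon

    def footprint_scores(extracted, reference):
        """Compare an extracted footprint against a reference polygon (units: metres)."""
        inter = extracted.intersection(reference).area
        union = extracted.union(reference).area
        return {
            "matched_rate": inter / union if union else 0.0,                          # area overlap (IoU)
            "shape_similarity": extracted.hausdorff_distance(reference),              # boundary discrepancy
            "positional_accuracy": extracted.centroid.distance(reference.centroid),   # centroid offset
        }

    # Toy usage: an extracted square shifted 1 m east of a 10 m x 10 m reference.
    ref = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
    ext = Polygon([(1, 0), (11, 0), (11, 10), (1, 10)])
    print(footprint_scores(ext, ref))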

    LOD Generation for Urban Scenes

    We introduce a novel approach that reconstructs 3D urban scenes in the form of levels of detail (LODs). Starting from raw data sets such as surface meshes generated by multi-view stereo systems, our algorithm proceeds in three main steps: classification, abstraction and reconstruction. From geometric attributes and a set of semantic rules combined with a Markov random field, we classify the scene into four meaningful classes. The abstraction step detects and regularizes planar structures on buildings, fits icons on trees, roofs and facades, and performs filtering and simplification for LOD generation. The abstracted data are then provided as input to the reconstruction step, which generates watertight buildings through a min-cut formulation on a set of 3D arrangements. Our experiments on complex buildings and large-scale urban scenes show that our approach generates meaningful LODs while being robust and scalable. By combining semantic segmentation and abstraction it also outperforms general mesh approximation approaches at preserving urban structures.
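
    To give a feel for the min-cut formulation mentioned above, the toy sketch below labels cells of a hypothetical 3D arrangement as inside or outside a building with an s-t minimum cut. The cells, data costs and smoothness weights are invented for the example and the paper's actual energy is not reproduced; the networkx package is assumed.

    import networkx as nx

    cells = ["c0", "c1", "c2", "c3"]                                       # hypothetical arrangement cells
    inside_cost = {"c0": 0.1, "c1": 0.2, "c2": 0.9, "c3": 0.8}             # low value = likely inside
    adjacency = [("c0", "c1", 0.5), ("c1", "c2", 0.5), ("c2", "c3", 0.5)]  # smoothness weights

    G = nx.DiGraph()
    for c in cells:
        G.add_edge("source", c, capacity=1.0 - inside_cost[c])  # paid if c ends up labelled outside
        G.add_edge(c, "sink", capacity=inside_cost[c])           # paid if c ends up labelled inside
    for a, b, w in adjacency:
        G.add_edge(a, b, capacity=w)   # paid if the cut separates two adjacent cells
        G.add_edge(b, a, capacity=w)

    cut_value, (source_side, sink_side) = nx.minimum_cut(G, "source", "sink")
    print("inside:", sorted(source_side - {"source"}), "outside:", sorted(sink_side - {"sink"}))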

    An investigation into semi-automated 3D city modelling

    Creating three-dimensional digital representations of urban areas, also known as 3D city modelling, is essential in many applications, such as urban planning, radio frequency signal propagation, flight simulation and vehicle navigation, which are of increasing importance in modern urban centres. The main aim of the thesis is the development of a semi-automated, innovative workflow for creating 3D city models using aerial photographs and LiDAR data collected from various airborne sensors. The complexity of this aim necessitates the development of an efficient and reliable way to progress from manually intensive operations to an increased level of automation. The proposed methodology exploits the combination of different datasets, also known as data fusion, to achieve reliable results in different study areas. Data fusion techniques are used to combine linear features, extracted from aerial photographs, with either LiDAR data or any other source available, including Very Dense Digital Surface Models (VDDSMs). The research proposes a semi-automated technique for 3D city modelling that fuses LiDAR data, if available, or VDDSMs with 3D linear features extracted from stereo pairs of photographs. Building detection and the generation of the building footprint are performed with a plane-fitting algorithm on the LiDAR data or VDDSMs, using conditions based on the slope of the roofs and the minimum size of the buildings. The initial building footprint is subsequently generalized using a simplification algorithm that enhances the orthogonality between the individual linear segments within a defined tolerance. The final refinement of the building outline is performed for each linear segment using the filtered stereo-matched points with a least squares estimation. The digital reconstruction of the roof shapes is performed by applying a least-squares plane-fitting algorithm to the classified VDDSMs, constrained by the building outlines, the minimum size of the planes and the maximum height tolerance between adjacent 3D points. Subsequently, neighbouring planes are merged using Boolean operations to generate solid features. The results indicate very detailed building models; various roof details such as dormers and chimneys are successfully reconstructed in most cases.
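
    The roof reconstruction above leans on least-squares plane fitting. A minimal sketch of that primitive (not the thesis workflow, and without its slope and size conditions) is given below, fitting a plane to a patch of 3D points via SVD of the centred coordinates.

    import numpy as np

    def fit_plane(points):
        """Return (centroid, unit normal) of the best-fit plane through an Nx3 array."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        normal = vt[-1]   # right singular vector of the smallest singular value
        return centroid, normal

    # Toy usage: a noisy, gently sloped roof patch.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, size=(500, 2))
    z = 0.1 * xy[:, 0] + 0.05 * xy[:, 1] + rng.normal(0, 0.02, 500)
    c, n = fit_plane(np.column_stack([xy, z]))
    slope_deg = np.degrees(np.arccos(abs(n[2])))   # angle between the fitted plane and horizontal
    print(c, n, slope_deg)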

    A methodology to produce geographical information for land planning using very-high resolution images

    Currently, Portuguese municipalities are required to produce cartography homologated by the national authority under the Territorial Management Instruments framework. The Municipal Master Plan (PDM) has to be revised every 10 years, as do the topographic and thematic maps that describe the municipal territory. However, this period is inadequate for counties where urban pressure is high and where land-use change is very dynamic. Consequently, a more efficient mapping process is needed, one that allows new base and thematic cartography to be obtained more often. Several countries, including Portugal, continue to use aerial photography for large-scale mapping. Although this data source enables highly accurate maps, its acquisition and visual interpretation are very costly and time-consuming. Very-High Resolution (VHR) satellite imagery can be an alternative data source, without replacing aerial images, for producing large-scale thematic cartography. The focus of the thesis is the demand for updated geographic information in the land planning process. To better understand the value and usefulness of this information, a survey of all Portuguese municipalities was carried out. This step was essential for assessing the relevance and usefulness of introducing VHR satellite imagery into the chain of procedures for updating land information. The proposed methodology is based on the use of VHR satellite imagery, and other digital data, in a Geographic Information Systems (GIS) environment. Different feature extraction algorithms that take into account the variation in texture, colour and shape of objects in the image were tested. The trials aimed at the automatic extraction of features of municipal interest from high-resolution aerial and satellite imagery (orthophotos, QuickBird and IKONOS imagery) as well as elevation data (altimetric information and LiDAR data). To evaluate the potential of the geographic information extracted from VHR images, two areas of application were identified: mapping and analytical purposes. Four case studies that reflect different uses of geographic data at the municipal level, with different accuracy requirements, were considered. The first case study presents a methodology for the periodic updating of large-scale maps based on orthophotos, in the area of Alta de Lisboa. This is a situation where the positional and geometric accuracy of the extracted information is more demanding, since technical mapping standards must be complied with. In the second case study, an alarm system that indicates the location of potential changes in building areas, using a QuickBird image and LiDAR data, was developed for the area of Bairro da Madre de Deus. The goal of the system is to assist the updating of large-scale mapping by providing a layer that municipal technicians can use as the basis for manual editing. In the third case study, an analysis of the roof-tops most suitable for installing solar systems, using LiDAR data, was performed in the area of Avenidas Novas. In the fourth case study, a set of urban environment indicators obtained from VHR imagery is presented; the concept is demonstrated for the entire city of Lisbon through IKONOS imagery processing. In this analytical application, the positional quality of the extraction is less relevant.
    Funding: GEOSAT – Methodologies to extract large scale GEOgraphical information from very high resolution SATellite images (PTDC/GEO/64826/2006); e-GEO – Centro de Estudos de Geografia e Planeamento Regional, Faculdade de Ciências Sociais e Humanas, Grupo de Investigação Modelação Geográfica, Cidades e Ordenamento do Território.
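
    The roof-top solar case study above starts from slope and orientation maps derived from a LiDAR surface model. The sketch below is only an illustration of that derivation (not the thesis procedure): it computes per-cell slope and aspect from a toy DSM with numpy, assuming a 1 m grid whose rows run north to south and whose columns run west to east.

    import numpy as np

    def slope_aspect(dsm, cell_size=1.0):
        """Per-cell slope (degrees) and aspect (compass degrees, 0 = north, 90 = east)."""
        d_row, d_col = np.gradient(dsm, cell_size)                # gradients along rows and columns
        slope = np.degrees(np.arctan(np.hypot(d_row, d_col)))     # steepness of the surface
        aspect = np.degrees(np.arctan2(-d_col, d_row)) % 360.0    # bearing of the downhill direction
        return slope, aspect

    # Toy usage: a planar "roof" dipping towards the south-east.
    rows, cols = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
    dsm = 30.0 - 0.2 * rows - 0.2 * cols      # elevation drops to the south and to the east
    s, a = slope_aspect(dsm, cell_size=1.0)
    print(round(float(s[10, 10]), 1), round(float(a[10, 10]), 1))   # ~15.8 deg slope, 135 deg (SE)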

    Semi-Automated DIRSIG scene modeling from 3D lidar and passive imagery

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. The data used include multiple-return point information provided by an Optech lidar linescanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy.
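
    As a small illustration of the projective-geometry step mentioned above, the sketch below projects a 3D rooftop corner into a frame image using a homogeneous 3x4 camera matrix P = K [R | t]. The intrinsics, pose and corner coordinates are made-up values, not parameters from the WASP or MISI sensors.

    import numpy as np

    K = np.array([[1500.0, 0.0, 640.0],      # focal length and principal point (pixels)
                  [0.0, 1500.0, 480.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                            # camera aligned with the world axes (assumption)
    t = np.array([0.0, 0.0, 100.0])          # camera 100 m above the scene origin (assumption)
    P = K @ np.hstack([R, t.reshape(3, 1)])  # 3x4 projection matrix

    corner = np.array([12.0, 7.5, 4.0, 1.0]) # homogeneous world point: a hypothetical rooftop corner
    u, v, w = P @ corner
    print(u / w, v / w)                      # pixel coordinates of the projected corner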

    Semantic Segmentation of 3D Textured Meshes for Urban Scene Analysis

    Classifying 3D measurement data has become a core problem in photogrammetry and 3D computer vision, since the rise of modern multiview geometry techniques, combined with affordable range sensors. We introduce a Markov Random Field-based approach for segmenting textured meshes generated via multi-view stereo into urban classes of interest. The input mesh is first partitioned into small clusters, referred to as superfacets, from which geometric and photometric features are computed. A random forest is then trained to predict the class of each superfacet as well as its similarity with the neighboring superfacets. Similarity is used to assign the weights of the Markov Random Field pairwise-potential and accounts for contextual information between the classes. The experimental results illustrate the efficacy and accuracy of the proposed framework.
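
    A hedged sketch of the per-superfacet classification step is given below: a random forest trained on a few geometric and photometric descriptors, as one might set it up with scikit-learn. The four class names, the feature layout and the synthetic training data are placeholders, not the paper's.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    CLASSES = ["ground", "facade", "roof", "vegetation"]    # assumed urban classes of interest

    rng = np.random.default_rng(0)
    # Each row: [verticality, planarity, mean height, mean greenness] for one superfacet.
    X_train = rng.random((400, 4))
    y_train = rng.integers(0, len(CLASSES), 400)            # stand-in labels for the demo

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    new_superfacet = np.array([[0.05, 0.9, 0.2, 0.7]])      # one unlabelled superfacet's descriptors
    print(CLASSES[int(clf.predict(new_superfacet)[0])])
    # The predicted label (or clf.predict_proba) would then feed the unary term of the
    # Markov Random Field described above.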