
    Enhancement of dense urban digital surface models from VHR optical satellite stereo data by pre-segmentation and object detection

    The generation of digital surface models (DSMs) of urban areas from very high resolution (VHR) stereo satellite imagery requires advanced methods. In the classical approach to DSM generation from stereo satellite imagery, interest points are extracted and correlated between the stereo mates using area-based matching followed by a least-squares sub-pixel refinement step. After region growing, the 3D point list is triangulated into the resulting DSM. In urban areas this approach fails because of the size of the correlation window, which smooths out the typically steep edges of buildings. Missing correlations, such as in areas occluded in one or both of the images, are simply interpolated in the triangulation step. An urban DSM generated with the classical approach is therefore very smooth and lacks steep walls, narrow streets and courtyards. To overcome these problems, algorithms from computer vision are introduced and adapted to satellite imagery. These algorithms do not rely on local optimization like area-based matching but optimize a (semi-)global cost function. Analysis shows that dynamic programming approaches operating on epipolar images, such as dynamic line warping or semi-global matching, yield the best results in terms of accuracy and processing time. These algorithms can also detect occlusions, i.e. areas not visible in one or both of the stereo images. In addition, the time- and memory-consuming step of handling and triangulating large point lists can be omitted, because these algorithms operate directly on epipolar images and directly generate a so-called disparity image that fits exactly on the first of the stereo images. This disparity image, already representing a kind of dense DSM, contains for each pixel the distance measured in pixels along the epipolar direction (or a no-data value for a detected occlusion). Despite the global optimization of the cost function, many outliers, mismatches and erroneously detected occlusions remain, especially if only one stereo pair is available. To enhance this dense DSM, the disparity image, a pre-segmentation approach is presented in this paper. Since the disparity image fits exactly on the first of the two stereo partners (beforehand transformed to epipolar geometry), a direct correspondence between image pixels and derived heights (the disparities) exists. This property of the disparity image is exploited to integrate additional knowledge from the image into the DSM: the stereo image is segmented, the segmentation information is transferred to the DSM, and a statistical analysis is performed on each of the resulting DSM segments. Based on this analysis and on spectral information, a coarse object detection and classification can be performed, and in turn the DSM can be enhanced. After the description of the proposed method, some results are shown and discussed.
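
    A minimal sketch of the idea described above, not the authors' implementation: compute a disparity image on epipolar-rectified stereo images with semi-global matching, segment the reference image, and apply a robust per-segment statistic to the disparities. The library choices (OpenCV's SGBM, scikit-image's SLIC) and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.segmentation import slic

left = cv2.imread("epipolar_left.tif", cv2.IMREAD_GRAYSCALE)   # assumed file names
right = cv2.imread("epipolar_right.tif", cv2.IMREAD_GRAYSCALE)

# Semi-global (block) matching; numDisparities must be a multiple of 16.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels
disparity[disparity <= 0] = np.nan  # treat non-matches/occlusions as no-data

# Pre-segmentation of the reference (left) image; the labels align pixel-wise
# with the disparity image because it fits exactly on the left image.
segments = slic(left, n_segments=2000, compactness=10, channel_axis=None)

# Per-segment statistical analysis: here simply a robust median fill, standing
# in for the coarser object detection and classification step of the paper.
enhanced = disparity.copy()
for seg_id in np.unique(segments):
    mask = segments == seg_id
    values = disparity[mask]
    if np.isfinite(values).any():
        enhanced[mask] = np.nanmedian(values)
```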

    Object-based Urban Building Footprint Extraction and 3D Building Reconstruction from Airborne LiDAR Data

    Buildings play an essential role in urban infrastructure, urban planning, climate studies and disaster management. Precise knowledge of buildings not only serves as a primary source for interpreting complex urban characteristics, but also provides decision makers with more realistic and multidimensional scenarios for urban management. In this thesis, 2D extraction and 3D reconstruction methods are proposed to map and visualize urban buildings. Chapter 2 presents an object-based method for extracting building footprints using LiDAR-derived NDTI (Normalized Difference Tree Index) and intensity data; an overall accuracy of 94.0% and a commission error of 6.3% are achieved in building extraction, with a Kappa coefficient of 0.84. Chapter 3 presents a GIS-based 3D building reconstruction method. The results indicate that the method is effective for generating 3D building models: a completeness of 91.4% is achieved for roof plane identification, and the overall accuracy of the flat and pitched roof plane classification is 88.81%, with a user's accuracy of 97.75% for flat roof planes and 100% for pitched roof planes.
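
    A small illustrative sketch (not from the thesis) of how the reported figures, overall accuracy, commission error and Cohen's Kappa, are typically derived from a confusion matrix of extracted versus reference classes; the 2x2 example matrix is hypothetical.

```python
import numpy as np

def accuracy_metrics(confusion):
    """confusion[i, j] = pixels of reference class i labelled as class j."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    overall = np.trace(confusion) / total
    # Commission error per class: wrongly labelled pixels / all pixels given that label.
    commission = 1.0 - np.diag(confusion) / confusion.sum(axis=0)
    # Cohen's Kappa: agreement beyond what chance would produce.
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, commission, kappa

# Hypothetical building / non-building matrix, just to show the call.
overall, commission, kappa = accuracy_metrics([[940, 60], [63, 8937]])
print(overall, commission, kappa)
```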

    Towards Automated Analysis of Urban Infrastructure after Natural Disasters using Remote Sensing

    Natural disasters, such as earthquakes and hurricanes, are an unpreventable component of the complex and changing environment we live in. Continued research and advancement in disaster mitigation through prediction of and preparation for impacts have undoubtedly saved many lives and prevented significant amounts of damage, but it is inevitable that some events will cause destruction and loss of life due to their sheer magnitude and proximity to built-up areas. Consequently, development of effective and efficient disaster response methodologies is a research topic of great interest. A successful emergency response is dependent on a comprehensive understanding of the scenario at hand. It is crucial to assess the state of the infrastructure and transportation network, so that resources can be allocated efficiently. Obstructions to the roadways are one of the biggest inhibitors to effective emergency response. To this end, airborne and satellite remote sensing platforms have been used extensively to collect overhead imagery and other types of data in the event of a natural disaster. The ability of these platforms to rapidly probe large areas is ideal in a situation where a timely response could result in saving lives. Typically, imagery is delivered to emergency management officials who then visually inspect it to determine where roads are obstructed and buildings have collapsed. Manual interpretation of imagery is a slow process and is limited by the quality of the imagery and what the human eye can perceive. In order to overcome the time and resource limitations of manual interpretation, this dissertation investigated the feasibility of performing fully automated post-disaster analysis of roadways and buildings using airborne remote sensing data. First, a novel algorithm for detecting roadway debris piles from airborne light detection and ranging (lidar) point clouds and estimating their volumes is presented. Next, a method for detecting roadway flooding in aerial imagery and estimating the depth of the water using digital elevation models (DEMs) is introduced. Finally, a technique for assessing building damage from airborne lidar point clouds is presented. All three methods are demonstrated using remotely sensed data that were collected in the wake of recent natural disasters. The research presented in this dissertation builds a case for the use of automatic, algorithmic analysis of road networks and buildings after a disaster. By reducing the latency between the disaster and the delivery of damage maps needed to make executive decisions about resource allocation and performing search and rescue missions, significant loss reductions could be achieved.
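
    A minimal sketch, under assumed inputs, of the kind of water-depth estimate the dissertation describes: given a DEM and a detected flood mask, depth is the difference between an estimated water-surface elevation and the ground elevation. The variable names, toy DEM and constant water level are illustrative assumptions.

```python
import numpy as np

def flood_depth(dem, flood_mask, water_surface_elevation):
    """Depth in metres inside the flooded area; zero elsewhere and never negative."""
    depth = np.where(flood_mask, water_surface_elevation - dem, 0.0)
    return np.clip(depth, 0.0, None)

# Toy 3x3 DEM with an assumed water surface at 2.5 m.
dem = np.array([[1.0, 1.5, 3.0],
                [1.2, 2.0, 3.5],
                [1.1, 2.4, 4.0]])
mask = dem < 2.5
print(flood_depth(dem, mask, 2.5))
```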

    Automatic extraction of urban structures based on shadow information from satellite imagery

    The geometric visualisation of buildings as 3D solid structures can provide a comprehensive view for the assessment and simulation of solar-exposed surfaces, including rooftops and facades. However, the main issue in such simulations is obtaining a genuine data source that represents the real characteristics of the buildings. This research aims to extract a 3D model of urban structures, as solid boxes, automatically from a QuickBird satellite image with 0.6 m GSD for assessing the solar energy potential. The results illustrate that the 3D building model provides a spatial visualisation of solar radiation for the entire building surface in different directions.
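
    An illustrative sketch (not the paper's algorithm) of the basic geometry that links a building's shadow in a satellite image to its height: with the sun elevation angle known from the image metadata, height equals shadow length times the tangent of the elevation angle. The example numbers are assumptions.

```python
import math

def height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Estimate building height from the measured shadow length on flat ground."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 12 m shadow at 55 degrees sun elevation implies a roughly 17 m tall building.
print(round(height_from_shadow(12.0, 55.0), 1))
```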

    MODELING OF ROOFS FROM POINT CLOUDS USING GENETIC ALGORITHMS

    Building roof extraction has been studied for more than thirty years, and it generates models that provide important information for many applications, especially urban planning. The present work aimed to model roofs solely from point clouds using genetic algorithms (GAs) in order to develop a more automated and efficient method. For this, an algorithm for edge detection was first developed. Experiments were performed with simulated and real point clouds obtained by LiDAR. In the experiments with simulated point clouds, three types of point clouds with different complexities were created, and the effects of noise and scan line spacing on the results were evaluated. For the experiments with real point clouds, five roofs were chosen as examples, each with a different characteristic. GAs were used to select, among the points identified during edge detection, the so-called 'significant points': those essential to the accurate reconstruction of the roof model. These points were then used to generate the models, which were assessed qualitatively and quantitatively. The evaluations showed that the use of GAs is efficient for roof modelling, as the model geometry was satisfactory, the error was within an acceptable range, and the computational effort was clearly reduced.
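
    A toy sketch of the genetic-algorithm idea described above: from a set of candidate edge points, select the 'significant' subset that still reconstructs the shape well. Here the shape is a 1-D roof cross-section and the model is piecewise-linear interpolation through the selected points; the actual method works on 3-D roof models, so the fitness weights, GA parameters and profile are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 20.0, 81)                               # candidate edge-point positions
profile = np.clip(5.0 - np.abs(x - 10.0) * 0.5, 2.0, None)   # gabled roof cross-section

def fitness(mask):
    """Lower is better: reconstruction error plus a small penalty per selected point."""
    if mask.sum() < 2:
        return 1e9
    xs, ys = x[mask], profile[mask]
    rebuilt = np.interp(x, xs, ys)
    return np.mean((rebuilt - profile) ** 2) + 0.01 * mask.sum()

pop = rng.random((60, x.size)) < 0.5                         # random initial population
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]                   # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, 20, size=2)]
        cut = rng.integers(1, x.size)                        # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(x.size) < 0.02                     # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("significant points kept:", int(best.sum()), "of", x.size)
```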

    Extracting Physical and Environmental Information of Irish Roads Using Airborne and Mobile Sensors

    Airborne sensors, including LiDAR and digital cameras, are now used extensively for capturing topographical information, as they are often more economical and efficient than traditional photogrammetric and land surveying techniques. Data captured using airborne sensors can be used to extract 3D information important for, inter alia, city modelling, land use classification and urban planning. According to the EU noise directive (2002/49/EC), the National Road Authority (NRA) in Ireland is responsible for generating noise models for all roads used by more than 8,000 vehicles per day. Accordingly, the NRA has to cover approximately 4,000 km of road, 500 m on each side, and these noise models have to be updated every 5 years. Important inputs to the noise model are the digital terrain model (DTM), 3D building data, road width, road centre line, ground surface type and noise barriers. The objective of this research was to extract these objects and topographical information using nationally available datasets acquired from the Ordnance Survey of Ireland (OSI). The OSI uses ALS50-II LiDAR and ADS40 digital sensors for capturing ground information. Both sensors rely on direct georeferencing, minimising the need for ground control points. Before exploiting the complementary nature of both datasets for information extraction, their planimetric and vertical accuracies were evaluated using independent ground control points. A new method was also developed for registration in case of any mismatch: DSMs from LiDAR and the aerial images were used to find common points, from which the parameters of a 2D conformal transformation were determined. The developed method was also evaluated by the EuroSDR in a project involving a number of partners. These measures were taken to ensure that the inputs to the noise model were of acceptable accuracy, as recommended in the report (Assessment of Exposure to Noise, 2006) by the European Working Group. A combination of image classification techniques was used to extract information by the fusion of LiDAR and aerial images. The developed method has two phases, viz. object classification and object reconstruction. Buildings and vegetation were classified based on the Normalized Difference Vegetation Index (NDVI) and a normalized digital surface model (nDSM). Holes in building segments were filled by object-oriented multiresolution segmentation. Vegetation remaining amongst buildings was classified using cues obtained from LiDAR, and the shortcomings therein were overcome by developing an additional classification cue using multiple returns. The building extents were extracted and assigned a single height value generated from the LiDAR nDSM. The extracted height was verified against ground truth data acquired using terrestrial survey techniques. Vegetation was further classified into three categories, viz. trees, hedges and tree clusters, based on a shape parameter (for hedges) and the distance from neighbouring trees (for clusters). The ground was classified into three surface types, i.e. roads and parking areas, exposed surfaces and grass, using LiDAR intensity, NDVI and the nDSM. Mobile Laser Scanning (MLS) data were used to extract walls and purpose-built noise barriers, since these objects were not extractable from the available airborne sensor data. Principal Component Analysis (PCA) was used to filter points belonging to such objects, and a line was then fitted to these points using robust least-squares fitting.
The developed object extraction method was tested objectively in two independent areas, namely Test Area-1 and Test Area-2. The results were thoroughly investigated by three different accuracy assessment methods using the OSI vector data. The acceptance of any developed method for commercial applications requires completeness and correctness values of 85% and 70% respectively. The accuracy measures obtained using the developed object extraction method support its applicability for noise modelling.
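
    A minimal sketch, not the thesis implementation, of the registration model mentioned above: a 2D conformal (four-parameter similarity) transformation estimated by linear least squares from common points found in the LiDAR- and image-derived DSMs, with x' = a*x - b*y + tx and y' = b*x + a*y + ty. The matched points in the example are hypothetical.

```python
import numpy as np

def fit_conformal_2d(src, dst):
    """Estimate (a, b, tx, ty) from matched (n, 2) planimetric coordinates."""
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),    # rows for x' equations
                   np.column_stack([y,  x, zeros, ones])])   # rows for y' equations
    b = np.concatenate([dst[:, 0], dst[:, 1]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def apply_conformal_2d(params, pts):
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])

# Hypothetical matched points: a slight scale change plus a shift.
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
dst = src * 1.001 + np.array([2.0, -1.5])
params = fit_conformal_2d(src, dst)
print(np.round(apply_conformal_2d(params, src) - dst, 4))    # residuals ~ 0
```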

    Building extraction for 3D city modelling using airborne laser scanning data and high-resolution aerial photo

    Light detection and ranging (LiDAR) technology has become a standard tool for three-dimensional mapping because it offers a fast rate of data acquisition with an unprecedented level of accuracy. This study presents an approach to accurately extract and model buildings in three-dimensional space from airborne laser scanning data acquired over Universiti Putra Malaysia in 2015. First, the point cloud was classified into ground and non-ground xyz points. The ground points were used to generate a digital terrain model (DTM), while a digital surface model (DSM) was produced from the entire point cloud. From the DSM and DTM we obtained a normalised DSM (nDSM) representing the height of features above the terrain surface. Thereafter, the DSM, DTM, nDSM, laser intensity image and orthophoto were combined into a single data file by layer stacking. The integrated data were segmented into image objects using Object-Based Image Analysis (OBIA), and the resulting image objects were classified into four land cover classes: building, road, waterbody and pavement. Assessment of the classification produced an overall accuracy and Kappa coefficient of 94.02% and 0.88 respectively. The building footprints extracted from the building class were then further processed to generate a 3D model. The model provides a 3D visual perception of the spatial pattern of the buildings, which is useful for simulating disaster scenarios for emergency management.
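
    A short sketch, with assumed file names, of the nDSM and layer-stacking steps described above: nDSM = DSM - DTM gives feature heights above the terrain, and the rasters are stacked into one multi-band array for the object-based classification. The use of rasterio for raster I/O is an assumption.

```python
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as f_dsm, rasterio.open("dtm.tif") as f_dtm, \
     rasterio.open("intensity.tif") as f_int:
    dsm = f_dsm.read(1).astype(np.float32)
    dtm = f_dtm.read(1).astype(np.float32)
    intensity = f_int.read(1).astype(np.float32)

# Height of features above the terrain surface; negative values removed.
ndsm = np.clip(dsm - dtm, 0.0, None)

# Layer stack: each input becomes one band of a single (bands, rows, cols) array.
stack = np.stack([dsm, dtm, ndsm, intensity])
```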

    Airborne LiDAR and high resolution satellite data for rapid 3D feature extraction

    This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in the ERDAS Leica Photogrammetry Suite (LPS) using a supplementary .RPB file. In the second stage, ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 data was estimated to be 0.25 m using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases a bare-earth DEM does not represent the true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud was normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM. The CHM, or normalized DSM, represents the absolute height of all above-ground urban features relative to the ground; after normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. For individual tree extraction, first and last return point clouds were used along with the bare-earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process. The LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for improving the subsequent extraction of 3D features (trees and buildings). The PAN-sharpened WV-2 image (with 0.5 m spatial resolution) was used to assess the accuracy of LiDAR-based 3D feature extraction. Our analysis provided an accuracy of 98% for tree feature extraction and 96% for building feature extraction from LiDAR data. The study extracted a total of 15143 tree features using the CHM method, of which 14841 were visually interpreted on the PAN-sharpened WV-2 image. The extracted tree features included both shadowed (13830) and non-shadowed (1011) trees. We note that the CHM method overestimated a total of 302 tree features that were not observed on the WV-2 image; one potential source of this overestimation was tree features adjacent to buildings.
For building feature extraction, the algorithm extracted a total of 6117 building features that were interpreted on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was observed to be a limiting factor in the 3D feature extraction process, caused by incorrect filtering of the point cloud in these areas. One potential source of overestimation was man-made structures, including skyscrapers and bridges, which were confounded with and extracted as buildings. This can be attributed to the low point density at building edges and on flat roofs, and to occlusions, because of which LiDAR cannot deliver planimetric accuracy as precise as photogrammetric techniques (in segmentation), as well as to the lack of optimal use of textural and contextual information (especially at walls set back from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or for features lying within water, and multiple water height levels were not considered. Based on these inferences, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a promising application for understanding and characterizing the urban setup.
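
    An illustrative sketch (not the proprietary algorithm used in the study) of CHM-based individual tree detection: treetops are found as local maxima of the canopy height model above a minimum height. The window size, height threshold and toy CHM are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, min_height=2.0, window=5):
    """Return (row, col) indices of CHM cells that are local maxima above min_height."""
    local_max = maximum_filter(chm, size=window) == chm
    treetops = local_max & (chm > min_height)
    return np.argwhere(treetops)

# Toy CHM with two small crowns.
chm = np.zeros((20, 20), dtype=np.float32)
chm[5, 5], chm[14, 12] = 8.0, 11.0
print(detect_treetops(chm))
```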

    CLUSTERING OF MULTISPECTRAL AIRBORNE LASER SCANNING DATA USING GAUSSIAN DECOMPOSITION
