Automatic Building Extraction From LIDAR Data Covering Complex Urban Scenes
This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground
height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points.
Points on walls are removed from the set of non-ground points by applying the following two approaches: If a plane fitted at a point
and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. When LIDAR
points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the
same grid cell. If three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining
non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based
on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof
segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as
seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR generated height,
or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are
extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection
line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied
to remove tree planes which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual
building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that
the proposed method offers higher building detection and roof plane extraction rates than many existing methods, especially in complex
urban scenes.
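The region-growing step described above can be illustrated with a minimal Python sketch (not the authors' implementation): a plane is fitted to a seed point's neighbourhood and nearby points are absorbed while their normal distance to that plane stays within a limit. The neighbourhood size and distance threshold are assumed values, not those used in the paper.

import numpy as np
from scipy.spatial import cKDTree

def grow_roof_plane(points, seed_idx, k=10, dist_thresh=0.15):
    # Fit a plane to the seed point's k-neighbourhood (SVD least squares),
    # then absorb any point whose normal distance to that plane is small.
    # k and dist_thresh are illustrative assumptions.
    tree = cKDTree(points)
    _, nbrs = tree.query(points[seed_idx], k=k)
    nbr_pts = points[nbrs]
    centroid = nbr_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(nbr_pts - centroid)
    normal = vt[-1]                       # smallest singular vector = plane normal

    region = set(int(i) for i in nbrs)
    frontier = list(region)
    while frontier:
        idx = frontier.pop()
        _, cand = tree.query(points[idx], k=k)
        for c in cand:
            c = int(c)
            if c in region:
                continue
            if abs(np.dot(points[c] - centroid, normal)) < dist_thresh:
                region.add(c)
                frontier.append(c)
    return np.array(sorted(region))

# Toy usage: a flat synthetic "roof" at 5 m with slight noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 500),
                       rng.uniform(0, 10, 500),
                       5.0 + rng.normal(0, 0.02, 500)])
print(len(grow_roof_plane(pts, seed_idx=0)), "points on the seed plane")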
BUILDING CHANGE DETECTION FROM LIDAR POINT CLOUD DATA BASED ON CONNECTED COMPONENT ANALYSIS
Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary
for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated
to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building
change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order
to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed.
Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended
building parts, a connected component analysis algorithm is applied and for each connected component its area, width and height are
estimated in order to ascertain if it can be considered as a demolished or new building part. Finally, a graphical user interface (GUI)
has been developed to update detected changes to the existing building map. Experimental results show that the improved building
detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of under-segmentation
errors as compared to its original counterpart. The proposed change detection technique produces no omission errors and
thus it can be exploited for enhanced automated building information updating within a topographic database. Using the developed
GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
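A minimal sketch of the connected component filtering described above, assuming the changed building parts are available as a binary raster plus a height grid; the cell size and area/width/height thresholds are placeholders, not the paper's values.

import numpy as np
from scipy import ndimage

def filter_changed_parts(change_mask, height_grid, cell_size=0.5,
                         min_area=10.0, min_width=2.0, min_height=2.5):
    # Label connected components in the binary change mask, then keep only
    # components whose area, bounding-box width and LIDAR height pass the
    # (assumed) thresholds for a plausible demolished/new building part.
    labels, n = ndimage.label(change_mask)
    kept = np.zeros_like(change_mask, dtype=bool)
    for comp_id, sl in enumerate(ndimage.find_objects(labels), start=1):
        comp = labels[sl] == comp_id
        area = comp.sum() * cell_size ** 2                 # square metres
        width = min(comp.shape) * cell_size                # narrower box side
        height = float(height_grid[sl][comp].max())        # metres above ground
        if area >= min_area and width >= min_width and height >= min_height:
            kept[sl] |= comp
    return kept

# Toy usage: one plausible building part, one tiny blob that gets filtered out.
mask = np.zeros((40, 40), dtype=bool)
mask[5:20, 5:20] = True
mask[30:32, 30:31] = True
heights = np.full(mask.shape, 4.0)
print(filter_changed_parts(mask, heights).sum(), "pixels kept")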
CLASSIFIER-FREE DETECTION OF POWER LINE PYLONS FROM POINT CLOUD DATA
High-density airborne point cloud data has become an important means for modelling and maintenance of a power line corridor. Since
the amount of data in a dense point cloud is huge even over a small area, automatic detection of pylons in the corridor can be a
prerequisite for efficient and effective extraction of wires in a subsequent step. However, existing solutions mostly overlook this
requirement by processing the whole data set in one go, which hinders their application to large areas. This
paper presents a new pylon detection technique from point cloud data. First, the input point cloud is divided into ground and non-ground
points. The non-ground points within a specific low-height region are used to generate a pylon mask, in which pylons appear
stand-alone, not connected to any wires. The candidate pylons are obtained using a connected component analysis in the mask,
followed by a removal of trees by comparing area, shape and symmetry properties of trees and pylons. Finally, the parallelism property
of wires with the line connecting a pair of candidate pylons is exploited to remove trees that have the same area and shape properties as
pylons. Experimental results show that the proposed technique provides a high pylon detection rate in terms of completeness (100%)
and correctness (100%).
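The pylon-mask idea can be sketched as follows, assuming heights above ground are already known for the non-ground points; the height band, cell size and shape thresholds are illustrative assumptions, not the paper's values.

import numpy as np
from scipy import ndimage

def pylon_candidates(points, heights, cell=1.0, band=(2.0, 6.0),
                     min_area=4.0, max_elong=2.0):
    # Rasterise non-ground points whose height above ground lies in a low
    # band (where pylons stand alone, below the wires), then keep compact
    # connected components as candidate pylons.
    sel = (heights >= band[0]) & (heights <= band[1])
    xy = points[sel, :2]
    origin = xy.min(axis=0)
    cols, rows = ((xy - origin) / cell).astype(int).T
    mask = np.zeros((rows.max() + 1, cols.max() + 1), dtype=bool)
    mask[rows, cols] = True

    labels, n = ndimage.label(mask)
    candidates = []
    for comp_id, sl in enumerate(ndimage.find_objects(labels), start=1):
        comp = labels[sl] == comp_id
        area = comp.sum() * cell ** 2
        h, w = comp.shape
        elong = max(h, w) / max(min(h, w), 1)   # crude shape/symmetry proxy
        if area >= min_area and elong <= max_elong:
            candidates.append(sl)               # bounding box of one candidate
    return mask, candidates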
A NEW MASK FOR AUTOMATIC BUILDING DETECTION FROM HIGH DENSITY POINT CLOUD DATA AND MULTISPECTRAL IMAGERY
In complex urban and residential areas, there are buildings which are not only connected with and/or close to one another but also
partially occluded by their surrounding vegetation. Moreover, there may be buildings whose roofs are made of transparent materials.
In transparent buildings, there are point returns from both the ground (or materials inside the buildings) and the rooftop. These issues
confuse the previously proposed building masks which are generated from either ground points or non-ground points. The normalised
digital surface model (nDSM) is generated from the non-ground points and usually it is hard to find individual buildings and trees
using the nDSM. In contrast, the primary building mask is produced using the ground points and thereby misses the transparent rooftops.
This paper proposes a new building mask based on the non-ground points. The dominant directions of non-ground lines extracted
from the multispectral imagery are estimated. A dummy grid with the target mask resolution is rotated at each dominant direction
to obtain the corresponding height values from the non-ground points. Three sub-masks are then generated from the height grid by
estimating the gradient function. Two of these sub-masks capture planar surfaces whose height remains constant along and across
the dominant direction, respectively. The third sub-mask contains only the flat surfaces where the height (ideally) remains constant in
all directions. All the sub-masks generated in all estimated dominant directions are combined to produce the candidate building mask.
Although the application of the gradient function helps in removal of most of the vegetation, the final building mask is obtained through
removal of planar vegetation, if any, and tiny isolated false candidates. Experimental results on three Australian data sets show that
the proposed method can successfully remove vegetation, thereby separating buildings from occluding vegetation, and detect buildings
with transparent roof materials. When compared to existing building detection techniques, the proposed technique offers higher object-based
completeness, correctness and quality, especially in complex scenes with the aforementioned issues. It is capable of detecting
not only transparent buildings but also small garden sheds, which are sometimes as small as 5 m² in area.
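A minimal sketch of the gradient-based sub-masks described above, assuming a height grid already resampled along one dominant direction; the per-cell tolerance is an assumed value, not the paper's.

import numpy as np

def gradient_sub_masks(height_grid, tol=0.05):
    # Sub-mask 1: height (nearly) constant along the dominant direction.
    # Sub-mask 2: height (nearly) constant across the dominant direction.
    # Sub-mask 3: flat surfaces, constant in both directions.
    d_along = np.abs(np.gradient(height_grid, axis=1))
    d_across = np.abs(np.gradient(height_grid, axis=0))
    along_mask = d_along < tol
    across_mask = d_across < tol
    flat_mask = along_mask & across_mask
    return along_mask, across_mask, flat_mask

# Toy usage: a gabled roof that slopes across the dominant direction only.
rows = np.linspace(0, 1, 20)
roof = np.tile(5.0 + np.minimum(rows, 1 - rows)[:, None], (1, 30))
along, across, flat = gradient_sub_masks(roof)
print(along.mean(), across.mean(), flat.mean())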
PIXEL-BASED LAND COVER CLASSIFICATION BY FUSING HYPERSPECTRAL AND LIDAR DATA
Land cover classification has many applications like forest management, urban planning, land use change identification and environment
change analysis. The passive sensing of hyperspectral systems can be effective in describing the phenomenology of the observed area
over hundreds of (narrow) spectral bands. On the other hand, the active sensing of LiDAR (Light Detection and Ranging) systems
can be exploited for characterising topographical information of the area. As a result, the joint use of hyperspectral and LiDAR
data provides a source of complementary information, which can greatly assist in the classification of complex classes. In this study,
we fuse hyperspectral and LiDAR data for land cover classification. We perform pixel-wise classification on disjoint sets of training
and testing samples for five different classes. We propose a new feature combination that fuses features from both hyperspectral
and LiDAR data, achieving competitive classification accuracy with a low feature dimension, whereas the existing method requires a
high-dimensional feature vector to achieve a similar classification result. Also, to reduce the dimension of the feature vector,
Principal Component Analysis (PCA) is used as it captures the variance of the samples with a limited number of Principal Components
(PCs). We tested our classification method using PCA applied on hyperspectral bands only and combined hyperspectral and LiDAR
features. Classification with a support vector machine (SVM) and a decision tree shows that our feature combination achieves better
classification accuracy than the existing feature combination, while keeping a similar number of PCs. The experimental
results also show that the decision tree performs better than the SVM and requires less execution time.
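The classification pipeline can be sketched with scikit-learn on synthetic stand-in data; the band counts, number of PCs and classifier settings below are assumptions for illustration, not the study's configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in data: 144 hyperspectral bands plus 2 LiDAR-derived
# features per pixel; shapes and class labels are made up for the sketch.
rng = np.random.default_rng(42)
n = 1000
hyper = rng.normal(size=(n, 144))
lidar = rng.normal(size=(n, 2))
X = np.hstack([hyper, lidar])          # fused feature vector
y = rng.integers(0, 5, size=n)         # five land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Decision tree", DecisionTreeClassifier(max_depth=10))]:
    # Scale, reduce to a limited number of PCs, then classify.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))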
Automatic detection of residential buildings using LIDAR data and multispectral imagery
This paper presents an automatic building detection technique using LIDAR data and multispectral imagery. Two masks are obtained from the LIDAR data: a 'primary building mask' and a 'secondary building mask'. The primary building mask indicates the void areas where the laser does not reach below a certain height threshold. The secondary building mask indicates the filled areas, from where the laser reflects, above the same threshold. Line segments are extracted from around the void areas in the primary building mask. Line segments around trees are removed using the normalized difference vegetation index derived from the orthorectified multispectral images. The initial building positions are obtained based on the remaining line segments. The complete buildings are detected from their initial positions using the two masks and multispectral images in the YIQ colour system. It is experimentally shown that the proposed technique can successfully detect urban residential buildings when assessed in terms of 15 indices including completeness, correctness and quality.
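The NDVI test used above to discard line segments falling on trees can be sketched as follows; the vegetation threshold is an assumed value, not the paper's.

import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalised difference vegetation index from near-infrared and red bands.
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def is_vegetation(nir, red, threshold=0.3):
    # Flag pixels as vegetation when NDVI exceeds an assumed threshold;
    # line segments lying on such pixels would be discarded as trees.
    return ndvi(nir, red) > threshold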
Rule-based segmentation of LIDAR point cloud for automatic extraction of building roof planes
This paper presents a new segmentation technique for LIDAR point cloud data for automatic extraction of building roof planes. Using
the ground height from a DEM (Digital Elevation Model), the raw LIDAR points are separated into two groups: ground and non-ground
points. The ground points are used to generate a "building mask" in which the black areas represent the ground where there are
no laser returns below a certain height. The non-ground points are segmented to extract the planar roof segments. First, the building
mask is divided into small grid cells. The cells containing the black pixels are clustered such that each cluster represents an individual
building or tree. Second, the non-ground points within a cluster are segmented based on their coplanarity and neighbourhood relations.
Third, the planar segments are refined using a rule-based procedure that assigns the common points among the planar segments to the
appropriate segments. Finally, another rule-based procedure is applied to remove tree planes which are small in size and randomly
oriented. Experimental results on the Vaihingen data set show that the proposed method offers high building detection and roof plane
extraction rates.
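A minimal sketch of the grid-cell clustering step, assuming the building mask is a boolean raster where True marks the 'black' (no-return) pixels; the block size is an assumption.

import numpy as np
from scipy import ndimage

def cluster_mask_cells(building_mask, cell=4):
    # Divide the mask into cell x cell blocks, mark blocks containing any
    # 'black' pixels, and label connected blocks so that each cluster
    # stands for one individual building or tree.
    h, w = building_mask.shape
    mask = building_mask[:h - h % cell, :w - w % cell]   # crop to full blocks
    blocks = mask.reshape(mask.shape[0] // cell, cell,
                          mask.shape[1] // cell, cell).any(axis=(1, 3))
    labels, n_clusters = ndimage.label(blocks)
    return labels, n_clusters

# Toy usage: two separate blobs become two clusters.
m = np.zeros((32, 32), dtype=bool)
m[2:10, 2:10] = True
m[20:28, 20:28] = True
print(cluster_mask_cells(m)[1], "clusters")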
Automatic building detection using LIDAR data and multispectral imagery
An automatic building detection technique using LIDAR data and multispectral imagery has been proposed. Two masks are obtained from the LIDAR data: a 'primary building mask' and a 'secondary building mask'. The primary building mask indicates the void areas where the laser does not reach below a certain height threshold. The secondary building mask indicates the filled areas, from where the laser reflects, above the same threshold. Line segments are extracted from around the void areas in the primary building mask. Line segments around trees are removed using the normalized difference vegetation index derived from the orthorectified multispectral images. The initial building positions are obtained based on the remaining line segments. The complete buildings are detected from their initial positions using the two masks and multispectral images in the YIQ colour system. It is experimentally shown that the proposed technique can successfully detect buildings when assessed in terms of 15 indices including completeness, correctness and quality.
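The YIQ colour representation mentioned above can be obtained from RGB with the standard (approximate) NTSC transform; a minimal sketch, not the paper's implementation:

import numpy as np

# Approximate NTSC RGB -> YIQ transform (luma Y, chroma I and Q).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(image):
    # image: H x W x 3 array with RGB values in [0, 1].
    return image @ RGB_TO_YIQ.T

# Toy usage: a single pure-red pixel.
print(rgb_to_yiq(np.array([[[1.0, 0.0, 0.0]]])))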
IMPROVED BUILDING DETECTION USING TEXTURE INFORMATION
The performance of automatic building detection techniques can be significantly impeded by the presence of same-height objects,
for example, trees. Consequently, if a building detection technique cannot distinguish between trees and buildings, both its false positive
and false negative rates rise significantly. This paper presents an improved automatic building detection technique that achieves more
effective separation of buildings from trees. In addition to using traditional cues such as height, width and colour, the proposed improved
detector uses texture information from both LIDAR and orthoimagery. Firstly, image entropy and colour information are jointly applied
to remove easily distinguishable trees. Secondly, a voting procedure based on the neighbourhood information from both the image and
LIDAR data is employed for further exclusion of trees. Finally, a rule-based procedure using the edge orientation histogram from the
image is followed to eliminate false positive candidates. The improved detector has been tested on a number of scenes from three
different test areas, and it is shown that the algorithm performs well in complex scenes.
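A naive sketch of the image-entropy cue used above to separate textured vegetation from smooth roofs; the window size and bin count are assumed values, and a real pipeline might use skimage.filters.rank.entropy instead.

import numpy as np

def local_entropy(gray, win=7, bins=32):
    # Sliding-window Shannon entropy of a greyscale image in [0, 1];
    # high-entropy (textured) regions are likelier to be vegetation.
    pad = win // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            patch = padded[r:r + win, c:c + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            out[r, c] = -(p * np.log2(p)).sum()
    return out

# Toy usage: a flat patch (low entropy) next to random texture (high entropy).
img = np.concatenate([np.full((16, 16), 0.5),
                      np.random.default_rng(1).random((16, 16))], axis=1)
ent = local_entropy(img)
print(ent[:, :16].mean(), "<", ent[:, 16:].mean())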
- …