4 research outputs found

    Evaluation of the effect of hydroseeded vegetation for slope reinforcement

    A landslide is a significant environmental hazard that results in enormous loss of life and property. Studies have revealed that rainfall, soil characteristics, and human activities such as deforestation are the leading causes of landslides, as they reduce soil water infiltration and increase water runoff on a slope. This paper introduces vegetation establishment as a low-cost, practical measure for slope reinforcement through the ground cover and roots of the vegetation. The study evaluates the complexity of the terrain, distinguishing high- and low-stability areas, and produces a landslide susceptibility map. For this purpose, 12 conditioning factors, namely slope, aspect, elevation, curvature, hillshade, stream power index (SPI), topographic wetness index (TWI), terrain roughness index (TRI), distance to roads, distance to lakes, distance to trees, and distance to built-up areas, were combined through the analytic hierarchy process (AHP) model to produce the landslide susceptibility map. The receiver operating characteristic (ROC) curve was used to validate the results; the area under the curve (AUC) value obtained for the AHP model was 0.865. Four seed samples, namely ryegrass, rye corn, signal grass, and couch grass, were hydroseeded to determine the effectiveness of vegetation roots and ground cover in stabilizing and reinforcing a high-risk, highly susceptible 65° slope between August and December 2020. The monthly observations showed that couch grass gave the most acceptable result for vegetation roots, while ryegrass, with its spreading and creeping ground-cover habit, showed the most acceptable monthly result for ground-cover effectiveness. The findings suggest that the selection of couch grass over the other species is justified by its landslide-control benefits.
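The AHP step above derives a weight for each conditioning factor from a pairwise comparison matrix. A minimal sketch of that weighting, using the common normalized-column-average approximation; the three factors chosen and the pairwise judgments are hypothetical, not the paper's actual matrix:

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights by averaging the normalized columns."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    # Normalize each column by its sum, then average across each row
    # to obtain that factor's priority weight.
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical 3-factor comparison (e.g. slope vs. TWI vs. distance to roads)
# on Saaty's 1-9 scale; the lower triangle holds the reciprocals.
pairwise = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_weights(pairwise)  # weights sum to 1; larger = more influential
```

In a full AHP workflow, a consistency ratio would also be computed to check that the pairwise judgments are not contradictory before the weights are applied to the factor layers.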

    Automated Building Detection from Airborne LiDAR and Very High-Resolution Aerial Imagery with Deep Neural Network

    The detection of buildings in a city is essential in several geospatial domains and for decision-making in city planning, tax collection, project management, revenue generation, and smart cities, among other areas. In the past, the classical approach to building detection relied on imagery and entailed human–computer interaction, which was a daunting proposition. To tackle this task, a novel network based on an end-to-end deep learning framework is proposed to detect and classify building features. The proposed CNN has three parallel stream channels: the first takes the high-resolution aerial imagery, the second takes the digital surface model (DSM), and the third extracts deep features from the fusion of the first two. Each channel has eight group-convolution blocks of 2D convolutions with three max-pooling layers. The proposed model's efficiency and dependability were tested on three categories of complex urban building structures in the study area. Morphological operations were then applied to the extracted building footprints to make the building boundaries more uniform and produce improved building perimeters. The approach thus bridges a significant gap in detecting building objects in diverse environments: the overall accuracy (OA) and kappa coefficient of the proposed method exceed 80% and 0.605, respectively. The findings support the efficacy and effectiveness of the proposed framework and methodologies at extracting buildings from complex environments.
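The two reported metrics, overall accuracy and the kappa coefficient, are both derived from a classification confusion matrix. A minimal sketch of their computation; the example matrix values are hypothetical, for illustration only:

```python
def overall_accuracy(cm):
    """Fraction of all samples that fall on the confusion-matrix diagonal."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def cohens_kappa(cm):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    p_o = sum(cm[i][i] for i in range(n)) / total
    # Chance agreement: sum over classes of row-marginal * column-marginal.
    p_e = sum((sum(cm[i]) / total) *
              (sum(cm[r][i] for r in range(n)) / total)
              for i in range(n))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical building / non-building confusion matrix
# (rows = reference, columns = prediction).
cm = [[85, 15],
      [10, 90]]
# overall_accuracy(cm) -> 0.875, cohens_kappa(cm) -> 0.75
```

Kappa is the stricter of the two, since a classifier that always predicts the majority class can still score a high OA but will score a kappa near zero.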

    Building extraction for 3D city modelling using infused airborne LiDAR and high-resolution aerial photograph

    Accurate and timely mapping of urban buildings is crucial for planners, managers, and governments. Nevertheless, the urban environment is complex and heterogeneous, with different features such as buildings (houses), transportation infrastructure, and vegetation, so the extraction of urban features remains a challenge. In the past, photogrammetric sensors were deployed, but the process was time-consuming, capital-intensive, and manual. Technological advances have made airborne light detection and ranging (LiDAR) sensors available, which undeniably provide detailed, speedy terrain mapping, although building-feature detection and modelling can still take many weeks because LiDAR places elevation points indiscriminately on everything, including cars, houses, and trees. Hence, this thesis carried out urban building detection with, where possible, minimal user intervention. First, LiDAR derivatives were employed via an image algorithm to detect buildings. Our method achieved promising results over a large scene: the average completeness, correctness, and quality values for the object-based evaluation were 97%, 99%, and 99%, respectively. The second goal employs a deep learning (DL) algorithm to predict the best sensor for detection (LiDAR, optics, or the fusion of LiDAR and high-resolution aerial photography) and to determine which is most suitable for building detection with little or no user intervention. Whereas an ideal classifier would score 100 on the TPR and TNR indices, none of the sensors fell below a threshold of ninety; the deep learning method achieved pixel-based evaluation values of 97%, 93%, and 91%, respectively.
We tested on areas A1, A2, and A3 and found that, of the individual sensors, the DSM had the highest accuracy. For Area 1 (A1), the DSM achieved an overall accuracy of 93.21% with a kappa coefficient of 0.798, while the optics achieved an overall accuracy of 87.54% with a kappa coefficient of 0.630. For the fusion, the overall accuracy and kappa coefficient in A2 were 94.30% and 0.859. In conclusion, the integration of LiDAR and aerial photography outperformed both the optics and the DSM alone; the weaknesses of the image and the LiDAR dataset were compensated through their fusion. Moreover, the proposed model was evaluated on three building forms in different locations with different rooftop forms: complex, high-rise, and single low-rise detached apartment buildings. The difference between LiDAR DEM heights and differential GPS heights was negligible over the study area: the root mean square error (RMSE) was 0.11 m for the heterogeneous environment with mixed building forms, 0.002 m for high-rise buildings, and 0.003 m for low-rise residential apartments. These studies show our models' capacity to improve urban building detection and automate the extraction of building objects, an indicator of excellent performance. The proposed technique can help detect and solve the urban building detection problem.
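The object-based evaluation above reports completeness, correctness, and quality, which are standard building-extraction metrics derived from true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch, with hypothetical counts for illustration:

```python
def completeness(tp, fn):
    """Share of reference buildings that were detected (detection rate)."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Share of detected objects that are real buildings (precision)."""
    return tp / (tp + fp)

def quality(tp, fp, fn):
    """Combined measure that penalizes both missed and spurious detections."""
    return tp / (tp + fp + fn)

# Hypothetical counts from matching detected footprints to reference footprints.
tp, fp, fn = 97, 1, 3
scores = (completeness(tp, fn), correctness(tp, fp), quality(tp, fp, fn))
```

Quality is always the lowest of the three, since its denominator counts both error types; a high quality score therefore implies both high completeness and high correctness.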
