1,003 research outputs found

    AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming

    Full text link
    The combination of the aerial survey capabilities of Unmanned Aerial Vehicles (UAVs) with the targeted intervention abilities of agricultural Unmanned Ground Vehicles (UGVs) can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. Maps built by robots of different types differ in size, resolution, and scale; the associated geolocation data may be inaccurate and biased; and the repetitiveness of both visual appearance and geometric structure in agricultural settings renders classical map-merging techniques ineffective. In this paper we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation comprising a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large-displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed by exploiting only the meaningful parts of the registered maps. We evaluate our system using real-world data for three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach along with the acquired datasets.
    Comment: Published in IEEE Robotics and Automation Letters, 2019
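As a rough illustration of the flow-then-vote registration idea summarized in this abstract (not the authors' released implementation), the sketch below uses OpenCV's Farneback flow and a RANSAC affine fit as stand-ins for the paper's large-displacement flow and non-rigid alignment; the input grids, histogram bin count and thinning stride are assumptions.

```python
# Hypothetical sketch: align two co-rastered vegetation-index grids (values in [0, 1]) by
# dense flow, a vote for the dominant coherent motion, and a transform fit to the survivors.
import numpy as np
import cv2

def align_maps(ugv_map: np.ndarray, uav_map: np.ndarray):
    src = np.clip(ugv_map * 255, 0, 255).astype(np.uint8)
    dst = np.clip(uav_map * 255, 0, 255).astype(np.uint8)

    # Dense flow from the UGV grid towards the UAV grid (stand-in flow estimator).
    flow = cv2.calcOpticalFlowFarneback(src, dst, None, pyr_scale=0.5, levels=5, winsize=31,
                                        iterations=5, poly_n=7, poly_sigma=1.5, flags=0)
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()

    # Voting: keep only flow vectors that fall in the densest histogram bin,
    # i.e. the dominant coherent motion between the two maps.
    hist, x_edges, y_edges = np.histogram2d(dx, dy, bins=64)
    ix, iy = np.unravel_index(np.argmax(hist), hist.shape)
    keep = ((dx >= x_edges[ix]) & (dx < x_edges[ix + 1]) &
            (dy >= y_edges[iy]) & (dy < y_edges[iy + 1]))

    # Surviving flow vectors become point-to-point correspondences (thinned for speed).
    idx = np.flatnonzero(keep)[::25]
    rows, cols = np.unravel_index(idx, src.shape)
    pts_src = np.column_stack([cols, rows]).astype(np.float32)
    pts_dst = pts_src + np.column_stack([dx[idx], dy[idx]])

    # Preliminary alignment estimated from the correspondences; RANSAC rejects outliers.
    model, _ = cv2.estimateAffinePartial2D(pts_src, pts_dst, method=cv2.RANSAC)
    return model  # 2x3 matrix mapping UGV-map pixels onto the UAV map
```

The actual pipeline additionally uses a DSM channel alongside the vegetation index and ends with a refinement over the meaningful parts of the registered maps, which this sketch omits.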

    Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry

    Get PDF
    Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among other things, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent and fast-developing research in computational fields; however, some issues remain concerning the computationally expensive processes involved in integrating multi-source sensing data. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAVs) and sensors able to capture massive datasets with high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey summarizes previous work according to the most relevant contributions to the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. The surveyed applications focus on agriculture and forestry, since these fields account for most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensor 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, some open issues and future research directions are presented.
    Funding: European Commission, Junta de Andalucia, Instituto de Estudios Gienneses, Spanish Government, DATI-Digital Agriculture Technologies, and the Portuguese Foundation for Science and Technology (grant identifiers 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, UIDB/04033/2020, FPU19/0010).
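One recurring fusion step in the work surveyed here is attaching spectral or thermal values to a 3D point cloud. The sketch below is a minimal, hypothetical version of that step using a georeferenced single-band raster; the simple origin-plus-pixel-size georeferencing and all names are assumptions for illustration.

```python
# Attach a raster band value (e.g. NDVI or a thermal layer) to each 3D point by
# nearest-neighbour lookup at the point's XY location.
import numpy as np

def sample_band_at_points(points_xyz: np.ndarray, band: np.ndarray,
                          origin_xy: tuple, pixel_size: float) -> np.ndarray:
    x0, y0 = origin_xy                                        # raster upper-left corner (map units)
    cols = ((points_xyz[:, 0] - x0) / pixel_size).astype(int)
    rows = ((y0 - points_xyz[:, 1]) / pixel_size).astype(int)  # rows grow downwards
    inside = (rows >= 0) & (rows < band.shape[0]) & (cols >= 0) & (cols < band.shape[1])
    values = np.full(len(points_xyz), np.nan)                  # NaN for points outside the raster
    values[inside] = band[rows[inside], cols[inside]]
    return values

# Usage with synthetic data: a 100 x 100 raster of 1 m pixels and two points.
band = np.random.rand(100, 100)
pts = np.array([[10.4, -5.2, 3.1], [55.0, -70.3, 1.2]])
print(sample_band_at_points(pts, band, origin_xy=(0.0, 0.0), pixel_size=1.0))
```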

    3D Classification of Power Line Scene Using Airborne Lidar Data

    Get PDF
    Failure to adequately maintain vegetation within a power line corridor has been identified as a main cause of the August 14, 2003 electric power blackout; as such, timely and accurate corridor mapping and monitoring are indispensable to mitigating such disasters. Airborne LiDAR (Light Detection And Ranging) has recently been introduced and widely adopted in industry and academia thanks to its potential to automate data processing for scene analysis, including power line corridor mapping. However, today's corridor mapping practice using LiDAR in industry remains an expensive manual process that is not suitable for the large-scale, rapid commercial compilation of corridor maps. Additionally, only a few academic studies have developed algorithms capable of recognizing corridor objects in the power line scene, and these are mostly based on 2-dimensional classification. Thus, the objective of this dissertation is to develop a 3-dimensional classification system able to automatically identify key objects in the power line corridor from large-scale LiDAR data. This dissertation introduces new features for power structures, especially the electric pylon, together with existing features derived through diverse piecewise (i.e., point, line and plane) feature extraction, and then constructs a classification model pool by building individual models, using Random Forests, according to the piecewise feature sets and diverse voltage training samples. Finally, it proposes a Multiple Classifier System (MCS) which provides an optimal committee of models from the pool for classifying a new incoming power line scene. The proposed MCS has been tested on a power line corridor traversed by medium voltage transmission lines (115 kV and 230 kV). The classification results, obtained by optimally selecting the pre-built classification models according to the voltage type of the test corridor, demonstrate good accuracy (89.07%) and a computationally effective time cost (approximately 4 hours/km) without additional training costs.
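The model pool and Multiple Classifier System described above can be sketched roughly as follows; scikit-learn Random Forests stand in for the dissertation's classifiers, and the feature-set names, voltage keys and simple majority vote are assumptions for illustration.

```python
# Hedged sketch of a "model pool + multiple classifier system": one Random Forest per
# (feature set, voltage) pair, with a committee selected by the test scene's voltage type.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_model_pool(training_sets):
    """training_sets: {(feature_set, voltage): (X, y)} -> {(feature_set, voltage): fitted model}."""
    return {key: RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
            for key, (X, y) in training_sets.items()}

def classify_scene(pool, scene_features, voltage):
    """Majority-vote the per-point predictions of the committee matching `voltage`.
    scene_features: {feature_set: feature matrix of the new scene}.
    Class labels are assumed to be small non-negative integers."""
    committee = [(fs, model) for (fs, v), model in pool.items() if v == voltage]
    votes = np.stack([model.predict(scene_features[fs]) for fs, model in committee])
    return np.array([np.bincount(col).argmax() for col in votes.T.astype(int)])
```

Selecting the committee by the voltage type of the test corridor mirrors the abstract's use of pre-built models without additional training.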

    Extracting Physical and Environmental Information of Irish Roads Using Airborne and Mobile Sensors

    Get PDF
    Airborne sensors, including LiDAR and digital cameras, are now used extensively for capturing topographical information, as they are often more economical and efficient than traditional photogrammetric and land surveying techniques. Data captured using airborne sensors can be used to extract 3D information important for, inter alia, city modelling, land use classification and urban planning. According to the EU noise directive (2002/49/EC), the National Road Authority (NRA) in Ireland is responsible for generating noise models for all roads used by more than 8,000 vehicles per day. Accordingly, the NRA has to cover approximately 4,000 km of road, 500 m on each side. These noise models have to be updated every 5 years. Important inputs to the noise model are the digital terrain model (DTM), 3D building data, road width, road centre line, ground surface type and noise barriers. The objective of this research was to extract these objects and topographical information using nationally available datasets acquired from the Ordnance Survey of Ireland (OSI). The OSI uses ALS50-II LiDAR and ADS40 digital sensors for capturing ground information. Both sensors rely on direct georeferencing, minimizing the need for ground control points. Before exploiting the complementary nature of both datasets for information extraction, their planimetric and vertical accuracies were evaluated using independent ground control points. A new method was also developed for registration in case of any mismatch: DSMs from LiDAR and aerial images were used to find common points from which the parameters of a 2D conformal transformation were determined. The developed method was also evaluated by EuroSDR in a project involving a number of partners. These measures were taken to ensure that the inputs to the noise model were of acceptable accuracy, as recommended in the report (Assessment of Exposure to Noise, 2006) by the European Working Group. A combination of image classification techniques was used to extract information by fusing LiDAR and aerial images. The developed method has two phases, viz. object classification and object reconstruction. Buildings and vegetation were classified based on the Normalized Difference Vegetation Index (NDVI) and a normalized digital surface model (nDSM). Holes in building segments were filled by object-oriented multiresolution segmentation. Vegetation remaining amongst buildings was classified using cues obtained from LiDAR; the shortcomings therein were overcome by developing an additional classification cue using multiple returns. The building extents were extracted and assigned a single height value generated from the LiDAR nDSM. The extracted heights were verified against ground truth data acquired using terrestrial survey techniques. Vegetation was further classified into three categories, viz. trees, hedges and tree clusters, based on a shape parameter (for hedges) and distance from neighbouring trees (for clusters). The ground was classified into three surface types, i.e. roads and parking areas, exposed surfaces and grass, using LiDAR intensity, NDVI and the nDSM. Mobile Laser Scanning (MLS) data was used to extract walls and purpose-built noise barriers, since these objects were not extractable from the available airborne sensor data. Principal Component Analysis (PCA) was used to filter points belonging to such objects, and a line was then fitted to these points using robust least squares fitting.
The developed object extraction method was tested objectively in two independent areas, namely Test Area-1 and Test Area-2. The results were thoroughly investigated using three different accuracy assessment methods based on the OSI vector data. The acceptance of any developed method for commercial applications requires completeness and correctness values of 85% and 70%, respectively. The accuracy measures obtained using the developed object extraction method support its applicability for noise modelling.
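The building/vegetation separation described above rests on two cues, NDVI and the nDSM. A minimal sketch of such a rule-based classification is given below; the thresholds and array names are illustrative assumptions, not the values used in this research.

```python
# Rule-based labelling from NDVI and a normalized DSM: tall + green -> vegetation,
# tall + not green -> building, everything else -> ground.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.maximum(nir + red, 1e-6)

def classify(nir, red, ndsm, ndvi_thr=0.2, height_thr=2.5):
    """Return a label grid: 0 = ground, 1 = building, 2 = vegetation (assumed thresholds)."""
    v = ndvi(nir, red)
    labels = np.zeros(ndsm.shape, dtype=np.uint8)   # ground by default
    elevated = ndsm > height_thr                    # clearly above the terrain
    labels[elevated & (v < ndvi_thr)] = 1           # tall and not green -> building
    labels[elevated & (v >= ndvi_thr)] = 2          # tall and green     -> vegetation
    return labels
```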

    A Study on UAS- and TLS-based Spatial Information Construction Methods for Estimating Large-scale Waste Quantities

    Get PDF
    Doctoral dissertation, Graduate School of Environmental Studies, Seoul National University, August 2019.
Damage to people, property, and the environment must be minimized through systematic and efficient handling of large-scale disasters throughout the entire process, from prevention to the response stage. This study focused on the waste quantity calculations that are part of the response process during large-scale disasters. Studies on large-scale waste quantity calculation have been performed in the past, but actual measurements are difficult; therefore, many studies use information from before the event, together with techniques such as modeling and remote sensing, to estimate waste quantities. This study calculated waste quantities based on a UAS (Unmanned Aerial System), a technology now in widespread use, evaluated its accuracy, and compared and analyzed it against existing technologies. A UAS can be seen as the overall process of using UAVs (Unmanned Aerial Vehicles) to capture images and analyze them. Studies on using UAS to build 3D spatial information and evaluate its accuracy have long been conducted, and the approach is applied in a variety of fields. Similarly, 3D spatial information can be built using TLS (Terrestrial Laser Scanning), which is chiefly used in the surveying field; its accuracy is excellent, and it is widely used in fields such as vegetation, construction, civil engineering, cultural assets, and topographical surveys. Large-scale waste quantities can also be calculated by using TLS to build 3D spatial information, but this is seen as unfeasible due to cost and time limitations. This study is broadly divided into three parts. The first part examines the feasibility of using UAS to build 3D spatial information and calculate waste quantity.
The process of using UAS to build 3D spatial information was analyzed in detail, and the optimal flight variables and other variables were identified in order to examine the feasibility of calculating waste quantity. The second part compares and analyzes 3D spatial information based on TLS and UAS technology: the 3D spatial information was compared and analyzed using the M3C2 algorithm, and the optimal waste quantity calculation method was identified. Finally, the third part analyzes the fusion of the two kinds of 3D spatial information and its efficiency: the two technologies were combined to build 3D spatial information, and efficiency was analyzed to find the differences between the three methodologies (UAS, TLS, and the combined method) as well as the optimal waste quantity calculation method. The major flight variables are the flight altitude and image overlap; another variable is the number of ground control points. In addition, the camera interior orientation and the degree of gimbal shaking were analyzed. Through this study, the optimal variables among 56 cases were found. Unlike in past studies, the results were driven by the DW (distance covered on the ground by one image in the width direction) in waste regions with large altitude differences. Normally, the lower the flight altitude, the higher the accuracy of the 3D spatial information, but in this study the accuracy became lower as the altitude became lower. The accuracy of all 56 cases was analyzed, and a correlation between accuracy and the computed amount of waste was found: as the accuracy of the 3D spatial information increased, the calculated waste amounts converged, whereas in 3D spatial information with low accuracy the waste amounts diverged. Through this sequence, the optimal UAS variables for calculating waste amounts were found, and the feasibility of calculating waste amounts based on 3D spatial information was confirmed. The M3C2 algorithm was used to compare the UAS- and TLS-based 3D spatial information, which made it possible to confirm the advantages and disadvantages of each model. As for accuracy, the RMSE of the UAS-based 3D spatial information was 0.032 m and the RMSE of the TLS model was 0.202 m, making the UAS model's accuracy higher. The RMSE of the 3D spatial information combining the two technologies was 0.030 m, the highest accuracy of the three methodologies. In terms of efficiency, however, the analysis confirmed that the UAS-based approach offered the most suitable technology and methodology for large-scale waste amount calculations, producing a model with high accuracy in a short time. In addition, a cost analysis confirmed that building the UAS-based 3D spatial information cost less than TLS. During large-scale disasters it is necessary to respond in a relatively short time to minimize damage and carry out a variety of decision-making; the UAS-based 3D spatial information building method developed in this study can be used for large-scale waste amount calculations and spatial decision-making.
Contents:
I. Introduction
II. Literature Review
   1. Studies on Applying the UAS to Disaster Management
   2. Accuracy of UAS-based 3D Model Construction
   3. Disaster Waste Quantity
III. Materials and Methods
   1. Optimal Flight Parameters for UAV Generating 3D Spatial Information
      1.1. Design of UAV Flight
      1.2. Photogrammetric Processing for the Acquisition of 3D Spatial Information
      1.3. Assessment of the 3D Spatial Information Accuracy
      1.4. Computation of the Amount of Waste
   2. Comparison and Analysis of TLS and UAS Methodology for Optimal Volume Computation
      2.1. TLS and UAS-based 3D Spatial Information Generation and Volume Computation
      2.2. Comparison and Analysis of 3D Spatial Information
   3. Multispace Fusion Methodology-based 3D Spatial Information Generating and Efficiency Analysis
      3.1. Multispace Fusion Methodology-based 3D Spatial Information
      3.2. Efficiency Analysis of 3D Spatial Information for Responding to Large-scale Disasters
III. Result and Discussion
   1. Optimal Flight Parameters for UAV Generating 3D Spatial Information and Investigation of Feasibility
      1.1. Generation of 3D Spatial Information using UAS
      1.2. Assessment of the 3D Spatial Information Accuracy
      1.3. Computation of the Amount of Waste and Optimal Flight Parameters
   2. Comparison and Analysis of TLS and UAS-based 3D Spatial Information
      2.1. Generation of 3D Spatial Information and Volume Computation using UAS
      2.2. Spatial Comparison and Analysis
   3. Multispace Fusion Methodology-based 3D Spatial Information Generating and Efficiency Analysis
      3.1. Multispace Fusion Methodology-based 3D Spatial Information
      3.2. 3D Spatial Information Efficiency Analysis for Responding to Large-scale Disasters
IV. Conclusion
V. Bibliography
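The core quantity in this record is a volume computed from a 3D surface model of the waste pile. A minimal, hypothetical sketch of such a grid-based computation is given below; the array names, the reference ground surface and the 5 cm cell size are assumptions for illustration and do not come from the thesis (which also compares surfaces with the M3C2 algorithm, not shown here).

```python
# Approximate a waste volume by summing (post-event DSM minus reference ground surface)
# over square grid cells.
import numpy as np

def waste_volume(dsm: np.ndarray, ground: np.ndarray, cell_size: float) -> float:
    """Volume in cubic metres for grids in metres with square cells of `cell_size` metres."""
    dh = np.clip(dsm - ground, 0.0, None)   # ignore cells below the reference surface
    return float(dh.sum() * cell_size ** 2)

# Synthetic example: a 1 m-high, 10 m x 10 m pile sampled on a 5 cm grid (~100 m^3).
ground = np.zeros((400, 400))
dsm = ground.copy()
dsm[100:300, 100:300] = 1.0
print(waste_volume(dsm, ground, cell_size=0.05))
```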

    Classification and information structure of the Terrestrial Laser Scanner: methodology for analyzing the registered data of Vila Vella, historic center of Tossa de Mar

    Get PDF
    This paper presents a methodology for an architectural survey based on Terrestrial Laser Scanning (TLS) technology, not as a simple measurement and representation exercise, but with the purpose of understanding the projects being studied, starting from analysis as a process of distinguishing and separating the parts of a whole in order to know their principles or elements. As a case study, we start from the Vila Vella recording conducted by the City's Virtual Modeling Laboratory in 2008, taking it up from the start with respect to registration, georeferencing, filtering and handling. A later stage decomposes and recomposes the data, in terms of floor plans and facades, using semiautomatic classification techniques for the detection of vegetation as well as the relationships between surface planes. This leads to reorganizing the information from 3D data into 2D and 2.5D, considering information management and the characteristics of the presented case study, in the development of methods for building and exploiting new databases to be used by Geographic Information Systems and Remote Sensing.
    Peer Reviewed
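As a rough illustration of the 3D-to-2.5D reorganization mentioned in this abstract, the sketch below collapses a point cloud onto a plan grid that keeps the highest elevation per cell; the 0.1 m cell size and the array layout are assumptions, not values from the study.

```python
# Rasterize an (N, 3) point cloud into a 2.5D elevation grid (max z per plan cell).
import numpy as np

def cloud_to_25d(points_xyz: np.ndarray, cell_size: float = 0.1) -> np.ndarray:
    xy_min = points_xyz[:, :2].min(axis=0)
    cols, rows = ((points_xyz[:, :2] - xy_min) / cell_size).astype(int).T
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)   # NaN where no points fall
    for r, c, z in zip(rows, cols, points_xyz[:, 2]):
        if np.isnan(grid[r, c]) or z > grid[r, c]:
            grid[r, c] = z                                     # keep the highest return per cell
    return grid
```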
