
    Generation of Horizontally Curved Driving Lines for Autonomous Vehicles Using Mobile Laser Scanning Data

    The development of autonomous vehicles demands tremendous advances in three-dimensional (3D) high-definition roadmaps. These roadmaps are capable of providing 3D positioning information with 10-to-20 cm accuracy. With the assistance of 3D high-definition roadmaps, the intractable autonomous driving problem is transformed into a solvable localization issue. Mobile Laser Scanning (MLS) systems can collect accurate, high-density 3D point clouds in road environments for generating 3D high-definition roadmaps. However, few studies have concentrated on driving line generation from 3D MLS point clouds for highly autonomous driving, particularly for accident-prone horizontal curves, where traffic situations are ambiguous and visual cues are unclear. This thesis attempts to develop an effective method for semi-automated generation of horizontally curved driving lines using MLS data. The research methodology proposed in this thesis consists of three steps: road surface extraction, road marking extraction, and driving line generation. First, the points covering the road surface are extracted using curb-based road surface extraction algorithms that rely on both elevation and slope differences. Then, road markings are identified and extracted by a sequence of algorithms consisting of geo-referenced intensity image generation, multi-threshold road marking extraction, and statistical outlier removal. Finally, the conditional Euclidean clustering algorithm is employed, followed by nonlinear least-squares curve fitting, to generate horizontally curved driving lines. A total of six test datasets obtained in Xiamen, China by a RIEGL VMX-450 system were used to evaluate the performance and efficiency of the proposed methodology. The experimental results demonstrate that the proposed road marking extraction algorithms achieve 90.89% recall, 93.04% precision, and 91.95% F1-score. Moreover, unmanned aerial vehicle (UAV) imagery with 4 cm resolution was used to validate the proposed driving line generation algorithms. The validation results demonstrate that horizontally curved driving lines can be effectively generated with 15 cm-level localization accuracy using MLS point clouds. Finally, a comparative study was conducted, both visually and quantitatively, to assess the accuracy and reliability of the generated driving lines.
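
To make the final step above more concrete, the following sketch fits a circular arc to a set of clustered lane-marking points with nonlinear least squares. It is a minimal illustration only: the SciPy-based implementation, the function names, and the synthetic test points are assumptions, not material from the thesis.

```python
# A minimal sketch (not the thesis's code): fit a circular arc to clustered
# lane-marking points with nonlinear least squares, in the spirit of the
# clustering-then-curve-fitting step described above.
import numpy as np
from scipy.optimize import least_squares

def fit_circular_arc(points_xy):
    """Fit a circle (xc, yc, r) to 2D marking points by minimizing radial residuals."""
    x, y = points_xy[:, 0], points_xy[:, 1]

    def residuals(params):
        xc, yc, r = params
        return np.hypot(x - xc, y - yc) - r

    # Initial guess: centroid of the points and mean distance to it.
    x0, y0 = x.mean(), y.mean()
    r0 = np.hypot(x - x0, y - y0).mean()
    fit = least_squares(residuals, x0=[x0, y0, r0])
    return fit.x  # xc, yc, r

# Synthetic example: noisy points along a 200 m-radius horizontal curve.
theta = np.linspace(0.0, 0.3, 200)
pts = np.c_[200.0 * np.cos(theta), 200.0 * np.sin(theta)]
pts += np.random.normal(0.0, 0.05, pts.shape)
xc, yc, r = fit_circular_arc(pts)
print(f"estimated curve radius: {r:.1f} m")
```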

    Merging digital surface models sourced from multi-satellite imagery and their consequent application in automating 3D building modelling

    Recently, especially within the last two decades, the demand for DSMs (Digital Surface Models) and 3D city models has increased dramatically. This has arisen from the emergence of new applications beyond construction or analysis, and consequently from a focus on accuracy and cost. This thesis addresses two linked subjects: first, improving the quality of the DSM by merging different source DSMs using a Bayesian approach; and second, extracting building footprints using approaches including Bayesian approaches, and producing 3D models. Regarding the first topic, a probabilistic model has been generated based on the Bayesian approach in order to merge DSMs from different sources and sensors. The Bayesian approach is well suited to cases where the data are limited, because this limitation can be compensated for by introducing a priori information. The implemented prior is based on the hypothesis that building roof outlines are smooth; for that reason, local entropy has been used to infer the a priori data. In addition to the a priori estimation, the quality of the DSMs is assessed using field checkpoints from differential GNSS. The validation results have shown that the model successfully improved the quality of the DSMs and improved some characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the Maximum Likelihood model, which showed similar quantitative statistical results and better qualitative results. It is worth mentioning that, although the DSMs used in the merging were produced from satellite images, the model can be applied to any type of DSM. The second topic is building footprint extraction based on satellite imagery. An efficient flow-line for automatic building footprint extraction and 3D model construction, from both stereo panchromatic and multispectral satellite imagery, was developed. This flow-line has been applied in an area with different building types, with both hipped and sloped roofs. The flow-line consists of multiple stages. First, data preparation: digital orthoimagery and DSMs are created from WorldView-1 imagery, and Pleiades imagery is used to create a vegetation mask. The orthoimagery then undergoes binary classification into 'foreground' (including buildings, shadows, open water, roads and trees) and 'background' (including grass, bare soil, and clay). From the foreground class, shadows and open water are removed after creating a shadow mask by thresholding the same orthoimagery. Likewise, roads have been removed, for the time being, using a mask created interactively from the orthoimagery. NDVI processing of the Pleiades imagery is used to create a mask for removing the trees. An 'edge map' is produced using Canny edge detection on enhanced orthoimagery to define the exact building boundary outlines. A normalised digital surface model (nDSM) is produced from the original DSM using smoothing and subtracting techniques. Second, building detection and extraction: buildings can be detected, in part, in the nDSM as isolated, relatively elevated 'blobs'. These nDSM 'blobs' are uniquely labelled to identify rudimentary buildings. Each 'blob' is paired with its corresponding 'foreground' area from the orthoimagery. Each 'foreground' area is used as an initial building boundary, which is then vectorised and simplified.
Some unnecessary details in the 'edge map', particularly on the roofs of the buildings, can be removed using mathematical morphology. Some building edges are not detected in the 'edge map' due to low contrast in some parts of the orthoimagery. The 'edge map' is subsequently further improved, also using mathematical morphology, leading to the 'modified edge map'. Finally, a Bayesian approach is used to find the most probable coordinates of the building footprints, based on the 'modified edge map'. The proposed prior for the footprint is based on creating a PDF which assumes that the most probable footprint angle at a corner is 90° and along an edge is 180°, with less probable values given to other angles such as 45° and 135°. The 3D model is constructed by extracting the elevation of the buildings from the DSM and combining it with the regularised building boundary. Validation, both quantitative and qualitative, has shown that the developed process and associated algorithms can successfully extract building footprints and create 3D models.
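
As a rough illustration of the nDSM-based detection step, the sketch below subtracts a smoothed terrain estimate from a DSM and uniquely labels the elevated 'blobs' as candidate buildings. The filter sizes, the 2.5 m height threshold, and the function names are assumptions chosen for the example, not values from the thesis.

```python
# A minimal sketch under assumed parameters (filter sizes, 2.5 m height
# threshold): derive an nDSM from a DSM and label elevated 'blobs' as
# candidate buildings, loosely following the detection step described above.
import numpy as np
from scipy import ndimage

def candidate_building_blobs(dsm, terrain_filter_px=51, min_height_m=2.5):
    """Label relatively elevated regions of a DSM raster as candidate buildings."""
    # Approximate the bare terrain with a large-window minimum filter,
    # then smooth it; subtracting it yields the normalised DSM (nDSM).
    terrain = ndimage.minimum_filter(dsm, size=terrain_filter_px)
    terrain = ndimage.median_filter(terrain, size=terrain_filter_px)
    ndsm = dsm - terrain
    mask = ndsm > min_height_m              # keep only relatively elevated cells
    labels, n_blobs = ndimage.label(mask)   # uniquely label each connected 'blob'
    return ndsm, labels, n_blobs

# Usage with a synthetic 200 x 200 DSM containing one 8 m-high block.
dsm = np.zeros((200, 200)) + 120.0          # flat terrain at 120 m elevation
dsm[80:120, 60:110] += 8.0                  # a building-like blob
_, labels, n_blobs = candidate_building_blobs(dsm)
print(f"candidate buildings found: {n_blobs}")
```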

    Remote Sensing for Land Administration 2.0

    The reprint "Land Administration 2.0" is an extension of the previous reprint "Remote Sensing for Land Administration", another Special Issue in Remote Sensing. This reprint unpacks the responsible use and integration of emerging remote sensing techniques into the domain of land administration, including land registration, cadastre, land use planning, land valuation, land taxation, and land development. The title "Land Administration 2.0" was chosen in reference both to this Special Issue being the second volume on the topic "Land Administration" and to the next-generation requirements of land administration, including demands for 3D, indoor, underground, real-time, high-accuracy, lower-cost, and interoperable land data and information.

    CNN๊ธฐ๋ฐ˜์˜ FusionNet ์‹ ๊ฒฝ๋ง๊ณผ ๋†์ง€ ๊ฒฝ๊ณ„์ถ”์ถœ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•œ ํ† ์ง€ํ”ผ๋ณต๋ถ„๋ฅ˜๋ชจ๋ธ ๊ฐœ๋ฐœ

    Thesis (Master's) -- Seoul National University Graduate School: College of Agriculture and Life Sciences, Department of Landscape Architecture and Rural Systems Engineering (Rural Systems Engineering major), February 2021. Advisor: Inhong Song. The rapid update of land cover maps is necessary because spatial information of land cover is widely used in various areas. However, these maps have been released or updated at intervals of several years, primarily owing to the manual digitizing method, which is time-consuming and labor-intensive. This study aimed to develop a land cover classification model using the concept of a convolutional neural network (CNN) that classifies land cover labels from high-resolution remote sensing (HRRS) images, and to increase the classification accuracy in agricultural areas using a parcel boundary extraction algorithm.
The developed model comprises three modules: pre-processing, land cover classification, and post-processing. The pre-processing module diversifies the perspective of the HRRS images by splitting them into tiles with 75% overlap, to reduce the misclassification that can occur when a single view is used. The land cover classification module was designed based on the FusionNet model structure, and it assigns the optimal land cover type to each pixel of the split HRRS images. The post-processing module determines the final land cover type for each pixel by combining the multi-perspective classification results, and it aggregates the pixel-level results to the parcel-boundary unit in agricultural areas. The developed model was trained with land cover maps and orthographic images (area: 547 km²) from Jeonnam province in Korea. Model validation was conducted at two spatially and temporally distinct sites: Subuk-myeon of Jeonnam province in 2018 and Daeso-myeon of Chungbuk province in 2016. At the respective validation sites, the model's overall accuracies were 0.81 and 0.71, and the kappa coefficients were 0.75 and 0.64, implying substantial model performance. The model performed particularly well when parcel boundaries were considered in agricultural areas, with an overall accuracy of 0.89 and a kappa coefficient of 0.81 (almost perfect). It was concluded that the developed model may help perform rapid and accurate land cover updates, especially for agricultural areas.
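
A rough sketch of the pre- and post-processing ideas described above is given below: the image is split into tiles with 75% overlap, each tile is classified, and the final land cover of each pixel is the majority vote over the overlapping predictions. The tile size, the class count, and the classify_tile stand-in for the FusionNet module are assumptions for illustration.

```python
# A minimal sketch of the pre-/post-processing ideas above, with assumed tile
# size, class count, and a stand-in `classify_tile` for the FusionNet module:
# split the image into tiles with 75% overlap, classify each tile, and take a
# per-pixel majority vote over the overlapping predictions.
import numpy as np

def tile_offsets(length, tile, stride):
    """Start offsets that cover [0, length) and always touch the far border."""
    offsets = list(range(0, length - tile + 1, stride))
    if offsets[-1] != length - tile:
        offsets.append(length - tile)
    return offsets

def classify_with_overlap(image, classify_tile, tile=256, n_classes=8):
    """Per-pixel majority vote over 75%-overlapping tile predictions."""
    stride = tile // 4                       # 75% overlap between neighbouring tiles
    h, w = image.shape[:2]                   # image assumed at least tile x tile
    votes = np.zeros((h, w, n_classes), dtype=np.int32)
    for y in tile_offsets(h, tile, stride):
        for x in tile_offsets(w, tile, stride):
            pred = classify_tile(image[y:y + tile, x:x + tile])  # (tile, tile) class ids
            for c in range(n_classes):
                votes[y:y + tile, x:x + tile, c] += (pred == c)
    return votes.argmax(axis=-1)             # most frequent class wins per pixel

# Usage with a dummy classifier that labels every pixel by mean brightness.
image = np.random.rand(512, 512)
dummy = lambda t: np.full(t.shape[:2], int(t.mean() * 8) % 8, dtype=np.int64)
land_cover = classify_with_overlap(image, dummy)
```

Within agricultural areas, the model described above additionally aggregates these per-pixel labels to extracted parcel boundaries; a similar vote over all pixels inside each parcel polygon would mimic that step.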

    Automatic road network extraction from high resolution satellite imagery using spectral classification methods

    Road networks play an important role in a number of geospatial applications, such as cartography, infrastructure planning, and traffic routing software. Automatic and semi-automatic road network extraction techniques have significantly increased the extraction rate of road networks. However, automated processes still yield some erroneous and incomplete results, and costly human intervention is still required to evaluate results and correct errors. With the aim of improving the accuracy of road extraction systems, three objectives are defined in this thesis. Firstly, the study seeks to develop a flexible semi-automated road extraction system capable of extracting roads from QuickBird satellite imagery. The second objective is to integrate a variety of algorithms within the road network extraction system; the benefits of using each of these algorithms within the proposed system are illustrated. Finally, a fully automated system is proposed by incorporating a number of the algorithms investigated throughout the thesis. Dissertation (MSc) -- University of Pretoria, 2010. Computer Science. Unrestricted.

    Toward Global Localization of Unmanned Aircraft Systems using Overhead Image Registration with Deep Learning Convolutional Neural Networks

    Global localization, in which an unmanned aircraft system (UAS) estimates its unknown current location without access to its take-off location or other locational data from its flight path, is a challenging problem. This research brings together aspects of the remote sensing, geoinformatics, and machine learning disciplines by framing the global localization problem as a geospatial image registration problem in which overhead aerial and satellite imagery serve as a proxy for UAS imagery. A literature review is conducted covering the use of deep learning convolutional neural networks (DLCNNs) for global localization and other related geospatial imagery applications. Differences between geospatial imagery taken from the overhead perspective and terrestrial imagery are discussed, as well as difficulties in using overhead geospatial imagery for image registration due to a lack of suitable machine learning datasets. Geospatial analysis is conducted to identify suitable areas for future UAS imagery collection. One of these areas, Jerusalem northeast (JNE), is selected as the area of interest (AOI) for this research. Multi-modal, multi-temporal, and multi-resolution geospatial overhead imagery is aggregated from a variety of publicly available sources and processed to create a controlled image dataset called Jerusalem northeast rural controlled imagery (JNE RCI). JNE RCI is tested on coarse-grained image registration with the handcrafted feature-based methods SURF and SIFT and with a non-handcrafted feature-based pre-trained, fine-tuned VGG-16 DLCNN. Both the handcrafted and non-handcrafted feature-based methods had difficulty with the coarse-grained registration process. The format of JNE RCI is determined to be unsuitable for the coarse-grained registration process with DLCNNs, and the process to create a new supervised machine learning dataset, Jerusalem northeast machine learning (JNE ML), is covered in detail. A multi-resolution grid-based approach is used, where each grid cell ID is treated as the supervised training label for that respective resolution. Pre-trained, fine-tuned VGG-16 DLCNNs, two custom-architecture two-channel DLCNNs, and a custom chain DLCNN are trained on JNE ML for each spatial resolution of subimages in the dataset. All DLCNNs used could more accurately coarsely register the JNE ML subimages than the pre-trained, fine-tuned VGG-16 DLCNN on JNE RCI. This shows that the process for creating JNE ML is valid and that the dataset is suitable for applying machine learning to the coarse-grained registration problem. All custom-architecture two-channel DLCNNs and the custom chain DLCNN were able to more accurately coarsely register the JNE ML subimages than the fine-tuned, pre-trained VGG-16 approach. Both the two-channel custom DLCNNs and the chain DLCNN were able to generalize well to new imagery that these networks had not previously been trained on. Through the contributions of this research, a foundation is laid for future work on the UAS global localization problem within the rural forested JNE AOI.
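
As a small illustration of the grid-based labelling described above, the sketch below maps a point to an integer grid-cell ID that can serve as the supervised training label at each spatial resolution. The area-of-interest extent, the cell sizes, and the sample coordinates are invented for the example and are not taken from this research.

```python
# A minimal sketch with invented AOI extent, cell sizes, and coordinates: map
# a point to an integer grid-cell ID, which serves as the supervised training
# label at each spatial resolution in the grid-based approach described above.
import math

def grid_cell_id(x, y, x_min, y_min, cell_size, n_cols):
    """Return a single integer cell ID for a point on a regular grid."""
    col = int((x - x_min) // cell_size)
    row = int((y - y_min) // cell_size)
    return row * n_cols + col

# One label set per resolution, e.g. 1 km, 250 m, and 50 m cells over an
# assumed 10 km x 10 km area of interest.
for cell_size in (1000.0, 250.0, 50.0):
    n_cols = math.ceil(10_000.0 / cell_size)
    label = grid_cell_id(x=3120.0, y=7485.0, x_min=0.0, y_min=0.0,
                         cell_size=cell_size, n_cols=n_cols)
    print(f"{cell_size:6.0f} m grid -> training label {label}")
```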

    Proceedings of the 3rd Open Source Geospatial Research & Education Symposium OGRS 2014

    The third Open Source Geospatial Research & Education Symposium (OGRS) was held in Helsinki, Finland, on 10 to 13 June 2014. The symposium was hosted and organized by the Department of Civil and Environmental Engineering, Aalto University School of Engineering, in partnership with the OGRS Community, on the Espoo campus of Aalto University. These proceedings contain the 20 papers presented at the symposium. OGRS is a meeting dedicated to exchanging ideas in, and results from, the development and use of open source geospatial software in both research and education. The symposium offers several opportunities for discussing, learning, and presenting results, principles, methods and practices while supporting a primary theme: how to carry out research and educate academic students using, contributing to, and launching open source geospatial initiatives. Participating in open source initiatives can potentially boost innovation as a value-creating process requiring joint collaboration between academia, foundations, associations, developer communities and industry. Additionally, open source software can improve the efficiency and impact of university education by introducing open and freely usable tools and research results to students, and encouraging them to get involved in projects. This may eventually lead to new community projects and businesses. The symposium contributes to the validation of the open source model in research and education in geoinformatics.

    Preconstruction survey manual 2023

    This Preconstruction Survey Manual has been developed as a guide to provide uniform design practices for Department and consultant personnel conducting surveys and aerial mapping for Department projects. This manual presents most of the information normally required for the preparation of survey requirements for a roadway project.

    Real-Time Visualization for Prevention of Excavation Related Utility Strikes.

    An excavator unintentionally hits a buried utility every 60 seconds in the United States, causing several fatalities and injuries, and billions of dollars in damage each year. Most of these accidents occur either because excavator operators do not know where utilities are buried, or because they cannot perceive where the utilities are relative to the digging excavator. In particular, an operator has no practical means of knowing the distance of an excavator's digging implement (e.g. bucket) to the nearest buried obstructions until they are visually exposed, which means that the first estimate of proximity an operator receives is often after the digging implement has already struck the buried utility. The objective of this dissertation was to remedy this situation and explore new proximity monitoring methods for improving the spatial awareness and decision-making capabilities of excavator operators. The research pursued fundamental knowledge in equipment articulation monitoring and geometric proximity interpretation, and their integration for improving spatial awareness and operator knowledge. A comprehensive computational framework was developed to monitor construction activities in real time in a concurrent 3D virtual world. As an excavator works, a geometric representation of the real ongoing process is recreated in the virtual environment using 3D models of the excavator, buried utilities, and jobsite terrain. Data from sensors installed on the excavator are used to update the position and orientation of the corresponding equipment in the virtual world. Finally, geometric proximity monitoring and collision detection computations are performed between the equipment end-effector and co-located buried utility models to provide distance and impending collision information to the operator, thereby realizing real-time knowledge-based excavator operation and control. The outcome of this research has the potential to transform excavator operation from a primarily skill-based activity to a knowledge-based practice, leading to significant increases in construction productivity and safety. This in turn is expected to help realize tangible cost savings, reduction of potential hazards to citizens, improvement in the competitiveness of U.S. industry, and reduction in the life cycle costs of underground infrastructure. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96133/1/stalmaki_1.pd
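
The core geometric check behind such proximity monitoring can be sketched as a point-to-polyline distance test between the bucket tip, obtained from the sensed equipment pose, and the 3D model of a buried utility. The coordinates, the 0.5 m alert threshold, and the function names below are illustrative assumptions rather than the dissertation's implementation.

```python
# A minimal sketch (illustrative coordinates, threshold, and function names,
# not the dissertation's implementation): the core geometric check, a
# point-to-polyline distance between the sensed bucket tip and a buried
# utility modelled as a 3D polyline, with an alert below a threshold.
import numpy as np

def point_to_segment_distance(p, a, b):
    """Shortest distance from 3D point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def min_distance_to_utility(bucket_tip, utility_polyline):
    """Minimum distance from the bucket tip to any segment of the utility."""
    p = np.asarray(bucket_tip, dtype=float)
    verts = np.asarray(utility_polyline, dtype=float)
    return min(point_to_segment_distance(p, verts[i], verts[i + 1])
               for i in range(len(verts) - 1))

# Bucket tip would come from forward kinematics of the sensed articulation;
# the utility vertices from the co-located 3D model of buried infrastructure.
bucket_tip = (12.4, 3.1, -1.6)
utility = [(0.0, 0.0, -2.0), (10.0, 2.0, -2.1), (20.0, 4.0, -2.3)]
distance = min_distance_to_utility(bucket_tip, utility)
if distance < 0.5:                      # assumed 0.5 m alert threshold
    print(f"WARNING: bucket within {distance:.2f} m of a buried utility")
else:
    print(f"clear: nearest utility {distance:.2f} m away")
```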