
    Classification of 3D Point Clouds Using Color Vegetation Indices for Precision Viticulture and Digitizing Applications

    Remote sensing applied in the digital transformation of agriculture and, more particularly, in precision viticulture offers methods to map field spatial variability to support site-specific management strategies; these can be based on crop canopy characteristics such as the row height or vegetation cover fraction, requiring accurate three-dimensional (3D) information. To derive canopy information, a set of dense 3D point clouds was generated using photogrammetric techniques on images acquired by an RGB sensor onboard an unmanned aerial vehicle (UAV) in two testing vineyards on two different dates. In addition to the geometry, each point also stores information from the RGB color model, which was used to discriminate between vegetation and bare soil. To the best of our knowledge, the new methodology herein presented, consisting of linking point clouds with their spectral information, had not previously been applied to automatically estimate vine height. Therefore, the novelty of this work is based on the application of color vegetation indices in point clouds for the automatic detection and classification of points representing vegetation, and the later ability to determine the height of vines using as a reference the heights of the points classified as soil. Results from on-ground measurements of the heights of individual grapevines were compared with the estimated heights from the UAV point cloud, showing high determination coefficients (R² > 0.87) and low root-mean-square error (0.070 m). This methodology offers new capabilities for the use of RGB sensors onboard UAV platforms as a tool for precision viticulture and digitizing applications.
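The classification step described above can be sketched with a common color vegetation index. This is an illustrative stand-in, not the paper's exact formulation: the Excess Green (ExG) index, the 0.1 threshold, and the median soil elevation as the height reference are all assumptions.

```python
import numpy as np

def classify_and_height(points):
    """points: (N, 6) array of x, y, z, r, g, b with RGB in 0..255.

    Returns a vegetation mask and the heights of vegetation points
    above the median elevation of the points classified as soil.
    """
    rgb = points[:, 3:6] / 255.0
    total = rgb.sum(axis=1)
    total[total == 0] = 1e-9                 # avoid division by zero
    r, g, b = (rgb / total[:, None]).T       # chromatic coordinates
    exg = 2 * g - r - b                      # Excess Green index
    is_veg = exg > 0.1                       # illustrative threshold
    soil_z = np.median(points[~is_veg, 2])   # soil reference elevation
    heights = points[is_veg, 2] - soil_z     # height above soil
    return is_veg, heights
```

In practice the threshold would be tuned per scene (e.g. by Otsu's method), and the soil reference would be computed locally per row rather than globally.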

    Forestry and Arboriculture Applications Using High-Resolution Imagery from Unmanned Aerial Vehicles (UAV)

    Forests cover over one-third of the planet and provide immeasurable benefits to the ecosystem. Forest managers have collected and processed vast amounts of data for use in studying, planning, and managing these forests. Data collection has evolved from completely manual operations to the incorporation of technology that has increased the efficiency of data collection and decreased overall costs. Many technological advances can be incorporated into natural resources disciplines: laser measuring devices, handheld data collectors and, more recently, unmanned aerial vehicles are just a few items playing a major role in the way data are collected and managed. Field hardware has also been aided by new and improved mobile and computer software. Over the course of this study, field technology and computer advancements were utilized to aid forestry and arboricultural applications. Three-dimensional point cloud data representing tree shape and height were extracted and examined for accuracy. Traditional fieldwork measurements (tree height, tree diameter, and canopy metrics) were derived from remotely sensed data using new modeling techniques, resulting in time and cost savings. Using high-resolution aerial photography, individual tree species were classified to support tree inventory development. Point clouds were used to create digital elevation models (DEM), which can further be used for hydrology analysis and to derive slope, aspect, and hillshade layers. Digital terrain models (DTM) are managed in a geographic information system (GIS) and, along with DEMs, used to create canopy height models (CHM). The results of this study can enhance how these data are utilized and prompt further research and new initiatives that will improve and garner new insight into the use of remotely sensed data in forest management.
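The canopy height model (CHM) mentioned above is conventionally the difference between a surface model and a terrain model. A minimal sketch, assuming plain elevation arrays rather than georeferenced rasters (a real workflow would read GeoTIFFs, e.g. with rasterio):

```python
import numpy as np

def canopy_height_model(dsm, dtm, nodata=-9999.0):
    """CHM = surface model minus terrain model, clipped at zero.

    Cells flagged as nodata in either input propagate to the output.
    """
    dsm = np.asarray(dsm, dtype=float)
    dtm = np.asarray(dtm, dtype=float)
    chm = dsm - dtm
    chm[chm < 0] = 0.0                               # canopy cannot be below ground
    chm[(dsm == nodata) | (dtm == nodata)] = nodata  # propagate nodata cells
    return chm
```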

    Bayesian Methods for Radiometric Calibration in Motion Picture Encoding Workflows

    A method for estimating the Camera Response Function (CRF) of an electronic motion picture camera is presented in this work. Accurate estimation of the CRF allows camera exposures to be properly encoded into motion picture post-production workflows, such as the Academy Color Encoding Specification (ACES); this is a necessary step to correctly combine images from different capture sources into one cohesive final production and to minimize non-creative manual adjustments. Although there are well-known standard CRFs implemented in typical video camera workflows, motion picture workflows and newer High Dynamic Range (HDR) imaging workflows have introduced new standard CRFs as well as custom and proprietary CRFs that need to be known for proper post-production encoding of the camera footage. Current methods to estimate this function rely on measurement charts, use multiple static images taken under different exposures or lighting conditions, or assume a simplistic model of the function's shape. All of these methods are difficult to fit into motion picture production and post-production workflows, where the use of test charts and varying camera or scene setups is impractical and where a method based solely on camera footage, comprising a single image or a series of images, would be advantageous. This work presents a methodology, initially based on the work of Lin, Gu, Yamazaki and Shum, that takes into account edge color mixtures in an image or image sequence that are affected by the non-linearity introduced by a CRF. In addition, a novel feature based on image noise is introduced to overcome some of the limitations of edge color mixtures. These features provide information that is included in the likelihood distribution of a Bayesian framework to estimate the CRF as the expected value of a posterior probability distribution, which is itself approximated by a Markov Chain Monte Carlo (MCMC) sampling algorithm.
    This allows for a more complete description of the CRF than methods like Maximum Likelihood (ML) and Maximum A Posteriori (MAP) estimation. The CRF is modeled by Principal Component Analysis (PCA) of the Database of Response Functions (DoRF) compiled by Grossberg and Nayar, and the prior probability distribution is modeled by a Gaussian Mixture Model (GMM) of the PCA coefficients for the responses in the DoRF. CRF estimation results are presented for an ARRI electronic motion picture camera, showing the improved estimation accuracy and practicality of this method over previous methods for motion picture post-production workflows.
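The posterior-expectation step can be illustrated with a generic random-walk Metropolis-Hastings sampler over a low-dimensional coefficient vector. The log-posterior below is a stand-in; the dissertation's actual likelihood (edge color mixtures and noise features) and GMM prior over PCA coefficients are not reproduced here.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk MH sampler; returns an array of posterior samples.

    The posterior-mean estimate (as used for the CRF coefficients) is
    simply metropolis_hastings(...).mean(axis=0).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.normal(scale=step, size=x.shape)  # Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```

A real implementation would discard burn-in samples and monitor the acceptance rate; both are omitted here for brevity.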

    A metrological approach for multispectral photogrammetry
    This paper presents the design and development of a three-dimensional reference object for the metrological quality assessment of photogrammetry-based techniques, for application in the cultural heritage field. The reference object was 3D printed, with a nominal manufacturing uncertainty of the order of 0.01 mm. The object was realized as a dodecahedron, and a different pictorial preparation was inserted in each face. The preparations include several pigments, binders, and varnishes, to be representative of the materials and techniques used historically by artists. Since the reference object's shape, size, and uncertainty are known, it is possible to use it as a reference to evaluate the quality of a 3D model from the metric point of view. In particular, dimensional precision and accuracy are verified using the standard deviation of measurements acquired on the reference object and on the final 3D model. In addition, the object can be used as a reference for UV-induced Visible Luminescence (UVL) acquisition, since the materials employed are UV-fluorescent. Results obtained with visible-reflected and UVL images are presented and discussed.
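The dimensional verification described above can be sketched as comparing distances measured on the 3D model against the reference object's nominal dimensions, reporting a systematic error (accuracy) and a spread (precision). The function name and values are illustrative, not from the paper.

```python
import statistics

def dimensional_check(nominal, measured):
    """Compare model measurements against nominal reference dimensions.

    Returns (accuracy, precision): the mean error, and the sample
    standard deviation of the errors.
    """
    errors = [m - n for n, m in zip(nominal, measured)]
    accuracy = sum(errors) / len(errors)   # systematic (mean) error
    precision = statistics.stdev(errors)   # spread of the errors
    return accuracy, precision
```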

    Laser Scanner Technology

    Laser scanning technology plays an important role in science and engineering. The aim of scanning is usually to create a digital version of an object's surface. Multiple scans are sometimes performed with multiple cameras to capture all sides of the scene under study. Optical tests are commonly used to demonstrate the power of laser scanning technology in modern industry and in research laboratories. This book describes recent contributions reported by laser scanning technology in different areas around the world. The main topics of laser scanning described in this volume include full body scanning, traffic management, 3D survey processes, bridge monitoring, tracking of scanning, human sensing, three-dimensional modelling, glacier monitoring, and digitizing heritage monuments.

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors, including laser range scanners, video cameras, and pose estimation hardware, on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry, while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool.
    The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
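As a rough illustration of neighborhood-based point-cloud smoothing (a generic distance-weighted filter, not the dissertation's detail-enhancing algorithm), each point can be moved toward a weighted average of its neighbors, with nearer neighbors contributing more:

```python
import numpy as np

def neighborhood_denoise(points, radius=0.35, sigma=0.2):
    """Smooth a point cloud by distance-weighted local averaging.

    points: (N, 3) array; radius bounds the neighborhood, sigma controls
    how quickly a neighbor's influence falls off with distance.
    """
    points = np.asarray(points, dtype=float)
    out = points.copy()
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbr = d < radius                                 # local neighborhood
        w = np.exp(-(d[nbr] ** 2) / (2 * sigma ** 2))    # distance falloff
        out[i] = (w[:, None] * points[nbr]).sum(axis=0) / w.sum()
    return out
```

A detail-preserving variant would add a second, range-based weight (as in bilateral filtering) so that sharp features are averaged less than flat noisy regions.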

    Development of a Land Cover Classification Model Using a CNN-Based FusionNet and an Agricultural Parcel Boundary Extraction Algorithm

    Master's thesis (M.S.) -- Seoul National University Graduate School: College of Agriculture and Life Sciences, Department of Landscape Architecture and Rural Systems Engineering (Rural Systems Engineering Major), February 2021. Advisor: Inhong Song. The rapid update of land cover maps is necessary because spatial information on land cover is widely used in many areas. However, these maps have been released or updated at intervals of several years, primarily owing to the manual digitizing method of production, which is time-consuming and labor-intensive. This study aimed to develop a land cover classification model based on a convolutional neural network (CNN) that classifies land cover labels from high-resolution remote sensing (HRRS) images, and to increase the classification accuracy in agricultural areas using a parcel boundary extraction algorithm. The developed model comprises three modules: pre-processing, land cover classification, and post-processing. The pre-processing module diversifies the perspective on the HRRS imagery by splitting it into tiles with 75% overlap, reducing the misclassification that can occur when classifying from a single view. The land cover classification module was designed based on the FusionNet model structure and assigns the optimal land cover type to each pixel of the split HRRS images. The post-processing module determines the final land cover type of each pixel by taking the most frequent label across the overlapping classification results; in agricultural areas it additionally extracts parcel boundaries and aggregates the pixel-level results per parcel, so that each parcel receives a single land cover label. The model was trained with 2018 orthographic images and land cover maps of the Jeonnam province in Korea (area: 547 km²). Validation was conducted at two spatially and temporally distinct sites: Subuk-myeon of Jeonnam province in 2018 and Daeso-myeon of Chungbuk province in 2016. At the respective validation sites, the overall accuracies were 0.81 and 0.71, and the kappa coefficients were 0.75 and 0.64, indicating substantial classification accuracy. Performance was particularly good when parcel boundaries were considered in agricultural areas, with an overall accuracy of 0.89 and a kappa coefficient of 0.81 (almost perfect agreement). The developed model may therefore support current land cover classification methods, especially in agricultural areas, and contribute to the rapid and accurate updating of land cover maps.
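The post-processing described in the abstract (the most frequent label per pixel across overlapping predictions, then a single majority label per agricultural parcel) can be sketched as follows; the array shapes, labels, and parcel ids are illustrative assumptions:

```python
import numpy as np

def aggregate_labels(stacked_preds, parcel_ids=None):
    """stacked_preds: (k, H, W) integer labels from k overlapping views.

    Returns an (H, W) label map: per-pixel mode across the k views,
    optionally unified so each parcel carries its majority label.
    """
    k, h, w = stacked_preds.shape
    flat = stacked_preds.reshape(k, -1)
    n_labels = int(flat.max()) + 1
    # per-pixel label counts across the k overlapping predictions
    counts = np.apply_along_axis(np.bincount, 0, flat, minlength=n_labels)
    labels = counts.argmax(axis=0).reshape(h, w)
    if parcel_ids is not None:                 # unify labels within each parcel
        for pid in np.unique(parcel_ids):
            mask = parcel_ids == pid
            vals, cnts = np.unique(labels[mask], return_counts=True)
            labels[mask] = vals[cnts.argmax()]
    return labels
```

Ties resolve to the smallest label here; a production pipeline would define an explicit tie-breaking rule.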