1,906 research outputs found

    Assessment of Structure from Motion for Reconnaissance Augmentation and Bandwidth Usage Reduction

    Modern militaries rely upon remote image sensors for real-time intelligence. A typical remote system consists of an unmanned aerial vehicle (UAV) with an attached camera. A video stream is sent from the UAV, through a bandwidth-constrained satellite connection, to an intelligence processing unit. In this research, an upgrade to this method of collection is proposed. A set of synthetic images of a scene captured by a UAV in a virtual environment is sent through a pipeline of computer vision algorithms, collectively known as Structure from Motion. The output of Structure from Motion, a three-dimensional model, is then assessed in a 3D virtual world as a possible replacement for the images from which it was created. This study presents Structure from Motion results from a modifiable spiral flight path and compares the geoaccuracy of each result. A flattening of height is observed, and an automated compensation for this flattening is performed. Each reconstruction is also compressed, and its compressed size is compared with the compressed size of the source images. A 49-60% reduction in required space is shown.
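The bandwidth comparison the abstract describes can be illustrated with a minimal sketch; the synthetic frame sequence, point count, and lossless `zlib` compression are all stand-in assumptions, not the thesis pipeline or its data formats.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a short noisy video sequence versus the sparse
# 3D model (point positions plus colors) reconstructed from it.
frames = rng.integers(0, 256, size=(30, 128, 128), dtype=np.uint8)
points = rng.random((5_000, 6)).astype(np.float32)  # x, y, z, r, g, b

compressed_frames = len(zlib.compress(frames.tobytes()))
compressed_points = len(zlib.compress(points.tobytes()))

# The 49-60% figure in the abstract depends on scene and flight path;
# this only shows how such a comparison would be measured.
savings = 1 - compressed_points / compressed_frames
print(f"model is {savings:.0%} smaller than the image set")
```

In practice the images would be JPEG/video-coded and the model stored as a mesh or point-cloud format; the measurement, however, is the same size ratio.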

    One-Two-One Network for Compression Artifacts Reduction in Remote Sensing

    Compression artifacts reduction (CAR) is a challenging problem in the field of remote sensing. Most recent deep learning based methods have demonstrated superior performance over the previous hand-crafted methods. In this paper, we propose an end-to-end one-two-one (OTO) network that combines different deep models, i.e., summation and difference models, to solve the CAR problem. In particular, the difference model, motivated by the Laplacian pyramid, is designed to obtain the high-frequency information, while the summation model aggregates the low-frequency information. We provide an in-depth investigation into our OTO architecture based on the Taylor expansion, which shows that these two kinds of information can be fused in a nonlinear scheme to gain greater capacity for handling complicated image compression artifacts, especially the blocking effect in compression. Extensive experiments demonstrate the superior performance of the OTO networks compared to state-of-the-art methods on remote sensing datasets and other benchmark datasets. The source code will be available here: https://github.com/bczhangbczhang/
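The summation/difference split the paper motivates by the Laplacian pyramid can be sketched as a single-level band decomposition; the plain box blur and the tiny test image below are illustrative assumptions, not the OTO architecture itself.

```python
import numpy as np


def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple box blur via edge padding and neighborhood averaging."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)


img = np.arange(64, dtype=np.float64).reshape(8, 8)
low = box_blur(img)   # low-frequency band (what a summation model would aggregate)
high = img - low      # high-frequency band (what a difference model would refine)

# By construction the two bands sum back to the image exactly, which is
# why the two branches can be fused without losing information.
assert np.allclose(low + high, img)
```

In the actual network both bands are processed by learned sub-networks before fusion; the decomposition above only shows why the two streams are complementary.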

    Using Shadows to Detect Targets in Synthetic Aperture Radar Imagery

    Synthetic Aperture Radar (SAR) can generate high resolution imagery of remote scenes by combining the phase information of multiple radar pulses along a given path. SAR-based Intelligence, Surveillance, and Reconnaissance (ISR) has the advantage over optical ISR that it can provide usable imagery in adverse weather or nighttime conditions. Certain radar frequencies can even result in foliage or limited soil penetration, enabling imagery to be created of objects of interest that would otherwise be hidden from optical surveillance systems. This thesis demonstrates the capability of locating stationary targets of interest based on the locations of their shadows and the characteristics of pixel intensity distributions within the SAR imagery. Shadows, in SAR imagery, represent the absence of a detectable signal reflection due to the physical obstruction of the transmitted radar energy. An object's shadow indicates its true geospatial location. This thesis demonstrates target detection based on shadow location using three types of target vehicles, each located in urban and rural clutter scenes, from the publicly available Moving and Stationary Target Acquisition and Recognition (MSTAR) data set. The proposed distribution characterization method for detecting shadows demonstrates the capability of isolating distinct regions within SAR imagery and using the junctions between shadow and non-shadow regions to locate individual shadow-casting objects. Targets of interest are then located within that collection of objects with an average detection accuracy rate of 93%. The shadow-based target detection algorithm results in a lower false alarm rate compared to previous research conducted with the same data set, with 71% fewer false alarms for the same clutter region. Utilizing the absence of signal, in conjunction with surrounding signal reflections, provides accurate stationary target detection. This capability could greatly assist in track initialization or the location of otherwise obscured targets of interest.
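A toy version of the shadow cue (absence of return means low intensity) might look like the following; the synthetic image, threshold value, and shadow placement are all assumptions for illustration, not the MSTAR procedure or the thesis's distribution characterization method.

```python
import numpy as np

# Toy SAR magnitude image: bright clutter with one dark shadow region.
rng = np.random.default_rng(1)
sar = rng.uniform(0.5, 1.0, size=(32, 32))
sar[10:20, 12:18] = rng.uniform(0.0, 0.05, size=(10, 6))  # hypothetical shadow

# Shadows are an *absence* of return, so threshold on low intensity.
shadow_mask = sar < 0.1

ys, xs = np.nonzero(shadow_mask)
bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
print("shadow bounding box:", bbox)  # → shadow bounding box: (10, 12, 19, 17)
```

Because the shadow is cast on the ground along the radar line of sight, its near edge (rather than the bright return) is what anchors the occluding object's true geospatial position.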

    CNN๊ธฐ๋ฐ˜์˜ FusionNet ์‹ ๊ฒฝ๋ง๊ณผ ๋†์ง€ ๊ฒฝ๊ณ„์ถ”์ถœ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•œ ํ† ์ง€ํ”ผ๋ณต๋ถ„๋ฅ˜๋ชจ๋ธ ๊ฐœ๋ฐœ

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๋†์—…์ƒ๋ช…๊ณผํ•™๋Œ€ํ•™ ์ƒํƒœ์กฐ๊ฒฝ.์ง€์—ญ์‹œ์Šคํ…œ๊ณตํ•™๋ถ€(์ง€์—ญ์‹œ์Šคํ…œ๊ณตํ•™์ „๊ณต), 2021. 2. ์†ก์ธํ™.ํ† ์ง€์ด์šฉ์ด ๋น ๋ฅด๊ฒŒ ๋ณ€ํ™”ํ•จ์— ๋”ฐ๋ผ, ํ† ์ง€ ํ”ผ๋ณต์— ๋Œ€ํ•œ ๊ณต๊ฐ„์ •๋ณด๋ฅผ ๋‹ด๊ณ  ์žˆ๋Š” ํ† ์ง€ ํ”ผ๋ณต ์ง€๋„์˜ ์‹ ์†ํ•œ ์ตœ์‹ ํ™”๋Š” ํ•„์ˆ˜์ ์ด๋‹ค. ํ•˜์ง€๋งŒ, ํ˜„ ํ† ์ง€ ํ”ผ๋ณต ์ง€๋„๋Š” ๋งŽ์€ ์‹œ๊ฐ„๊ณผ ๋…ธ๋™๋ ฅ์„ ์š”๊ตฌํ•˜๋Š” manual digitizing ๋ฐฉ๋ฒ•์œผ๋กœ ์ œ์ž‘๋จ์— ๋”ฐ๋ผ, ํ† ์ง€ ํ”ผ๋ณต ์ง€๋„์˜ ์—…๋ฐ์ดํŠธ ๋ฐ ๋ฐฐํฌ์— ๊ธด ์‹œ๊ฐ„ ๊ฐ„๊ฒฉ์ด ๋ฐœ์ƒํ•˜๋Š” ์‹ค์ •์ด๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” convolutional neural network (CNN) ๊ธฐ๋ฐ˜์˜ ์ธ๊ณต์‹ ๊ฒฝ๋ง์„ ์ด์šฉํ•˜์—ฌ high-resolution remote sensing (HRRS) ์˜์ƒ์œผ๋กœ๋ถ€ํ„ฐ ํ† ์ง€ ํ”ผ๋ณต์„ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ชจ๋ธ์„ ๊ฐœ๋ฐœํ•˜๊ณ , ํŠนํžˆ ๋†์ง€ ๊ฒฝ๊ณ„์ถ”์ถœ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ ์šฉํ•˜์—ฌ ๋†์—…์ง€์—ญ์—์„œ ๋ถ„๋ฅ˜ ์ •ํ™•๋„๋ฅผ ๊ฐœ์„ ํ•˜๊ณ ์ž ํ•˜์˜€๋‹ค. ๊ฐœ๋ฐœ๋œ ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜๋ชจ๋ธ์€ ์ „์ฒ˜๋ฆฌ(pre-processing) ๋ชจ๋“ˆ, ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜(land cover classification) ๋ชจ๋“ˆ, ๊ทธ๋ฆฌ๊ณ  ํ›„์ฒ˜๋ฆฌ(post-processing) ๋ชจ๋“ˆ์˜ ์„ธ ๋ชจ๋“ˆ๋กœ ๊ตฌ์„ฑ๋œ๋‹ค. ์ „์ฒ˜๋ฆฌ ๋ชจ๋“ˆ์€ ์ž…๋ ฅ๋œ HRRS ์˜์ƒ์„ 75%์”ฉ ์ค‘์ฒฉ ๋ถ„ํ• ํ•˜์—ฌ ๊ด€์ ์„ ๋‹ค์–‘ํ™”ํ•˜๋Š” ๋ชจ๋“ˆ๋กœ, ํ•œ ๊ด€์ ์—์„œ ํ† ์ง€ ํ”ผ๋ณต์„ ๋ถ„๋ฅ˜ํ•  ๋•Œ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์˜ค๋ถ„๋ฅ˜๋ฅผ ์ค„์ด๊ณ ์ž ํ•˜์˜€๋‹ค. ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜ ๋ชจ๋“ˆ์€ FusionNet model ๊ตฌ์กฐ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๊ฐœ๋ฐœ๋˜์—ˆ๊ณ , ์ด๋Š” ๋ถ„ํ• ๋œ HRRS ์ด๋ฏธ์ง€์˜ ํ”ฝ์…€๋ณ„๋กœ ์ตœ์  ํ† ์ง€ ํ”ผ๋ณต์„ ๋ถ€์—ฌํ•˜๋„๋ก ์„ค๊ณ„๋˜์—ˆ๋‹ค. ํ›„์ฒ˜๋ฆฌ ๋ชจ๋“ˆ์€ ํ”ฝ์…€๋ณ„ ์ตœ์ข… ํ† ์ง€ ํ”ผ๋ณต์„ ๊ฒฐ์ •ํ•˜๋Š” ๋ชจ๋“ˆ๋กœ, ๋ถ„ํ• ๋œ HRRS ์ด๋ฏธ์ง€์˜ ๋ถ„๋ฅ˜๊ฒฐ๊ณผ๋ฅผ ์ทจํ•ฉํ•˜์—ฌ ์ตœ๋นˆ๊ฐ’์„ ์ตœ์ข… ํ† ์ง€ ํ”ผ๋ณต์œผ๋กœ ๊ฒฐ์ •ํ•œ๋‹ค. ์ถ”๊ฐ€๋กœ ๋†์ง€์—์„œ๋Š” ๋†์ง€๊ฒฝ๊ณ„๋ฅผ ์ถ”์ถœํ•˜๊ณ , ํ•„์ง€๋ณ„ ๋ถ„๋ฅ˜๋œ ํ† ์ง€ ํ”ผ๋ณต์„ ์ง‘๊ณ„ํ•˜์—ฌ ํ•œ ํ•„์ง€์— ๊ฐ™์€ ํ† ์ง€ ํ”ผ๋ณต์„ ๋ถ€์—ฌํ•˜์˜€๋‹ค. 
๊ฐœ๋ฐœ๋œ ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜๋ชจ๋ธ์€ ์ „๋ผ๋‚จ๋„ ์ง€์—ญ(๋ฉด์ : 547 km2)์˜ 2018๋…„ ์ •์‚ฌ์˜์ƒ๊ณผ ํ† ์ง€ ํ”ผ๋ณต ์ง€๋„๋ฅผ ์ด์šฉํ•˜์—ฌ ํ•™์Šต๋˜์—ˆ๋‹ค. ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜๋ชจ๋ธ ๊ฒ€์ฆ์€ ํ•™์Šต์ง€์—ญ๊ณผ ์‹œ๊ฐ„, ๊ณต๊ฐ„์ ์œผ๋กœ ๊ตฌ๋ถ„๋œ, 2018๋…„ ์ „๋ผ๋‚จ๋„ ์ˆ˜๋ถ๋ฉด๊ณผ 2016๋…„ ์ถฉ์ฒญ๋ถ๋„ ๋Œ€์†Œ๋ฉด์˜ ๋‘ ๊ฒ€์ฆ์ง€์—ญ์—์„œ ์ˆ˜ํ–‰๋˜์—ˆ๋‹ค. ๊ฐ ๊ฒ€์ฆ์ง€์—ญ์—์„œ overall accuracy๋Š” 0.81, 0.71๋กœ ์ง‘๊ณ„๋˜์—ˆ๊ณ , kappa coefficients๋Š” 0.75, 0.64๋กœ ์‚ฐ์ •๋˜์–ด substantial ์ˆ˜์ค€์˜ ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜ ์ •ํ™•๋„๋ฅผ ํ™•์ธํ•˜์˜€๋‹ค. ํŠนํžˆ, ๊ฐœ๋ฐœ๋œ ๋ชจ๋ธ์€ ํ•„์ง€ ๊ฒฝ๊ณ„๋ฅผ ๊ณ ๋ คํ•œ ๋†์—…์ง€์—ญ์—์„œ overall accuracy 0.89, kappa coefficient 0.81๋กœ almost perfect ์ˆ˜์ค€์˜ ์šฐ์ˆ˜ํ•œ ๋ถ„๋ฅ˜ ์ •ํ™•๋„๋ฅผ ๋ณด์˜€๋‹ค. ์ด์— ๊ฐœ๋ฐœ๋œ ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜๋ชจ๋ธ์€ ํŠนํžˆ ๋†์—…์ง€์—ญ์—์„œ ํ˜„ ํ† ์ง€ ํ”ผ๋ณต ๋ถ„๋ฅ˜ ๋ฐฉ๋ฒ•์„ ์ง€์›ํ•˜์—ฌ ํ† ์ง€ ํ”ผ๋ณต ์ง€๋„์˜ ๋น ๋ฅด๊ณ  ์ •ํ™•ํ•œ ์ตœ์‹ ํ™”์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์ƒ๊ฐ๋œ๋‹ค.The rapid update of land cover maps is necessary because spatial information of land cover is widely used in various areas. However, these maps have been released or updated in the interval of several years primarily owing to the manual digitizing method, which is time-consuming and labor-intensive. This study was aimed to develop a land cover classification model using the concept of a convolutional neural network (CNN) that classifies land cover labels from high-resolution remote sensing (HRRS) images and to increase the classification accuracy in agricultural areas using the parcel boundary extraction algorithm. The developed model comprises three modules, namely the pre-processing, land cover classification, and post-processing modules. The pre-processing module diversifies the perspective of the HRRS images by separating images with 75% overlaps to reduce the misclassification that can occur in a single image. 
    The land cover classification module was designed based on the FusionNet model structure, and the optimal land cover type was assigned for each pixel of the separated HRRS images. The post-processing module determines the ultimate land cover type for each pixel by summing up the several-perspective classification results and, in agricultural areas, aggregating the pixel classification results within each parcel boundary. The developed model was trained with land cover maps and orthographic images (area: 547 km2) from the Jeonnam province in Korea. Model validation was conducted at two spatially and temporally distinct sites: Subuk-myeon of Jeonnam province in 2018 and Daeso-myeon of Chungbuk province in 2016. At the respective validation sites, the model's overall accuracies were 0.81 and 0.71, and kappa coefficients were 0.75 and 0.64, implying substantial model performance. The model performed particularly well when considering parcel boundaries in agricultural areas, exhibiting an overall accuracy of 0.89 and a kappa coefficient of 0.81 (almost perfect). It was concluded that the developed model may help perform rapid and accurate land cover updates, especially for agricultural areas. Contents: Chapter 1. Introduction (1.1 Study background; 1.2 Objective of thesis); Chapter 2. Literature review (2.1 Development of remote sensing technique; 2.2 Land cover segmentation; 2.3 Land boundary extraction); Chapter 3. Development of the land cover classification model (3.1 Conceptual structure; 3.2 Pre-processing module; 3.3 CNN-based land cover classification module; 3.4 Post-processing module); Chapter 4. Verification of the land cover classification model (4.1 Study area and data acquisition; 4.2 Training; 4.3 Verification method; 4.4 Verification results); Chapter 5. Conclusions; References; Abstract in Korean.
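The post-processing step, deciding each pixel's final label as the mode over the several overlapping-tile predictions, can be sketched with made-up label arrays; the tiny 2x2 maps and four "views" below are assumptions, not the thesis's data or class scheme.

```python
import numpy as np

# Hypothetical per-pixel labels from four overlapping tile predictions.
preds = np.array([
    [[0, 1], [2, 2]],
    [[0, 1], [2, 1]],
    [[0, 0], [2, 2]],
    [[1, 1], [2, 2]],
])  # shape: (n_views, H, W)

# Majority vote per pixel: take the mode across the view axis by
# counting votes for each class and picking the largest.
n_classes = int(preds.max()) + 1
counts = np.stack([(preds == c).sum(axis=0) for c in range(n_classes)])
final = counts.argmax(axis=0)

print(final)  # → [[0 1]
              #    [2 2]]
```

The thesis additionally snaps labels to parcel boundaries in agricultural areas; that step is the same voting idea applied per parcel instead of per pixel.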

    The Effects of Signal and Image Compression of SAR Data on Change Detection Algorithms

    With massive amounts of SAR imagery and data being collected, the need for effective compression techniques is growing. One of the most popular applications of remote sensing is change detection, which compares two geo-registered images for changes in the scene. While lossless compression is needed for signal compression, the same is not often required for image compression. In almost every case the compression ratios are much higher with lossy compression, making it more appealing when bandwidth and storage become an issue. This research analyzes different types of compression techniques adapted for SAR imagery and tests them with three different change detection algorithms. Many algorithms exist that allow large compression ratios; however, the usefulness of the data is always the final concern. It is necessary to identify compression methods that will not degrade the performance of change detection analysis.
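The interaction the abstract studies, lossy compression feeding a change detector, can be sketched with the crudest possible stand-ins: uniform quantization as the "lossy codec" and pixel differencing as the detector. Both are illustrative assumptions, not the compression schemes or the three algorithms tested in the research.

```python
import numpy as np


def quantize(img: np.ndarray, step: float) -> np.ndarray:
    """Crude lossy 'compression': uniform quantization of pixel values."""
    return np.round(img / step) * step


rng = np.random.default_rng(2)
before = rng.uniform(0, 1, size=(16, 16))
after = before.copy()
after[4:8, 4:8] += 0.5            # hypothetical scene change (a 4x4 patch)

step = 0.1                         # larger step = more compression, more loss
diff = np.abs(quantize(after, step) - quantize(before, step))
change_map = diff > 0.25           # threshold chosen to survive quantization noise

print("changed pixels:", int(change_map.sum()))  # → changed pixels: 16
```

The experiment pattern is the point: sweep the compression severity (`step` here) and watch whether the change map degrades, which is exactly the trade-off the research quantifies for real SAR codecs.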

    Convolutional Neural Networks - Generalizability and Interpretations


    Clearing the Clouds: Extracting 3D information from amongst the noise

    Advancements permitting the rapid extraction of 3D point clouds from a variety of imaging modalities across the global landscape have provided a vast collection of high-fidelity digital surface models. This has created a situation with an unprecedented overabundance of 3D observations, which greatly outstrips our current capacity to manage and infer actionable information. While years of research have removed some of the manual analysis burden for many tasks, human analysis is still a cornerstone of 3D scene exploitation. This is especially true for complex tasks which necessitate comprehension of scale, texture, and contextual learning. In order to ameliorate the interpretation burden and enable scientific discovery from this volume of data, new processing paradigms are necessary to keep pace. With this context, this dissertation advances fundamental and applied research in 3D point cloud data pre-processing and deep learning from a variety of platforms. We show that the representation of 3D point data is often not ideal and sacrifices fidelity, context, or scalability. First, ground-scanning terrestrial Light Detection and Ranging (LiDAR) models are shown to have an inherent statistical bias, and we present a state-of-the-art method for correcting it while preserving data fidelity and maintaining semantic structure. This technique is assessed in the dense canopy of Micronesia, where it is the best at retaining high levels of detail under extreme down-sampling (< 1%). Airborne systems are then explored with a method for pre-processing data to preserve global contrast and semantic content for deep learners. This approach is validated on a building-footprint detection task using airborne imagery of eastern TN from the 3D Elevation Program (3DEP), where it achieves significant accuracy improvements over traditional techniques.
    Finally, topography data spanning the globe is used to assess past and present global land cover change. Utilizing Shuttle Radar Topography Mission (SRTM) and Moderate Resolution Imaging Spectroradiometer (MODIS) data, paired with the airborne pre-processing technique described previously, a model for predicting land-cover change from topography observations is described. The culmination of these efforts has the potential to enhance the capabilities of automated 3D geospatial processing, substantially lightening the burden of analysts, with implications for improving our responses to global security, disaster response, climate change, structural design, and extraplanetary exploration.
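Down-sampling a point cloud while keeping it representative, the kind of reduction the dissertation's bias-corrected technique improves on, is commonly done with a voxel-grid centroid filter. The sketch below shows that baseline only (the voxel size and random cloud are assumptions, and this is not the dissertation's method):

```python
import numpy as np


def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out


rng = np.random.default_rng(3)
cloud = rng.uniform(0, 10, size=(10_000, 3))
thinned = voxel_downsample(cloud, voxel=1.0)

# A 10x10x10 grid bounds the result at 1,000 representatives.
assert len(thinned) <= 1_000 and len(thinned) < len(cloud)
```

A fixed grid like this is exactly where statistical bias creeps in for ground-based scans, since point density falls off with range; correcting for that while preserving detail is the problem the dissertation addresses.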