SC-Fuse: A Feature Fusion Approach for Unpaved Road Detection from Remotely Sensed Images
Road network extraction from remote sensing imagery is crucial for numerous applications, ranging from autonomous navigation to urban and rural planning. A particularly challenging aspect is the detection of unpaved roads, often underrepresented in research and data. These roads display variability in texture, width, shape, and surroundings, making their detection quite complex. This thesis addresses these challenges by creating a specialized dataset and introducing the SC-Fuse model.
Our custom dataset comprises high-resolution remote sensing imagery that primarily targets unpaved roads of the American Midwest. To capture seasonal variation and its impact on road appearance, the dataset includes images from different times of the year under various weather conditions, offering a comprehensive view of these changing conditions.
To detect roads in this dataset we developed the SC-Fuse model, a novel deep learning architecture designed to extract unpaved road networks from satellite imagery. The model leverages the strengths of dual feature extractors, a Swin Transformer and a residual CNN, so that SC-Fuse captures both the local and the global context of the images. The extracted features are combined by a Feature Fusion Module that uses a linear attention mechanism to keep the computation efficient. A LinkNet-based decoder then ensures precise road network reconstruction. SC-Fuse is evaluated with various metrics, including qualitative visual assessments, to test its effectiveness in unpaved road detection.
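The fusion step described above can be sketched in a few lines. The following is a minimal numpy illustration of linear attention applied to concatenated local (CNN) and global (Swin) feature tokens; the elu+1 kernel, the token and channel dimensions, and the `fuse_features` helper are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def linear_attention(q, k, v):
    # Linear attention: a positive kernel feature map phi(x) = elu(x) + 1
    # lets attention be computed as phi(Q) (phi(K)^T V), which costs
    # O(N * d^2) instead of O(N^2 * d) for softmax attention.
    def phi(x):
        return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, always > 0
    q, k = phi(q), phi(k)
    kv = k.T @ v                # (d, d_v) summary of all key-value pairs
    z = q @ k.sum(axis=0)       # (n,) per-query normalizer
    return (q @ kv) / z[:, None]

def fuse_features(cnn_feat, swin_feat):
    # Hypothetical fusion: concatenate local and global features per
    # spatial token, then mix them with linear attention.
    x = np.concatenate([cnn_feat, swin_feat], axis=-1)
    return linear_attention(x, x, x)

rng = np.random.default_rng(0)
local_f = rng.standard_normal((64, 32))   # 64 tokens, 32-dim CNN features
global_f = rng.standard_normal((64, 32))  # 64 tokens, 32-dim Swin features
fused = fuse_features(local_f, global_f)
print(fused.shape)  # (64, 64)
```

In a real model the kernelized form matters because segmentation feature maps have thousands of tokens, where quadratic attention is the bottleneck.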
Advisors: Ashok Samal and Cody Stoll
Geoscience-aware deep learning: A new paradigm for remote sensing
Information extraction is a key activity for remote sensing images. A common distinction exists between knowledge-driven and data-driven methods. Knowledge-driven methods offer advanced reasoning ability and interpretability, but have difficulty handling complicated tasks, since prior knowledge is usually limited when facing the highly complex spatial patterns and geoscience phenomena found in reality. Data-driven models, especially those emerging in machine learning (ML) and deep learning (DL), have achieved substantial progress in geoscience and remote sensing applications. Although DL models have powerful feature learning and representation capabilities, traditional DL has inherent problems, including working as a black box and generally requiring a large amount of labeled training data. The focus of this paper is on methods that integrate domain knowledge, such as geoscience knowledge and geoscience features (GK/GFs), into the design of DL models. The paper introduces the new paradigm of geoscience-aware deep learning (GADL), in which GK/GFs and DL models are combined deeply to extract information from remote sensing data. It first provides a comprehensive summary of the GK/GFs used in GADL, which forms the basis for subsequent integration of GK/GFs with DL models. This is followed by a taxonomy of approaches for integrating GK/GFs with DL models. Several approaches are detailed using illustrative examples. Challenges and research prospects in GADL are then discussed. Developing more novel and advanced methods in GADL is expected to become the prevailing trend in advancing remotely sensed information extraction in the future.
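As a concrete, hypothetical instance of the GK/GF integration the paper surveys, one simple strategy is to append a knowledge-derived band, such as NDVI, to the raw spectral input before it reaches a DL model; the band layout and function names below are assumptions for illustration only.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    # Normalized Difference Vegetation Index: a classic geoscience
    # feature computed directly from the red and near-infrared bands.
    return (nir - red) / (nir + red + eps)

def augment_with_gk(image):
    # One GADL-style integration: derive a geoscience feature from the
    # spectral bands and append it as an extra input channel, so the DL
    # model receives domain knowledge alongside the raw data.
    red, nir = image[..., 0], image[..., 1]   # assumed band ordering
    gf = ndvi(red, nir)[..., None]
    return np.concatenate([image, gf], axis=-1)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(8, 8, 4))   # toy 4-band patch
augmented = augment_with_gk(img)
print(augmented.shape)  # (8, 8, 5)
```

This corresponds to the simplest point in the taxonomy (feature-level input fusion); deeper integrations would embed GK/GFs in the architecture or the loss.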
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
Classification using semantic feature and machine learning: Land-use case application
Land-cover classification has attracted considerable recent interest, especially for deforestation monitoring, urban-area monitoring, and agricultural land use. Traditional classification approaches have limited accuracy, especially for non-homogeneous land cover; thus, using machine learning may improve classification accuracy. This paper deals with land-use scene recognition on very high-resolution remote sensing imagery. We propose a new framework based on semantic features, handcrafted features, and machine learning classifier decisions. The method starts with semantic feature extraction using a convolutional neural network. Handcrafted features are also extracted, based on color and multi-resolution characteristics. The classification stage is then processed by three machine learning algorithms, and the final classification result is obtained by a majority-vote algorithm. The idea is to take advantage of both semantic and handcrafted features; a second goal is to use decision fusion to enhance the classification result. Experimental results show that the proposed method provides good accuracy and a trustworthy tool for land-use image identification.
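The decision-fusion step can be sketched as follows. This is a generic per-sample majority vote over the three classifiers' label predictions, not the paper's code; the class labels and array layout are illustrative.

```python
import numpy as np

def majority_vote(predictions):
    # predictions: list of per-classifier label arrays, each (n_samples,).
    # For each sample, count the votes per class and return the class
    # with the most votes (ties resolve to the lowest class index).
    votes = np.stack(predictions, axis=0)        # (n_classifiers, n_samples)
    n_classes = int(votes.max()) + 1
    out = np.empty(votes.shape[1], dtype=int)
    for i in range(votes.shape[1]):
        out[i] = np.bincount(votes[:, i], minlength=n_classes).argmax()
    return out

# Toy decisions from three classifiers over four samples.
p1 = np.array([0, 1, 2, 1])
p2 = np.array([0, 1, 1, 1])
p3 = np.array([0, 2, 2, 0])
print(majority_vote([p1, p2, p3]))  # [0 1 2 1]
```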
A Semi-supervised Learning Framework Based on Cycle-Consistent Generative Adversarial Networks for Very High Resolution Image Classification
Thesis (M.S.) -- Seoul National University, Department of Civil and Environmental Engineering, 2021.8.
Image classification of Very High Resolution (VHR) images is a fundamental task in the remote sensing domain for various applications such as land cover mapping, vegetation mapping, and urban planning. In recent years, deep convolutional neural networks have shown promising performance in image classification studies. In particular, semantic segmentation models built on fully convolutional architectures have demonstrated great improvements in terms of computational cost, which has become especially important with the large accumulation of VHR images in recent years.
However, deep learning-based approaches are generally limited by the need for a sufficient amount of labeled data to obtain stable accuracy, and acquiring reference labels for remotely sensed VHR images is very labor-intensive and expensive. To overcome this problem, this thesis proposes a semi-supervised learning framework for VHR image classification. Semi-supervised learning uses labeled and unlabeled data together, thus reducing the model's dependency on data labels. To this end, this thesis employs a modified CycleGAN model to utilize large amounts of unlabeled images.
CycleGAN is an image translation model which was developed from Generative Adversarial Networks (GANs) for image generation. CycleGAN trains on unpaired datasets by using a cycle consistency loss with two generators and two discriminators. Inspired by the concept of cycle consistency, this thesis modifies CycleGAN to enable the use of unlabeled VHR data in model training by treating the unlabeled images as images unpaired with their corresponding ground truth maps.
To utilize a large amount of unlabeled VHR data and a relatively small amount of labeled VHR data, this thesis combines a supervised classification model with the modified CycleGAN architecture. The proposed framework contains three phases: a cyclic phase, an adversarial phase, and a supervised learning phase. Through these three phases, both labeled and unlabeled data can be utilized simultaneously to train the model in an end-to-end manner.
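A single training step combining the three phases might look like the numpy sketch below. `DummyModel`, the helper losses, and the loss weights are hypothetical stand-ins for the thesis's actual generators, discriminator, and hyperparameters; only the phase structure mirrors the description above.

```python
import numpy as np

def l1(a, b):
    return np.abs(a - b).mean()

def cross_entropy(probs, labels, eps=1e-8):
    # probs: (n, c) class probabilities; labels: (n,) integer classes.
    return -np.log(probs[np.arange(len(labels)), labels] + eps).mean()

class DummyModel:
    # Stand-in for the segmentation generator, the map-to-image
    # generator, and the discriminator; real networks would go here.
    def segment(self, x):          # image features -> class probabilities
        z = np.stack([x.mean(axis=-1), 1 - x.mean(axis=-1)], axis=-1)
        return z / z.sum(axis=-1, keepdims=True)
    def reconstruct(self, p):      # predicted map -> reconstructed image
        return np.repeat(p[..., :1], 3, axis=-1)
    def discriminate(self, p):     # probability the map looks "real"
        return np.clip(p[..., 0], 0.05, 0.95)

def train_step(labeled_batch, unlabeled_batch, model, weights=(10.0, 1.0, 1.0)):
    lam_cyc, lam_adv, lam_sup = weights      # illustrative weights
    x_l, y_l = labeled_batch
    x_u = unlabeled_batch
    # Supervised phase: labeled images train the classifier directly.
    sup_loss = cross_entropy(model.segment(x_l), y_l)
    # Cyclic phase: image -> predicted map -> reconstructed image should
    # return to the input (cycle consistency); needs no labels.
    cyc_loss = l1(model.reconstruct(model.segment(x_u)), x_u)
    # Adversarial phase: predicted maps on unlabeled images are pushed
    # toward the distribution of real label maps.
    adv_loss = -np.log(model.discriminate(model.segment(x_u))).mean()
    return lam_cyc * cyc_loss + lam_adv * adv_loss + lam_sup * sup_loss

rng = np.random.default_rng(2)
x_l, y_l = rng.uniform(size=(4, 3)), np.array([0, 1, 0, 1])
x_u = rng.uniform(size=(6, 3))
loss = train_step((x_l, y_l), x_u, DummyModel())
```

The point of the combined objective is that the cyclic and adversarial terms keep updating the classifier on unlabeled batches, which is where the regularization effect reported later comes from.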
The proposed framework was evaluated on an open-source VHR image dataset, the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen dataset. To validate its accuracy, benchmark models including both supervised and semi-supervised learning methods were compared on the same dataset. Furthermore, two additional experiments were conducted to confirm the impact of labeled and unlabeled data on classification accuracy and the adaptability of the CycleGAN framework to other classification models. Results were evaluated with three popular image classification metrics: Overall Accuracy (OA), F1-score, and mean Intersection over Union (mIoU).
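All three metrics can be derived from a single confusion matrix. The following self-contained sketch computes OA, macro-averaged F1, and mIoU; the macro averaging and the assumption that every class appears in the data are illustrative choices, not necessarily those of the thesis.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    # Confusion matrix: rows = true class, cols = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp                      # false positives per class
    fn = cm.sum(axis=1) - tp                      # false negatives per class
    oa = tp.sum() / cm.sum()                      # Overall Accuracy
    f1 = np.mean(2 * tp / (2 * tp + fp + fn))     # macro F1-score
    miou = np.mean(tp / (tp + fp + fn))           # mean Intersection over Union
    return oa, f1, miou

# Toy 3-class example (in practice y_true/y_pred are flattened label maps).
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
oa, f1, miou = classification_metrics(y_true, y_pred, 3)
```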
The proposed framework achieved the highest accuracy (OA of 0.796, 0.786, and 0.784, respectively, in three test sites) in comparison to the other five benchmarks. In particular, in a test site containing numerous objects with various properties, the largest increase in accuracy was observed, owing to the regularization effect of the semi-supervised method using unlabeled data with the modified CycleGAN. Moreover, by controlling the amount of labeled and unlabeled data, the results indicated that a relatively sufficient amount of both unlabeled and labeled data is required to increase accuracy when using the semi-supervised CycleGAN. Lastly, this thesis applied the proposed CycleGAN method to other classification models, such as the feature pyramid network (FPN) and the pyramid scene parsing network (PSPNet), in place of UNet. In all cases, the proposed framework returned significantly improved results, displaying the framework's applicability for semi-supervised image classification of remotely sensed VHR images.
1. Introduction 1
2. Background and Related Works 6
2.1. Deep Learning for Image Classification 6
2.1.1. Image-level Classification 6
2.1.2. Fully Convolutional Architectures 7
2.1.3. Semantic Segmentation for Remote Sensing Images 9
2.2. Generative Adversarial Networks (GAN) 12
2.2.1. Introduction to GAN 12
2.2.2. Image Translation 14
2.2.3. GAN for Semantic Segmentation 16
3. Proposed Framework 20
3.1. Modification of CycleGAN 22
3.2. Feed-forward Path of the Proposed Framework 23
3.2.1. Cyclic Phase 23
3.2.2. Adversarial Phase 23
3.2.3. Supervised Learning Phase 24
3.3. Loss Function for Back-propagation 25
3.4. Proposed Network Architecture 28
3.4.1. Generator Architecture 28
3.4.2. Discriminator Architecture 29
4. Experimental Design 31
4.1. Overall Workflow 33
4.2. Vaihingen Dataset 38
4.3. Implementation Details 40
4.4. Metrics for Quantitative Evaluation 41
5. Results and Discussion 42
5.1. Performance Evaluation of the Proposed Framework 42
5.2. Comparison of Classification Performance in the Proposed Framework and Benchmarks 45
5.3. Impact of Labeled and Unlabeled Data for Semi-supervised Learning 52
5.4. Cycle Consistency in Semi-supervised Learning 55
5.5. Adaptation of the GAN Framework for Other Classification Models 59
6. Conclusion 62
Reference 65
Abstract in Korean 69
- …