452 research outputs found

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to prominence in numerous areas, notably computer vision (CV), speech recognition, and natural language processing. While remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be applied to DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Expediting Building Footprint Segmentation from High-resolution Remote Sensing Images via progressive lenient supervision

    Full text link
    Building footprint segmentation from remotely sensed images has been hindered by limited model transferability. Many existing building segmentation methods were developed upon the encoder-decoder architecture of U-Net, in which the encoder is fine-tuned from newly developed backbone networks that are pre-trained on ImageNet. However, the heavy computational burden of existing decoder designs hampers the successful transfer of these modern encoder networks to remote sensing tasks. Even the widely adopted deep supervision strategy fails to mitigate these challenges, because its loss is invalid in hybrid regions where foreground and background pixels are intermixed. In this paper, we conduct a comprehensive evaluation of existing decoder network designs for building footprint segmentation and propose an efficient framework, denoted BFSeg, to enhance learning efficiency and effectiveness. Specifically, we propose a densely connected coarse-to-fine feature fusion decoder network that facilitates easy and fast feature fusion across scales. Moreover, considering the invalidity of hybrid regions in the down-sampled ground truth during deep supervision, we present a lenient deep supervision and distillation strategy that enables the network to learn proper knowledge from deep supervision. Building upon these advancements, we have developed a new family of building segmentation networks that consistently surpass prior works, with outstanding performance and efficiency across a wide range of newly developed encoder networks. The code will be released at https://github.com/HaonanGuo/BFSeg-Efficient-Building-Footprint-Segmentation-Framework. Comment: 13 pages, 8 figures. Submitted to IEEE Transactions on Neural Networks and Learning Systems.
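The lenient supervision idea above can be illustrated independently of the paper's implementation. The sketch below is a hypothetical simplification, not the authors' BFSeg code: the block-averaging downsampling and the `tol` purity threshold are assumptions. It downsamples a binary ground-truth mask and excludes "hybrid" blocks, where foreground and background pixels are intermixed, from a per-pixel cross-entropy loss.

```python
import numpy as np

def downsample_gt(gt, factor):
    """Downsample a binary ground-truth mask by averaging factor x factor
    blocks; each output cell holds the foreground fraction of its block."""
    h, w = gt.shape
    blocks = gt.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def lenient_bce(pred, gt_frac, tol=0.25):
    """Binary cross-entropy at a downsampled scale, ignoring 'hybrid' blocks
    whose foreground fraction is neither clearly 0 nor clearly 1.
    `tol` is a hypothetical purity threshold, not a value from the paper."""
    valid = (gt_frac <= tol) | (gt_frac >= 1 - tol)  # purely bg or fg blocks
    target = (gt_frac >= 0.5).astype(float)
    eps = 1e-7
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return bce[valid].mean() if valid.any() else 0.0
```

With this masking, a confident prediction on the pure blocks yields a small loss regardless of what the network outputs in the mixed blocks, which is the point of the lenient strategy.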

    μ΄ˆκ³ ν•΄μƒλ„ μ˜μƒ λΆ„λ₯˜λ₯Ό μœ„ν•œ μˆœν™˜ μ λŒ€μ  생성 신경망 기반의 쀀지도 ν•™μŠ΅ ν”„λ ˆμž„μ›Œν¬

    Get PDF
    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Civil and Environmental Engineering, 2021.8. Kim Yongil.
Image classification of Very High Resolution (VHR) images is a fundamental task in the remote sensing domain for various applications such as land cover mapping, vegetation mapping, and urban planning.
In recent years, deep convolutional neural networks have shown promising performance in image classification studies. In particular, semantic segmentation models built on fully convolutional architectures demonstrated great improvements in computational cost, which has become especially important with the large accumulation of VHR images in recent years. However, deep learning-based approaches are generally limited by the need for a sufficient amount of labeled data to obtain stable accuracy, and acquiring reference labels for remotely-sensed VHR images is very labor-intensive and expensive. To overcome this problem, this thesis proposed a semi-supervised learning framework for VHR image classification. Semi-supervised learning uses labeled and unlabeled data together, thus reducing the model's dependency on data labels. Specifically, this thesis employed a modified CycleGAN model to utilize large amounts of unlabeled images. CycleGAN is an image translation model that was developed from Generative Adversarial Networks (GAN) for image generation. CycleGAN trains on unpaired datasets by using a cycle consistency loss with two generators and two discriminators. Inspired by the concept of cycle consistency, this thesis modified CycleGAN to enable the use of unlabeled VHR data in model training, treating the unlabeled images as images unpaired with their corresponding ground truth maps. To utilize a large amount of unlabeled VHR data and a relatively small amount of labeled VHR data, this thesis combined a supervised classification model with the modified CycleGAN architecture. The proposed framework contains three phases: a cyclic phase, an adversarial phase, and a supervised learning phase. Through these three phases, both labeled and unlabeled data can be utilized simultaneously to train the model in an end-to-end manner.
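The cycle consistency loss that the framework builds on can be written compactly. The sketch below is the generic CycleGAN formulation, not the thesis's modified variant: `G` and `F` stand in for the two generators, and the weight `lam` follows the original CycleGAN paper's default of 10.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L1 cycle-consistency penalty from CycleGAN: mapping a sample to the
    other domain and back should reconstruct it. G maps X -> Y, F maps Y -> X."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)), compare to x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)), compare to y
    return lam * (forward + backward)
```

When `G` and `F` are exact inverses of each other the penalty vanishes; during training it pushes the two generators toward mutually consistent mappings even though no paired samples exist.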
The results of the proposed framework were evaluated using an open-source VHR image dataset, the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen dataset. To validate the accuracy of the proposed framework, benchmark models including both supervised and semi-supervised learning methods were compared on the same dataset. Furthermore, two additional experiments were conducted to confirm the impact of labeled and unlabeled data on classification accuracy and the adaptability of the CycleGAN method to other classification models. These results were evaluated with three popular metrics for image classification: Overall Accuracy (OA), F1-score, and mean Intersection over Union (mIoU). The proposed framework achieved the highest accuracy (OA: 0.796, 0.786, and 0.784, respectively, in three test sites) in comparison to the other five benchmarks. In particular, in a test site containing numerous objects with various properties, the largest increase in accuracy was observed, owing to the regularization effect of the semi-supervised method using unlabeled data with the modified CycleGAN. Moreover, by controlling the amount of labeled and unlabeled data, the results indicated that the accuracy gain from semi-supervised learning was larger when unlabeled data was relatively plentiful compared to labeled data. Lastly, this thesis applied the proposed CycleGAN method to other classification models, such as the feature pyramid network (FPN) and the pyramid scene parsing network (PSPNet), in place of UNet. In all cases, the proposed framework returned significantly improved results, demonstrating its applicability to semi-supervised image classification of remotely-sensed VHR images.
Table of contents: 1. Introduction
2. Background and Related Works
  2.1. Deep Learning for Image Classification (2.1.1. Image-level Classification; 2.1.2. Fully Convolutional Architectures; 2.1.3. Semantic Segmentation for Remote Sensing Images)
  2.2. Generative Adversarial Networks (GAN) (2.2.1. Introduction to GAN; 2.2.2. Image Translation; 2.2.3. GAN for Semantic Segmentation)
3. Proposed Framework
  3.1. Modification of CycleGAN
  3.2. Feed-forward Path of the Proposed Framework (3.2.1. Cyclic Phase; 3.2.2. Adversarial Phase; 3.2.3. Supervised Learning Phase)
  3.3. Loss Function for Back-propagation
  3.4. Proposed Network Architecture (3.4.1. Generator Architecture; 3.4.2. Discriminator Architecture)
4. Experimental Design
  4.1. Overall Workflow
  4.2. Vaihingen Dataset
  4.3. Implementation Details
  4.4. Metrics for Quantitative Evaluation
5. Results and Discussion
  5.1. Performance Evaluation of the Proposed Framework
  5.2. Comparison of Classification Performance in the Proposed Framework and Benchmarks
  5.3. Impact of Labeled and Unlabeled Data for Semi-supervised Learning
  5.4. Cycle Consistency in Semi-supervised Learning
  5.5. Adaptation of the GAN Framework for Other Classification Models
6. Conclusion
References
Abstract in Korean
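For reference, the three evaluation metrics named in the abstract all reduce to confusion-matrix counts. Below is a minimal binary-class sketch (an illustration, not the thesis code); mIoU would average the per-class IoU over all classes.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Overall Accuracy, F1-score, and IoU for a binary label map.
    pred, gt: integer arrays of 0/1 class labels of the same shape."""
    tp = np.sum((pred == 1) & (gt == 1))  # true positives
    tn = np.sum((pred == 0) & (gt == 0))  # true negatives
    fp = np.sum((pred == 1) & (gt == 0))  # false positives
    fn = np.sum((pred == 0) & (gt == 1))  # false negatives
    oa = (tp + tn) / pred.size            # fraction of correct pixels
    f1 = 2 * tp / (2 * tp + fp + fn)      # harmonic mean of precision/recall
    iou = tp / (tp + fp + fn)             # intersection over union
    return oa, f1, iou
```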

    Deep Learning Methods for Remote Sensing

    Get PDF
    Remote sensing is a field in which important physical characteristics of an area are extracted from emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions to sense and detect various phenomena such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    Remote sensing traffic scene retrieval based on learning control algorithm for robot multimodal sensing information fusion and human-machine interaction and collaboration

    Get PDF
    In light of advancing socio-economic development and urban infrastructure, urban traffic congestion and accidents have become pressing issues. High-resolution remote sensing images are crucial for supporting urban geographic information systems (GIS), road planning, and vehicle navigation. Additionally, the emergence of robotics presents new possibilities for traffic management and road safety. This study introduces an innovative approach that combines attention mechanisms and robotic multimodal information fusion for retrieving traffic scenes from remote sensing images. Attention mechanisms focus on specific road and traffic features, reducing computation and enhancing detail capture. Graph neural algorithms improve scene retrieval accuracy. To achieve efficient traffic scene retrieval, a robot equipped with advanced sensing technology autonomously navigates urban environments, capturing high-accuracy, wide-coverage images. This facilitates the construction of comprehensive traffic databases and real-time traffic information retrieval for precise traffic management. Extensive experiments on large-scale remote sensing datasets demonstrate the feasibility and effectiveness of this approach. The integration of attention mechanisms, graph neural algorithms, and robotic multimodal information fusion enhances traffic scene retrieval, promising improved information extraction accuracy for more effective traffic management, road safety, and intelligent transportation systems. In conclusion, this interdisciplinary approach, combining attention mechanisms, graph neural algorithms, and robotic technology, represents significant progress in traffic scene retrieval from remote sensing images, with potential applications in traffic management, road safety, and urban planning.
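The abstract does not specify its attention design; for orientation, the generic scaled dot-product attention that such feature-weighting mechanisms build on can be sketched as follows (a textbook formulation, not this study's architecture):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value vector by the softmax-normalized similarity
    between queries and keys: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # rows sum to 1
    return w @ V                                   # attention-weighted values
```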

    Flood dynamics derived from video remote sensing

    Get PDF
    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence are enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. 
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, in which river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
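The image velocimetry techniques used throughout these chapters rest on cross-correlating image patches between consecutive frames to measure surface displacement, which divided by the frame interval gives velocity. The sketch below shows only the FFT-based integer-pixel correlation core; operational LSPIV adds orthorectification, sub-pixel peak fitting, and checks on surface seeding quality.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer pixel displacement between two frames via
    FFT-based circular cross-correlation (the core of PIV/LSPIV).
    Surface velocity is then shift * pixel_size / frame_interval."""
    a = frame_a - frame_a.mean()  # remove mean so the peak reflects texture
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window into negative displacements
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Applied to a textured water-surface patch in two frames, the recovered shift vector field is what the thesis's velocimetry workflows feed into model calibration and discharge estimation.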