
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. (Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.)

    Transfer Learning for High Resolution Aerial Image Classification

    With rapid developments in satellite and sensor technologies, increasing amounts of high-spatial-resolution aerial imagery have become available. Classification of these images is important for many remote sensing image understanding tasks, such as image retrieval and object detection. Meanwhile, image classification in the computer vision field has been revolutionized by the recent popularity of convolutional neural networks (CNNs), which achieve state-of-the-art classification results. The idea of applying CNNs to high resolution aerial image classification is therefore straightforward. However, it is not trivial, mainly because the amount of labeled remote sensing imagery available for training a deep neural network is limited. As a result, transfer learning techniques have been adopted for this problem, where the CNN used for classification is first pre-trained on a larger dataset. In this paper, we propose a specific fine-tuning strategy that results in better CNN models for aerial image classification. Extensive experiments were carried out with the proposed approach using different CNN architectures. Our proposed method shows competitive results compared to the existing approaches, indicating the superiority of the proposed fine-tuning algorithm.
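    A minimal transfer-learning sketch of the general workflow this abstract describes (not the paper's specific fine-tuning strategy): load an ImageNet-pretrained CNN in Keras, replace the classifier head, and fine-tune only the top convolutional block on a small labeled aerial-image dataset. The dataset path, the 21-class setting, and the choice to unfreeze only block5 are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 21            # assumption: e.g. a 21-class land-use scene benchmark
IMG_SIZE = (224, 224)

# Pre-trained convolutional base: ImageNet weights, original classifier removed.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))

# Freeze all convolutional blocks except the last one, so only high-level
# features are adapted to the aerial domain (one common fine-tuning choice).
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# New classification head for the aerial scene classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# A small learning rate is typical when fine-tuning pre-trained weights.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data pipelines built from the labeled aerial images,
# e.g. tf.keras.utils.image_dataset_from_directory("path/to/aerial_images", ...).
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```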

    Deep Learning for Early Detection, Identification, and Spatiotemporal Monitoring of Plant Diseases Using Multispectral Aerial Imagery

    Production of food crops is hampered by the proliferation of crop diseases, which cause huge harvest losses. Current crop-health monitoring programs involve the deployment of scouts and experts to detect and identify crop diseases through visual observation. These monitoring schemes are expensive and too slow to offer timely remedial recommendations for preventing the spread of crop-damaging diseases. There is thus a need for cheaper and faster methods of identifying and monitoring crop diseases. Recent advances in deep learning have enabled the development of automatic and accurate image classification systems. These advances, coupled with the widespread availability of multispectral aerial imagery, provide a cost-effective method for developing crop-disease classification tools. However, large datasets are required to train deep learning models, and these may be costly and difficult to obtain. Fortunately, models trained on one task can be repurposed for different tasks (with limited data) using transfer learning techniques. The purpose of this research was to develop and implement an end-to-end deep learning framework for early detection and continuous monitoring of crop diseases using transfer learning and high-resolution multispectral aerial imagery. In the first study, the technique was used to compare the performance of five pre-trained deep learning convolutional neural networks (VGG16, VGG19, ResNet50, Inception V3, and Xception) in classifying crop diseases for apples, grapes, and tomatoes. The results of the study show that the best-performing crop-disease classification models were those trained on the VGG16 network, while those trained on the ResNet50 network performed worst. The other studies compared the performance of transfer learning with different three-band color combinations for training single- and multiple-crop classification models. The results of these studies show that models combining the red, near-infrared, and blue bands performed better than models trained with the traditional visible-spectrum combination of red, green, and blue, while the worst-performing models were those combining the near-infrared, green, and blue bands. This research recommends that further studies be undertaken to determine the best band combinations for training single- and multi-label classification models for crops, plants, and the diseases that affect them.
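    A hypothetical sketch of the band-selection step described above: assemble a three-band composite (red, near-infrared, blue) from a multispectral tile so it can be fed to an ImageNet-pretrained CNN that expects 3-channel input. The band ordering in BAND_INDEX and the GeoTIFF path are assumptions, not the study's data.

```python
import numpy as np
import rasterio   # common library for reading multispectral GeoTIFFs

# Assumed 1-based band order of the source imagery; adjust to the actual sensor.
BAND_INDEX = {"blue": 1, "green": 2, "red": 3, "nir": 4}

def load_composite(path, bands=("red", "nir", "blue")):
    """Read the requested bands, stack them into an H x W x 3 array, and
    rescale each band to 0-255 so it resembles an ordinary RGB image."""
    with rasterio.open(path) as src:
        stack = np.stack([src.read(BAND_INDEX[b]) for b in bands],
                         axis=-1).astype(np.float32)
    # Per-band min-max scaling; real pipelines often scale reflectance values instead.
    lo = stack.min(axis=(0, 1), keepdims=True)
    hi = stack.max(axis=(0, 1), keepdims=True)
    return (stack - lo) / np.maximum(hi - lo, 1e-6) * 255.0

# composite = load_composite("path/to/multispectral_tile.tif")           # red-NIR-blue
# rgb       = load_composite("path/to/multispectral_tile.tif",
#                            bands=("red", "green", "blue"))             # baseline composite
# The resulting array can then be resized and passed to a pre-trained CNN
# fine-tuned on the crop-disease labels, as in the sketch above.
```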