Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, deep
learning, a major breakthrough in the field, has proven to be an extremely
powerful tool in many domains. Shall we embrace deep learning as the key to
everything? Or should we resist a 'black-box' solution? Opinions in the
remote sensing community are divided. In this article, we analyze the
challenges of using deep learning for remote sensing data analysis, review
the recent advances, and provide resources to make deep learning in remote
sensing ridiculously simple to start with. More importantly, we encourage
remote sensing scientists to bring their expertise into deep learning and to
use it as an implicit general model for tackling unprecedented large-scale
influential challenges, such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
Super-Resolution for Overhead Imagery Using DenseNets and Adversarial Learning
Recent advances in generative adversarial learning allow for new modalities
of image super-resolution by learning low- to high-resolution mappings. In
this paper we present our work using Generative Adversarial Networks (GANs)
with applications to overhead and satellite imagery. We have experimented
with several state-of-the-art architectures. We propose a GAN-based
architecture using densely connected convolutional neural networks
(DenseNets) capable of super-resolving overhead imagery by a factor of up to
8x. We have also investigated the resolution limits of these networks. We
report results on several publicly available datasets, including SpaceNet
data and the IARPA Multi-View Stereo Challenge, and compare performance with
other state-of-the-art architectures.
Comment: 9 pages, 9 figures, WACV 2018 submission
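The densely connected design this abstract builds on is easy to illustrate
at the shape level. The toy numpy sketch below (our own illustration, not
the paper's implementation; real DenseNets use learned 3x3 convolutions
where this uses random channel projections) shows the defining property:
each layer receives the channel-wise concatenation of all earlier feature
maps, so the channel count grows by a fixed growth rate per layer.

```python
import numpy as np

def conv(x, out_channels, rng):
    # Stand-in for a 3x3 convolution: a random projection along the
    # channel axis (shape bookkeeping only, not a trained layer).
    c, h, w = x.shape
    weights = rng.standard_normal((out_channels, c)) * 0.01
    return np.einsum('oc,chw->ohw', weights, x)

def dense_block(x, growth_rate=32, num_layers=4, rng=None):
    # Each layer sees the concatenation of ALL previous feature maps,
    # which is what makes a DenseNet block "densely connected".
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)            # channel-wise concat
        out = np.maximum(conv(inp, growth_rate, rng), 0)  # conv + ReLU
        features.append(out)
    return np.concatenate(features, axis=0)

x = np.random.default_rng(1).standard_normal((64, 16, 16))
y = dense_block(x)
# output channels grow as 64 + 4 * 32 = 192
```

In an SR generator these blocks are typically followed by upsampling layers
(e.g. pixel shuffle) to reach the 8x scale factor; that part is omitted here.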
Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network
The detection performance of small objects in remote sensing images is not
satisfactory compared to large objects, especially in low-resolution and noisy
images. A generative adversarial network (GAN)-based model called enhanced
super-resolution GAN (ESRGAN) shows remarkable image enhancement performance,
but reconstructed images miss high-frequency edge information. Therefore,
object detection performance degrades for small objects on recovered noisy and
low-resolution remote sensing images. Inspired by the success of the
edge-enhanced GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced
super-resolution GAN (EESRGAN) to improve the image quality of remote
sensing images, and we use different detector networks in an end-to-end
manner where the detector loss is backpropagated into the EESRGAN to improve
detection performance. We propose an architecture with three components:
ESRGAN, an Edge Enhancement Network (EEN), and a detection network. We use
residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN,
and for the detector network we use the faster region-based convolutional
network (FRCNN, a two-stage detector) and the single-shot multi-box detector
(SSD, a one-stage detector). Extensive experiments on a public (Cars
Overhead with Context) and a self-assembled (oil and gas storage tank)
satellite dataset show the superior performance of our method compared to
standalone state-of-the-art object detectors.
Comment: 27 pages; accepted for publication in the MDPI Remote Sensing
journal. GitHub repository: https://github.com/Jakaria08/EESRGAN
(implementation)
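The end-to-end coupling described above, where the detector loss is
backpropagated into the super-resolution network, comes down to training the
generator on a combined objective. A minimal one-parameter sketch (the
scalar model, names, and loss weighting are ours, not the paper's) shows how
the detector term contributes a gradient that reaches the generator:

```python
def losses(w, x, sr_target, v, det_target, lam=0.5):
    s = w * x                          # toy "generator": super-resolved output
    l_sr = (s - sr_target) ** 2        # reconstruction loss on the SR image
    l_det = (v * s - det_target) ** 2  # toy "detector" loss, computed on s
    return l_sr + lam * l_det          # joint end-to-end objective

def grad_w(w, x, sr_target, v, det_target, lam=0.5):
    # Chain rule: the detector loss also flows back through the generator
    # parameter w, which is the end-to-end coupling the abstract describes.
    s = w * x
    return 2 * (s - sr_target) * x + lam * 2 * (v * s - det_target) * v * x

# Finite-difference check that the detector term really reaches w.
args = (1.5, 2.0, 3.5, 0.8, 1.0)
eps = 1e-6
numeric = (losses(args[0] + eps, *args[1:])
           - losses(args[0] - eps, *args[1:])) / (2 * eps)
analytic = grad_w(*args)
```

With `lam = 0` this degenerates to plain SR training; a nonzero `lam` is
what lets detection quality shape the reconstructed image.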
Application of Kohonen Self-Organizing Map to Search for Region of Interest in the Detection of Objects
Today, there is a serious need to improve the performance of algorithms for detecting objects in images. This process can be accelerated by preliminary processing that finds regions of interest in the images where the probability of object detection is high. To this end, we propose an algorithm that delineates object boundaries using the Sobel operator and Kohonen self-organizing maps, described in this paper and illustrated by the example of determining zones of interest when searching for and recognizing objects in satellite images. The presented algorithm reduces by a factor of 15–100 the amount of data arriving at the convolutional neural network that performs the final recognition. The algorithm can also significantly reduce the number of training images, since the size of the input-image patches fed to the convolutional network is tied to the image scale and equal to the size of the largest recognizable object, with the object centered in the frame. This accelerates network training by more than a factor of 5, increases recognition accuracy by at least 10%, and halves the required minimum number of layers and neurons of the convolutional network, thereby increasing its speed.
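The Sobel stage of this pipeline is standard and easy to sketch. The numpy
version below (our own illustration, covering only the edge-detection step,
not the Kohonen map) computes the gradient magnitude whose high values mark
candidate regions of interest:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    # Gradient magnitude via the Sobel operator ('valid' convolution,
    # so the output is 2 pixels smaller in each dimension).
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(SOBEL_X * patch)
            gy[i, j] = np.sum(SOBEL_Y * patch)
    return np.hypot(gx, gy)

# A vertical step edge: the response is zero in flat areas and
# peaks along the edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

Thresholding `mag` (or feeding its statistics to the self-organizing map, as
the paper does) then yields the candidate zones passed to the CNN.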
A Deep Learning Approach for Airport Runway Identification from Satellite Imagery
The United States lacks a comprehensive national database of private Prior Permission Required (PPR) airports. The primary reason such a database does not exist is that there are no federal regulatory obligations for these facilities to have their information re-evaluated or updated by the Federal Aviation Administration (FAA) or the local state Department of Transportation (DOT) once the data has been entered into the system. The often outdated and incorrect information about landing sites presents a serious risk factor in aviation safety. In this thesis, we present a machine learning approach for detecting airport landing sites from Google Earth satellite imagery. The approach presented in this thesis plays a crucial role in confirming the FAA's current database and improving aviation safety in the United States. Specifically, we designed, implemented, and evaluated object detection and segmentation techniques for identifying and segmenting regions of interest in image data. The thoroughly annotated in-house dataset includes 400 satellite images with a total of 700 instances of runways. The images, acquired via the Google Maps static API, are 3000x3000 pixels in size. The models were trained on a Mask R-CNN architecture using two distinct backbones, ResNet-101 and ResNeXt-101, and obtained the highest average precision at an IoU threshold of 0.75 with ResNet-101, at 92%, with recall at 89%. We finally hosted the model on the Streamlit front-end platform, allowing users to enter any location to check and confirm the presence of a runway.
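The precision figure quoted above is measured at an intersection-over-union
(IoU) threshold of 0.75: a detection counts as correct only if its box
overlaps a ground-truth box by at least that fraction. A minimal sketch of
that matching (function names ours; real evaluators also sort predictions by
confidence and average over thresholds) is:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thresh=0.75):
    # Greedy matching: each ground-truth box may be claimed at most once.
    # Assumes both lists are non-empty.
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    return tp / len(preds), tp / len(gts)

preds = [(0, 0, 10, 10), (20, 20, 30, 30)]   # one hit, one false positive
gts = [(0, 0, 10, 10), (100, 100, 110, 110)]  # one matched, one missed
p, r = precision_recall(preds, gts)
```

Raising the threshold from the common 0.5 to 0.75 makes the 92% figure a
stricter claim: boxes must localize runways tightly, not just touch them.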