Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we encourage remote sensing
scientists to bring their expertise into deep learning and to use it as an
implicit general model to tackle unprecedented, large-scale, influential
challenges, such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks
This paper addresses the highly challenging problem of automatically
detecting man-made structures especially buildings in very high resolution
(VHR) synthetic aperture radar (SAR) images. In this context, the paper has two
major contributions: Firstly, it presents a novel and generic workflow that
initially classifies the spaceborne TomoSAR point clouds generated by
processing VHR SAR image stacks using advanced interferometric techniques known
as SAR tomography (TomoSAR) into buildings and non-buildings with the aid
of auxiliary information (i.e., either using openly available 2-D building
footprints or adopting an optical image classification scheme) and later
back-projects the extracted building points onto the SAR imaging coordinates to
produce automatic large-scale benchmark labelled (buildings/non-buildings) SAR
datasets. Secondly, these labelled datasets (i.e., building masks) have been
utilized to construct and train the state-of-the-art deep Fully Convolution
Neural Networks with an additional Conditional Random Field represented as a
Recurrent Neural Network to detect building regions in a single VHR SAR image.
Such a cascaded formation has been successfully employed in computer vision and
remote sensing fields for optical image classification but, to our knowledge,
has not been applied to SAR images. The results of the building detection are
illustrated and validated over a TerraSAR-X VHR spotlight SAR image covering
approximately 39 km², i.e., almost the whole city of Berlin, with mean
pixel accuracies of around 93.84%.
Comment: Accepted for publication in IEEE TGRS
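As a rough illustration of the evaluation metric reported above, the mean pixel accuracy averages the per-class accuracies of the predicted building/non-building mask. The following sketch is illustrative only, not the authors' code:

```python
def mean_pixel_accuracy(pred, truth):
    """Mean of per-class pixel accuracies for a binary mask.

    pred, truth: flat sequences of 0/1 labels (building = 1).
    Classes absent from the ground truth are skipped.
    """
    accs = []
    for cls in (0, 1):
        total = sum(1 for t in truth if t == cls)
        if total == 0:
            continue  # class not present in this image
        correct = sum(1 for p, t in zip(pred, truth) if t == cls and p == cls)
        accs.append(correct / total)
    return sum(accs) / len(accs)

# Toy example: one false positive among three background pixels.
score = mean_pixel_accuracy([1, 1, 0, 0], [1, 0, 0, 0])
```

Averaging per class, rather than over all pixels, keeps the score from being dominated by the (much larger) non-building class.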
Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery
Change detection is one of the central problems in earth observation and was
extensively investigated over recent decades. In this paper, we propose a novel
recurrent convolutional neural network (ReCNN) architecture, which is trained
to learn a joint spectral-spatial-temporal feature representation in a unified
framework for change detection in multispectral images. To this end, we bring
together a convolutional neural network (CNN) and a recurrent neural network
(RNN) into one end-to-end network. The former is able to generate rich
spectral-spatial feature representations, while the latter effectively analyzes
temporal dependency in bi-temporal images. In comparison with previous
approaches to change detection, the proposed network architecture possesses
three distinctive properties: 1) It is end-to-end trainable, in contrast to
most existing methods whose components are separately trained or computed; 2)
it naturally harnesses spatial information that has been proven to be
beneficial to change detection task; 3) it is capable of adaptively learning
the temporal dependency between multitemporal images, unlike most algorithms
that use fairly simple operations such as image differencing or stacking. As far as
we know, this is the first time that a recurrent convolutional network
architecture has been proposed for multitemporal remote sensing image analysis.
The proposed network is validated on real multispectral data sets. Both visual
and quantitative analyses of the experimental results demonstrate the
competitive performance of the proposed model.
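The CNN-then-RNN pipeline described above can be caricatured in a few lines: a toy convolutional feature extractor applied to each acquisition date, a simple recurrent cell unrolled over the two dates to model temporal dependency, and a logistic "change" head. All shapes, kernels, and weights below are illustrative stand-ins, not the ReCNN architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(patch, kernels):
    """Toy 'CNN': valid 3x3 convolutions + ReLU + global average pooling,
    standing in for the spectral-spatial feature extractor."""
    h, w = patch.shape
    feats = []
    for k in kernels:
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(patch[i:i + 3, j:j + 3] * k)
        feats.append(np.maximum(out, 0).mean())
    return np.array(feats)

def recurrent_step(h, x, W, U):
    """Simple tanh RNN cell modeling the temporal dependency."""
    return np.tanh(W @ x + U @ h)

# Bi-temporal patches: two acquisition dates of the same location.
patch_t1 = rng.normal(size=(8, 8))
patch_t2 = rng.normal(size=(8, 8))
kernels = [rng.normal(size=(3, 3)) for _ in range(4)]
W = 0.5 * rng.normal(size=(4, 4))
U = 0.5 * rng.normal(size=(4, 4))

h = np.zeros(4)
for patch in (patch_t1, patch_t2):  # unroll the RNN over time
    h = recurrent_step(h, conv_features(patch, kernels), W, U)

change_score = 1.0 / (1.0 + np.exp(-h.sum()))  # logistic change head
```

In the actual end-to-end network both stages would be trained jointly by backpropagation; here the weights are random and only the data flow is shown.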
Automatic vision-based fault detection on electricity transmission components using very high-resolution
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Electricity is indispensable to modern-day governments and citizenry’s day-to-day operations.
Fault identification is one of the most significant bottlenecks faced by electricity transmission and
distribution utilities in developing countries in delivering credible services to customers and ensuring
proper asset audit and management for network optimization and load forecasting. This is due to
data scarcity, asset inaccessibility and insecurity, ground-survey complexity, untimeliness, and
general human cost. In this context, we exploit oblique drone imagery with a high spatial
resolution to monitor the condition of four major electric power transmission network (EPTN)
components through a fine-tuned deep learning approach, i.e., convolutional neural networks
(CNNs). This study explored the capability of the Single Shot MultiBox Detector (SSD), a one-stage
object detection model, on electric power transmission line imagery to localize, classify,
and inspect the faults present. The component faults considered include broken insulator plates,
missing insulator plates, missing knobs, and rusty clamps. The adopted network used a CNN based
on a multiscale layer feature pyramid network (FPN) using aerial image patches and ground truth
to localize and detect faults via a one-phase procedure. The SSD ResNet-50 architecture variant
performed best, with a mean Average Precision of 89.61%. All the developed SSD-based
models achieve high precision and low recall in detecting the faulty components, thus
striking an acceptable balance as measured by the F1-score. Finally, consistent with other
works in this domain, deep learning will boost the timeliness of EPTN inspection
and component fault mapping in the long run, provided these deep learning architectures are widely
understood, adequate training samples exist to represent multiple fault characteristics, and the
effects of augmenting available datasets, balancing intra-class heterogeneity, and working with
small-scale datasets are clearly understood.
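The precision/recall/F1 trade-off discussed above follows directly from detection counts. A minimal sketch, using hypothetical counts rather than the thesis results:

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative detection counts. Illustrative only."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical run: few false alarms (high precision), many missed
# faults (low recall) -- the regime described for the SSD models.
p, r, f1 = detection_scores(tp=8, fp=1, fn=4)
```

A high-precision/low-recall detector raises few false alarms but misses faults, which is why the F1-score is the more informative single number here.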
Towards Daily High-resolution Inundation Observations using Deep Learning and EO
Satellite remote sensing presents a cost-effective solution for synoptic
flood monitoring, and satellite-derived flood maps provide a computationally
efficient alternative to numerical flood inundation models traditionally used.
While satellites do offer timely inundation information when they happen to
cover an ongoing flood event, they are limited by their spatiotemporal
resolution in terms of their ability to dynamically monitor flood evolution at
various scales. Constantly improving access to new satellite data sources as
well as big data processing capabilities has unlocked an unprecedented number
of possibilities in terms of data-driven solutions to this problem.
Specifically, the fusion of data from satellites, such as the Copernicus
Sentinels, which have high spatial and low temporal resolution, with data from
NASA SMAP and GPM missions, which have low spatial but high temporal
resolutions, could yield high-resolution flood inundation maps at a daily scale. Here,
for the first time, a convolutional neural network is trained using flood inundation
maps derived from Sentinel-1 Synthetic Aperture Radar and various hydrological,
topographical, and land-use predictors to predict high-resolution probabilistic
maps of flood inundation. The performance of UNet
and SegNet model architectures for this task is evaluated, using flood masks
derived from Sentinel-1 and Sentinel-2, separately with 95 percent-confidence
intervals. The Area under the Curve (AUC) of the Precision Recall Curve
(PR-AUC) is used as the main evaluation metric, due to the inherently
imbalanced nature of classes in a binary flood mapping problem, with the best
model delivering a PR-AUC of 0.85.
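The PR-AUC used above can be estimated as average precision: precision accumulated over recall increments while sweeping a threshold from the highest to the lowest predicted probability. A minimal pure-Python sketch (libraries such as scikit-learn provide the equivalent `average_precision_score`):

```python
def pr_auc(scores, labels):
    """Average-precision estimate of the area under the
    precision-recall curve. scores: predicted flood probabilities,
    labels: 0/1 ground truth. Illustrative sketch only."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    tp = fp = 0
    last_recall = auc = 0.0
    for i in order:  # lower the decision threshold one pixel at a time
        if labels[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / pos
        auc += precision * (recall - last_recall)
        last_recall = recall
    return auc

# A perfect ranking of two flooded and two dry pixels scores 1.0.
perfect = pr_auc([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
```

Unlike overall accuracy, this metric is insensitive to the large number of easy dry-pixel negatives, which is why it suits imbalanced flood masks.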
The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion
While deep learning techniques have an increasing impact on many technical
fields, gathering sufficient amounts of training data is a challenging problem
in remote sensing. In particular, this holds for applications involving data
from multiple sensors with heterogeneous characteristics. One example for that
is the fusion of synthetic aperture radar (SAR) data and optical imagery. With
this paper, we publish the SEN1-2 dataset to foster deep learning research in
SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image
patches, collected from across the globe and throughout all meteorological
seasons. Besides a detailed description of the dataset, we show exemplary
results for several possible applications, such as SAR image colorization,
SAR-optical image matching, and creation of artificial optical images from SAR
input data. Since SEN1-2 is the first large open dataset of this kind, we
believe it will support further developments in the field of deep learning for
remote sensing as well as multi-sensor data fusion.
Comment: Accepted for publication in the ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences (online from October 2018)
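Working with a paired dataset like SEN1-2 typically begins by matching corresponding SAR and optical patches. The directory layout below, with s1/ and s2/ subfolders holding identically named files, is an assumed example for illustration, not necessarily the exact SEN1-2 release structure:

```python
from pathlib import Path

def paired_patches(root):
    """Pair SAR (s1/) and optical (s2/) patches by shared filename.
    Assumed layout: root/s1/*.png and root/s2/*.png, where a SAR patch
    and its optical counterpart carry the same name."""
    root = Path(root)
    s1 = {p.name: p for p in (root / "s1").glob("*.png")}
    s2 = {p.name: p for p in (root / "s2").glob("*.png")}
    # Keep only names present in both modalities, in a stable order.
    return [(s1[n], s2[n]) for n in sorted(s1.keys() & s2.keys())]
```

Pairing by shared filename keeps the loader trivial and makes unpaired (orphan) patches easy to detect, since they simply drop out of the intersection.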