Manipulation and generation of synthetic satellite images using deep learning models
Generation and manipulation of digital images based on deep learning (DL) are receiving increasing attention for both benign and malevolent uses. As the importance of satellite imagery grows, DL has also started being used for the generation of synthetic satellite images. However, the direct use of techniques developed for computer vision applications is not possible, due to the different nature of satellite images. The goal of our work is to describe a number of methods to generate manipulated and synthetic satellite images. To be specific, we focus on two different types of manipulations: full image modification and local splicing. In the former case, we rely on generative adversarial networks commonly used for style transfer applications, adapting them to implement two different kinds of transfer: (i) land cover transfer, aiming at modifying the image content from vegetation to barren and vice versa, and (ii) season transfer, aiming at modifying the image content from winter to summer and vice versa. With regard to local splicing, we present two different architectures. The first one uses an image generative pretrained transformer and is trained on pixel sequences in order to predict pixels in semantically consistent regions identified using watershed segmentation. The second technique uses a vision transformer operating on image patches rather than on a pixel-by-pixel basis. We use the trained vision transformer to generate synthetic image segments and splice them into a selected region of the to-be-manipulated image. All the proposed methods generate highly realistic synthetic satellite images. Among the possible applications of the proposed techniques, we mention the generation of proper datasets for the evaluation and training of tools for the analysis of satellite images. (c) The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License.
Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI
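The local-splicing step described above can be sketched in isolation. The generative models themselves (image GPT, vision transformer) are omitted; this is only a minimal numpy illustration of inserting a synthetic segment into a selected region of a target image, with all function names and toy values our own assumptions:

```python
import numpy as np

def splice_region(image, synthetic, mask):
    """Replace the masked region of `image` with pixels from `synthetic`.

    `image` and `synthetic` are HxWxC float arrays; `mask` is an HxW
    boolean array marking the to-be-manipulated region (e.g. one found
    via watershed segmentation). Pixels outside the mask are untouched.
    """
    out = image.copy()
    out[mask] = synthetic[mask]
    return out

# Toy example: splice a bright synthetic patch into a dark image.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 0.2, size=(8, 8, 3))      # dark "original"
synthetic = rng.uniform(0.8, 1.0, size=(8, 8, 3))  # bright "generated"
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                              # region to manipulate

spliced = splice_region(image, synthetic, mask)
```

In the paper's pipeline the `synthetic` array would come from the trained transformer rather than random noise; the splicing itself reduces to this masked copy.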
A Weakly Supervised Approach for Estimating Spatial Density Functions from High-Resolution Satellite Imagery
We propose a neural network component, the regional aggregation layer, that
makes it possible to train a pixel-level density estimator using only
coarse-grained density aggregates, which reflect the number of objects in an
image region. Our approach is simple to use and does not require
domain-specific assumptions about the nature of the density function. We
evaluate our approach on several synthetic datasets. In addition, we use this
approach to learn to estimate high-resolution population and housing density
from satellite imagery. In all cases, we find that our approach results in
better density estimates than a commonly used baseline. We also show how our
housing density estimator can be used to classify buildings as residential or
non-residential.
Comment: 10 pages, 8 figures. ACM SIGSPATIAL 2018, Seattle, US
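The forward pass of such a regional aggregation layer can be sketched with numpy: sum the predicted pixel-level densities over each region, so the result can be compared against the coarse-grained counts used as supervision. This is only an illustrative sketch under our own naming (the actual layer is differentiable and trained end-to-end inside a neural network):

```python
import numpy as np

def regional_aggregation(density_map, region_ids):
    """Sum a predicted pixel-level density map over labelled regions.

    `density_map` is an HxW array of per-pixel density predictions;
    `region_ids` is an HxW integer array assigning each pixel to a
    region. Returns one aggregate per region, comparable with the
    coarse object counts available as (weak) supervision.
    """
    n_regions = int(region_ids.max()) + 1
    return np.bincount(region_ids.ravel(),
                       weights=density_map.ravel(),
                       minlength=n_regions)

# Toy example: two regions (left/right halves) with known counts.
density = np.zeros((4, 4))
density[1, 0] = 3.0   # three objects predicted in the left half
density[2, 3] = 1.0   # one object predicted in the right half
regions = np.zeros((4, 4), dtype=int)
regions[:, 2:] = 1    # columns 2-3 form region 1

aggregates = regional_aggregation(density, regions)
```

During training, a loss between `aggregates` and the known region counts is back-propagated through the sum into the pixel-level estimator, which is what lets coarse labels supervise a fine-grained density map.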
Effective Cloud Detection and Segmentation using a Gradient-Based Algorithm for Satellite Imagery; Application to improve PERSIANN-CCS
Being able to effectively identify clouds and monitor their evolution is one
important step toward more accurate quantitative precipitation estimation and
forecast. In this study, a new gradient-based cloud-image segmentation
technique is developed using tools from image processing techniques. This
method uses morphological image gradient magnitudes to separate cloud
systems and delineate patch boundaries. A varying scale-kernel is implemented
to reduce the sensitivity of image segmentation to noise and to capture
objects with edges of varying fineness in remote-sensing images. The proposed method is
flexible and extendable from single- to multi-spectral imagery. Case studies
were carried out to validate the algorithm by applying the proposed
segmentation algorithm to synthetic radiances for channels of the Geostationary
Operational Environmental Satellites (GOES-R) simulated by a high-resolution
weather prediction model. The proposed method compares favorably with the
existing cloud-patch-based segmentation technique implemented in the
PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Information using
Artificial Neural Network - Cloud Classification System) rainfall retrieval
algorithm. Evaluation of event-based images indicates that the proposed
algorithm has the potential to improve rain detection and estimation skills,
with an average gain of more than 45% compared to the segmentation technique
used in PERSIANN-CCS, while identifying cloud regions as objects with accuracy
rates of up to 98%.
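The core gradient computation can be illustrated with a small numpy sketch. A morphological gradient (dilation minus erosion with a square structuring element) highlights sharp boundaries such as cloud-patch edges; the paper's varying scale-kernel corresponds roughly to varying the half-width `k` here. The toy "brightness temperature" field and all names are our own assumptions:

```python
import numpy as np

def morphological_gradient(image, k=1):
    """Morphological gradient: dilation minus erosion with a
    (2k+1)x(2k+1) square structuring element, computed via shifted
    views. Large values mark sharp boundaries (e.g. cloud edges);
    flat interiors give zero."""
    pad = np.pad(image, k, mode="edge")
    h, w = image.shape
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(2 * k + 1)
                      for j in range(2 * k + 1)])
    return stack.max(axis=0) - stack.min(axis=0)

# Toy field: a cold cloud block (220 K) on a warm background (280 K).
field = np.full((6, 6), 280.0)
field[1:4, 1:4] = 220.0

grad = morphological_gradient(field)
```

Thresholding or watershed-segmenting `grad` then yields candidate cloud-patch boundaries; a larger `k` smooths over fine texture at the cost of edge localization.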
Investigating SAR algorithm for spaceborne interferometric oil spill detection
The environmental damages and recovery of terrestrial ecosystems from oil spills can last decades. Oil spills have been responsible for the loss of aquatic life, organisms, trees, vegetation, birds and wildlife. Although there are several methods through which oil spills can be detected, it can be argued that remote sensing via the use of spaceborne platforms provides enormous benefits. This paper will provide more efficient means and methods that can assist in improving oil spill responses. The objective of this research is to develop a signal processing algorithm that can be used for detecting oil spills using spaceborne SAR interferometry (InSAR) data. To this end, a pendulum formation of multistatic small SAR-carrying platforms in a near-equatorial orbit is described. The characteristic parameters that support the detection of oil spills, such as the effect of incidence angle on radar backscatter, will be the main drivers for determining the relative positions of the small satellites in formation. The orbit design and baseline distances between each spaceborne SAR platform will also be discussed. Furthermore, results from previous analyses of coverage assessment and revisit time will be highlighted. Finally, an evaluation of automatic algorithm techniques for oil spill detection in SAR images will be conducted and results presented. The framework for the automatic algorithm consists of three major steps. In the segmentation stage, thresholding is applied for dark-spot segmentation within the captured InSAR image scene. The feature extraction stage considers the geometry and shape of the segmented region, where the elongation of the oil slick, a function of its width and length, is considered an important feature.
For the classification stage, where the major objective is to distinguish oil spills from look-alikes, a Mahalanobis classifier will be used to estimate the probability that the extracted features represent oil spills. The algorithm will be validated using NASA’s UAVSAR data obtained over the Gulf coast oil spill and RADARSAT-1 data.
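The classification stage can be sketched as follows. The elongation definition (length/width) and the training statistics below are illustrative assumptions, not the paper's values; the Mahalanobis distance itself is the standard formulation:

```python
import numpy as np

def elongation(length, width):
    """Elongation of a segmented slick as a simple length/width ratio
    (one plausible definition; the paper only states it is a function
    of the slick's width and length)."""
    return length / width

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a class described
    by its mean and covariance; small distances mean the candidate
    resembles the 'oil spill' class."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Assumed training statistics for the 'oil spill' class
# (features: [elongation, backscatter-contrast]).
mean = np.array([8.0, 0.6])
cov = np.array([[4.0, 0.0],
                [0.0, 0.04]])

slick = np.array([elongation(400.0, 50.0), 0.55])       # elongated, dark
lookalike = np.array([elongation(120.0, 100.0), 0.30])  # compact, weak

d_slick = mahalanobis(slick, mean, cov)
d_lookalike = mahalanobis(lookalike, mean, cov)
```

Thresholding the distance (or converting it to a class probability under a Gaussian assumption) then separates likely spills from look-alikes such as low-wind areas.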
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning and use it as an
implicit general model to tackle unprecedented large-scale influential
challenges, such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
The agricultural impact of the 2015–2016 floods in Ireland as mapped through Sentinel 1 satellite imagery
Peer-reviewed. Irish Journal of Agricultural and Food Research | Volume 58: Issue 1
R. O’Hara, S. Green and T. McCarthy
DOI: https://doi.org/10.2478/ijafr-2019-0006 | Published online: 11 Oct 2019
Abstract
The capability of Sentinel 1 C-band (5 cm wavelength) synthetic aperture radar (SAR) for flood mapping is demonstrated, and this approach is used to map the extent of the extensive floods that occurred throughout the Republic of Ireland in the winter of 2015–2016. Thirty-three Sentinel 1 images were used to map the area and duration of floods over a 6-mo period from November 2015 to April 2016. Flood maps for 11 separate dates charted the development and persistence of floods nationally. The maximum flood extent during this period was estimated to be ~24,356 ha. The magnitude of flooding was influenced by the depth of rainfall in the preceding 5 d and, to a lesser degree, by rainfall over more extended periods. Reduced photosynthetic activity on farms affected by flooding was observed in Landsat 8 vegetation index difference images compared to the previous spring. The accuracy of the flood map was assessed against reports of flooding from affected farms, as well as against other satellite-derived maps from the Copernicus Emergency Management Service and Sentinel 2. Monte Carlo simulated elevation data (20 m resolution, 2.5 m root mean square error [RMSE]) were used to estimate the flood’s depth and volume. Although the modelled flood height showed a strong correlation with the measured river heights, differences of several metres were observed. Future mapping strategies are discussed, which include high–temporal-resolution soil moisture data as part of an integrated multisensor approach to flood response over a range of spatial scales.
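The basic SAR flood-mapping principle (smooth open water scatters the radar signal away from the sensor and so appears dark in C-band backscatter) can be sketched with a simple threshold. The -18 dB threshold and the toy scene below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def flood_mask(sigma0_db, threshold_db=-18.0):
    """Classify pixels as water where the backscatter coefficient
    (sigma0, in dB) falls below a fixed threshold. Open water is
    specular and appears dark in C-band SAR; the threshold here is
    an illustrative assumption."""
    return sigma0_db < threshold_db

# Toy scene: dry land (~-8 dB) surrounding a flooded field (~-22 dB).
scene = np.full((5, 5), -8.0)
scene[1:4, 1:4] = -22.0

mask = flood_mask(scene)
# Flooded area, assuming 10 m x 10 m Sentinel-1 pixels (1 ha = 10,000 m^2).
area_ha = mask.sum() * (10 * 10) / 10_000
```

Summing such per-date masks over the 33 acquisitions is, in essence, how the area and duration of flooding can be charted through a season.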