A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as it relates to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL systems.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
Automated High-resolution Earth Observation Image Interpretation: Outcome of the 2020 Gaofen Challenge
In this article, we introduce the 2020 Gaofen Challenge and its relevant scientific outcomes. The 2020 Gaofen Challenge is an international competition organized by the China High-Resolution Earth Observation Conference Committee and the Aerospace Information Research Institute, Chinese Academy of Sciences, and technically cosponsored by the IEEE Geoscience and Remote Sensing Society and the International Society for Photogrammetry and Remote Sensing. It aims to promote the academic development of automated high-resolution earth observation image interpretation. Six independent tracks have been organized in this challenge, covering challenging problems in the fields of object detection and semantic segmentation. With the development of convolutional neural networks, deep-learning-based methods have achieved good performance on image interpretation. In this article, we report the details and the best-performing methods presented so far within the scope of this challenge.
Remote Sensing Object Detection Meets Deep Learning: A Meta-review of Challenges and Advances
Remote sensing object detection (RSOD), one of the most fundamental and
challenging tasks in the remote sensing field, has received longstanding
attention. In recent years, deep learning techniques have demonstrated robust
feature representation capabilities and led to a big leap in the development of
RSOD techniques. In this era of rapid technical evolution, this paper presents
a comprehensive review of recent achievements in deep-learning-based RSOD
methods, covering more than 300 papers. We
identify five main challenges in RSOD, including multi-scale object detection,
rotated object detection, weak object detection, tiny object detection, and
object detection with limited supervision, and systematically review the
corresponding methods developed in a hierarchical division manner. We also
review the widely used benchmark datasets and evaluation metrics within the
field of RSOD, as well as the application scenarios for RSOD. Future research
directions are provided to further promote research in RSOD.
Comment: Accepted by IEEE Geoscience and Remote Sensing Magazine. More than 300 papers relevant to the RSOD field were reviewed in this survey.
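The benchmark evaluation metrics such a review covers are typically built on intersection-over-union (IoU) between predicted and ground-truth boxes. As a minimal, generic sketch (axis-aligned boxes only; rotated-box IoU, also common in RSOD, requires polygon intersection and is not shown):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A prediction is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), from which precision, recall, and mean average precision follow.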
Unlocking the capabilities of explainable fewshot learning in remote sensing
Recent advancements have significantly improved the efficiency and
effectiveness of deep learning methods for image-based remote sensing tasks.
However, the requirement for large amounts of labeled data can limit the
applicability of deep neural networks to existing remote sensing datasets. To
overcome this challenge, few-shot learning has emerged as a valuable approach
for enabling learning with limited data. While previous research has evaluated
the effectiveness of few-shot learning methods on satellite-based datasets,
little attention has been paid to exploring the applications of these methods
to datasets obtained from UAVs, which are increasingly used in remote sensing
studies. In this review, we provide an up-to-date overview of both existing and
newly proposed few-shot classification techniques, along with appropriate
datasets used for both satellite-based and UAV-based data. Our systematic
approach demonstrates that few-shot learning can effectively adapt to the
broader and more diverse perspectives that UAV-based platforms can provide. We
also evaluate some state-of-the-art few-shot approaches on a UAV disaster scene
classification dataset, yielding promising results. We emphasize the importance
of integrating explainable AI (XAI) techniques, such as attention maps and
prototype analysis, to increase the transparency, accountability, and
trustworthiness of few-shot models for remote sensing. Key challenges and
future research directions are identified, including tailored few-shot methods
for UAVs, extension to unseen tasks such as segmentation, and the development
of optimized XAI techniques suited to few-shot remote sensing problems. This
review aims to provide researchers and practitioners with an improved
understanding of few-shot learning's capabilities and limitations in remote
sensing, while highlighting open problems to guide future progress in
efficient, reliable, and interpretable few-shot methods.
Comment: Under review; once the paper is accepted, the copyright will be transferred to the corresponding journal.
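The prototype analysis mentioned in this abstract builds on the prototypical-network idea: each class is represented by the mean of its support embeddings, and each query is assigned to the nearest prototype. A minimal sketch with hypothetical toy embeddings (not tied to any dataset in the review):

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Assign each query embedding to the class whose prototype
    (mean of that class's support embeddings) is nearest in Euclidean distance."""
    classes = np.unique(support_labels)
    # One prototype per class: the centroid of its support examples.
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

In a real few-shot pipeline the embeddings would come from a trained backbone; the prototypes themselves are what XAI-oriented prototype analysis inspects.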
Self-Supervised Remote Sensing Feature Learning: Learning Paradigms, Challenges, and Future Works
Deep learning has achieved great success in learning features from massive
remote sensing images (RSIs). To better understand the connection between
feature learning paradigms (e.g., unsupervised feature learning (USFL),
supervised feature learning (SFL), and self-supervised feature learning
(SSFL)), this paper analyzes and compares them from the perspective of feature
learning signals, and gives a unified feature learning framework. Under this
unified framework, we analyze the advantages of SSFL over the other two
learning paradigms in RSI understanding tasks and give a comprehensive review
of the existing SSFL work in RS, including the pre-training dataset,
self-supervised feature learning signals, and the evaluation methods. We
further analyze the effect of SSFL signals and pre-training data on the learned
features to provide insights for improving the RSI feature learning. Finally,
we briefly discuss some open problems and possible research directions.
Comment: 24 pages, 11 figures, 3 tables.
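A common self-supervised feature learning signal of the kind this survey classifies is the contrastive InfoNCE objective, which pulls two augmented views of the same image together in embedding space while pushing other images apart. A minimal NumPy sketch (illustrative only; real implementations operate on network embeddings and backpropagate through them):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss where z1[i] and z2[i] are embeddings
    of two augmented views of the same image (the positive pair)."""
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature            # similarity of every (i, j) pair
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimize their negative log-probability.
    return -np.mean(np.diag(log_probs))
```

When views are correctly paired the loss is low; misaligned pairings raise it, which is the signal the encoder learns from.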
Final Report DE-EE0005380: Assessment of Offshore Wind Farm Effects on Sea Surface, Subsurface and Airborne Electronic Systems
Offshore wind energy is a valuable resource that can provide a significant boost to the US renewable energy portfolio. A current constraint on the development of offshore wind farms is the potential for large wind farms to interfere with existing electronic and acoustical equipment, such as radar and sonar systems for surveillance, navigation and communications. The US Department of Energy funded this study as an objective assessment of possible interference to various types of equipment operating in the marine environment where offshore wind farms could be installed. The objective of this project was to conduct a baseline evaluation of electromagnetic and acoustical challenges to sea surface, subsurface and airborne electronic systems presented by offshore wind farms. To accomplish this goal, the following tasks were carried out: (1) survey electronic systems that can potentially be impacted by large offshore wind farms, and identify impact assessment studies and research and development activities both within and outside the US, (2) engage key stakeholders to identify their possible concerns and operating requirements, (3) conduct first-principle modeling on the interactions of electromagnetic signals with, and the radiation of underwater acoustic signals from, offshore wind farms to evaluate the effect of such interactions on electronic systems, and (4) provide impact assessments, recommend mitigation methods, prioritize future research directions, and disseminate project findings. This report provides a detailed description of the methodologies used to carry out the study, key findings of the study, and a list of recommendations derived from the findings.
The application of Earth Observation for mapping soil saturation and the extent and distribution of artificial drainage on Irish farms
Artificial drainage is required to make wet soils productive for farming. However, drainage may have unintended environmental consequences, for example, through increased nutrient loss to surface waters or increased flood risk. It can also have implications for greenhouse gas emissions. Accurate data on soil drainage properties could help mitigate the impact of these consequences. Unfortunately, few countries maintain detailed inventories of artificially-drained areas because of the costs involved in compiling such data. This is further confounded by often inadequate knowledge of drain location and function at farm level. Increasingly, Earth Observation (EO) data is being used to map drained areas and detect buried drains. The current study is the first harmonised effort to map the location and extent of artificially-drained soils in Ireland using a suite of EO data and geocomputational techniques.
To map artificially-drained areas, support vector machine (SVM) and random forest (RF) machine learning image classifications were implemented using Landsat 8 multispectral imagery and topographical data. The RF classifier achieved an overall accuracy of 91% in a binary segmentation of artificially-drained and poorly-drained classes. Compared with an existing soil drainage map, the RF model indicated that ~44% of soils in the study area could be classed as “drained”. As well as spatial differences, temporal changes in drainage status were detected within a 3 hectare field, where drains installed in 2014 had an effect on grass production. Using the RF model, the area of this field identified as “drained” increased from a low of 25% in 2011 to 68% in 2016. Landsat 8 vegetation indices were also successfully applied to monitoring the recovery of pasture following extreme saturation (flooding). In conjunction with this, additional EO techniques using unmanned aerial systems (UAS) were tested to map overland flow and detect buried drains. A performance assessment of UAS structure-from-motion (SfM) photogrammetry and aerial LiDAR was undertaken for modelling surface runoff (and associated nutrient loss). Overland flow models were created using the SIMWE model in GRASS GIS. Results indicated no statistical difference between models at 1, 2 & 5 m spatial resolution (p < 0.0001). Grass height was identified as an important source of error. Thermal imagery from a UAS was used to identify the locations of artificially-drained areas. Using morning and afternoon images to map thermal extrema, significant differences in the rate of heating were identified between drained and undrained locations. Locations of tiled and piped drains were identified with 59% and 64% accuracy within the study area.
Together these methods could enable better management of field drainage on farms, identifying drained areas as well as the need for maintenance or replacement. They can also assess whether treatments have worked as expected or whether the underlying saturation problem continues. Through the methods developed and described herein, better characterisation of drainage status at field level may be achievable.
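The thermal-extrema method described above rests on a simple physical signal: drained (drier) soil heats faster between the morning and afternoon image acquisitions than saturated soil. A minimal sketch of the differencing step (the threshold value is a hypothetical placeholder; the study's actual processing chain is more involved):

```python
import numpy as np

def flag_drained(morning_temp, afternoon_temp, rate_threshold):
    """Flag pixels whose daytime heating (afternoon minus morning surface
    temperature) exceeds a threshold, as a proxy for drained, drier soil."""
    heating = afternoon_temp - morning_temp  # per-pixel heating over the day
    return heating > rate_threshold          # boolean drained/undrained mask
```

In practice the threshold would be calibrated against ground-truthed drained and undrained locations rather than chosen a priori.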
Advances in Object and Activity Detection in Remote Sensing Imagery
The recent revolution in deep learning has enabled considerable development in the fields of object and activity detection. Visual object detection tries to find objects of target classes with precise localisation in an image and assign each object instance a corresponding class label. At the same time, activity recognition aims to determine the actions or activities of an agent or group of agents based on sensor or video observation data. It is a very important and challenging problem to detect, identify, track, and understand the behaviour of objects through images and videos taken by various cameras. Together, object and activity recognition in imagery captured by remote sensing platforms is a highly dynamic and challenging research topic. During the last decade, there has been significant growth in the number of publications in the field of object and activity recognition. In particular, many researchers have proposed application domains to identify objects and their specific behaviours from air- and spaceborne imagery. This Special Issue includes papers that explore novel and challenging topics for object and activity detection in remote sensing images and videos acquired by diverse platforms.