Remote Sensing Object Detection Meets Deep Learning: A Meta-review of Challenges and Advances
Remote sensing object detection (RSOD), one of the most fundamental and
challenging tasks in the remote sensing field, has received longstanding
attention. In recent years, deep learning techniques have demonstrated robust
feature representation capabilities and led to a big leap in the development of
RSOD techniques. In this era of rapid technical evolution, this article
presents a comprehensive review of recent achievements in deep learning
based RSOD methods. More than 300 papers are covered in this review. We
identify five main challenges in RSOD, including multi-scale object detection,
rotated object detection, weak object detection, tiny object detection, and
object detection with limited supervision, and systematically review the
corresponding methods developed in a hierarchical division manner. We also
review the widely used benchmark datasets and evaluation metrics within the
field of RSOD, as well as the application scenarios for RSOD. Future research
directions are provided to further promote research in RSOD.
Comment: Accepted by IEEE Geoscience and Remote Sensing Magazine. More than
300 papers relevant to the RSOD field were reviewed in this survey.
Building detection from aerial imagery using inception ResNet UNet and UNet architectures
Buildings are one of the key components in change detection, urban planning, and monitoring. The automatic extraction of buildings from high-resolution aerial imagery is still challenging due to the variations in their shapes, structures, textures, and colours. Recently, convolutional neural networks (CNNs) have shown significant improvements in object detection and extraction, surpassing other methods. To extract buildings, this paper implements two segmentation architectures, UNet and Inception ResNet UNet, and tests them on the Inria aerial image dataset. The Inception ResNet UNet utilizes the Inception architecture and residual blocks, which makes the model both wide and deep, although the parameter counts of UNet and Inception ResNet UNet differ only slightly. The analyses show that UNet achieves higher metric scores during training. However, on the unseen dataset, Inception ResNet UNet extracts buildings more accurately (97.95% accuracy and 0.96 in the dice metric) than UNet (94.30% accuracy and 0.55 in the dice metric).
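The dice metric reported above measures the overlap between a predicted building mask and the ground-truth mask. A minimal sketch of how it is computed (the function name and toy masks are illustrative, not taken from the paper):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two flat binary masks (1 = building pixel).

    dice = 2 * |pred ∩ target| / (|pred| + |target|); eps avoids 0/0 when
    both masks are empty.
    """
    inter = sum(p and t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

# Toy masks: 2 of the 3 predicted building pixels overlap the 3 true ones.
pred   = [1, 1, 1, 0, 0, 0]
target = [1, 1, 0, 1, 0, 0]
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

In practice the masks would be full-resolution rasters (e.g. flattened NumPy arrays over an Inria tile), but the arithmetic is the same.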
Remote Sensing for International Stability and Security - Integrating GMOSS Achievements in GMES
The Joint Research Centre of the European Commission hosted a two-day workshop "Remote sensing for international stability and security: integrating GMOSS achievements in GMES". Its aim was to disseminate the scientific and technical achievements of the Global Monitoring for Security and Stability (GMOSS) network of excellence to partners of ongoing and future GMES projects such as RESPOND, LIMES, RISK-EOS, PREVIEW, BOSS4GMES, SAFER, G-MOSAIC.
The objectives of this workshop were:
- To bring together scientific and technical people from the GMOSS NoE and from thematically related GMES projects.
- To discuss and compare alternative technical solutions (e.g. final experimental understanding from GMOSS, operational procedures applied in projects such as RESPOND, pre-operational application procedures foreseen by LIMES, etc.).
- To draft a list of technical and scientific challenges relevant in the near future.
- To open GMOSS to a wider forum in the JRC.
This report contains abstracts of the fifteen contributions presented by European researchers. The different presentations addressed pre-processing, feature recognition, change detection, and applications, which also reflects the structure of the report. The second part includes poster abstracts presented during a separate poster session.
JRC.G.2 - Global security and crisis management
Application of Convolutional Neural Network in the Segmentation and Classification of High-Resolution Remote Sensing Images
Numerous convolutional neural networks increase the classification accuracy for remote sensing scene images at the expense of the model's space and time complexity. This causes the model to run slowly and prevents a trade-off between model accuracy and running time. Moreover, the loss of deep features as the network gets deeper makes it impossible to retrieve the key aspects with a simple double-branching structure, which harms the classification of remote sensing scene photos.
Framework to Create Cloud-Free Remote Sensing Data Using Passenger Aircraft as the Platform
Cloud removal in optical remote sensing imagery is essential for many Earth observation applications. Due to the inherent imaging geometry of satellite remote sensing, it is impossible to observe the ground under the clouds directly; therefore, cloud removal algorithms are never perfect owing to the loss of ground truth. Passenger aircraft have the advantages of short revisit intervals and low cost. Additionally, because passenger aircraft fly at lower altitudes than satellites, they can observe the ground under the clouds at an oblique viewing angle. In this study, we examine the possibility of creating cloud-free remote sensing data by stacking multi-angle images captured by passenger aircraft. To accomplish this, a processing framework is proposed, which includes four main steps: 1) multi-angle image acquisition from passenger aircraft, 2) cloud detection based on deep learning semantic segmentation models, 3) cloud removal by image stacking, and 4) image quality enhancement via haze removal. This method is intended to remove cloud contamination without requiring reference images or pre-determination of cloud types. The proposed method was tested in multiple case studies, wherein the resultant cloud- and haze-free orthophotos were visualized and quantitatively analyzed in various land cover type scenes. The results of the case studies demonstrated that the proposed method could generate high-quality, cloud-free orthophotos. Therefore, we conclude that this framework has great potential for creating cloud-free remote sensing images when cloud removal from satellite imagery is difficult or inaccurate.
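Step 3 of the framework (cloud removal by image stacking) amounts to a per-pixel composite over co-registered views, keeping only observations that the cloud masks from step 2 mark as clear. A minimal sketch under that assumption (function name, mask convention, and the averaging rule are illustrative, not the paper's exact algorithm):

```python
def stack_cloud_free(images, cloud_masks, fill=0):
    """Per-pixel compositing of co-registered views of the same scene.

    Each pixel is averaged over the images whose cloud mask marks it clear
    (mask value 0); pixels cloudy in every view fall back to `fill`.
    """
    rows, cols = len(images[0]), len(images[0][0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            clear = [img[r][c]
                     for img, m in zip(images, cloud_masks)
                     if m[r][c] == 0]
            if clear:
                out[r][c] = sum(clear) / len(clear)
    return out

# Two 2x2 views of one scene; clouds (mask = 1) cover different pixels.
imgs  = [[[10, 10], [10, 10]], [[20, 20], [20, 20]]]
masks = [[[0, 1], [0, 0]], [[1, 0], [0, 0]]]
print(stack_cloud_free(imgs, masks))  # [[10.0, 20.0], [15.0, 15.0]]
```

Because the multi-angle views come from different positions along the flight path, real inputs would first need orthorectification so that pixel (r, c) refers to the same ground location in every image.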
Learning to Holistically Detect Bridges from Large-Size VHR Remote Sensing Imagery
Bridge detection in remote sensing images (RSIs) plays a crucial role in
various applications, but it poses unique challenges compared to the detection
of other objects. In RSIs, bridges exhibit considerable variations in terms of
their spatial scales and aspect ratios. Therefore, to ensure the visibility and
integrity of bridges, it is essential to perform holistic bridge detection in
large-size very-high-resolution (VHR) RSIs. However, the lack of datasets with
large-size VHR RSIs limits the deep learning algorithms' performance on bridge
detection. Due to the limitation of GPU memory in tackling large-size images,
deep learning-based object detection methods commonly adopt the cropping
strategy, which inevitably results in label fragmentation and discontinuous
prediction. To ameliorate the scarcity of datasets, this paper proposes a
large-scale dataset named GLH-Bridge comprising 6,000 VHR RSIs sampled from
diverse geographic locations across the globe. These images encompass a wide
range of sizes, varying from 2,048×2,048 to 16,384×16,384 pixels, and
collectively feature 59,737 bridges. Furthermore, we present an efficient
network for holistic bridge detection (HBD-Net) in large-size RSIs. The HBD-Net
presents a separate detector-based feature fusion (SDFF) architecture and is
optimized via a shape-sensitive sample re-weighting (SSRW) strategy. Based on
the proposed GLH-Bridge dataset, we establish a bridge detection benchmark
including the OBB and HBB tasks, and validate the effectiveness of the proposed
HBD-Net. Additionally, cross-dataset generalization experiments on two publicly
available datasets illustrate the strong generalization capability of the
GLH-Bridge dataset.
Comment: 16 pages, 11 figures, 6 tables; due to the limitation "The abstract field cannot be longer than 1,920 characters", the abstract appearing here is slightly shorter than that in the PDF file.
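The "cropping strategy" this abstract criticizes typically means tiling a large image into overlapping fixed-size crops, detecting per crop, and merging the results; the label fragmentation arises because objects spanning tile borders are split. A generic sketch of generating such crop windows (not HBD-Net's method; the function and parameter values are illustrative):

```python
def tile_windows(height, width, tile=1024, overlap=256):
    """Top-left corners of overlapping tile x tile crops covering an image.

    Tiles advance by (tile - overlap); the last tile on each axis is clamped
    so it ends exactly at the image border instead of spilling past it.
    """
    stride = tile - overlap
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    if ys[-1] + tile < height:
        ys.append(height - tile)
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]

# A 2,048 x 2,048 image split into nine overlapping 1,024-pixel tiles.
print(tile_windows(2048, 2048, tile=1024, overlap=256))
```

A bridge longer than `tile - overlap` still cannot fit inside any single crop, which is exactly the discontinuous-prediction problem that motivates holistic detection on the full image.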
Deep learning methods applied to digital elevation models: state of the art
Deep Learning (DL) has a wide variety of applications in various
thematic domains, including spatial information. Although with
limitations, it is also starting to be considered in operations
related to Digital Elevation Models (DEMs). This study aims to
review the methods of DL applied in the field of altimetric spatial
information in general, and DEMs in particular. Void Filling (VF),
Super-Resolution (SR), landform classification and hydrography
extraction are just some of the operations where traditional methods
are being replaced by DL methods. Our review concludes
that although these methods have great potential, there are
aspects that need to be improved. More appropriate terrain information
or algorithm parameterisation are some of the challenges
that this methodology still needs to face.
Funding: 'Functional Quality of Digital Elevation Models in Engineering' project of the State Research Agency of Spain, PID2019-106195RB-I00/AEI/10.13039/50110001103
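Void Filling (VF), one of the DEM operations listed above, classically replaces missing cells with values interpolated from their valid neighbours before DL-based alternatives are considered. A toy sketch of iterative neighbour-mean filling (illustrative only; real VF pipelines use spline or geostatistical interpolation, or the learned methods this review surveys):

```python
def fill_voids(dem, max_iters=100):
    """Fill void cells (value None) in a small DEM grid.

    Each pass replaces every void that has at least one valid 4-neighbour
    with the mean of those neighbours; passes repeat until no void remains
    or no further progress is possible.
    """
    dem = [row[:] for row in dem]  # work on a copy
    rows, cols = len(dem), len(dem[0])
    for _ in range(max_iters):
        updates = {}
        for r in range(rows):
            for c in range(cols):
                if dem[r][c] is None:
                    nbrs = [dem[rr][cc]
                            for rr, cc in ((r - 1, c), (r + 1, c),
                                           (r, c - 1), (r, c + 1))
                            if 0 <= rr < rows and 0 <= cc < cols
                            and dem[rr][cc] is not None]
                    if nbrs:
                        updates[(r, c)] = sum(nbrs) / len(nbrs)
        if not updates:
            break
        for (r, c), v in updates.items():
            dem[r][c] = v
    return dem

# A single void surrounded by 100 m elevations is filled with their mean.
dem = [[100, 100, 100],
       [100, None, 100],
       [100, 100, 100]]
print(fill_voids(dem)[1][1])  # 100.0
```

This flat-neighbourhood rule ignores terrain morphology, which is precisely the "more appropriate terrain information" the review identifies as a remaining challenge for DL-based VF.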