Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. A huge number of research works are
dedicated to multisource and multitemporal data fusion, but methods for fusing
different modalities have evolved along different paths in each research
community. This paper brings together the advances of multisource
and multitemporal data fusion approaches with respect to different research
communities and provides a thorough and discipline-specific starting point for
researchers at different levels (i.e., students, researchers, and senior
researchers) willing to conduct novel investigations on this challenging topic
by supplying sufficient detail and references.
Combining multi-source information for crop monitoring
Time series of optical satellite images acquired at high spatial resolution constitute an important source of information for crop monitoring, in particular for keeping track of crop harvests. However, the quantity of information extracted from this source is often restricted by acquisition gaps and uncertainty in radiometric values. This paper presents a novel approach that addresses this issue by combining time series of satellite images with other information from crop modeling and expert knowledge. An application for sugarcane harvest detection on Reunion Island using a SPOT5 time series is detailed. In a fuzzy framework, an expert system was designed and developed to combine multi-source information and to make decisions. This expert system was assessed for two sugarcane farms. Results obtained were in substantial agreement with ground truth data; the overall accuracy reached 96.07%.
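The paper does not reproduce its expert system, but the fuzzy-combination idea it describes can be sketched in a toy form: each source (image time series, crop model, expert knowledge) yields a membership degree, and the degrees are combined into a decision score. The membership functions, thresholds, and weighting below are illustrative assumptions, not the authors' actual rules.

```python
# Toy sketch of fuzzy multi-source combination for harvest detection.
# All membership functions and parameters are hypothetical.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def harvest_score(ndvi_drop, days_from_expected_cut, expert_weight=0.5):
    """Combine image evidence and crop-model evidence into one fuzzy score.

    ndvi_drop:              observed NDVI decrease between two acquisitions.
    days_from_expected_cut: offset from the crop model's predicted cut date.
    """
    # Degree to which the NDVI time series suggests a recent harvest.
    mu_image = trapezoid(ndvi_drop, 0.1, 0.3, 1.0, 1.1)
    # Degree to which the crop model considers a cut plausible now.
    mu_model = trapezoid(days_from_expected_cut, -30, -10, 10, 30)
    # Product t-norm, softened so image evidence alone can still score.
    return mu_image * (expert_weight + (1 - expert_weight) * mu_model)
```

A crisp decision could then be taken by thresholding the score; the trapezoid shapes stand in for whatever membership functions the expert system actually encodes.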
A Survey on Object Detection in Optical Remote Sensing Images
Object detection in optical remote sensing images, being a fundamental but
challenging problem in the field of aerial and satellite image analysis, plays
an important role in a wide range of applications and has received significant
attention in recent years. While numerous methods exist, a deep review of the
literature concerning generic object detection is still lacking. This paper
aims to provide a review of the recent progress in this field. Different from
several previously published surveys that focus on a specific object class such
as building and road, we concentrate on more generic object categories,
including, but not limited to, roads, buildings, trees, vehicles, ships,
airports, and urban areas. Covering about 270 publications, we survey 1) template
matching-based object detection methods, 2) knowledge-based object detection
methods, 3) object-based image analysis (OBIA)-based object detection methods,
4) machine learning-based object detection methods, and 5) five publicly
available datasets and three standard evaluation metrics. We also discuss the
challenges of current studies and propose two promising research directions,
namely deep learning-based feature representation and weakly supervised
learning-based geospatial object detection. It is our hope that this survey
will help researchers gain a better understanding of this
research field. Comment: This manuscript is the accepted version for the ISPRS
Journal of Photogrammetry and Remote Sensing.
Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors
Cloud detection is an important preprocessing step for the precise
application of optical satellite imagery. In this paper, we propose a deep
learning based cloud detection method named multi-scale convolutional feature
fusion (MSCFF) for remote sensing images of different sensors. In the network
architecture of MSCFF, the symmetric encoder-decoder module, which provides
both local and global context by densifying feature maps with trainable
convolutional filter banks, is utilized to extract multi-scale and high-level
spatial features. The feature maps of multiple scales are then up-sampled and
concatenated, and a novel multi-scale feature fusion module is designed to fuse
the features of different scales for the output. The two output feature maps of
the network are cloud and cloud shadow maps, which are in turn fed to binary
classifiers outside the model to obtain the final cloud and cloud shadow mask.
The MSCFF method was validated on hundreds of globally distributed optical
satellite images, with spatial resolutions ranging from 0.5 to 50 m, including
Landsat-5/7/8, Gaofen-1/2/4, Sentinel-2, Ziyuan-3, CBERS-04, Huanjing-1, and
collected high-resolution images exported from Google Earth. The experimental
results show that MSCFF achieves a higher accuracy than the traditional
rule-based cloud detection methods and the state-of-the-art deep learning
models, especially in bright surface covered areas. The effectiveness of MSCFF
means that it has great promise for the practical application of cloud
detection for multiple types of medium and high-resolution remote sensing
images. Our established global high-resolution cloud detection validation
dataset has been made available online. Comment: This manuscript has been
accepted for publication in ISPRS Journal of Photogrammetry and Remote Sensing,
vol. 150, pp. 197-212, 2019 (https://doi.org/10.1016/j.isprsjprs.2019.02.017).
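The core fusion step in MSCFF, as described above, up-samples feature maps from several scales to a common resolution, concatenates them along the channel axis, and fuses them for the output. A minimal sketch of that idea, with plain NumPy standing in for the trained network (shapes and weights are illustrative, not the paper's architecture):

```python
import numpy as np

def upsample_nearest(fmap, factor):
    """Nearest-neighbour up-sampling of a (C, H, W) feature map."""
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_multiscale(feature_maps, weights):
    """Up-sample all maps to the finest resolution, concatenate, fuse.

    feature_maps: list of (C_i, H_i, W_i) arrays, each H_i dividing the
                  finest H (square maps assumed for brevity).
    weights:      (C_out, sum(C_i)) matrix acting as a 1x1 convolution.
    """
    target_h = max(f.shape[1] for f in feature_maps)
    upsampled = [upsample_nearest(f, target_h // f.shape[1])
                 for f in feature_maps]
    stacked = np.concatenate(upsampled, axis=0)      # (sum C_i, H, W)
    c, h, w = stacked.shape
    fused = weights @ stacked.reshape(c, h * w)      # 1x1 conv as a matmul
    return fused.reshape(weights.shape[0], h, w)
```

In the actual network the fusion weights are learned and followed by nonlinearities; the sketch only shows how multi-scale maps are brought to one grid and combined per pixel.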
A review of EO image information mining
We analyze the state of the art of content-based retrieval in Earth
observation image archives focusing on complete systems showing promise for
operational implementation. The different paradigms at the basis of the main
system families are introduced. The approaches taken are analyzed, focusing in
particular on the phases after primitive feature extraction. The solutions
envisaged for the issues related to feature simplification and synthesis,
indexing, and semantic labeling are reviewed. The methodologies for query
specification and execution are analyzed.
Natural Disasters Detection in Social Media and Satellite imagery: a survey
The analysis of natural disaster-related multimedia content has received great
attention in recent years. Being one of the most important sources of
information, social media have been crawled over the years to collect and
analyze disaster-related multimedia content. Satellite imagery has also been
widely explored for disasters analysis. In this paper, we survey the existing
literature on disaster detection and analysis of the retrieved information from
social media and satellites. Literature on disaster detection and analysis of
related multimedia content on the basis of the nature of the content can be
categorized into three groups, namely (i) disaster detection in text; (ii)
analysis of disaster-related visual content from social media; and (iii)
disaster detection in satellite imagery. We extensively review different
approaches proposed in these three domains. Furthermore, we also review
benchmarking datasets available for the evaluation of disaster detection
frameworks. Moreover, we provide a detailed discussion on the insights obtained
from the literature review, and identify future trends and challenges, which
will provide an important starting point for researchers in the field.
A Survey of Data Fusion in Smart City Applications
The advancement of various research sectors such as the Internet of Things
(IoT), Machine Learning, Data Mining, Big Data, and Communication Technology
has shed some light on transforming an urban city that integrates these
techniques into what is commonly known as a Smart City. With the emergence of
smart cities, a plethora of data sources has become available for a wide
variety of applications. The common technique for handling multiple data
sources is data fusion, which improves data output quality or extracts
knowledge from the raw data. To cater to ever-growing, highly complicated
applications, studies in smart cities have to utilize data from various sources and evaluate
their performance based on multiple aspects. To this end, we introduce a
multi-perspective classification of data fusion for evaluating smart city
applications. Moreover, we apply the proposed multi-perspective
classification to evaluate selected applications in each domain of the smart
city. We conclude the paper by discussing potential future directions and
challenges of data fusion integration. Comment: Accepted and to be published in Elsevier Information Fusion.
Broad Neural Network for Change Detection in Aerial Images
A change detection system takes as input two images of a region captured at
two different times, and predicts which pixels in the region have undergone
change over the time period. Since pixel-based analysis can be erroneous due to
noise, illumination difference and other factors, contextual information is
usually used to determine the class of a pixel (changed or not). This
contextual information is taken into account by considering a pixel of the
difference image along with its neighborhood. With the help of ground truth
information, the labeled patterns are generated. Finally, Broad Learning
classifier is used to get prediction about the class of each pixel. Results
show that Broad Learning can classify the data set with a significantly higher
F-Score than that of Multilayer Perceptron. Performance comparison has also
been made with other popular classifiers, namely Multilayer Perceptron and
Random Forest. Comment: IEMGraph (International Conference on
Emerging Technology in Modelling and Graphics) 2018, 6-7 September 2018,
Kolkata, India.
Automatic detection of passable roads after floods in remote sensed and social media data
This paper addresses the problem of floods classification and floods
aftermath detection utilizing both social media and satellite imagery.
Automatic detection of disasters such as floods is still a very challenging
task. The focus lies on identifying passable routes or roads during floods. Two
novel solutions are presented, which were developed for two corresponding tasks
at the MediaEval 2018 benchmarking challenge. The tasks are (i) identification
of images providing evidence for road passability and (ii) differentiation and
detection of passable and non-passable roads in images from two complementary
sources of information. For the first challenge, we mainly rely on object and
scene-level features extracted through multiple deep models pre-trained on the
ImageNet and Places datasets. The object and scene-level features are then
combined using early, late and double fusion techniques. To identify whether or
not it is possible for a vehicle to pass a road in satellite images, we rely on
Convolutional Neural Networks and a transfer learning-based classification
approach. The evaluation of the proposed methods is carried out on the
large-scale datasets provided for the benchmark competition. The results
demonstrate significant improvement in performance over recent
state-of-the-art approaches.
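The early- and late-fusion schemes mentioned above differ in where the per-model outputs are merged: early fusion concatenates feature vectors before classification, while late fusion combines the models' class scores. A minimal sketch with dummy vectors standing in for the actual ImageNet/Places deep features (all shapes are illustrative):

```python
import numpy as np

def early_fusion(features):
    """Concatenate per-model feature vectors into one joint representation,
    which a single downstream classifier would then consume."""
    return np.concatenate(features)

def late_fusion(score_vectors, weights=None):
    """Combine per-model class-score vectors by (optionally weighted)
    averaging; each model has already produced its own prediction."""
    scores = np.stack(score_vectors)             # (n_models, n_classes)
    if weights is None:
        weights = np.full(len(score_vectors), 1.0 / len(score_vectors))
    return weights @ scores
```

"Double fusion" in the paper combines both ideas; here one would simply feed early-fused features to a model and late-fuse its scores with the others'.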
Effective Cloud Detection and Segmentation using a Gradient-Based Algorithm for Satellite Imagery; Application to improve PERSIANN-CCS
Being able to effectively identify clouds and monitor their evolution is one
important step toward more accurate quantitative precipitation estimation and
forecast. In this study, a new gradient-based cloud-image segmentation
technique is developed using tools from image processing techniques. This
method integrates morphological image gradient magnitudes to separable cloud
systems and patches boundaries. A varying scale-kernel is implemented to reduce
the sensitivity of image segmentation to noise and capture objects with various
finenesses of the edges in remote-sensing images. The proposed method is
flexible and extendable from single- to multi-spectral imagery. Case studies
were carried out to validate the algorithm by applying the proposed
segmentation algorithm to synthetic radiances for channels of the Geostationary
Operational Environmental Satellites (GOES-R) simulated by a high-resolution
weather prediction model. The proposed method compares favorably with the
existing cloud-patch-based segmentation technique implemented in the
PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Information using
Artificial Neural Network - Cloud Classification System) rainfall retrieval
algorithm. Evaluation of event-based images indicates that the proposed
algorithm has the potential to improve rain detection and estimation skills,
with an average gain of more than 45% compared to the segmentation technique
used in PERSIANN-CCS, and to identify cloud regions as objects with accuracy
rates of up to 98%.
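The morphological image gradient at the core of the segmentation scheme is the difference between a local dilation (maximum) and erosion (minimum); it peaks where brightness changes sharply, e.g. at cloud-patch boundaries. A plain-NumPy sketch with a square structuring element (the kernel size and border handling are illustrative; the paper uses a varying-scale kernel):

```python
import numpy as np

def morphological_gradient(img, k=3):
    """Dilation minus erosion of a 2-D image over a k x k square element."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    grad = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k]
            grad[i, j] = window.max() - window.min()   # local max - local min
    return grad
```

Thresholding or watershedding such a gradient map is one standard way to turn it into closed segment boundaries; optimized implementations exist in standard image-processing libraries.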