Sentinel missions provide widespread opportunities for exploiting inter-sensor synergies to improve the operational monitoring of terrestrial photosynthetic activity and canopy structural variations using vegetation indices (VI). In this context, continuous and consistent temporal data are required to rapidly detect vegetation changes across sensors. Nonetheless, temporal limitations inherent to satellite orbits, cloud occlusions, data degradation, and other factors may severely constrain the availability of data involving multiple satellites. In response, this letter proposes a novel deep 3-D convolutional regression network (3CRN) for temporally enhancing Sentinel-3 (S3) VI by taking advantage of inter-sensor Sentinel-2 (S2) observations. Unlike existing regression and deep learning-based methods, the proposed approach allows convolutional kernels to slide across the temporal dimension, exploiting not only the higher spatial resolution of the S2 instrument but also its temporal evolution to better estimate time-resolved VI in S3. To validate the proposed approach, we built a database of multiple day-synchronized S2 and S3 operational products from a study area in Extremadura (Spain). The conducted experimental comparison, including multiple state-of-the-art regression and deep learning models, shows the statistically significant advantages of the presented framework. The code for this work will be made available at https://github.com/rufernan/3CRN
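The key architectural idea above — kernels that slide across the temporal dimension as well as the spatial ones — can be illustrated with a minimal NumPy sketch of a "valid" 3-D convolution over a (time, height, width) cube. This is a toy illustration of the operation, not the authors' 3CRN implementation:

```python
import numpy as np

def conv3d_valid(x, k):
    """Slide a 3-D kernel over a (time, height, width) cube ('valid' mode).

    x: input cube of shape (T, H, W); k: kernel of shape (t, h, w).
    Returns an output of shape (T-t+1, H-h+1, W-w+1).
    """
    T, H, W = x.shape
    t, h, w = k.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):          # slide along time
        for j in range(out.shape[1]):      # slide along height
            for l in range(out.shape[2]):  # slide along width
                out[i, j, l] = np.sum(x[i:i + t, j:j + h, l:l + w] * k)
    return out

# Toy example: a 3-frame temporal average over a 5x4x4 data cube.
x = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)
k = np.full((3, 1, 1), 1.0 / 3.0)  # purely temporal kernel
y = conv3d_valid(x, k)
print(y.shape)  # (3, 4, 4): the kernel consumed two time steps
```

A real network would stack many such kernels (with spatial extent as well) and learn their weights by regression against the target VI time series.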
Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images
With the increasing availability of optical and synthetic aperture radar (SAR) images thanks to the Sentinel constellation, and the explosion of deep learning, new methods have emerged in recent years to tackle the reconstruction of optical images that are impacted by clouds. In this paper, we focus on the evaluation of convolutional neural networks that use SAR and optical images jointly to retrieve the missing content in a single cloud-contaminated optical image. We propose a simple framework that eases the creation of datasets for training deep networks targeting optical image reconstruction, and for validating machine learning-based or deterministic approaches. These methods differ considerably in their input image constraints, and comparing them is a problematic task not addressed in the literature. We show how space partitioning data structures help to query samples in terms of cloud coverage, relative acquisition date, pixel validity, and relative proximity between SAR and optical images. We generate several datasets to compare the images reconstructed by networks that use a single pair of SAR and optical images, by networks that use multiple pairs, and by a traditional deterministic approach performing interpolation in the temporal domain.

Comment: 17 pages
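The sample-querying step described above — using a space partitioning structure to find samples close to a target combination of criteria — can be sketched with a k-d tree over per-sample metadata. The feature columns and scales below are illustrative assumptions, not the paper's actual index:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical per-sample metadata: (cloud coverage in %, |acquisition
# date offset| in days between the SAR and optical images).
samples = np.array([
    [5.0, 1.0],
    [40.0, 0.0],
    [12.0, 3.0],
    [80.0, 10.0],
    [8.0, 2.0],
])

# Normalise each axis so Euclidean distance weighs both criteria comparably.
scale = samples.max(axis=0)
tree = cKDTree(samples / scale)

# Query: all samples near "low cloud coverage, small date offset".
target = np.array([0.0, 0.0]) / scale
idx = tree.query_ball_point(target, r=0.3)
print(sorted(idx))  # indices of the nearly cloud-free, well-aligned samples
```

The same structure supports range queries over any additional criteria (e.g., pixel validity) by adding normalised columns to the index.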
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL.

Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling
The abstract is provided in the attached document.
Spatial-Temporal Data Mining for Ocean Science: Data, Methodologies, and Opportunities
With the increasing amount of spatial-temporal (ST) ocean data, numerous spatial-temporal data mining (STDM) studies have been conducted to address various oceanic issues, e.g., climate forecasting and disaster warning. Compared with typical ST data (e.g., traffic data), ST ocean data is more complicated, with unique characteristics such as diverse regionality and high sparsity. These characteristics make it difficult to design and train STDM models. Unfortunately, an overview of these studies is still missing, hindering computer scientists from identifying research issues in the ocean domain and discouraging researchers in ocean science from applying advanced STDM techniques. To remedy this situation, we provide a comprehensive survey summarizing existing STDM studies for the ocean. Concretely, we first summarize the widely-used ST ocean datasets and identify their unique characteristics. Then, typical ST ocean data quality enhancement techniques are discussed. Next, we classify existing STDM studies for the ocean into four types of tasks, i.e., prediction, event detection, pattern mining, and anomaly detection, and elaborate on the techniques for these tasks. Finally, promising research opportunities are highlighted. This survey will help scientists from both computer science and ocean science gain a better understanding of the fundamental concepts, key techniques, and open challenges of STDM in the ocean.
Physics-Informed Computer Vision: A Review and Perspectives
The incorporation of physical information into machine learning frameworks is opening up and transforming many application domains. Here the learning process is augmented through the injection of fundamental knowledge and governing physical laws. In this work, we explore their utility for computer vision tasks in interpreting and understanding visual data. We present a systematic literature review of formulations of, and approaches to, computer vision tasks guided by physical laws. We begin by decomposing the popular computer vision pipeline into a taxonomy of stages and investigate approaches for incorporating governing physical equations in each stage. Existing approaches in each task are analyzed with regard to which governing physical processes are modeled, how they are formulated, and how they are incorporated, i.e., by modifying data (observation bias), modifying networks (inductive bias), or modifying losses (learning bias). The taxonomy offers a unified view of the application of physics-informed capabilities, highlighting where physics-informed learning has been conducted and where the gaps and opportunities are. Finally, we highlight open problems and challenges to inform future research. While still in its early days, the study of physics-informed computer vision holds promise for developing better computer vision models that improve physical plausibility, accuracy, data efficiency, and generalization in increasingly realistic applications.
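Of the three incorporation mechanisms named above, the "learning bias" (modifying losses) is the simplest to sketch: a data-fit term plus a penalty on violating a governing equation. The function and variable names below are illustrative assumptions, not taken from any surveyed method:

```python
import numpy as np

def physics_informed_loss(pred, target, dpred_dt, rhs, lam=0.1):
    """'Learning bias' sketch: data loss plus a governing-law residual.

    pred/target: model outputs and observations; dpred_dt: the model's
    estimated time derivative; rhs: what a governing ODE du/dt = f(u)
    says that derivative should be; lam: weight of the physics term.
    """
    data_loss = np.mean((pred - target) ** 2)
    physics_residual = np.mean((dpred_dt - rhs) ** 2)
    return data_loss + lam * physics_residual

# Toy check with exponential decay, du/dt = -u: a prediction that exactly
# satisfies the law incurs no physics penalty, only the data-fit error.
u = np.array([1.0, 0.5, 0.25])
obs = np.array([1.0, 0.6, 0.2])
loss = physics_informed_loss(u, obs, dpred_dt=-u, rhs=-u)
print(loss)  # equals the mean squared data error alone
```

The "observation bias" and "inductive bias" routes change the training data and the network architecture respectively, and do not reduce to a single loss term like this.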