A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, such as computer vision (CV), speech
recognition, and natural language processing. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL models.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many areas. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning and use it as an
implicit general model to tackle unprecedented, large-scale, influential
challenges such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
Learnable Reconstruction Methods from RGB Images to Hyperspectral Imaging: A Survey
Hyperspectral imaging enables versatile applications due to its competence in
capturing abundant spatial and spectral information, which are crucial for
identifying substances. However, the devices for acquiring hyperspectral images
are expensive and complicated. Therefore, many alternative spectral imaging
methods have been proposed that reconstruct hyperspectral information directly
from lower-cost, widely available RGB images. We present a thorough
investigation of these state-of-the-art spectral reconstruction methods. A
systematic study and comparison of more than 25
methods has revealed that most of the data-driven deep learning methods are
superior to prior-based methods in terms of reconstruction accuracy and quality
despite lower speeds. This comprehensive review can serve as a fruitful
reference source for peer researchers, further inspiring future development
directions in related domains.
DGCNet: An Efficient 3D-Densenet based on Dynamic Group Convolution for Hyperspectral Remote Sensing Image Classification
Deep neural networks face several problems in hyperspectral image
classification: ineffective use of joint spatial-spectral information, and
vanishing gradients and overfitting as model depth increases. To accelerate
deployment on edge devices with strict latency requirements and limited
computing power, we introduce DGCNet, a lightweight model based on an improved
3D-DenseNet that addresses the drawbacks of group convolution. Drawing on the
idea of dynamic networks, we design a dynamic group convolution (DGC) on the
3D convolution kernel. DGC introduces a small feature selector for each group
that dynamically decides which part of the input channels to connect, based on
the activations of all input channels. The multiple groups capture different
and complementary visual and semantic features of the input, allowing the
convolutional neural network (CNN) to learn rich representations. 3D
convolution operates on high-dimensional, redundant hyperspectral data, and
there is also considerable redundancy between convolution kernels. The DGC
module allows 3D-DenseNet to select the channels carrying richer semantic
features and to discard inactive regions; a 3D CNN equipped with DGC can thus
be regarded as a pruned network. DGC lets the 3D CNN perform sufficient
feature extraction while also meeting speed and computation requirements.
Both inference speed and accuracy are improved, with outstanding performance
on the IN, Pavia and KSC datasets, ahead of mainstream hyperspectral image
classification methods.
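The per-group feature selector described in this abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the selector's weight shapes, and the use of soft sigmoid gates (rather than whatever hard selection DGCNet actually learns) are all assumptions made for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dynamic_group_gates(x, selector_weights):
    """Per-group channel gates for a dynamic group convolution (sketch).

    x               : (C, D, H, W) feature volume for one sample.
    selector_weights: (G, C, C) hypothetical weights of the small
                      per-group selectors (learned in the real model).
    Returns (G, C) gates: each group scores *all* input channels from
    their global activations and decides how strongly to connect to each.
    """
    # Global average pooling summarizes every input channel's activation.
    pooled = x.mean(axis=(1, 2, 3))        # (C,)
    # Each group's selector maps the pooled summary to per-channel scores.
    logits = selector_weights @ pooled     # (G, C)
    # Soft gates in (0, 1); channels gated near 0 are effectively pruned.
    return sigmoid(logits)

def gated_group_input(x, gates, g):
    """Scale input channels by group g's gates before that group's 3D conv."""
    return x * gates[g][:, None, None, None]
```

Under this sketch, channels whose gates stay near zero contribute almost nothing to a group's convolution, which is why the abstract describes the resulting 3D CNN as behaving like a pruned network.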
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently
instigated wide research interest. However, it is still an ill-posed problem
and most methods rely on prior models hence undermining the accuracy of the
recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI)
obtained from light field cameras and learn CNN models that recover horizontal
and vertical 3D facial curves from the respective horizontal and vertical EPIs.
Our 3D face reconstruction network (FaceLFnet) comprises a densely connected
architecture to learn accurate 3D facial curves from low resolution EPIs. To
train the proposed FaceLFnets from scratch, we synthesize photo-realistic light
field images from 3D facial scans. The curve by curve 3D face estimation
approach allows the networks to learn from only 14K images of 80 identities,
which still comprises over 11 Million EPIs/curves. The estimated facial curves
are merged into a single pointcloud to which a surface is fitted to get the
final 3D face. Our method is model-free, requires only a few training samples
to learn FaceLFnet and can reconstruct 3D faces with high accuracy from single
light field images under varying poses, expressions and lighting conditions.
Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces
reconstruction errors by over 20% compared to the recent state of the art.