A Critical Review of Deep Learning-Based Multi-Sensor Fusion Techniques
In this review, we provide detailed coverage of multi-sensor fusion techniques that take RGB stereo images and a sparse LiDAR-projected depth map as input and output a dense depth map prediction. We cover state-of-the-art fusion techniques, which in recent years have been end-to-end trainable deep learning methods. We then conduct a comparative evaluation of these techniques and provide a detailed analysis of their strengths and limitations, as well as the applications they are best suited for.
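As an illustration of the input layout such depth-completion networks commonly use (a sketch for context, not code from the review; the function name and channel ordering are assumptions), an early-fusion input can be built by stacking the sparse LiDAR depth channel and a binary validity mask onto the RGB image:

```python
import numpy as np

def make_fusion_input(rgb, sparse_depth):
    """Stack an RGB image with a sparse LiDAR depth channel (early fusion).

    rgb:          (H, W, 3) float array in [0, 1]
    sparse_depth: (H, W) float array, 0 where no LiDAR return was projected
    Returns an (H, W, 5) array: RGB, depth, and a binary validity mask --
    a common input layout for depth-completion networks.
    """
    mask = (sparse_depth > 0).astype(np.float32)
    return np.concatenate(
        [rgb, sparse_depth[..., None], mask[..., None]], axis=-1
    )
```

The explicit validity mask lets the network distinguish "depth is zero" from "no measurement here", which matters because projected LiDAR typically covers only a few percent of the pixels.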
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Depth Estimation from Monocular Images and Sparse Radar Data
In this paper, we explore the possibility of achieving more accurate depth estimation by fusing monocular images and Radar points using a deep neural network. We give a comprehensive study of the fusion between RGB images and Radar measurements from different aspects and propose a working solution based on the observations. We find that the noise present in Radar measurements is one of the key reasons that prevents one from directly applying the existing fusion methods developed for LiDAR data and images to the new fusion problem between Radar data and images. The experiments are conducted on the nuScenes dataset, which is one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions. Extensive experiments demonstrate that our method outperforms existing fusion methods. We also provide detailed ablation studies to show the effectiveness of each component in our method.
Comment: 9 pages, 6 figures. Accepted to the 2020 IEEE International Conference on Intelligent Robots and Systems (IROS 2020)
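To make the Radar-camera fusion setup concrete: before sparse Radar returns can be fused with image features, they are typically projected into the image plane to form a sparse depth channel. A minimal pinhole-projection sketch (illustrative only, not the paper's code; `project_points` and its arguments are assumed names):

```python
import numpy as np

def project_points(points_cam, K, h, w):
    """Project 3-D points (camera frame, z forward) into a sparse depth map.

    points_cam: (N, 3) array of Radar points already in the camera frame
    K:          (3, 3) pinhole intrinsic matrix
    Returns an (h, w) depth map that is zero where no point projects.
    """
    depth = np.zeros((h, w), dtype=np.float32)
    z = points_cam[:, 2]
    valid = z > 0                        # drop points behind the camera
    uv = K @ points_cam[valid].T         # homogeneous pixel coordinates
    u = np.round(uv[0] / uv[2]).astype(int)
    v = np.round(uv[1] / uv[2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Note: if two points land on the same pixel, the last one written wins;
    # a production version would keep the nearest return instead.
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth
```

The resulting sparse map is far noisier and sparser than its LiDAR counterpart (few returns, no height resolution), which is exactly the mismatch the abstract identifies as the obstacle to reusing LiDAR-oriented fusion methods.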