A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, including computer vision (CV), speech
recognition, and natural language processing. While remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, it inevitably draws on many of the same theories as CV, e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we address unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote
Sensing
Reducing model bias in a deep learning classifier using domain adversarial neural networks in the MINERvA experiment
We present a simulation-based study using deep convolutional neural networks
(DCNNs) to identify neutrino interaction vertices in the MINERvA passive
targets region, and illustrate the application of domain adversarial neural
networks (DANNs) in this context. DANNs are designed to be trained in one
domain (simulated data) but tested in a second domain (physics data) and
utilize unlabeled data from the second domain so that during training only
features which are unable to discriminate between the domains are promoted.
MINERvA is a neutrino-nucleus scattering experiment using the NuMI beamline at
Fermilab. A-dependent cross sections are an important part of the physics
program, and these measurements require vertex finding in complicated events.
To illustrate the impact of the DANN we used a modified set of simulation in
place of physics data during the training of the DANN and then used the label
of the modified simulation during the evaluation of the DANN. We find that deep
learning based methods offer significant advantages over our prior track-based
reconstruction for the task of vertex finding, and that DANNs are able to
improve the performance of deep networks by leveraging available unlabeled data
and by mitigating network performance degradation rooted in biases in the
physics models used for training.
Comment: 41 pages
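The core DANN mechanism described above is a min-max update: a domain classifier learns to distinguish source (simulated) from target (unlabeled physics) features, while the feature extractor receives the domain gradient with its sign reversed, so only domain-invariant features survive. The following is a minimal toy sketch of that update rule with scalar linear models in numpy; it is illustrative only and bears no relation to the DCNN architecture or data used in the MINERvA study (all variable names and values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1-D inputs, linear feature extractor f(x) = w * x.
# The "source" domain is shifted relative to the "target" domain.
x_src = rng.normal(loc=1.0, size=200)
x_tgt = rng.normal(loc=-1.0, size=200)
y_src = (x_src > 1.0).astype(float)  # labels exist only in the source domain

w = 1.0            # feature-extractor weight
c = 0.1            # label-classifier weight (source labels only)
d = 0.1            # domain-classifier weight (source vs. target)
lr, lam = 0.05, 1.0  # learning rate and gradient-reversal strength

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    f_src, f_tgt = w * x_src, w * x_tgt

    # Label loss (logistic regression on source features).
    p = sigmoid(c * f_src)
    g_c = np.mean((p - y_src) * f_src)
    g_w_label = np.mean((p - y_src) * c * x_src)

    # Domain loss: classify source (label 1) vs. target (label 0).
    pd_src, pd_tgt = sigmoid(d * f_src), sigmoid(d * f_tgt)
    g_d = np.mean((pd_src - 1.0) * f_src) + np.mean(pd_tgt * f_tgt)
    g_w_dom = (np.mean((pd_src - 1.0) * d * x_src)
               + np.mean(pd_tgt * d * x_tgt))

    c -= lr * g_c
    d -= lr * g_d                            # domain head minimizes its loss
    w -= lr * (g_w_label - lam * g_w_dom)    # REVERSED domain gradient: the
                                             # extractor is pushed toward
                                             # domain-invariant features
```

The key line is the last one: the feature extractor descends the label loss but ascends the domain loss, which is exactly the adversarial pressure that suppresses features able to discriminate between simulation and physics data.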
Multilevel Artificial Neural Network Training for Spatially Correlated Learning
Multigrid modeling algorithms accelerate relaxation models running on a
hierarchy of similar graphlike structures. We introduce and
demonstrate a new method for training neural networks which uses multilevel
methods. Using an objective function derived from a graph-distance metric, we
perform orthogonally-constrained optimization to find optimal prolongation and
restriction maps between graphs. We compare and contrast several methods for
performing this numerical optimization, and additionally present some new
theoretical results on upper bounds of this type of objective function. Once
calculated, these optimal maps between graphs form the core of Multiscale
Artificial Neural Network (MsANN) training, a new procedure we present which
simultaneously trains a hierarchy of neural network models of varying spatial
resolution. Parameter information is passed between members of this hierarchy
according to standard coarsening and refinement schedules from the multiscale
modelling literature. In our machine learning experiments, these models are
able to learn faster than default training, achieving a comparable level of
error in an order of magnitude fewer training examples.
Comment: Manuscript (24 pages) and Supplementary Material (4 pages). Updated
January 2019 to reflect new formulation of MsANN structure and new training
procedure
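The prolongation/restriction idea described above can be illustrated with a fixed, hand-built map: each coarse node refines into two fine nodes, and restriction averages them back. Note this is only a hypothetical sketch of how parameters pass between hierarchy levels; the paper instead *learns* optimal orthogonally-constrained maps from a graph-distance objective, and the matrices below are invented for illustration:

```python
import numpy as np

def prolongation(n_coarse):
    """Piecewise-constant refinement: each coarse node maps to two fine nodes."""
    P = np.zeros((2 * n_coarse, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] = 1.0
        P[2 * j + 1, j] = 1.0
    return P

n = 4
P = prolongation(n)        # coarse -> fine
R = P.T / 2.0              # fine -> coarse: average each refined pair

# A coarse layer's weight matrix, refined into the finer model and back.
W_coarse = np.arange(n * n, dtype=float).reshape(n, n)
W_fine = P @ W_coarse @ P.T    # pass parameters up to the finer network
W_back = R @ W_fine @ R.T      # pass (updated) fine parameters back down

# Because R @ P = I, the coarsen-after-refine round trip is exact here.
assert np.allclose(W_back, W_coarse)
```

With this pair of maps, training steps taken at the coarse level can be injected into the fine model and vice versa on a coarsening/refinement schedule, which is the structure the MsANN procedure exploits (with learned maps in place of this fixed one).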