State-of-the-art and gaps for deep learning on limited training data in remote sensing
Deep learning usually requires big data, with respect to both volume and
variety. However, most remote sensing applications only have limited training
data, of which a small subset is labeled. Herein, we review three
state-of-the-art approaches in deep learning to combat this challenge. The
first topic is transfer learning, in which some aspects of one domain, e.g.,
features, are transferred to another domain. The next is unsupervised learning,
e.g., autoencoders, which operate on unlabeled data. The last is generative
adversarial networks, which can generate realistic-looking data capable of
fooling both deep learning networks and humans. The aim of this article is
to raise awareness of this dilemma, to direct the reader to existing work,
and to highlight current gaps that need solving.
Comment: arXiv admin note: text overlap with arXiv:1709.0030
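The unsupervised route the abstract mentions can be illustrated with a minimal sketch: a linear autoencoder trained on unlabeled feature vectors with plain NumPy gradient descent. All data and dimensions here are synthetic stand-ins; real remote-sensing autoencoders are deep and convolutional.

```python
import numpy as np

# Minimal linear autoencoder: learn a k-dim code for unlabeled d-dim samples.
# Illustrative sketch only -- the data and sizes are made up for the example.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # unlabeled feature vectors
k = 3                                  # bottleneck (code) size
W_enc = rng.normal(scale=0.1, size=(8, k))
W_dec = rng.normal(scale=0.1, size=(k, 8))

def loss(X, W_enc, W_dec):
    E = X @ W_enc @ W_dec - X          # reconstruction error
    return float(np.mean(E ** 2))

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                      # encode
    E = Z @ W_dec - X                  # decode and compare to input
    grad_dec = Z.T @ E / len(X)        # gradient w.r.t. decoder weights
    grad_enc = X.T @ (E @ W_dec.T) / len(X)  # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss(X, W_enc, W_dec)
print(initial, final)                  # reconstruction error should drop
```

The point of the sketch is that no labels appear anywhere in the loop: the reconstruction objective alone drives learning, which is what makes autoencoders attractive when only a small subset of the data is labeled.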
Training of Convolutional Neural Network using Transfer Learning for Aedes Aegypti Larvae
Flavivirus epidemics have reached an alarming rate, threatening populations worldwide, including Malaysia. The World Health Organization has proposed and practised various vector-control methods through environmental management and chemical and biological orientations. However, among the listed control vectors, the most crucial sites to address are hard-to-access places such as water-storage tanks and artificial containers. The objective of the study was to acquire and compare the accuracies and cross-entropy errors of the training sets at different learning rates in a water-storage-tank environment, which is essential for detection. The experiment performed transfer learning using Inception-V3. About 534 images were used to train a classifier to distinguish Aedes Aegypti larvae from float valves at three different learning rates. Training and validation accuracies were 99.98% and 99.90% at a learning rate of 0.1; 99.91% and 99.77% at 0.01; and 99.10% and 99.93% at 0.001. The corresponding cross-entropy errors for training and validation were 0.0021 and 0.0184 at 0.1; 0.0091 and 0.0121 at 0.01; and 0.0513 and 0.0330 at 0.001. The accuracies and cross-entropy errors of the training sets at the different learning rates were thus successfully acquired and compared.
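The study's comparison protocol, training the same classifier at several learning rates and reporting the resulting cross-entropy, can be sketched in miniature. The study fine-tuned Inception-V3; here a tiny logistic classifier on synthetic features stands in, purely to show the shape of such a comparison. All data and names below are illustrative.

```python
import numpy as np

# Illustrative stand-in for the learning-rate comparison: train the same
# binary classifier ("larva" vs. "float valve" labels, synthetic features)
# at three learning rates and compare the final cross-entropy of each run.
rng = np.random.default_rng(1)
n, d = 400, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.3 * rng.normal(size=n) > 0).astype(float)

def cross_entropy(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the logs
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

results = {}
for lr in (0.1, 0.01, 0.001):          # the three rates compared in the study
    w = np.zeros(d)
    for _ in range(300):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / n    # gradient of mean cross-entropy
    results[lr] = cross_entropy(w, X, y)

for lr, ce in results.items():
    print(f"learning rate {lr}: cross-entropy {ce:.4f}")
```

With a fixed training budget, the larger rates converge further, which mirrors why the study tabulates both accuracy and cross-entropy per learning rate rather than assuming one rate dominates.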
Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks
Information fusion is an essential part of numerous engineering systems and
biological functions, e.g., human cognition. Fusion occurs at many levels,
ranging from the low-level combination of signals to the high-level aggregation
of heterogeneous decision-making processes. While the last decade has witnessed
an explosion of research in deep learning, fusion in neural networks has not
observed the same revolution. Specifically, most neural fusion approaches are
ad hoc, are not understood, are distributed versus localized, and/or
explainability is low (if present at all). Herein, we prove that the fuzzy
Choquet integral (ChI), a powerful nonlinear aggregation function, can be
represented as a multi-layer network, referred to hereafter as ChIMP. We also
put forth an improved ChIMP (iChIMP) that enables stochastic gradient
descent-based optimization despite the exponential number of ChI inequality
constraints. An additional benefit of ChIMP/iChIMP is that it enables
eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP
is applied to the fusion of a set of heterogeneous architecture deep models in
remote sensing. We show an improvement in model accuracy, and our previously
established XAI indices shed light on the quality of our data, model, and its
decisions.
Comment: IEEE Transactions on Fuzzy System
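The aggregation operator the paper builds ChIMP around, the discrete (fuzzy) Choquet integral, is compact enough to sketch directly. The fuzzy measure g below is an illustrative hand-picked example, not the measure learned in the paper; it is chosen additive so the integral reduces to a weighted mean, which makes the result easy to check.

```python
# Hedged sketch of the discrete Choquet integral (ChI) over n sources.
# g maps each subset of source indices to its fuzzy-measure value.
def choquet_integral(h, g):
    """h: list of source values; g: dict frozenset(indices) -> measure in [0, 1]."""
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    total, prev = 0.0, 0.0
    included = set()
    for i in order:                      # walk sources in decreasing value
        included.add(i)
        g_now = g[frozenset(included)]
        total += h[i] * (g_now - prev)   # weight each value by the measure increment
        prev = g_now
    return total

# Illustrative fuzzy measure over three sources. It happens to be additive,
# so the ChI equals the weighted mean 0.3*h0 + 0.5*h1 + 0.2*h2 -- a handy
# sanity check. (Non-additive measures are where the ChI earns its keep.)
g = {frozenset(): 0.0,
     frozenset({0}): 0.3, frozenset({1}): 0.5, frozenset({2}): 0.2,
     frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.7,
     frozenset({0, 1, 2}): 1.0}

h = [0.6, 0.9, 0.3]
print(choquet_integral(h, g))  # additive g: 0.3*0.6 + 0.5*0.9 + 0.2*0.3 = 0.69
```

The exponential number of inequality constraints the abstract refers to comes from monotonicity: every subset's measure must be at least that of each of its subsets, and the number of subsets grows as 2^n.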
DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images
We present the DeepGlobe 2018 Satellite Image Understanding Challenge, which
includes three public competitions for segmentation, detection, and
classification tasks on satellite images. Similar to other challenges in the
computer vision domain, such as DAVIS and COCO, DeepGlobe proposes three
datasets and corresponding evaluation methodologies, coherently bundled in
three competitions with a dedicated workshop co-located with CVPR 2018.
We observed that satellite imagery is a rich and structured source of
information, yet it is less investigated than everyday images by computer
vision researchers. However, bridging modern computer vision with remote
sensing data analysis could have a critical impact on the way we understand
our environment and lead to major breakthroughs in global urban planning or
climate change research. With this bridging objective in mind, DeepGlobe aims to
bring together researchers from different domains to raise awareness of remote
sensing in the computer vision community and vice versa. We aim to improve
and evaluate state-of-the-art satellite image understanding approaches, which
we hope can serve as reference benchmarks for future research on the same topic.
In this paper, we analyze characteristics of each dataset, define the
evaluation criteria of the competitions, and provide baselines for each task.
Comment: Dataset description for DeepGlobe 2018 Challenge at CVPR 201