281 research outputs found
DOC: Deep Open Classification of Text Documents
Traditional supervised learning makes the closed-world assumption that the
classes that appear in the test data must have appeared in training. This also
applies to text learning and text classification. As learning is used
increasingly in dynamic open environments where some new/test documents may not
belong to any of the training classes, identifying these novel documents during
classification presents an important problem. This problem is called open-world
classification or open classification. This paper proposes a novel deep
learning based approach. It outperforms existing state-of-the-art techniques
dramatically.
Comment: accepted at EMNLP 2017
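The core open-classification decision described above can be sketched as score thresholding: accept the best-scoring known class only if its score clears a rejection threshold, otherwise flag the document as novel. This is a minimal illustration of the idea, not DOC's exact procedure (which fits per-class 1-vs-rest sigmoids and derives per-class thresholds from training-score statistics); the function name and the fixed threshold here are hypothetical.

```python
import numpy as np

def open_classify(scores, threshold=0.5):
    """Open-set decision sketch: scores are per-class 1-vs-rest
    probabilities; if none clears the threshold, reject the input
    as belonging to an unseen/novel class (returned as -1)."""
    scores = np.asarray(scores, dtype=float)
    if scores.max() < threshold:
        return -1  # rejected: likely a novel class not seen in training
    return int(scores.argmax())

print(open_classify([0.10, 0.20, 0.15]))  # → -1 (all scores below 0.5)
print(open_classify([0.10, 0.90, 0.15]))  # → 1  (class 1 accepted)
```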
ODN: Opening the Deep Network for Open-set Action Recognition
In recent years, the performance of action recognition has been significantly
improved with the help of deep neural networks. Most of the existing action
recognition works hold the \textit{closed-set} assumption that all action
categories are known beforehand, so that deep networks can be well trained for
these categories. However, action recognition in the real world is essentially
an \textit{open-set} problem, namely, it is impossible to know all action
categories beforehand and consequently infeasible to prepare sufficient
training samples for those emerging categories. In this case, applying
closed-set recognition methods will definitely lead to unseen-category errors.
To address this challenge, we propose the Open Deep Network (ODN) for the
open-set action recognition task. Technically, ODN detects new categories
by applying a multi-class triplet thresholding method, and then dynamically
reconstructs the classification layer and "opens" the deep network by adding
predictors for new categories continually. In order to transfer the learned
knowledge to the new category, two novel methods, Emphasis Initialization and
Allometry Training, are adopted to initialize and incrementally train the new
predictor so that only a few samples are needed to fine-tune the model. Extensive
experiments show that ODN can effectively detect and recognize new categories
with little human intervention, and is thus applicable to open-set action
recognition tasks in the real world. Moreover, ODN can even achieve comparable
performance to some closed-set methods.
Comment: 6 pages, 3 figures, ICME 2018
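The "opening" step the abstract describes, dynamically reconstructing the classification layer by appending a predictor for a newly detected category, can be sketched on a toy linear classification layer. This is an illustrative simplification: seeding the new weights from the mean of the existing class weights is only loosely inspired by ODN's Emphasis Initialization, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classification layer: weight matrix of shape (features, classes).
W = rng.normal(size=(8, 3))

def add_predictor(W, init=None):
    """Open the network for one new category by appending a predictor
    column. If no explicit init is given, seed the new weights from the
    mean of the existing class weights (a simplified stand-in for
    transferring learned knowledge to the new category)."""
    new_col = W.mean(axis=1, keepdims=True) if init is None else init
    return np.concatenate([W, new_col], axis=1)

W2 = add_predictor(W)
print(W.shape, "->", W2.shape)  # (8, 3) -> (8, 4)
```

After this step, only the appended column needs substantial training, which is why few samples of the new category can suffice for fine-tuning.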
Dropout Sampling for Robust Object Detection in Open-Set Conditions
Dropout Variational Inference, or Dropout Sampling, has been recently
proposed as an approximation technique for Bayesian Deep Learning and evaluated
for image classification and regression tasks. This paper investigates the
utility of Dropout Sampling for object detection for the first time. We
demonstrate how label uncertainty can be extracted from a state-of-the-art
object detection system via Dropout Sampling. We evaluate this approach on a
large synthetic dataset of 30,000 images, and a real-world dataset captured by
a mobile robot in a versatile campus environment. We show that this uncertainty
can be utilized to increase object detection performance under the open-set
conditions that are typically encountered in robotic vision. A Dropout Sampling
network is shown to achieve a 12.3% increase in recall (for the same precision
score as a standard network) and a 15.1% increase in precision (for the same
recall score as the standard network).
Comment: to appear in IEEE International Conference on Robotics and Automation 2018 (ICRA 2018)
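Dropout Sampling, as used above, amounts to keeping dropout active at test time, running several stochastic forward passes, and using the spread of the predictions as a label-uncertainty estimate. The sketch below shows this on a toy linear softmax model; the paper applies the same idea inside a full object detector, and the model and parameter values here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, W, p=0.5, T=30):
    """Monte Carlo dropout sketch: sample T Bernoulli dropout masks,
    run T stochastic forward passes, and average the softmax outputs.
    The per-class standard deviation across samples serves as an
    uncertainty estimate usable for open-set rejection."""
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) > p            # Bernoulli dropout mask
        probs.append(softmax((x * mask / (1 - p)) @ W))
    probs = np.stack(probs)
    return probs.mean(axis=0), probs.std(axis=0)  # mean prediction, spread

x = rng.normal(size=16)
W = rng.normal(size=(16, 4))
mean_p, std_p = mc_dropout_predict(x, W)
print(mean_p.round(3), std_p.round(3))
```

High spread on the winning class is the signal that can be traded against the detector's score threshold, which is how the recall/precision gains under open-set conditions arise.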
Are Accuracy and Robustness Correlated?
Machine learning models are vulnerable to adversarial examples: inputs altered
by small, carefully chosen perturbations that cause unexpected classification
errors. In this paper, we perform experiments on various
adversarial example generation approaches with multiple deep convolutional
neural networks, including Residual Networks, the best-performing models in the
ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the
adversarial example generation techniques with respect to the quality of the
produced images, and measure the robustness of the tested machine learning
models to adversarial examples. Finally, we conduct large-scale experiments on
cross-model adversarial portability. We find that adversarial examples are
mostly transferable across similar network topologies, and we demonstrate that
better machine learning models are less vulnerable to adversarial examples.
Comment: Accepted for publication at ICMLA 2016
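The canonical small-perturbation attack alluded to above is the fast gradient sign method (FGSM): step the input in the sign of the loss gradient. The sketch below applies it to a hand-computed logistic-regression gradient; the abstract's experiments cover several generation approaches on deep networks, so this toy model and its values are illustrative assumptions.

```python
import numpy as np

def fgsm(x, grad, eps=0.1):
    """Fast gradient sign method: perturb each input dimension by eps
    in the direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy logistic model: loss = -log sigmoid(y * w.x), so
# dL/dx = -y * sigmoid(-y * w.x) * w  (computed analytically below).
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0
grad = -y * sigmoid(-y * (w @ x)) * w

x_adv = fgsm(x, grad, eps=0.1)
print(x_adv)  # → [ 0.1  0.2 -0.4]
```

Because only the sign of the gradient matters, the perturbation direction here is simply the opposite of the sign of each weight, which is why such attacks are cheap to compute and transfer readily across similar models.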
Adversarial Robustness: Softmax versus Openmax
Deep neural networks (DNNs) provide state-of-the-art results on various tasks
and are widely used in real world applications. However, it was discovered that
machine learning models, including the best performing DNNs, suffer from a
fundamental problem: they can unexpectedly and confidently misclassify examples
formed by slightly perturbing otherwise correctly recognized inputs. Various
approaches have been developed for efficiently generating these so-called
adversarial examples, but those mostly rely on ascending the gradient of loss.
In this paper, we introduce the novel logits optimized targeting system (LOTS)
to directly manipulate deep features captured at the penultimate layer. Using
LOTS, we analyze and compare the adversarial robustness of DNNs using the
traditional Softmax layer with Openmax, which was designed to provide open set
recognition by defining classes derived from deep representations, and is
claimed to be more robust to adversarial perturbations. We demonstrate that
Openmax yields systems that are less vulnerable to traditional attacks than
Softmax; however, we show that it can be equally susceptible to more
sophisticated adversarial generation techniques that work directly on deep
representations.
Comment: Accepted to British Machine Vision Conference (BMVC) 2017
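The feature-space manipulation described above, pulling an input's penultimate-layer representation toward a chosen target representation, can be sketched with one gradient step on a toy linear feature extractor. This is only the underlying idea, not the paper's exact LOTS procedure, and the extractor, step size, and names are hypothetical.

```python
import numpy as np

def feature_targeting_step(x, W, target_feat, step=0.01):
    """One LOTS-style step on a toy linear feature extractor f(x) = W @ x:
    descend the squared feature distance 0.5 * ||f(x) - t||^2, whose
    gradient w.r.t. x is W.T @ (f(x) - t). Repeating such steps drives
    the input's deep features toward the target's features."""
    diff = W @ x - target_feat
    return x - step * (W.T @ diff)

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 6))   # toy "penultimate layer"
x = rng.normal(size=6)        # input to perturb
t = rng.normal(size=4)        # target deep representation

x2 = feature_targeting_step(x, W, t)
# For a small step, the feature distance shrinks:
print(np.linalg.norm(W @ x - t), ">", np.linalg.norm(W @ x2 - t))
```

Because the attack optimizes the representation itself rather than the final-layer loss, a rejection mechanism defined on those representations, as Openmax is, offers no inherent protection against it.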