Hard Mixtures of Experts for Large Scale Weakly Supervised Vision
Training convolutional networks (CNNs) that fit on a single GPU with
minibatch stochastic gradient descent has become effective in practice.
However, there is still no effective method for training large CNNs that do
not fit in the memory of a few GPU cards, or for parallelizing CNN training. In
this work we show that a simple hard mixture of experts model can be
efficiently trained to good effect on large-scale hashtag (multilabel)
prediction tasks. Mixture of experts models are not new (Jacobs et al., 1991;
Collobert et al., 2003), but in the past, researchers have had to devise
sophisticated methods to deal with data fragmentation. We show empirically that
modern weakly supervised datasets are large enough to support naive
partitioning schemes where each data point is assigned to a single expert.
Because the experts are independent, training them in parallel is easy, and
evaluation is cheap for the size of the model. Furthermore, we show that we can
use a single decoding layer for all the experts, allowing a unified feature
embedding space. We demonstrate that it is feasible (and in fact relatively
painless) to train far larger models than could be practically trained with
standard CNN architectures, and that the extra capacity can be well used on
current datasets.

Comment: Appearing in CVPR 2017
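The scheme the abstract describes is simple enough to sketch: each training example is hard-assigned to exactly one expert ahead of time (for instance, by clustering features from a small gater network), the experts train independently on their partitions, and a single decoding layer is shared so that every expert maps into one unified embedding space. Below is a minimal PyTorch sketch; the class, the toy expert, and the routing convention are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a hard mixture of experts (illustrative, not the paper's code).
    # Assumes each example was hard-assigned to one expert beforehand, e.g. by
    # k-means clustering of features from a small "gater" network.
    import torch
    import torch.nn as nn

    class HardMoE(nn.Module):
        def __init__(self, make_expert, feat_dim, num_labels, num_experts=4):
            super().__init__()
            # Independent expert trunks; each only ever sees its own data
            # partition, so they can be trained in parallel on separate machines.
            self.experts = nn.ModuleList([make_expert() for _ in range(num_experts)])
            # One decoding layer shared by all experts -> a unified embedding space.
            self.decoder = nn.Linear(feat_dim, num_labels)

        def forward(self, x, expert_id):
            # expert_id is fixed by the dataset partition, not learned at test time.
            return self.decoder(self.experts[expert_id](x))

    # Usage sketch: route a batch drawn from partition 2 through its expert.
    make_expert = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
    model = HardMoE(make_expert, feat_dim=256, num_labels=1000)
    logits = model(torch.randn(8, 3, 32, 32), expert_id=2)  # (8, 1000) hashtag logits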
Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection
Out-of-distribution (OOD) inputs can compromise the performance and safety of
real-world machine learning systems. While many methods exist for OOD detection
and work well on small-scale datasets with lower resolution and fewer classes,
few methods have been developed for large-scale OOD detection. Existing
large-scale methods generally depend on maximum classification probability,
such as the state-of-the-art grouped softmax method. In this work, we develop a
novel approach that calculates the probability of the predicted class label
based on label distributions learned during the training process. Our method
performs better than current state-of-the-art methods with only a negligible
increase in compute cost. We evaluate our method against contemporary methods
across datasets and achieve a statistically significant improvement with
respect to AUROC (84.2 vs. 82.4) and AUPR (96.2 vs. 93.7).
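As the abstract notes, the large-scale baselines it compares against score inputs with the maximum classification probability. Below is a minimal sketch of that baseline and the AUROC/AUPR evaluation protocol; function names are illustrative, and this shows the comparison point rather than the paper's proposed method.

    # Maximum-softmax-probability (MSP) OOD baseline plus AUROC/AUPR evaluation
    # (illustrative sketch, not the paper's code).
    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    def msp_score(logits):
        # Higher max softmax probability -> more likely in-distribution.
        z = logits - logits.max(axis=1, keepdims=True)  # stabilize exp
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return p.max(axis=1)

    def evaluate_ood(logits_id, logits_ood):
        # In-distribution is the positive class, as is standard for OOD AUROC/AUPR.
        scores = np.concatenate([msp_score(logits_id), msp_score(logits_ood)])
        labels = np.concatenate([np.ones(len(logits_id)), np.zeros(len(logits_ood))])
        return roc_auc_score(labels, scores), average_precision_score(labels, scores)

    # Usage sketch with random logits standing in for a real classifier's outputs.
    auroc, aupr = evaluate_ood(np.random.randn(100, 10) * 3, np.random.randn(100, 10))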