Training Group Orthogonal Neural Networks with Privileged Information
Learning rich and diverse representations is critical for the performance of
deep convolutional neural networks (CNNs). In this paper, we consider how to
use privileged information to promote inherent diversity of a single CNN model
such that the model can learn better representations and offer stronger
generalization ability. To this end, we propose a novel group orthogonal
convolutional neural network (GoCNN) that learns untangled representations
within each layer by exploiting provided privileged information and enhances
representation diversity effectively. We take image classification as an
example where image segmentation annotations are used as privileged information
during the training process. Experiments on two benchmark datasets -- ImageNet
and PASCAL VOC -- clearly demonstrate the strong generalization ability of our
proposed GoCNN model. On the ImageNet dataset, GoCNN improves the performance
of the state-of-the-art ResNet-152 model by an absolute 1.2% while using
privileged information for only 10% of the training images, confirming the
effectiveness of GoCNN in utilizing available privileged knowledge to train better CNNs.
Comment: Proceedings of the IJCAI-1
Toy Models of Superposition
Neural networks often pack many unrelated concepts into a single neuron - a
puzzling phenomenon known as 'polysemanticity' which makes interpretability
much more challenging. This paper provides a toy model where polysemanticity
can be fully understood, arising as a result of models storing additional
sparse features in "superposition." We demonstrate the existence of a phase
change, a surprising connection to the geometry of uniform polytopes, and
evidence of a link to adversarial examples. We also discuss potential
implications for mechanistic interpretability.
Comment: Also available at https://transformer-circuits.pub/2022/toy_model/index.htm
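Superposition of the kind studied here can be demonstrated in a few lines: pack more sparse features than dimensions by placing their embedding directions at polytope vertices, and let a ReLU decoder clip the interference. The weights below are hand-set for illustration, not trained:

```python
import numpy as np

# 5 features packed into 2 dimensions: columns of W are unit vectors
# at pentagon angles (the "uniform polytope" geometry).
angles = 2 * np.pi * np.arange(5) / 5
W = np.vstack([np.cos(angles), np.sin(angles)])   # shape (2, 5)
b = -0.31                                          # bias clips interference

def reconstruct(x):
    """Toy model: compress x to 2 dims, then decode with ReLU.
    (Same family as the paper's ReLU-output model; parameters
    here are hand-set, not learned.)"""
    h = W @ x
    return np.maximum(W.T @ h + b, 0.0)

# Each one-hot (sparse) feature is recovered despite 5 features > 2 dims:
recovered = [int(np.argmax(reconstruct(np.eye(5)[i]))) for i in range(5)]
print(recovered)  # [0, 1, 2, 3, 4]
```

Because off-diagonal interference (cos 72° ≈ 0.31, cos 144° ≈ −0.81) is pushed below zero by the bias and clipped by the ReLU, sparse features survive the 5-into-2 compression.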
A mathematical theory of semantic development in deep neural networks
An extensive body of empirical research has revealed remarkable regularities
in the acquisition, organization, deployment, and neural representation of
human semantic knowledge, thereby raising a fundamental conceptual question:
what are the theoretical principles governing the ability of neural networks to
acquire, organize, and deploy abstract knowledge by integrating across many
individual experiences? We address this question by mathematically analyzing
the nonlinear dynamics of learning in deep linear networks. We find exact
solutions to this learning dynamics that yield a conceptual explanation for the
prevalence of many disparate phenomena in semantic cognition, including the
hierarchical differentiation of concepts through rapid developmental
transitions, the ubiquity of semantic illusions between such transitions, the
emergence of item typicality and category coherence as factors controlling the
speed of semantic processing, changing patterns of inductive projection over
development, and the conservation of semantic similarity in neural
representations across species. Thus, surprisingly, our simple neural model
qualitatively recapitulates many diverse regularities underlying semantic
development, while providing analytic insight into how the statistical
structure of an environment can interact with nonlinear deep learning dynamics
to give rise to these regularities.
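A minimal simulation illustrates the stage-like dynamics the analysis predicts: in a two-layer linear network trained from small weights, input-output modes with larger singular values are acquired earlier, in rapid transitions. The setup below (whitened inputs, hand-chosen singular values 3 and 1) is a sketch, not the paper's exact experiments:

```python
import numpy as np

# Deep linear net y = W2 @ W1 @ x trained toward a target map whose
# singular values differ; the analysis predicts stronger modes are
# learned first, in rapid stage-like transitions.
rng = np.random.default_rng(1)
d = 4
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
S = np.diag([3.0, 1.0, 0.0, 0.0])     # strong mode (3) and weak mode (1)
target = U @ S @ V.T

W1 = 1e-3 * rng.standard_normal((d, d))   # small random initialization
W2 = 1e-3 * rng.standard_normal((d, d))
lr = 0.05
strong_done = weak_done = None
for t in range(4000):
    err = W2 @ W1 - target                 # gradient flow, whitened inputs
    W2 -= lr * err @ W1.T
    W1 -= lr * W2.T @ err
    sv = np.linalg.svd(W2 @ W1, compute_uv=False)
    if strong_done is None and sv[0] > 0.9 * 3.0:
        strong_done = t
    if weak_done is None and sv[1] > 0.9 * 1.0:
        weak_done = t
print(strong_done, weak_done)  # stronger mode is acquired earlier
```

The gap between the two acquisition times grows as the initialization shrinks, giving the plateau-then-transition learning curves associated with hierarchical differentiation.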
Toward Understanding Privileged Features Distillation in Learning-to-Rank
In learning-to-rank problems, a privileged feature is one that is available
during model training, but not available at test time. Such features naturally
arise in merchandised recommendation systems; for instance, "user clicked this
item" as a feature is predictive of "user purchased this item" in the offline
data, but is clearly not available during online serving. Another source of
privileged features is those that are too expensive to compute online but
feasible to be added offline. Privileged features distillation (PFD) refers to
a natural idea: train a "teacher" model using all features (including
privileged ones) and then use it to train a "student" model that does not use
the privileged features.
In this paper, we first study PFD empirically on three public ranking
datasets and an industrial-scale ranking problem derived from Amazon's logs. We
show that PFD outperforms several baselines (no-distillation,
pretraining-finetuning, self-distillation, and generalized distillation) on all
these datasets. Next, we analyze why and when PFD performs well via both
empirical ablation studies and theoretical analysis for linear models. Both
investigations uncover an interesting non-monotone behavior: as the predictive
power of a privileged feature increases, the performance of the resulting
student model initially increases but then decreases. We show that the
eventual decrease occurs because a highly predictive privileged teacher
produces predictions with high variance, which lead to high-variance student
estimates and inferior test performance.
Comment: Accepted by NeurIPS 202
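The PFD pipeline itself is simple to sketch: a teacher is fit on all features including the privileged one, and a student is fit on the teacher's soft scores using only the features available at serving time. The data and feature names below are synthetic stand-ins:

```python
import numpy as np

# Privileged features distillation (PFD), sketched on a toy purchase
# predictor. "clicked" is privileged: present in offline logs, absent
# at serving time. All names and data here are synthetic.
rng = np.random.default_rng(0)
n = 5000
price_rank = rng.standard_normal(n)                               # regular feature
clicked = (price_rank + rng.standard_normal(n) > 0).astype(float)  # privileged
purchase = (2 * clicked - 1 + 0.5 * rng.standard_normal(n) > 0).astype(float)

def fit_logistic(X, y, steps=500, lr=0.5):
    """Plain gradient-descent logistic regression (accepts soft targets)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

X_teacher = np.column_stack([np.ones(n), price_rank, clicked])
X_student = np.column_stack([np.ones(n), price_rank])

w_t = fit_logistic(X_teacher, purchase)       # teacher: all features
soft = 1 / (1 + np.exp(-X_teacher @ w_t))     # teacher's soft scores
w_s = fit_logistic(X_student, soft)           # student: distill soft scores
print(w_s.shape)                              # usable online without "clicked"
```

At serving time only `X_student` is needed; the paper's non-monotone finding says the value of this transfer peaks at intermediate privileged-feature predictiveness.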
Learning Using Privileged Information: SVM+ and Weighted SVM
Prior knowledge can be used to improve predictive performance of learning
algorithms or reduce the amount of data required for training. The same goal is
pursued within the learning using privileged information paradigm, which was
recently introduced by Vapnik et al. and aims to utilize additional
information available only at training time -- a framework implemented by SVM+.
We relate the privileged information to importance weighting and show that the
prior knowledge expressible with privileged features can also be encoded by
weights associated with every training example. We show that a weighted SVM can
always replicate an SVM+ solution, while the converse is not true, and we
construct a counterexample highlighting the limitations of SVM+. Finally, we
touch on the problem of choosing weights for weighted SVMs when privileged
features are not available.
Comment: 18 pages, 8 figures; integrated reviewer comments, improved typesetting
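The weighted SVM side of this equivalence is easy to make concrete: per-example weights scale each hinge-loss term, which is the mechanism the paper shows can encode privileged knowledge. A subgradient-descent sketch with illustrative hyperparameters (not the paper's solver or its SVM+-replication construction):

```python
import numpy as np

def weighted_linear_svm(X, y, weights, lam=0.01, lr=0.1, steps=1000):
    """Weighted linear SVM primal, solved by subgradient descent:
    min_w  lam/2 ||w||^2 + (1/n) sum_i c_i * max(0, 1 - y_i w.x_i)
    Per-example weights c_i are the channel through which privileged
    knowledge can be encoded."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        active = (margins < 1).astype(float) * weights  # c_i on hinge-active points
        grad = lam * w - (X * (active * y)[:, None]).sum(axis=0) / n
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
w_uniform = weighted_linear_svm(X, y, np.ones(200))       # plain SVM
# Non-uniform weights, e.g. derived from confidence in privileged features:
w_weighted = weighted_linear_svm(X, y, rng.uniform(0.5, 2.0, 200))
```

With uniform weights this reduces to the standard SVM; the paper's result is that a suitable choice of the `weights` vector can reproduce any SVM+ solution, while the reverse direction fails.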