Fast Single-Class Classification and the Principle of Logit Separation
We consider neural network training in applications in which there are many possible classes, but at test-time the task is a binary one:
determining whether the given example belongs to a specific class, where the
class of interest can be different each time the classifier is applied. For
instance, this is the case for real-time image search. We define the Single
Logit Classification (SLC) task: training the network so that at test-time, it
would be possible to accurately identify whether the example belongs to a given
class in a computationally efficient manner, based only on the output logit for
this class. We propose a natural principle, the Principle of Logit Separation,
as a guideline for choosing and designing losses suitable for the SLC. We show
that the cross-entropy loss function is not aligned with the Principle of Logit
Separation. In contrast, there are known loss functions, as well as novel batch
loss functions that we propose, which are aligned with this principle. In
total, we study seven loss functions. Our experiments show that indeed in
almost all cases, losses that are aligned with the Principle of Logit
Separation obtain at least 20% relative accuracy improvement in the SLC task
compared to losses that are not aligned with it, and sometimes considerably
more. Furthermore, we show that fast SLC does not cause any drop in binary
classification accuracy, compared to standard classification in which all
logits are computed, and yields a speedup which grows with the number of
classes. For instance, we demonstrate a 10x speedup when the number of classes
is 400,000. Tensorflow code for optimizing the new batch losses is publicly
available at https://github.com/cruvadom/Logit_Separation.
Comment: Published as a conference paper in ICDM 2018
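To make the described test-time saving concrete, here is a minimal sketch (not the authors' released code) of single-logit scoring in NumPy: only one row of the output weight matrix is touched per query, so the cost is independent of the number of classes. The shapes, the threshold of 0, and the variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes; the saving grows with the number of classes C.
d, C = 512, 10_000
rng = np.random.default_rng(0)
W = rng.standard_normal((C, d)) * 0.01   # output-layer weights, one row per class
b = np.zeros(C)                          # output-layer biases

def single_logit_score(features, class_id, threshold=0.0):
    """Decide whether `features` belongs to `class_id` from that class's
    logit alone, without computing the other C - 1 logits."""
    logit = W[class_id] @ features + b[class_id]
    return logit > threshold, logit

features = rng.standard_normal(d)        # hidden representation of one test example
is_member, score = single_logit_score(features, class_id=1234)
```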
Artificial Sequences and Complexity Measures
In this paper we exploit concepts of information theory to address the fundamental problem of identifying and defining the most suitable tools to extract, in an automatic and agnostic way, information from a generic string of characters. In particular, we introduce a class of methods that rely crucially on data compression techniques to define a measure of remoteness, or distance, between pairs of character sequences (e.g. texts) based on their relative information content. We also discuss in detail how specific features of data compression techniques can be used to introduce the notions of the dictionary of a given sequence and of an Artificial Text, and we show how these new tools can be used for information extraction purposes. We point out the versatility and generality of our method, which applies to any kind of corpus of character strings independently of the type of coding behind them. As a case study we consider linguistically motivated problems and present results for automatic language recognition, authorship attribution and self-consistent classification.
Comment: Revised version, with major changes, of the previous "Data Compression approach to Information Extraction and Classification" by A. Baronchelli and V. Loreto. 15 pages; 5 figures
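As a rough illustration of the compression-based idea (not the authors' exact estimator), the sketch below uses a normalized compression distance computed with zlib as a stand-in for a generic compressor; the toy strings and the specific normalization are assumptions for illustration only.

```python
import zlib

def compressed_len(s: bytes) -> int:
    """Length of the zlib-compressed byte string."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when one sequence is well
    captured by the regularities the compressor finds in the other."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy usage; in practice the measure is meaningful only for longer sequences.
a = "nel mezzo del cammin di nostra vita mi ritrovai".encode()
b = "per una selva oscura che la diritta via era smarrita".encode()
c = "to be or not to be that is the question".encode()
print(ncd(a, b), ncd(a, c))
```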
Multitask learning without label correspondences
We propose an algorithm to perform multitask learning where each task has a potentially distinct label set and label correspondences are not readily available. This is in contrast with existing methods, which either assume that the label sets shared by different tasks are the same or that there exists a label mapping oracle. Our method directly maximizes the mutual information among the labels, and we show that the resulting objective function can be efficiently optimized using existing algorithms. Our proposed approach has a direct application to data integration across different label spaces for the purpose of classification, such as integrating the Yahoo! and DMOZ web directories.
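For intuition only, here is a small sketch of the quantity being maximized: an empirical mutual information between two tasks' label assignments over shared instances, estimated from a contingency table. The contingency-table construction and the toy labels are assumptions; the paper's actual optimization procedure is not reproduced here.

```python
import numpy as np

def empirical_mutual_information(labels_a, labels_b):
    """Mutual information (in nats) between two label assignments over the
    same instances, computed from their empirical joint distribution."""
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    joint = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(joint, (a_idx, b_idx), 1.0)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)       # marginal of task A labels
    pb = joint.sum(axis=0, keepdims=True)       # marginal of task B labels
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))

# Two directories labeling the same pages with different label sets.
yahoo = ["Arts", "Science", "Arts", "Sports"]
dmoz  = ["Culture", "Research", "Culture", "Recreation"]
print(empirical_mutual_information(yahoo, dmoz))
```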
Pairwise Confusion for Fine-Grained Visual Classification
Fine-Grained Visual Classification (FGVC) datasets contain small sample
sizes, along with significant intra-class variation and inter-class similarity.
While prior work has addressed intra-class variation using localization and
segmentation techniques, inter-class similarity may also affect feature
learning and reduce classification performance. In this work, we address this
problem using a novel optimization procedure for end-to-end neural network training on FGVC tasks. Our procedure, called Pairwise Confusion (PC), reduces overfitting by intentionally introducing confusion in the activations. With PC regularization, we obtain state-of-the-art performance on six of the most widely used FGVC datasets and demonstrate improved localization ability. PC is easy to implement, does not need excessive hyperparameter tuning during training, and does not add significant overhead at test time.
Comment: Camera-Ready version for ECCV 2018
Anchor Loss: Modulating Loss Scale Based on Prediction Difficulty
We propose a novel loss function that dynamically re-scales the cross entropy based on the prediction difficulty of each sample. Deep neural network architectures in image classification tasks struggle to disambiguate visually similar objects. Likewise, in human pose estimation, symmetric body parts often confuse the network into assigning them indiscriminative scores. This is due to the output prediction, in which only the highest-confidence label is selected without taking any measure of uncertainty into account. In this work, we define prediction difficulty as a relative property derived from the confidence score gap between the positive and negative labels. More precisely, the proposed loss function penalizes the network so that the score of a false prediction does not become significant. To demonstrate the efficacy of our loss function, we evaluate it on two different domains: image classification and human pose estimation. We find improvements in both applications, achieving higher accuracy than the baseline methods.
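The sketch below shows one way to modulate the cross entropy by the gap between the strongest negative score and the true-class score, in the spirit of the abstract; the modulating factor (1 + gap)**gamma and the value of gamma are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def difficulty_modulated_ce(logits, targets, gamma=0.5):
    """Cross entropy scaled per sample by a factor that grows with the gap
    between the highest negative-class probability and the true-class
    probability, so hard (confused) samples contribute more to the loss."""
    probs = softmax(logits)
    idx = np.arange(len(targets))
    q_pos = probs[idx, targets]                  # true-class probabilities
    q_neg = probs.copy()
    q_neg[idx, targets] = -np.inf
    gap = q_neg.max(axis=1) - q_pos              # positive when a wrong class wins
    weight = (1.0 + np.clip(gap, 0.0, None)) ** gamma
    return float(np.mean(-weight * np.log(q_pos + 1e-12)))

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 10))
targets = rng.integers(0, 10, size=4)
print(difficulty_modulated_ce(logits, targets))
```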
A hierarchical loss and its problems when classifying non-hierarchically
Failing to distinguish between a sheepdog and a skyscraper should be worse
and penalized more than failing to distinguish between a sheepdog and a poodle;
after all, sheepdogs and poodles are both breeds of dogs. However, existing
metrics of failure (so-called "loss" or "win") used in textual or visual
classification/recognition via neural networks seldom leverage a-priori
information, such as a sheepdog being more similar to a poodle than to a
skyscraper. We define a metric that, inter alia, can penalize failure to
distinguish between a sheepdog and a skyscraper more than failure to
distinguish between a sheepdog and a poodle. Unlike previously employed possibilities, this metric is based on an ultrametric tree associated with any given organization of a classifier's classes into a semantically meaningful hierarchy. An ultrametric tree is a tree equipped with a so-called ultrametric distance, under which all leaves are at the same distance from
the root. Unfortunately, extensive numerical experiments indicate that the
standard practice of training neural networks via stochastic gradient descent
with random starting points often drives down the hierarchical loss nearly as
much when minimizing the standard cross-entropy loss as when trying to minimize
the hierarchical loss directly. Thus, this hierarchical loss is unreliable as
an objective for plain, randomly started stochastic gradient descent to
minimize; the main value of the hierarchical loss may be merely as a meaningful
metric of success of a classifier.
Comment: 19 pages, 4 figures, 7 tables
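The toy sketch below illustrates the underlying construction (not the paper's exact loss): derive a pairwise class distance from an ultrametric tree in which every leaf sits at the same depth, and charge a misclassification by the distance between the predicted and true classes. The three-class hierarchy and the use of the lowest-common-ancestor height as the distance are illustrative assumptions.

```python
# Toy hierarchy; every leaf is at depth 2, so the tree is ultrametric:
#            root
#           /    \
#         dog   building
#        /   \       \
#  sheepdog poodle skyscraper
parents = {"sheepdog": "dog", "poodle": "dog", "skyscraper": "building",
           "dog": "root", "building": "root"}

def ancestors(node):
    """Path from a node up to the root, inclusive."""
    path = [node]
    while node in parents:
        node = parents[node]
        path.append(node)
    return path

def ultrametric_distance(a, b):
    """Height of the lowest common ancestor above the leaves: 0 if a == b,
    1 for siblings (sheepdog vs. poodle), 2 across the root
    (sheepdog vs. skyscraper)."""
    if a == b:
        return 0
    pa, pb = ancestors(a), ancestors(b)
    lca = next(x for x in pa if x in pb)
    return pa.index(lca)

print(ultrametric_distance("sheepdog", "poodle"),      # 1: mild error
      ultrametric_distance("sheepdog", "skyscraper"))  # 2: penalized more
```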