Making Risk Minimization Tolerant to Label Noise
In many applications, the training data from which one needs to learn a
classifier is corrupted with label noise. Many standard algorithms such as SVM
perform poorly in the presence of label noise. In this paper we investigate the
robustness of risk minimization to label noise. We prove a sufficient condition
on a loss function for the risk minimization under that loss to be tolerant to
uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and
probit loss satisfy this condition, though none of the standard convex loss
functions satisfy it. We also prove that, by choosing a sufficiently large
value of a parameter in the loss function, the sigmoid loss, ramp loss and
probit loss can be made tolerant to non-uniform label noise as well, provided
the classes are separable under the noise-free data distribution. Through
extensive empirical studies, we show that risk minimization under the 0-1
loss, the sigmoid loss and the ramp loss has much better robustness to label
noise than the SVM algorithm.
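The sufficient condition in question is a symmetry property: for a margin-based loss written as a function of the margin a = y f(x), risk minimization is tolerant to uniform label noise when loss(a) + loss(-a) is a constant. The sketch below checks this numerically; the specific loss definitions and the beta parameter are illustrative choices, not the paper's exact formulations.

```python
import math

def sigmoid_loss(margin, beta=1.0):
    # Sigmoid loss on the margin a = y * f(x).
    return 1.0 / (1.0 + math.exp(beta * margin))

def ramp_loss(margin):
    # Ramp (clipped hinge) loss, scaled to take values in [0, 1].
    return 0.5 * min(2.0, max(0.0, 1.0 - margin))

def hinge_loss(margin):
    # Standard (convex) hinge loss, as used by SVM.
    return max(0.0, 1.0 - margin)

def symmetry_gap(loss, margins):
    # The noise-tolerance condition asks loss(a) + loss(-a) to be
    # constant over all margins a; the gap below is zero iff it is.
    sums = [loss(a) + loss(-a) for a in margins]
    return max(sums) - min(sums)

margins = [i / 100.0 for i in range(-500, 501)]
print(symmetry_gap(sigmoid_loss, margins))  # ~0: condition holds
print(symmetry_gap(ramp_loss, margins))     # ~0: condition holds
print(symmetry_gap(hinge_loss, margins))    # > 0: hinge is not symmetric
```

The hinge loss fails the check, which is consistent with the claim that no standard convex loss satisfies the condition.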
A Convex Feature Learning Formulation for Latent Task Structure Discovery
This paper considers the multi-task learning problem in the setting where
some relevant features may be shared across a few related tasks. Most of the
existing methods assume that the extent to which the given tasks are related,
or share a common feature space, is known a priori. In real-world applications
however, it is desirable to automatically discover the groups of related tasks
that share a feature space. In this paper we aim at searching the exponentially
large space of all possible groups of tasks that may share a feature space. The
main contribution is a convex formulation that employs a graph-based
regularizer and simultaneously discovers a few groups of related tasks, having
close-by task parameters, as well as the feature space shared within each
group. The regularizer encodes an important structure among the groups of tasks
leading to an efficient algorithm for solving it: if there is no feature space
under which a group of tasks has close-by task parameters, then there does not
exist such a feature space for any of its supersets. An efficient active set
algorithm that exploits this simplification and performs a clever search in the
exponentially large space is presented. The algorithm is guaranteed to solve
the proposed formulation (within some precision) in a time polynomial in the
number of groups of related tasks discovered. Empirical results on benchmark
datasets show that the proposed formulation achieves good generalization and
outperforms state-of-the-art multi-task learning algorithms in some cases.
Comment: ICML201
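The superset property stated above is the same kind of monotonicity that Apriori-style searches exploit: once a group fails, no superset of it needs to be visited. The sketch below is a hypothetical illustration of such a level-wise active-set search; the viability test `has_shared_space` is a stand-in that merely checks pairwise closeness of task parameters, not the paper's graph-based regularizer.

```python
import math
from itertools import combinations

def has_shared_space(group, task_params, tol=1.0):
    # Stand-in viability test: keep a group only if all of its
    # task-parameter vectors are pairwise close. This preserves the
    # key monotonicity: if a group fails, every superset fails too.
    return all(math.dist(task_params[a], task_params[b]) <= tol
               for a, b in combinations(sorted(group), 2))

def discover_groups(tasks, task_params, tol=1.0):
    # Level-wise active-set search: grow viable groups one task at a
    # time, never expanding a group that already lacks a shared
    # feature space, so the exponential space is pruned aggressively.
    viable = {frozenset([t]) for t in tasks}
    discovered = set(viable)
    while viable:
        viable = {g | {t}
                  for g in viable for t in tasks if t not in g
                  if has_shared_space(g | {t}, task_params, tol)}
        discovered |= viable
    return discovered

# Toy run: tasks 0 and 1 have close parameters, task 2 is far away,
# so only {0, 1} (plus the singletons) should survive the search.
params = {0: (0.0, 0.0), 1: (0.5, 0.0), 2: (10.0, 10.0)}
print(discover_groups([0, 1, 2], params, tol=1.0))
```

The running time is driven by the number of viable groups actually discovered, which mirrors the polynomial-in-output guarantee claimed for the real algorithm.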
Visual Representations: Defining Properties and Deep Approximations
Visual representations are defined in terms of minimal sufficient statistics
of visual data, for a class of tasks, that are also invariant to nuisance
variability. Minimal sufficiency guarantees that we can store a representation
in lieu of raw data with smallest complexity and no performance loss on the
task at hand. Invariance guarantees that the statistic is constant with respect
to uninformative transformations of the data. We derive analytical expressions
for such representations and show they are related to feature descriptors
commonly used in computer vision, as well as to convolutional neural networks.
This link highlights the assumptions and approximations tacitly assumed by
these methods and explains empirical practices such as clamping, pooling and
joint normalization.
Comment: UCLA CSD TR140023, Nov. 12, 2014; revised April 13, 2015, November 13, 2015, February 28, 201
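As a toy illustration of how an empirical practice such as pooling approximates invariance to a nuisance transformation (here, small translations), one can check that a max-pooled signal is unchanged when a feature shifts within a pooling window. This one-dimensional example is only a cartoon of the ideas in the paper.

```python
def max_pool(signal, window):
    # Non-overlapping max pooling over a 1-D signal.
    return [max(signal[i:i + window])
            for i in range(0, len(signal), window)]

# A spike anywhere inside one pooling window yields the same output:
# pooling trades spatial precision for invariance to small shifts.
x = [0, 0, 5, 0, 0, 0, 0, 0]
x_shifted = [0, 5, 0, 0, 0, 0, 0, 0]   # spike moved by one sample
print(max_pool(x, 4))          # [5, 0]
print(max_pool(x_shifted, 4))  # [5, 0]
```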
Classification with Asymmetric Label Noise: Consistency and Maximal Denoising
In many real-world classification problems, the labels of training examples
are randomly corrupted. Most previous theoretical work on classification with
label noise assumes that the two classes are separable, that the label noise is
independent of the true class label, or that the noise proportions for each
class are known. In this work, we give conditions that are necessary and
sufficient for the true class-conditional distributions to be identifiable.
These conditions are weaker than those analyzed previously, and allow for the
classes to be nonseparable and the noise levels to be asymmetric and unknown.
The conditions essentially state that a majority of the observed labels are
correct and that the true class-conditional distributions are "mutually
irreducible," a concept we introduce that limits the similarity of the two
distributions. For any label noise problem, there is a unique pair of true
class-conditional distributions satisfying the proposed conditions, and we
argue that this pair corresponds in a certain sense to maximal denoising of the
observed distributions.
Our results are facilitated by a connection to "mixture proportion
estimation," which is the problem of estimating the maximal proportion of one
distribution that is present in another. We establish a novel rate of
convergence result for mixture proportion estimation, and apply this to obtain
consistency of a discrimination rule based on surrogate loss minimization.
Experimental results on benchmark data and a nuclear particle classification
problem demonstrate the efficacy of our approach.
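Mixture proportion estimation can be sketched with a simple plug-in estimator: the maximal proportion of a distribution H inside a distribution F is the infimum over sets S of F(S)/H(S), approximated here over histogram bins. This is an illustrative estimator, not the one analyzed in the paper, and the distributions, bin edges, and sample sizes below are arbitrary choices.

```python
import random

def mpe_kappa(sample_f, sample_h, edges):
    # Plug-in sketch of mixture proportion estimation: take the
    # minimum ratio F(bin) / H(bin) over bins where H puts mass.
    def hist(xs):
        counts = [0] * (len(edges) - 1)
        for x in xs:
            for i in range(len(edges) - 1):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        return [c / len(xs) for c in counts]
    f_mass, h_mass = hist(sample_f), hist(sample_h)
    return min(fi / hi for fi, hi in zip(f_mass, h_mass) if hi > 0)

random.seed(0)
# H is uniform on [0, 1]; the other component G is uniform on [2, 3];
# F mixes them with a true proportion kappa = 0.3 of H. Because the
# supports are disjoint, the true maximal proportion is exactly 0.3.
h = [random.random() for _ in range(20000)]
f = [random.random() if random.random() < 0.3 else 2.0 + random.random()
     for _ in range(20000)]
edges = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0]
kappa_hat = mpe_kappa(f, h, edges)
print(kappa_hat)  # close to the true mixing proportion 0.3
```

When the two class-conditionals are "mutually irreducible" in the paper's sense, the analogous infimum pins down a unique decomposition, which is what makes the denoising maximal.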
Robust Loss Functions under Label Noise for Deep Neural Networks
In many applications of classifier learning, training data suffers from label
noise. Deep networks are trained on huge datasets, where the problem of
noisy labels is particularly relevant. The current techniques proposed for
learning deep networks under label noise focus on modifying the network
architecture and on algorithms for estimating true labels from noisy labels. An
alternate approach would be to look for loss functions that are inherently
noise-tolerant. For binary classification there exist theoretical results on
loss functions that are robust to label noise. In this paper, we provide some
sufficient conditions on a loss function so that risk minimization under that
loss function would be inherently tolerant to label noise for multiclass
classification problems. These results generalize the existing results on
noise-tolerant loss functions for binary classification. We study some of the
widely used loss functions in deep networks and show that the loss function
based on the mean absolute value of error is inherently robust to label noise.
Thus, standard backpropagation is enough to learn the true classifier even under
label noise. Through experiments, we illustrate the robustness of risk
minimization with such loss functions for learning neural networks.
Comment: Appeared in AAAI 201
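For k-class problems, the sufficient condition in question is that the loss summed over all k labels is a constant independent of the network's prediction; the mean-absolute-error loss satisfies it, while categorical cross entropy does not. A small numerical check, with softmax outputs hard-coded for illustration:

```python
import math

def mae_loss(probs, label):
    # Mean-absolute-error between softmax output and one-hot target.
    return sum(abs((1.0 if j == label else 0.0) - p)
               for j, p in enumerate(probs))

def cce_loss(probs, label):
    # Categorical cross entropy.
    return -math.log(probs[label])

def loss_sum_over_labels(loss, probs):
    # The symmetry condition: summing the loss over all class labels
    # should give a constant that does not depend on the prediction.
    return sum(loss(probs, j) for j in range(len(probs)))

p1 = [0.7, 0.2, 0.1]
p2 = [0.1, 0.1, 0.8]
print(loss_sum_over_labels(mae_loss, p1))  # 2*(k-1) = 4, any probs
print(loss_sum_over_labels(mae_loss, p2))  # same constant again
print(loss_sum_over_labels(cce_loss, p1))  # varies with the prediction
print(loss_sum_over_labels(cce_loss, p2))
```

Since the MAE sum is 2(k-1) for any probability vector, flipping a fraction of labels shifts the risk of every classifier by the same amount, leaving the minimizer unchanged.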
Phase transitions and configuration space topology
Equilibrium phase transitions may be defined as nonanalytic points of
thermodynamic functions, e.g., of the canonical free energy. Given a certain
physical system, it is of interest to understand which properties of the system
account for the presence of a phase transition, and an understanding of these
properties may lead to a deeper understanding of the physical phenomenon. One
possible approach to this issue, reviewed and discussed in the present paper,
is the study of topology changes in configuration space which, remarkably, are
found to be related to equilibrium phase transitions in classical statistical
mechanical systems. For the study of configuration space topology, one
considers the subsets M_v, consisting of all points from configuration space
with a potential energy per particle equal to or less than a given v. For
finite systems, topology changes of M_v are intimately related to nonanalytic
points of the microcanonical entropy (which, as a surprise to many, do exist).
In the thermodynamic limit, a more complex relation between nonanalytic points
of thermodynamic functions (i.e., phase transitions) and topology changes is
observed. For some class of short-range systems, a topology change of the M_v
at v=v_t was proved to be necessary for a phase transition to take place at a
potential energy v_t. In contrast, phase transitions in systems with long-range
interactions or in systems with non-confining potentials need not be
accompanied by such a topology change. Instead, for such systems the
nonanalytic point in a thermodynamic function is found to have some
maximization procedure at its origin. These results may foster insight into the
mechanisms which lead to the occurrence of a phase transition, and thus may
help to explore the origin of this physical phenomenon.
Comment: 22 pages, 6 figures
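A one-dimensional cartoon of such a topology change, far simpler than the high-dimensional configuration spaces the paper treats: for a double-well potential, the sublevel set M_v consists of two disjoint components below the barrier energy and merges into a single component above it.

```python
def sublevel_components(potential, v, xs):
    # Count connected components of M_v = {x : V(x) <= v} on a 1-D
    # grid: each maximal run of consecutive grid points inside the
    # set counts as one component.
    count, prev = 0, False
    for x in xs:
        inside = potential(x) <= v
        if inside and not prev:
            count += 1
        prev = inside
    return count

# Double-well potential: minima at x = +-1/sqrt(2) with V = -1/4,
# and a barrier (saddle) at x = 0 with V = 0.
V = lambda x: x**4 - x**2
xs = [i / 1000.0 for i in range(-2000, 2001)]
print(sublevel_components(V, -0.1, xs))  # 2: two disconnected wells
print(sublevel_components(V, 0.1, xs))   # 1: wells merge above barrier
```

The component count changes exactly as v crosses the barrier value v_t = 0, the kind of topology change that, for short-range systems, is necessary for a phase transition at that potential energy.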