Pruning training sets for learning of object categories
Training datasets for learning of object categories are often contaminated or imperfect. We explore an approach to automatically identify examples that are noisy or troublesome for learning and exclude them from the training set. The problem is relevant to learning in semi-supervised or unsupervised settings, as well as to learning when the training data is contaminated with wrongly labeled examples or when correctly labeled but hard-to-learn examples are present. We propose a fully automatic mechanism for noise cleaning, called "data pruning", and demonstrate its success on learning of human faces. It is not assumed that the data or the noise can be modeled or that additional training examples are available. Our experiments show that data pruning can improve generalization performance for algorithms with varying robustness to noise. It outperforms methods with regularization properties and is superior to commonly applied aggregation methods, such as bagging.
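As a rough illustration of the idea (not the paper's actual mechanism, which is more involved), one can score each example by whether a model trained on the current set disagrees with its label, and drop persistent disagreements. The one-dimensional threshold "classifier" below is purely hypothetical:

```python
def train_threshold(data):
    # Toy "model": pick the threshold t that best separates labels 0/1
    # on 1-D points (x, y), predicting label 1 iff x > t.
    best_t, best_acc = 0.0, -1.0
    for t, _ in data:
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def prune(data, rounds=3):
    # Data-pruning sketch: repeatedly fit a model on the kept examples
    # and discard examples whose label the model contradicts.
    kept = list(data)
    for _ in range(rounds):
        t = train_threshold(kept)
        kept = [(x, y) for x, y in kept if (x > t) == bool(y)]
    return kept

# Clean rule: label = 1 iff x > 0.5; then append two mislabeled points.
clean = [(i / 10, int(i / 10 > 0.5)) for i in range(11)]
noisy = clean + [(0.1, 1), (0.9, 0)]
pruned = prune(noisy)  # the two mislabeled points are removed
```

The loop is deliberately minimal; the point is only that pruning decisions come from the learner itself rather than from a model of the noise.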
Learning to Rank from Samples of Variable Quality
Training deep neural networks requires many training samples, but in
practice, training labels are expensive to obtain and may be of varying
quality, as some may be from trusted expert labelers while others might be from
heuristics or other sources of weak supervision such as crowd-sourcing. This
creates a fundamental quality-versus-quantity trade-off in the learning
process. Do we learn from the small amount of high-quality data or the
potentially large amount of weakly-labeled data? We argue that if the learner
could somehow know and take the label-quality into account when learning the
data representation, we could get the best of both worlds. To this end, we
introduce "fidelity-weighted learning" (FWL), a semi-supervised student-teacher
approach for training deep neural networks using weakly-labeled data. FWL
modulates the parameter updates to a student network (trained on the task we
care about) on a per-sample basis according to the posterior confidence of its
label-quality estimated by a teacher (who has access to the high-quality
labels). Both student and teacher are learned from the data. We evaluate FWL on
document ranking where we outperform state-of-the-art alternative
semi-supervised methods. Comment: Presented at The First International SIGIR2016 Workshop on Learning
From Limited Or Noisy Data For Information Retrieval. arXiv admin note:
substantial text overlap with arXiv:1711.0279
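The per-sample modulation described above can be sketched schematically. This is not FWL itself (whose teacher is a separately trained model that estimates label confidence); here the confidences are simply given as inputs, and the "student" is a one-dimensional linear regressor trained by SGD:

```python
def weighted_sgd(samples, lr=0.1, epochs=50):
    # samples: (x, weak_label, confidence) triples, confidence in [0, 1].
    # Each gradient step is scaled by the label's confidence, so
    # low-trust labels move the student less -- the core idea of
    # fidelity-weighted updates, in miniature.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y, c in samples:
            err = (w * x + b) - y
            w -= lr * c * err * x
            b -= lr * c * err
    return w, b

# True relation y = 2x; the third label is corrupted but assigned
# a low confidence (as a teacher might estimate).
data = [(1.0, 2.0, 1.0), (2.0, 4.0, 1.0), (3.0, 0.0, 0.05)]
w, b = weighted_sgd(data)
```

With the corrupted label downweighted, the fit stays close to the two trusted points; treating all confidences as 1.0 instead lets the bad label drag the slope far off.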
Distant Learning for Entity Linking with Automatic Noise Detection
Accurate entity linkers have been produced for domains and languages where
annotated data (i.e., texts linked to a knowledge base) is available. However,
little progress has been made for the settings where no or very limited amounts
of labeled data are present (e.g., legal or most scientific domains). In this
work, we show how we can learn to link mentions without having any labeled
examples, only a knowledge base and a collection of unannotated texts from the
corresponding domain. In order to achieve this, we frame the task as a
multi-instance learning problem and rely on surface matching to create initial
noisy labels. As the learning signal is weak and our surrogate labels are
noisy, we introduce a noise detection component in our model: it lets the model
detect and disregard examples which are likely to be noisy. Our method, jointly
learning to detect noise and link entities, greatly outperforms the surface
matching baseline. For a subset of entity categories, it even approaches the
performance of supervised learning. Comment: ACL 201
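The surface-matching step that produces the initial noisy labels can be sketched as follows (the toy knowledge base, entity IDs, and normalization are illustrative, not from the paper):

```python
# Hypothetical toy knowledge base: entity id -> canonical name.
KB = {
    "Q90": "Paris",
    "Q142": "France",
    "Q7186": "Marie Curie",
}

def normalize(s):
    return s.lower().strip()

def surface_match(mention):
    # Noisy labeling: link a mention to every KB entity whose canonical
    # name matches its surface form. Ambiguous matches and misses are
    # exactly the noise a downstream detection component must handle.
    norm = normalize(mention)
    return [eid for eid, name in KB.items() if normalize(name) == norm]

labels = surface_match("paris")  # matches entity "Q90"
```

A real pipeline would match against aliases and partial spans, which makes the labels even noisier and motivates letting the model learn to disregard likely-wrong examples.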