Training Complex Models with Multi-Task Weak Supervision
As machine learning models continue to increase in complexity, collecting
large hand-labeled training sets has become one of the biggest roadblocks in
practice. Instead, weaker forms of supervision that provide noisier but cheaper
labels are often used. However, these weak supervision sources have diverse and
unknown accuracies, may output correlated labels, and may label different tasks
or apply at different levels of granularity. We propose a framework for
integrating and modeling such weak supervision sources by viewing them as
labeling different related sub-tasks of a problem, which we refer to as the
multi-task weak supervision setting. We show that by solving a matrix
completion-style problem, we can recover the accuracies of these multi-task
sources given their dependency structure, but without any labeled data, leading
to higher-quality supervision for training an end model. Theoretically, we show
that the generalization error of models trained with this approach improves
with the number of unlabeled data points, and characterize the scaling with
respect to the task and dependency structures. On three fine-grained
classification problems, we show that our approach leads to average gains of
20.2 points in accuracy over a traditional supervised approach, 6.8 points over
a majority vote baseline, and 4.1 points over a previously proposed weak
supervision method that models tasks separately.
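
The abstract contrasts a majority-vote baseline with combining weak sources according to their accuracies. The sketch below is a hypothetical illustration of that contrast only, not the paper's method: it weights each source by the log-odds of an *assumed* accuracy (a naive-Bayes-style combination of independent sources). The paper's actual contribution is recovering these accuracies without any labeled data, via a matrix completion-style problem over the sources' dependency structure.

```python
import numpy as np

# Each row is a data point; each column is one weak source's vote in {0, 1}.
# Toy data chosen so the two combination rules visibly disagree.
votes = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 0],
])

# Majority-vote baseline: every source weighted equally.
majority = (votes.mean(axis=1) > 0.5).astype(int)

# Accuracy-weighted vote: weight each source by the log-odds of its
# accuracy. NOTE: these accuracies are assumed known here purely for
# illustration; the paper estimates them from unlabeled data.
acc = np.array([0.9, 0.6, 0.55])
w = np.log(acc / (1 - acc))                # log-odds weights per source
scores = votes @ w - (1 - votes) @ w       # net evidence for label 1 vs 0
weighted = (scores > 0).astype(int)
```

On the second row, the single high-accuracy source (0.9) outvotes the two weak ones under the weighted rule but loses under plain majority vote, which is exactly the kind of gap a learned accuracy model exploits.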