Label Budget Allocation in Multi-Task Learning
The cost of labeling data often limits the performance of machine learning
systems. In multi-task learning, related tasks provide information to each
other and improve overall performance, but the label cost can vary among tasks.
How should the label budget (i.e. the amount of money spent on labeling) be
allocated among different tasks to achieve optimal multi-task performance? We
are the first to propose and formally define the label budget allocation
problem in multi-task learning and to empirically show that different budget
allocation strategies substantially affect multi-task performance. We propose a
Task-Adaptive Budget Allocation algorithm to robustly generate the optimal
budget allocation adaptive to different multi-task learning settings.
Specifically, we estimate and then maximize the extent of new information
obtained from the allocated budget as a proxy for multi-task learning
performance. Experiments on PASCAL VOC and Taskonomy demonstrate the efficacy
of our approach over other widely used heuristic labeling strategies.
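The core idea of the abstract (estimate, then maximize, the new information obtained per unit of budget) could be sketched as a greedy allocator. This is an illustrative toy, not the paper's Task-Adaptive Budget Allocation algorithm: the per-task information-gain functions and unit label costs below are hypothetical stand-ins for the paper's learned estimates.

```python
import numpy as np

def allocate_budget(total_budget, info_gain_fns, unit_costs, step=1.0):
    """Greedy sketch: at each step, spend one budget unit on the task
    whose estimated marginal information gain per unit cost is highest.

    info_gain_fns: per-task callables mapping spent budget -> estimated
    cumulative information (hypothetical proxies, not the paper's model).
    unit_costs: per-task labeling cost for one budget step.
    """
    n_tasks = len(info_gain_fns)
    spent = np.zeros(n_tasks)
    remaining = total_budget
    while remaining >= step:
        # Marginal gain per unit cost of one more step on each task.
        gains = [
            (info_gain_fns[i](spent[i] + step) - info_gain_fns[i](spent[i]))
            / unit_costs[i]
            for i in range(n_tasks)
        ]
        best = int(np.argmax(gains))
        spent[best] += step
        remaining -= step
    return spent

# Toy example: two tasks with diminishing returns; task 0 gains
# information faster and is cheaper to label, so it receives more budget.
fns = [lambda b: np.log1p(2.0 * b), lambda b: np.log1p(b)]
costs = [1.0, 1.5]
alloc = allocate_budget(10.0, fns, costs)
```

Because the toy gain curves are concave (diminishing returns), the greedy rule spreads budget across tasks rather than spending everything on the single best task.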
DIME-FM: DIstilling Multimodal and Efficient Foundation Models
Large Vision-Language Foundation Models (VLFMs), such as CLIP, ALIGN and
Florence, are trained on large-scale datasets of image-caption pairs and
achieve superior transferability and robustness on downstream tasks, but they
are difficult to use in many practical applications due to their large size,
high latency and fixed architectures. Unfortunately, recent work shows that
training a small custom VLFM for resource-limited applications is very
difficult using only public, smaller-scale data. In this paper, we introduce a
new distillation mechanism (DIME-FM) that allows us to transfer the knowledge
contained in large VLFMs to smaller, customized foundation models using a
relatively small amount of inexpensive, unpaired images and sentences. We
transfer the knowledge from the pre-trained CLIP-ViTL/14 model to a ViT-B/32
model, with only 40M public images and 28.4M unpaired public sentences. The
resulting model "Distill-ViT-B/32" rivals the CLIP-ViT-B/32 model pre-trained
on its private WiT dataset (400M image-text pairs): Distill-ViT-B/32 achieves
similar results in terms of zero-shot and linear-probing performance on both
ImageNet and the ELEVATER (20 image classification tasks) benchmarks. It also
displays comparable robustness when evaluated on five datasets with natural
distribution shifts from ImageNet.
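Distilling from unpaired images and sentences, as the abstract describes, means the supervision cannot come from ground-truth image-caption pairs; it has to come from the teacher's own embedding geometry. A minimal sketch of that idea, assuming a simple KL term between teacher and student image-to-text similarity distributions (an illustrative objective, not DIME-FM's actual loss):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unpaired_distill_loss(t_img, s_img, t_txt, s_txt, tau=0.07):
    """Hypothetical unpaired vision-language distillation sketch.

    t_img, t_txt: teacher embeddings of a batch of (unpaired) images
    and sentences; s_img, s_txt: student embeddings of the same inputs.
    The student's image-to-text similarity distribution is pushed toward
    the teacher's via KL divergence, so no image-caption pairs are needed.
    """
    def norm(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)

    t_sim = softmax(norm(t_img) @ norm(t_txt).T / tau)
    s_sim = softmax(norm(s_img) @ norm(s_txt).T / tau)
    # KL(teacher || student), averaged over the image batch.
    kl = np.sum(t_sim * (np.log(t_sim + 1e-9) - np.log(s_sim + 1e-9)), axis=1)
    return float(np.mean(kl))
```

If the student reproduced the teacher's embeddings exactly, the loss would be zero; training the student to minimize it transfers the teacher's cross-modal similarity structure without any paired data.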
- …