Plex: Towards Reliability using Pretrained Large Model Extensions
A recent trend in artificial intelligence is the use of pretrained models for
language and vision tasks, which have achieved extraordinary performance but
also puzzling failures. Probing these models' abilities in diverse ways is
therefore critical to the field. In this paper, we explore the reliability of
models, where we define a reliable model as one that not only achieves strong
predictive performance but also performs well consistently over many
decision-making tasks involving uncertainty (e.g., selective prediction, open
set recognition), robust generalization (e.g., accuracy and proper scoring
rules such as log-likelihood on in- and out-of-distribution datasets), and
adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of
tasks over 40 datasets in order to evaluate different aspects of reliability on
both the vision and language domains. To improve reliability, we develop ViT-Plex
and T5-Plex, pretrained large model extensions for the vision and language
modalities, respectively.
modalities, respectively. Plex greatly improves the state-of-the-art across
reliability tasks, and simplifies the traditional protocol as it improves the
out-of-the-box performance and does not require designing scores or tuning the
model for each task. We demonstrate scaling effects over model sizes up to 1B
parameters and pretraining dataset sizes up to 4B examples. We also demonstrate
Plex's capabilities on challenging tasks including zero-shot open set
recognition, active learning, and uncertainty in conversational language
understanding.
Comment: Code available at https://goo.gle/plex-cod
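One of the uncertainty tasks named in the abstract, selective prediction, evaluates a model by letting it abstain on its least confident inputs and measuring error on the rest. A minimal risk-coverage sketch is below; the function and variable names are hypothetical and this is not the Plex codebase:

```python
import numpy as np

def risk_coverage_curve(confidences, correct):
    """Selective prediction sketch: rank examples by confidence, abstain on
    the least confident ones, and report error (risk) at each coverage level.
    confidences: (N,) model confidence per example
    correct:     (N,) 1 if the prediction was right, else 0
    """
    order = np.argsort(-confidences)            # most confident first
    errors = 1.0 - correct[order].astype(float)
    n = np.arange(1, len(order) + 1)
    coverage = n / len(order)                   # fraction of inputs answered
    risk = np.cumsum(errors) / n                # error rate among those answered
    return coverage, risk

# Toy example: a reliable model has low risk at low coverage, i.e. its
# most confident predictions are its most accurate ones.
conf = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
corr = np.array([1, 1, 1, 0, 0])
coverage, risk = risk_coverage_curve(conf, corr)
```

At full coverage the risk equals the ordinary error rate, so the curve's shape below full coverage is what distinguishes a well-calibrated model from an overconfident one.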
Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks
Deep learning models have achieved excellent recognition results on
large-scale video benchmarks. However, they perform poorly when applied to
videos with rare scenes or objects, primarily due to the bias of existing video
datasets. We tackle this problem from two different angles: algorithm and
dataset. From the perspective of algorithms, we propose Spatial-aware
Multi-Aspect Debiasing (SMAD), which incorporates both explicit debiasing with
multi-aspect adversarial training and implicit debiasing with the spatial
actionness reweighting module, to learn a more generic representation invariant
to non-action aspects. To neutralize the intrinsic dataset bias, we propose
OmniDebias, which selectively leverages web data for joint training and
achieves higher performance with far less web data. To verify the
effectiveness, we establish evaluation protocols and perform extensive
experiments on both re-distributed splits of existing datasets and a new
evaluation dataset focusing on actions with rare scenes. We also show that
the debiased representation can generalize better when transferred to other
datasets and tasks.
Comment: ECCVW 202
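The spatial actionness reweighting idea can be illustrated with a small pooling sketch: down-weight spatial locations dominated by scene or background so the pooled representation emphasizes action regions. This is a minimal sketch under assumed tensor shapes, not the authors' SMAD implementation, and all names here are hypothetical:

```python
import numpy as np

def actionness_reweighted_pool(features, actionness, eps=1e-8):
    """Implicit-debiasing sketch: pool a spatial feature map with weights
    given by an actionness score, so background-heavy locations contribute
    less to the final representation.
    features:   (H, W, C) spatial feature map
    actionness: (H, W) scores in [0, 1], higher = more action-relevant
    returns:    (C,) pooled feature vector
    """
    weights = actionness / (actionness.sum() + eps)   # normalize to sum to 1
    # Weighted sum over the two spatial axes -> one vector per channel.
    return np.tensordot(weights, features, axes=([0, 1], [0, 1]))

# Toy example: only the top-left location is "action", so the pooled
# vector equals that location's feature.
feats = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
pooled = actionness_reweighted_pool(feats, mask)
```

With a uniform actionness map this reduces to ordinary average pooling, which makes the reweighting a strict generalization of the standard pipeline.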