EgoTaskQA: Understanding Human Tasks in Egocentric Videos
Understanding human tasks through video observations is an essential
capability of intelligent agents. The challenges of such capability lie in the
difficulty of generating a detailed understanding of situated actions, their
effects on object states (i.e., state changes), and their causal dependencies.
These challenges are further aggravated by the natural parallelism from
multi-tasking and partial observations in multi-agent collaboration. Most prior
works leverage action localization or future prediction as an indirect metric
for evaluating such task understanding from videos. To make a direct
evaluation, we introduce the EgoTaskQA benchmark that provides a single home
for the crucial dimensions of task understanding through question-answering on
real-world egocentric videos. We meticulously design questions that target the
understanding of (1) action dependencies and effects, (2) intents and goals,
and (3) agents' beliefs about others. These questions are divided into four
types, including descriptive (what status?), predictive (what will?),
explanatory (what caused?), and counterfactual (what if?) to provide diagnostic
analyses on spatial, temporal, and causal understandings of goal-oriented
tasks. We evaluate state-of-the-art video reasoning models on our benchmark and
show the significant gap between them and humans in understanding complex
goal-oriented egocentric videos. We hope this effort will drive the vision
community to move forward with goal-oriented video understanding and reasoning.

Comment: Published at NeurIPS Track on Datasets and Benchmarks 2022
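To make the four question types concrete, the following is a minimal, hypothetical sketch of what question-answer entries along these dimensions could look like; the record fields and example questions are assumptions made for illustration, not the benchmark's actual schema or annotations.

```python
from dataclasses import dataclass

# Hypothetical illustration of the four question types named in the abstract
# (descriptive, predictive, explanatory, counterfactual). Field names and
# example questions are assumptions, not EgoTaskQA's real annotation format.
@dataclass
class QAEntry:
    video_id: str       # egocentric video clip the question refers to
    question_type: str  # descriptive | predictive | explanatory | counterfactual
    question: str
    answer: str

examples = [
    QAEntry("clip_001", "descriptive",
            "What is the status of the microwave after the person closes it?",
            "closed"),
    QAEntry("clip_001", "predictive",
            "What will the person do after picking up the cup?",
            "fill it with water"),
    QAEntry("clip_002", "explanatory",
            "What caused the bowl to become empty?",
            "the person poured its contents into the pan"),
    QAEntry("clip_002", "counterfactual",
            "What if the person had not turned on the stove?",
            "the food would stay uncooked"),
]

# A diagnostic evaluation would then report model accuracy per question type,
# separating spatial, temporal, and causal understanding.
for qa in examples:
    print(f"[{qa.question_type}] {qa.question} -> {qa.answer}")
```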
On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
Out-of-distribution (OOD) testing is increasingly popular for evaluating a
machine learning system's ability to generalize beyond the biases of a training
set. OOD benchmarks are designed to present a different joint distribution of
data and labels between training and test time. VQA-CP has become the standard
OOD benchmark for visual question answering, but we discovered three troubling
practices in its current use. First, most published methods rely on explicit
knowledge of the construction of the OOD splits, often exploiting it by
``inverting'' the distribution of labels, e.g. answering mostly 'yes' when the
common training answer is 'no'. Second, the OOD test set is used for model
selection. Third, a model's in-domain performance is assessed after retraining
it on in-domain splits (VQA v2) that exhibit a more balanced distribution of
labels. These three practices defeat the objective of evaluating
generalization, and put into question the value of methods specifically
designed for this dataset. We show that embarrassingly-simple methods,
including one that generates answers at random, surpass the state of the art on
some question types. We provide short- and long-term solutions to avoid these
pitfalls and realize the benefits of OOD evaluation.
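As a rough illustration of why such practices are problematic, the sketch below shows how a trivial per-question-type random-answer baseline can be scored on an OOD split; the data format and candidate-answer pools are assumptions for illustration, not the paper's exact setup.

```python
import random
from collections import defaultdict

# Sketch of an "answers at random" baseline evaluated per question type.
# The (question_type, answer) pairs and the candidate-answer pools are
# illustrative assumptions, not the construction used in the paper.
def random_baseline_accuracy(test_set, answer_pool_by_type, seed=0):
    """test_set: list of (question_type, ground_truth_answer) pairs."""
    rng = random.Random(seed)
    correct, total = defaultdict(int), defaultdict(int)
    for q_type, gt_answer in test_set:
        guess = rng.choice(answer_pool_by_type[q_type])
        total[q_type] += 1
        correct[q_type] += int(guess == gt_answer)
    return {t: correct[t] / total[t] for t in total}

# Toy example: on binary yes/no questions a random guess is right about half
# the time, which can already rival methods tuned to an inverted label
# distribution when judged on some question types.
test_set = [("yes/no", "yes")] * 40 + [("yes/no", "no")] * 60
pools = {"yes/no": ["yes", "no"]}
print(random_baseline_accuracy(test_set, pools))
```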
Where does good evidence come from?
This paper started as a debate between the two authors. Both authors present a series of propositions about quality standards in education research. Cook’s propositions, as might be expected, concern the importance of experimental trials for establishing the security of causal evidence, but they also include some important practical and acceptable alternatives such as regression discontinuity analysis. Gorard’s propositions, again as might be expected, tend to place experimental trials within a larger mixed method sequence of research activities, treating them as important but without giving them primacy. The paper concludes with a synthesis of these ideas, summarising the many areas of agreement and clarifying the few areas of disagreement. The latter include what proportion of available research funds should be devoted to trials, how urgent the need for more trials is, and whether the call for more truly mixed methods work requires a major shift in the community.