EgoTaskQA: Understanding Human Tasks in Egocentric Videos
Understanding human tasks through video observations is an essential
capability of intelligent agents. The challenges of such capability lie in the
difficulty of generating a detailed understanding of situated actions, their
effects on object states (i.e., state changes), and their causal dependencies.
These challenges are further aggravated by the natural parallelism from
multi-tasking and partial observations in multi-agent collaboration. Most prior
works leverage action localization or future prediction as an indirect metric
for evaluating such task understanding from videos. To make a direct
evaluation, we introduce the EgoTaskQA benchmark that provides a single home
for the crucial dimensions of task understanding through question-answering on
real-world egocentric videos. We meticulously design questions that target the
understanding of (1) action dependencies and effects, (2) intents and goals,
and (3) agents' beliefs about others. These questions are divided into four
types, including descriptive (what status?), predictive (what will?),
explanatory (what caused?), and counterfactual (what if?) to provide diagnostic
analyses on spatial, temporal, and causal understandings of goal-oriented
tasks. We evaluate state-of-the-art video reasoning models on our benchmark and
show the significant gaps between them and humans in understanding complex
goal-oriented egocentric videos. We hope this effort will drive the vision
community onward in goal-oriented video understanding and reasoning.
Comment: Published at NeurIPS Track on Datasets and Benchmarks 2022
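The four question types map naturally onto a structured annotation record. Below is a minimal sketch of what one benchmark entry and its exact-match scoring might look like; the schema and field names (`video_id`, `question_type`, `choices`) are illustrative assumptions, not EgoTaskQA's actual format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema for one EgoTaskQA-style entry; field names are
# assumptions for illustration, not the benchmark's published format.
@dataclass
class TaskQuestion:
    video_id: str       # egocentric video clip the question refers to
    question_type: str  # "descriptive" | "predictive" | "explanatory" | "counterfactual"
    target: str         # "action_dependency" | "intent_goal" | "agent_belief"
    question: str       # e.g. "What will the agent do after opening the fridge?"
    answer: str         # ground-truth answer string
    choices: List[str]  # candidate answers for multiple-choice evaluation

def accuracy(predictions: List[str], gold: List[TaskQuestion]) -> float:
    """Exact-match accuracy, the usual headline metric for QA benchmarks."""
    correct = sum(p == q.answer for p, q in zip(predictions, gold))
    return correct / len(gold)
```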
Egocentric Activity Recognition Using HOG, HOF and MBH Features
Recognizing egocentric actions is a challenging task that has received increasing attention in recent years. The recognition of first-person activities helps in assisting elderly people, disabled patients, and so on. Here, life-logging activity videos are taken as input, organized into two category levels: a top level and a second level. In this work, recognition is performed using Histogram of Oriented Gradients (HOG), Histogram of Optical Flow (HOF), and Motion Boundary Histogram (MBH) features. The extracted features are given as input to Support Vector Machine (SVM) and k-Nearest Neighbor (kNN) classifiers. The performance results show that SVM outperforms the kNN classifier for both category levels.
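The pipeline described above is a standard feature-concatenation-plus-classifier recipe. A minimal scikit-learn sketch, assuming the HOG, HOF, and MBH descriptors have already been extracted and pooled into fixed-length vectors per video; the feature file names and hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Assumption: precomputed per-video descriptor histograms, one row per video,
# saved to hypothetical .npy files; y holds the activity labels.
X_hog = np.load("hog_features.npy")
X_hof = np.load("hof_features.npy")
X_mbh = np.load("mbh_features.npy")
y = np.load("labels.npy")

# Concatenate the three descriptors into one feature vector per video.
X = np.hstack([X_hog, X_hof, X_mbh])

svm = SVC(kernel="rbf", C=10.0)           # hyperparameters are placeholders
knn = KNeighborsClassifier(n_neighbors=5)

for name, clf in [("SVM", svm), ("kNN", knn)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```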
Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
Large foundation models can exhibit unique capabilities depending on the
domain of data they are trained on. While these domains are generic, they may
only barely overlap. For example, visual-language models (VLMs) are trained on
Internet-scale image captions, but large language models (LMs) are further
trained on Internet-scale text with no images (e.g. from spreadsheets, to SAT
questions). As a result, these models store different forms of commonsense
knowledge across different domains. In this work, we show that this model
diversity is symbiotic, and can be leveraged to build AI systems with
structured Socratic dialogue -- in which new multimodal tasks are formulated as
a guided language-based exchange between different pre-existing foundation
models, without additional finetuning. In the context of egocentric perception,
we present a case study of Socratic Models (SMs) that can provide meaningful
results for complex tasks such as generating free-form answers to contextual
questions about egocentric video, by formulating video Q&A as short story Q&A,
i.e. summarizing the video into a short story, then answering questions about
it. Additionally, SMs can generate captions for Internet images, and are
competitive with state-of-the-art on zero-shot video-to-text retrieval with
42.8 R@1 on MSR-VTT 1k-A. SMs demonstrate how to compose foundation models
zero-shot to capture new multimodal functionalities, without domain-specific
data collection. Prototypes are available at socraticmodels.github.io.
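The video Q&A recipe, formulating video Q&A as short story Q&A, composes two models purely through language. A minimal sketch of that composition; `caption_frame` and `ask_lm` are assumed stand-ins for a VLM captioner and an LM, not the authors' actual interfaces.

```python
from typing import Callable, List

def video_qa(
    frames: List["Image"],
    question: str,
    caption_frame: Callable[["Image"], str],  # VLM: image -> caption (assumed interface)
    ask_lm: Callable[[str], str],             # LM: prompt -> completion (assumed interface)
) -> str:
    """Socratic-style video Q&A: summarize the video into a short story
    via per-frame captions, then answer the question against that story."""
    # 1. The VLM turns sampled frames into language.
    captions = [caption_frame(f) for f in frames]

    # 2. The LM condenses the captions into a short first-person story.
    story = ask_lm(
        "Summarize these moments from an egocentric video into a short story:\n"
        + "\n".join(f"- {c}" for c in captions)
    )

    # 3. The LM answers the contextual question about the story.
    return ask_lm(f"Story: {story}\n\nQuestion: {question}\nAnswer:")
```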
An Outlook into the Future of Egocentric Vision
What will the future be? We wonder! In this survey, we explore the gap
between current research in egocentric vision and the ever-anticipated future,
where wearable computing, with outward facing cameras and digital overlays, is
expected to be integrated into our everyday lives. To understand this gap, the
article starts by envisaging the future through character-based stories,
showcasing through examples the limitations of current technology. We then
provide a mapping between this future and previously defined research tasks.
For each task, we survey its seminal works, current state-of-the-art
methodologies and available datasets, then reflect on shortcomings that limit
its applicability to future research. Note that this survey focuses on software
models for egocentric vision, independent of any specific hardware. The paper
concludes with recommendations for areas of immediate explorations so as to
unlock our path to the future always-on, personalised and life-enhancing
egocentric vision.
Comment: We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1
NaQ: Leveraging Narrations as Queries to Supervise Episodic Memory
Searching long egocentric videos with natural language queries (NLQ) has
compelling applications in augmented reality and robotics, where a fluid index
into everything that a person (agent) has seen before could augment human
memory and surface relevant information on demand. However, the structured
nature of the learning problem (free-form text query inputs, localized video
temporal window outputs) and its needle-in-a-haystack nature makes it both
technically challenging and expensive to supervise. We introduce
Narrations-as-Queries (NaQ), a data augmentation strategy that transforms
standard video-text narrations into training data for a video query
localization model. Validating our idea on the Ego4D benchmark, we find it has
tremendous impact in practice. NaQ improves multiple top models by substantial
margins (even doubling their accuracy), and yields the very best results to
date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in
the CVPR and ECCV 2022 competitions and topping the current public leaderboard.
Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique
properties of our approach such as the ability to perform zero-shot and
few-shot NLQ, and improved performance on queries about long-tail object
categories. Code and models: http://vision.cs.utexas.edu/projects/naq.
Comment: 13 pages, 7 figures, appearing in CVPR 2023
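NaQ's core transformation, turning timestamped narrations into query-localization training data, can be sketched in a few lines. The narration format and the fixed temporal-window heuristic below are illustrative assumptions, not the paper's exact procedure.

```python
from typing import List, NamedTuple

class Narration(NamedTuple):
    text: str         # e.g. "C opens the fridge" (Ego4D-style narration)
    timestamp: float  # time (s) at which the narration is anchored

class NLQExample(NamedTuple):
    query: str   # free-form text query
    start: float # start of ground-truth temporal window (s)
    end: float   # end of ground-truth temporal window (s)

def narrations_to_queries(narrations: List[Narration],
                          window: float = 2.0) -> List[NLQExample]:
    """Convert timestamped narrations into NLQ-style training examples by
    treating each narration as a query whose answer is a short temporal
    window around its timestamp (the window size is an assumption here)."""
    return [
        NLQExample(query=n.text,
                   start=max(0.0, n.timestamp - window / 2),
                   end=n.timestamp + window / 2)
        for n in narrations
    ]
```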