46,786 research outputs found
StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipation
The anticipation problem has been studied from different angles, such as
predicting humans' locations, predicting hand and object trajectories, and
forecasting actions and human-object interactions. In this paper, we study
the short-term object interaction anticipation problem from the egocentric
point of view, proposing a new end-to-end architecture named StillFast. Our
approach simultaneously processes a still image and a video, detecting and
localizing next-active objects, predicting the verb that describes the future
interaction, and determining when the interaction will start. Experiments on the
large-scale egocentric dataset EGO4D show that our method outperforms
state-of-the-art approaches on the considered task. Our method is ranked first
in the public leaderboard of the EGO4D short term object interaction
anticipation challenge 2022. Please see the project web page for code and
additional details: https://iplab.dmi.unict.it/stillfast/
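The abstract describes a two-branch design that couples a high-resolution still image with a low-resolution video clip. The sketch below is a minimal, hypothetical rendering of that idea in PyTorch; the backbones, feature sizes, and head names are placeholders, not the actual StillFast architecture (see the project page for the real code).

```python
# Hypothetical StillFast-style two-branch sketch (not the authors' code):
# a 2D branch processes the high-resolution still frame, a 3D branch processes
# the low-resolution clip, and the fused features feed three heads that predict
# the next-active-object box, the interaction verb, and the time to contact.
import torch
import torch.nn as nn


class StillFastSketch(nn.Module):
    def __init__(self, num_verbs: int = 20, feat_dim: int = 256):
        super().__init__()
        # "Still" branch: 2D conv over a single high-resolution frame.
        self.still_branch = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # "Fast" branch: 3D conv over a low-resolution video clip.
        self.fast_branch = nn.Sequential(
            nn.Conv3d(3, feat_dim, kernel_size=(3, 7, 7),
                      stride=(1, 4, 4), padding=(1, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        fused = 2 * feat_dim
        self.box_head = nn.Linear(fused, 4)           # next-active-object box (x, y, w, h)
        self.verb_head = nn.Linear(fused, num_verbs)  # verb of the future interaction
        self.ttc_head = nn.Linear(fused, 1)           # time to contact (seconds)

    def forward(self, still_image, clip):
        s = self.still_branch(still_image).flatten(1)  # (B, feat_dim)
        f = self.fast_branch(clip).flatten(1)          # (B, feat_dim)
        z = torch.cat([s, f], dim=1)
        return self.box_head(z), self.verb_head(z), self.ttc_head(z)


if __name__ == "__main__":
    model = StillFastSketch()
    still = torch.randn(2, 3, 224, 224)      # high-resolution still frames
    clip = torch.randn(2, 3, 8, 112, 112)    # low-resolution clips: (B, C, T, H, W)
    box, verb_logits, ttc = model(still, clip)
    print(box.shape, verb_logits.shape, ttc.shape)
```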
Is First Person Vision Challenging for Object Tracking?
Understanding human-object interactions is fundamental in First Person Vision
(FPV). Tracking algorithms which follow the objects manipulated by the camera
wearer can provide useful cues to effectively model such interactions. Visual
tracking solutions available in the computer vision literature have
significantly improved their performance in recent years for a large variety
of target objects and tracking scenarios. However, despite a few previous
attempts to exploit trackers in FPV applications, a methodical analysis of the
performance of state-of-the-art trackers in this domain is still missing. In
this paper, we fill the gap by presenting the first systematic study of object
tracking in FPV. Our study extensively analyses the performance of recent
visual trackers and baseline FPV trackers with respect to different aspects and
considering a new performance measure. This is achieved through TREK-150, a
novel benchmark dataset composed of 150 densely annotated video sequences. Our
results show that object tracking in FPV is challenging, which suggests that
more research efforts should be devoted to this problem so that tracking could
benefit FPV tasks.
Comment: IEEE/CVF International Conference on Computer Vision (ICCV) 2021, Visual Object Tracking Challenge VOT2021 workshop.
Visual Object Tracking in First Person Vision
The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms which follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. In recent years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigations should be carried out. This paper aims to provide answers to such questions. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite these difficulties, we show that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
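As a rough illustration of the per-frame evaluation such tracking benchmarks build on, the sketch below scores a tracker by the fraction of frames whose predicted box overlaps the annotation above an IoU threshold. The box format, threshold, and function names are assumptions for illustration; TREK-150's own protocols and measures differ in detail.

```python
# Illustrative success-rate evaluation for a single densely annotated sequence.
# Boxes are (x, y, w, h) in pixels; this is not the TREK-150 toolkit.
from typing import List, Tuple

Box = Tuple[float, float, float, float]


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0


def success_rate(predictions: List[Box], ground_truth: List[Box], thr: float = 0.5) -> float:
    """Fraction of frames whose predicted box overlaps the annotation above `thr`."""
    overlaps = [iou(p, g) for p, g in zip(predictions, ground_truth)]
    return sum(o >= thr for o in overlaps) / len(overlaps)


# Example: a tracker that drifts off the target in the last frame.
gt = [(10, 10, 50, 50), (12, 11, 50, 50), (14, 12, 50, 50)]
pred = [(10, 10, 50, 50), (13, 12, 50, 50), (80, 80, 50, 50)]
print(f"success@0.5 = {success_rate(pred, gt):.2f}")
```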
Human-Object Interaction Prediction in Videos through Gaze Following
Understanding human-object interactions (HOIs) in a video is essential
to fully comprehend a visual scene. This line of research has been addressed by
detecting HOIs from images and lately from videos. However, the video-based HOI
anticipation task in the third-person view remains understudied. In this paper,
we design a framework to detect current HOIs and anticipate future HOIs in
videos. We propose to leverage human gaze information since people often fixate
on an object before interacting with it. These gaze features together with the
scene contexts and the visual appearances of human-object pairs are fused
through a spatio-temporal transformer. To evaluate the model in the HOI
anticipation task in a multi-person scenario, we propose a set of person-wise
multi-label metrics. Our model is trained and validated on the VidHOI dataset,
which contains videos capturing daily life and is currently the largest video
HOI dataset. Experimental results in the HOI detection task show that our
approach improves over the baseline by a large relative margin of 36.3%. Moreover,
we conduct an extensive ablation study to demonstrate the effectiveness of our
modifications and extensions to the spatio-temporal transformer. Our code is
publicly available at https://github.com/nizhf/hoi-prediction-gaze-transformer.
Comment: Accepted by CVIU, https://doi.org/10.1016/j.cviu.2023.10374
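A minimal sketch of the fusion idea is given below, assuming precomputed per-frame features for the human, the object, the scene context, and the gaze-following cue; the module names, dimensions, and two-head layout are hypothetical and do not reproduce the released code.

```python
# Hypothetical sketch: per-frame human, object, scene, and gaze features are
# packed into tokens and passed through a temporal transformer; one head scores
# current interactions (detection), another scores future ones (anticipation).
import torch
import torch.nn as nn


class GazeHOISketch(nn.Module):
    def __init__(self, feat_dim: int = 256, num_interactions: int = 50):
        super().__init__()
        # Each token packs [human, object, scene, gaze] features for one frame.
        self.token_proj = nn.Linear(4 * feat_dim, feat_dim)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.detect_head = nn.Linear(feat_dim, num_interactions)      # current HOIs
        self.anticipate_head = nn.Linear(feat_dim, num_interactions)  # future HOIs

    def forward(self, human, obj, scene, gaze):
        # All inputs: (B, T, feat_dim), ordered over time.
        tokens = self.token_proj(torch.cat([human, obj, scene, gaze], dim=-1))
        encoded = self.temporal_encoder(tokens)   # (B, T, feat_dim)
        last = encoded[:, -1]                     # summary of the observed window
        return self.detect_head(last), self.anticipate_head(last)


if __name__ == "__main__":
    B, T, D = 2, 8, 256
    model = GazeHOISketch(feat_dim=D)
    h, o, s, g = (torch.randn(B, T, D) for _ in range(4))
    det, ant = model(h, o, s, g)
    print(det.shape, ant.shape)  # multi-label logits: (2, 50) each
```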
EGO-TOPO: Environment Affordances from Egocentric Video
First-person video naturally brings the use of a physical environment to the
forefront, since it shows the camera wearer interacting fluidly in a space
based on his intentions. However, current methods largely separate the observed
actions from the persistent space itself. We introduce a model for environment
affordances that is learned directly from egocentric video. The main idea is to
gain a human-centric model of a physical space (such as a kitchen) that
captures (1) the primary spatial zones of interaction and (2) the likely
activities they support. Our approach decomposes a space into a topological map
derived from first-person activity, organizing an ego-video into a series of
visits to the different zones. Further, we show how to link zones across
multiple related environments (e.g., from videos of multiple kitchens) to
obtain a consolidated representation of environment functionality. On
EPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene
affordances and anticipating future actions in long-form video.
Comment: Published in CVPR 2020, project page: http://vision.cs.utexas.edu/projects/ego-topo
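The sketch below illustrates the general idea of organizing an egocentric video into zones and reading off their affordances from the actions observed there; the greedy similarity-based grouping, threshold, and helper names are assumptions for illustration, not the paper's method.

```python
# Toy sketch: group visually similar frames into "zones", then summarize the
# actions observed in each zone as its affordance profile. Frame features and
# per-frame action labels are assumed to come from upstream models/annotations.
from collections import Counter, defaultdict
import numpy as np


def assign_zones(frame_features: np.ndarray, threshold: float = 0.8) -> list:
    """Greedy online grouping: each frame joins the most similar existing zone
    (cosine similarity >= threshold) or starts a new one."""
    centroids, counts, zone_ids = [], [], []
    for f in frame_features:
        f = f / (np.linalg.norm(f) + 1e-8)
        sims = [float(c @ f) / (np.linalg.norm(c) + 1e-8) for c in centroids]
        if sims and max(sims) >= threshold:
            z = int(np.argmax(sims))
            counts[z] += 1
            centroids[z] += (f - centroids[z]) / counts[z]  # running mean
        else:
            z = len(centroids)
            centroids.append(f.copy())
            counts.append(1)
        zone_ids.append(z)
    return zone_ids


def zone_affordances(zone_ids: list, frame_actions: list) -> dict:
    """Aggregate the action labels seen in each zone into an affordance profile."""
    profile = defaultdict(Counter)
    for z, action in zip(zone_ids, frame_actions):
        if action is not None:
            profile[z][action] += 1
    return dict(profile)


if __name__ == "__main__":
    feats = np.random.randn(6, 128)
    actions = ["cut onion", "cut onion", None, "wash pan", "wash pan", "wash plate"]
    zones = assign_zones(feats, threshold=0.3)
    print(zones, zone_affordances(zones, actions))
```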
Action-oriented Scene Understanding
In order to allow robots to act autonomously it is crucial that they do not only describe their environment accurately but also identify how to interact with their surroundings.
While we witnessed tremendous progress in descriptive computer vision, approaches that explicitly target action are scarcer.
This cumulative dissertation approaches the goal of interpreting visual scenes "in the wild" with respect to actions implied by the scene. We call this approach action-oriented scene understanding. It involves identifying and judging opportunities for interaction with constituents of the scene (e.g. objects and their parts) as well as understanding object functions and how interactions will impact the future. All of these aspects are addressed on three levels of abstraction: elements, perception and reasoning.
On the elementary level, we investigate semantic and functional grouping of objects by analyzing annotated natural image scenes. We compare object label-based and visual context definitions with respect to their suitability for generating meaningful object class representations. Our findings suggest that representations generated from visual context are on par in terms of semantic quality with those generated from large quantities of text.
The perceptive level concerns action identification. We propose a system to identify possible interactions of robots and humans with the environment (affordances) on a pixel level using state-of-the-art machine learning methods. Pixel-wise part annotations of images are transformed into 12 affordance maps. Using these maps, a convolutional neural network is trained to densely predict affordance maps from unknown RGB images. In contrast to previous work, this approach operates exclusively on RGB images during both training and testing, and yet achieves state-of-the-art performance.
At the reasoning level, we extend the question from asking what actions are possible to what actions are plausible. For this, we gathered a dataset of household images associated with human ratings of the likelihoods of eight different actions. Based on the judgement provided by the human raters, we train convolutional neural networks to generate plausibility scores from unseen images.
Furthermore, having considered only static scenes previously in this thesis, we propose a system that takes video input and predicts plausible future actions. Since this requires careful identification of relevant features in the video sequence, we analyze this particular aspect in detail using a synthetic dataset for several state-of-the-art video models. We identify feature learning as a major obstacle for anticipation in natural video data.
The presented projects analyze the role of action in scene understanding from various angles and in multiple settings while highlighting the advantages of assuming an action-oriented perspective.
We conclude that action-oriented scene understanding can augment classic computer vision in many real-life applications, in particular robotics.
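As a toy illustration of the dense affordance prediction setup described in the dissertation (an RGB image in, 12 per-pixel affordance maps out), the sketch below trains a small fully convolutional network with a binary cross-entropy loss; the architecture and sizes are placeholders, not the actual model.

```python
# Minimal dense affordance prediction sketch: a small fully convolutional
# network maps an RGB image to 12 per-pixel affordance maps (multi-label).
import torch
import torch.nn as nn


class AffordanceFCN(nn.Module):
    def __init__(self, num_affordances: int = 12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_affordances, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(rgb))  # logits: (B, 12, H, W)


if __name__ == "__main__":
    model = AffordanceFCN()
    images = torch.randn(2, 3, 128, 128)
    targets = torch.randint(0, 2, (2, 12, 128, 128)).float()  # multi-label masks
    logits = model(images)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    loss.backward()
    print(logits.shape, float(loss))
```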
Memory-and-Anticipation Transformer for Online Action Understanding
Most existing forecasting systems are memory-based methods, which attempt to
mimic human forecasting ability by employing various memory mechanisms, and have
made progress in temporal modeling of memory dependencies. Nevertheless, an obvious
weakness of this paradigm is that it can only model limited historical
dependence and cannot transcend the past. In this paper, we rethink the
temporal dependence of event evolution and propose a novel
memory-anticipation-based paradigm to model an entire temporal structure,
including the past, present, and future. Based on this idea, we present
Memory-and-Anticipation Transformer (MAT), a memory-anticipation-based
approach, to address the online action detection and anticipation tasks. In
addition, owing to its design, MAT can process online action detection and
anticipation in a unified manner. The proposed MAT model is tested on four
challenging benchmarks, TVSeries, THUMOS'14, HDD, and
EPIC-Kitchens-100, for online action detection and anticipation tasks, and it
significantly outperforms all existing methods. Code is available at
https://github.com/Echo0125/Memory-and-Anticipation-Transformer.
Comment: ICCV 2023 Camera Ready
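The sketch below is a hypothetical rendering of the memory-anticipation idea: learnable queries for the present frame and several future steps attend a memory of observed frame features through a transformer decoder, so one forward pass yields both online detection and anticipation scores. All module names and dimensions are assumptions, not the released MAT implementation.

```python
# Hypothetical memory-anticipation sketch: a buffer of past/present frame
# features serves as memory; learnable queries decode the current action and
# actions at several future steps in a single forward pass.
import torch
import torch.nn as nn


class MemoryAnticipationSketch(nn.Module):
    def __init__(self, feat_dim: int = 256, num_classes: int = 30, future_steps: int = 4):
        super().__init__()
        # One query for the present frame, the rest for future steps.
        self.queries = nn.Parameter(torch.randn(1 + future_steps, feat_dim))
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, memory: torch.Tensor):
        # memory: (B, T, feat_dim) features of the observed (past + present) frames.
        q = self.queries.unsqueeze(0).expand(memory.size(0), -1, -1)
        decoded = self.decoder(q, memory)      # queries attend the memory
        logits = self.classifier(decoded)      # (B, 1 + future_steps, num_classes)
        return logits[:, 0], logits[:, 1:]     # online detection, anticipation


if __name__ == "__main__":
    model = MemoryAnticipationSketch()
    past = torch.randn(2, 64, 256)             # 64 buffered frame features
    present_logits, future_logits = model(past)
    print(present_logits.shape, future_logits.shape)
```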
- …