Enhancing Next Active Object-based Egocentric Action Anticipation with Guided Attention
Short-term action anticipation (STA) in first-person videos is a challenging
task that involves understanding the next active object interactions and
predicting future actions. Existing action anticipation methods have primarily
focused on features extracted from video clips, but often overlooked the
importance of objects and their interactions. To address this, we propose a
novel approach that applies a guided attention mechanism between the objects
and the spatiotemporal features extracted from video clips, enhancing the
motion and contextual information, and further decodes the object-centric and
motion-centric information to address the problem of STA in egocentric videos.
Our method, GANO (Guided Attention for Next active Objects), is a multi-modal,
end-to-end, single transformer-based network. Experiments
performed on the largest egocentric dataset demonstrate that GANO outperforms
the existing state-of-the-art methods for the prediction of the next active
object label, its bounding box location, the corresponding future action, and
the time to contact the object. The ablation study shows the positive
contribution of the guided attention mechanism compared to other fusion
methods. Moreover, the next active object location and class label prediction
results of GANO can be further improved simply by appending the region of
interest embeddings to the learnable object tokens.
Comment: Accepted to IEEE ICIP 2023; project page: https://sanketsans.github.io/guided-attention-egocentric.htm
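For a concrete picture of the fusion step, the sketch below shows one plausible way to implement guided cross-attention between object tokens and spatiotemporal clip features. It is only an illustration of the idea; the module names, dimensions, and residual fusion are assumptions, not the authors' implementation.

```python
# Minimal sketch of guided cross-attention: object embeddings query the
# spatiotemporal clip features. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GuidedAttention(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, object_tokens, clip_features):
        # object_tokens: (B, num_objects, dim)  -- e.g. RoI/detection embeddings
        # clip_features: (B, num_patches, dim)  -- flattened spatiotemporal tokens
        fused, _ = self.attn(query=object_tokens,
                             key=clip_features,
                             value=clip_features)
        return self.norm(object_tokens + fused)  # residual fusion

# Example with random tensors
guided = GuidedAttention()
objs = torch.randn(2, 10, 256)   # 10 candidate next-active-object embeddings
clip = torch.randn(2, 196, 256)  # spatiotemporal tokens from a video backbone
out = guided(objs, clip)         # object-centric features enriched with motion context
```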
Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems
Predicting the future location of vehicles is essential for safety-critical
applications such as advanced driver assistance systems (ADAS) and autonomous
driving. This paper introduces a novel approach to simultaneously predict both
the location and scale of target vehicles in the first-person (egocentric) view
of an ego-vehicle. We present a multi-stream recurrent neural network (RNN)
encoder-decoder model that captures object location/scale and pixel-level
observations in separate streams for future vehicle localization. We show that
incorporating dense optical flow improves prediction results significantly
since it captures information about motion as well as appearance change. We
also find that explicitly modeling future motion of the ego-vehicle improves
the prediction accuracy, which could be especially beneficial in intelligent
and automated vehicles that have motion planning capability. To evaluate the
performance of our approach, we present a new dataset of first-person videos
collected from a variety of scenarios at road intersections, which are
particularly challenging moments for prediction because vehicle trajectories
are diverse and dynamic.
Comment: To appear at ICRA 201
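The sketch below illustrates how such a multi-stream RNN encoder-decoder could be wired, with one stream for past box location/scale and one for pooled optical-flow features. The layer sizes, fusion by concatenation, and decoding scheme are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of a multi-stream RNN encoder-decoder for future bounding-box
# prediction; stream contents and dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class MultiStreamBoxPredictor(nn.Module):
    def __init__(self, flow_dim=128, hidden=64, future_steps=10):
        super().__init__()
        self.box_enc = nn.GRU(4, hidden, batch_first=True)       # (cx, cy, w, h) per frame
        self.flow_enc = nn.GRU(flow_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(2 * hidden, 2 * hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 4)                     # one future box per step
        self.future_steps = future_steps

    def forward(self, past_boxes, past_flow):
        _, h_box = self.box_enc(past_boxes)        # (1, B, hidden)
        _, h_flow = self.flow_enc(past_flow)       # (1, B, hidden)
        ctx = torch.cat([h_box, h_flow], dim=-1)   # fuse the two streams
        # feed the fused context at every future step
        dec_in = ctx.transpose(0, 1).repeat(1, self.future_steps, 1)
        out, _ = self.decoder(dec_in)
        return self.head(out)                      # (B, future_steps, 4)

model = MultiStreamBoxPredictor()
boxes = torch.randn(2, 15, 4)     # 15 past frames of box location/scale
flow = torch.randn(2, 15, 128)    # pooled optical-flow features per frame
pred = model(boxes, flow)         # predicted future boxes
```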
Future Person Localization in First-Person Videos
We present a new task that predicts future locations of people observed in
first-person videos. Consider a first-person video stream continuously recorded
by a wearable camera. Given a short clip of a person that is extracted from the
complete stream, we aim to predict that person's location in future frames. To
facilitate this future person localization ability, we make the following three
key observations: a) First-person videos typically involve significant
ego-motion which greatly affects the location of the target person in future
frames; b) Scales of the target person act as a salient cue to estimate a
perspective effect in first-person videos; c) First-person videos often capture
people up-close, making it easier to leverage target poses (e.g., where they
look) for predicting their future locations. We incorporate these three
observations into a prediction framework with a multi-stream
convolution-deconvolution architecture. Experimental results reveal our method
to be effective on our new dataset as well as on a public social interaction
dataset.
Comment: Accepted to CVPR 201
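A minimal sketch of a multi-stream convolution-deconvolution predictor in this spirit is shown below: past locations/scales, poses, and ego-motion are encoded by separate 1-D convolutional streams, concatenated, and decoded into future (x, y) locations. The channel counts and stream contents are assumptions for illustration, not the paper's exact design.

```python
# Illustrative multi-stream conv-deconv predictor; all sizes are assumptions.
import torch
import torch.nn as nn

def stream(in_ch, out_ch=64):
    # small 1-D conv encoder applied along the time axis of one input stream
    return nn.Sequential(nn.Conv1d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv1d(out_ch, out_ch, 3, padding=1), nn.ReLU())

class FuturePersonLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc_stream = stream(3)    # x, y, scale per past frame
        self.pose_stream = stream(36)  # e.g. 18 joints * (x, y)
        self.ego_stream = stream(6)    # ego-motion (rotation + translation)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(192, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(64, 2, 3, padding=1))  # future (x, y) per frame

    def forward(self, loc, pose, ego):
        # each input: (B, channels, T_past); output: (B, 2, T_future = T_past)
        feats = torch.cat([self.loc_stream(loc),
                           self.pose_stream(pose),
                           self.ego_stream(ego)], dim=1)
        return self.decoder(feats)

model = FuturePersonLocalizer()
pred = model(torch.randn(2, 3, 10), torch.randn(2, 36, 10), torch.randn(2, 6, 10))
```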
LSTA: Long Short-Term Attention for Egocentric Action Recognition
Egocentric activity recognition is one of the most challenging tasks in video
analysis. It requires a fine-grained discrimination of small objects and their
manipulation. While some methods rely on strong supervision and attention
mechanisms, they are either annotation-intensive or do not take spatio-temporal
patterns into account. In this paper we propose LSTA, a mechanism that focuses
on features from spatially relevant parts while attention is tracked smoothly
across the video sequence. We demonstrate the effectiveness of LSTA on
egocentric activity recognition with an end-to-end trainable two-stream
architecture, achieving state-of-the-art performance on four standard
benchmarks.
Comment: Accepted to CVPR 201
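To make the idea of smoothly tracked attention concrete, the following sketch couples a per-frame spatial attention map with a recurrent state that conditions the next frame's attention. It is a deliberate simplification of the concept, not the LSTA cell proposed in the paper.

```python
# Illustrative recurrent spatial attention: the hidden state guides where the
# next frame's attention goes, so attention shifts smoothly over the sequence.
import torch
import torch.nn as nn

class RecurrentSpatialAttention(nn.Module):
    def __init__(self, channels=512, hidden=256):
        super().__init__()
        self.hidden = hidden
        self.score = nn.Conv2d(channels + hidden, 1, kernel_size=1)
        self.rnn = nn.LSTMCell(channels, hidden)

    def forward(self, frame_feats):
        # frame_feats: (B, T, C, H, W) CNN feature maps for a clip of T frames
        B, T, C, H, W = frame_feats.shape
        h = frame_feats.new_zeros(B, self.hidden)
        c = frame_feats.new_zeros(B, self.hidden)
        outputs = []
        for t in range(T):
            x = frame_feats[:, t]                                     # (B, C, H, W)
            h_map = h[:, :, None, None].expand(B, self.hidden, H, W)  # broadcast state
            attn = torch.softmax(self.score(torch.cat([x, h_map], 1)).flatten(2), -1)
            pooled = (x.flatten(2) * attn).sum(-1)                    # attended frame feature
            h, c = self.rnn(pooled, (h, c))                           # carry attention state
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # (B, T, hidden) features for classification

feats = torch.randn(2, 8, 512, 7, 7)   # e.g. 8 frames of 7x7 CNN feature maps
out = RecurrentSpatialAttention()(feats)
```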