Trespassing the Boundaries: Labeling Temporal Bounds for Object Interactions in Egocentric Video
Manual annotations of temporal bounds for object interactions (i.e. start and
end times) are typical training input to recognition, localization and
detection algorithms. For three publicly available egocentric datasets, we
uncover inconsistencies in ground truth temporal bounds within and across
annotators and datasets. We systematically assess the robustness of
state-of-the-art object interaction recognition approaches to changes in
labeled temporal bounds. As boundaries are trespassed, we observe a drop of
up to 10% in accuracy for both Improved Dense Trajectories and the Two-Stream
Convolutional Neural Network.
We demonstrate that such disagreement stems from a limited understanding of
the distinct phases of an action, and propose annotating based on the Rubicon
Boundaries, inspired by a similarly named cognitive model, for consistent
temporal bounds of object interactions. Evaluated on a public dataset, we
report a 4% increase in overall accuracy, and an increase in accuracy for 55%
of classes, when Rubicon Boundaries are used for temporal annotations.
Comment: ICCV 2017
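As a concrete illustration of the robustness assessment above, here is a minimal sketch of how annotated bounds could be randomly shifted and a fixed classifier re-scored. The annotation tuple layout and the `classify_segment` callable are hypothetical placeholders, not the paper's code.

```python
# Hypothetical sketch: perturb annotated temporal bounds and measure the
# accuracy of a fixed classifier under the shift. `classify_segment` stands
# in for any trained recognition model (e.g. IDT or a two-stream CNN).
import random

def perturb_bounds(start, end, max_shift, video_len):
    """Shift both bounds by up to +/- max_shift seconds, keeping a valid segment."""
    s = max(0.0, start + random.uniform(-max_shift, max_shift))
    e = min(video_len, end + random.uniform(-max_shift, max_shift))
    if e <= s:  # degenerate segment: fall back to the original bounds
        s, e = start, end
    return s, e

def accuracy_under_shift(annotations, classify_segment, max_shift):
    """annotations: list of (video_id, start, end, label, video_len) tuples."""
    correct = 0
    for vid, start, end, label, vlen in annotations:
        s, e = perturb_bounds(start, end, max_shift, vlen)
        correct += int(classify_segment(vid, s, e) == label)
    return correct / len(annotations)
```

Sweeping `max_shift` and plotting `accuracy_under_shift` against it would reproduce the kind of degradation curve the abstract reports.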
Seeing and hearing egocentric actions: how much can we learn?
Our interaction with the world is an inherently multimodal experience. However, the understanding of human-to-object interactions has historically been addressed focusing on a single modality, and only a limited number of works have considered integrating the visual and audio modalities for this purpose. In this work, we propose a multimodal approach for egocentric action recognition in a kitchen environment that relies on audio and visual information. Our model combines a sparse temporal sampling strategy with a late fusion of audio, spatial, and temporal streams. Experimental results on the EPIC-Kitchens dataset show that multimodal integration leads to better performance than unimodal approaches. In particular, we achieved a 5.18% improvement over the state of the art on verb classification.
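The two ingredients this abstract names, sparse temporal sampling and late fusion of per-stream scores, can be sketched as follows. This is an illustrative reconstruction under assumed shapes (per-stream logits of shape `(batch, classes)`), not the authors' implementation.

```python
# Minimal sketch of sparse temporal sampling (one frame per equal-length
# segment) and late fusion (weighted average of per-stream softmax scores).
# Stream names and shapes are assumptions for illustration.
import numpy as np

def sparse_sample(num_frames, num_segments=8, rng=None):
    """Pick one random frame index from each of num_segments equal chunks.
    Assumes num_frames >= num_segments."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(0, num_frames, num_segments + 1).astype(int)
    return [int(rng.integers(lo, max(lo + 1, hi)))
            for lo, hi in zip(edges[:-1], edges[1:])]

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def late_fusion(stream_logits, weights=None):
    """Fuse audio, spatial and temporal streams by averaging class scores."""
    scores = [softmax(l) for l in stream_logits]
    weights = weights or [1.0 / len(scores)] * len(scores)
    fused = sum(w * s for w, s in zip(weights, scores))
    return fused.argmax(axis=-1)
```

Fusing after the softmax, rather than concatenating features earlier, keeps each stream's classifier independent; only the fusion weights couple the modalities.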
Multitask Learning to Improve Egocentric Action Recognition
In this work we employ multitask learning to capitalize on the structure that
exists in related supervised tasks to train complex neural networks. The
network is trained for multiple objectives in parallel, improving performance
on at least one of them through a shared representation that accommodates
more information than it otherwise would for a single task. We employ this
idea to tackle action recognition in egocentric videos by introducing
additional supervised tasks. We learn the verbs and nouns of which action
labels consist, and predict coordinates that capture the hand locations and
the gaze-based visual saliency for all the frames of the input video
segments. This forces the network to explicitly focus on cues from secondary
tasks that it might otherwise have missed, resulting in
improved inference. Our experiments on EPIC-Kitchens and EGTEA Gaze+ show
consistent improvements when training with multiple tasks over the single-task
baseline. Furthermore, in EGTEA Gaze+ we outperform the state-of-the-art in
action recognition by 3.84%. Apart from actions, our method produces accurate
hand and gaze estimations as side tasks, without requiring any additional input
at test time other than the RGB video clips.
Comment: 10 pages, 3 figures, accepted at the 5th Egocentric Perception, Interaction and Computing (EPIC) workshop at ICCV 2019, code repository: https://github.com/georkap/hand_track_classificatio
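A hedged sketch of the multitask setup this abstract describes: a shared video representation feeding verb and noun classifiers plus regression heads for hand coordinates and gaze. Feature dimension, number of frames, and loss weights are illustrative assumptions; the class counts match EPIC-Kitchens' 125 verbs and 352 nouns.

```python
# Illustrative multitask heads over a shared backbone feature; not the
# authors' code. Two classification heads (verb, noun) and two regression
# heads (hand coordinates, gaze point) share one representation.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim=512, n_verbs=125, n_nouns=352, n_frames=8):
        super().__init__()
        self.verb = nn.Linear(feat_dim, n_verbs)
        self.noun = nn.Linear(feat_dim, n_nouns)
        self.hands = nn.Linear(feat_dim, n_frames * 4)  # (x, y) per hand per frame
        self.gaze = nn.Linear(feat_dim, n_frames * 2)   # (x, y) gaze point per frame

    def forward(self, feat):
        return self.verb(feat), self.noun(feat), self.hands(feat), self.gaze(feat)

def multitask_loss(outputs, targets, w=(1.0, 1.0, 0.1, 0.1)):
    """Weighted sum of classification and coordinate-regression losses."""
    verb, noun, hands, gaze = outputs
    v_t, n_t, h_t, g_t = targets
    ce, mse = nn.functional.cross_entropy, nn.functional.mse_loss
    return (w[0] * ce(verb, v_t) + w[1] * ce(noun, n_t)
            + w[2] * mse(hands, h_t) + w[3] * mse(gaze, g_t))
```

At test time only the verb and noun heads are needed for action recognition; the hand and gaze heads act purely as auxiliary supervision during training, which matches the abstract's claim of no extra input beyond RGB clips.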
Activities of Daily Living Monitoring via a Wearable Camera: Toward Real-World Applications
Activity recognition from wearable photo-cameras is crucial for lifestyle characterization and health monitoring. However, to enable its widespread use in real-world applications, a high level of generalization needs to be ensured on unseen users. Currently, state-of-the-art methods have been tested only on relatively small datasets consisting of data collected by a few users that are partially seen during training. In this paper, we built a new egocentric dataset acquired by 15 people through a wearable photo-camera and used it to test the generalization capabilities of several state-of-the-art methods for egocentric activity recognition on unseen users and daily image sequences. In addition, we propose several variants of state-of-the-art deep learning architectures, and we show that it is possible to achieve 79.87% accuracy on users unseen during training. Furthermore, to show that the proposed dataset and approach can be useful in real-world applications, where data can be acquired by different wearable cameras and labeled data are scarcely available, we employed a domain adaptation strategy on two egocentric activity recognition benchmark datasets. These experiments show that the model learned with our dataset can easily be transferred to other domains with a very small amount of labeled data. Taken together, these results show that activity recognition from wearable photo-cameras is mature enough to be tested in real-world applications.
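One simple way to realize the kind of transfer described above is to freeze a pretrained backbone and fine-tune only a new classifier head on the small labeled target set. This sketch assumes a ResNet-style model exposing a final `fc` layer; it is a generic transfer recipe, not the paper's exact domain adaptation strategy.

```python
# Minimal transfer sketch: adapt a model pretrained on the source egocentric
# dataset to a new camera/domain using a small labeled target set.
import torch
import torch.nn as nn

def adapt_to_target(pretrained, n_target_classes, target_loader, epochs=5, lr=1e-3):
    # Freeze the pretrained feature extractor; only the new head will train.
    for p in pretrained.parameters():
        p.requires_grad = False
    # Replace the classifier (assumes a ResNet-style `fc` attribute).
    pretrained.fc = nn.Linear(pretrained.fc.in_features, n_target_classes)
    opt = torch.optim.Adam(pretrained.fc.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in target_loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(pretrained(images), labels)
            loss.backward()
            opt.step()
    return pretrained
```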