Egocentric Hand Detection Via Dynamic Region Growing
Egocentric videos, which mainly record the activities carried out by the
wearers of the cameras, have drawn much research attention in recent years.
Because of their lengthy content, a large number of ego-related applications
have been developed to abstract the captured videos. As users typically
interact with target objects using their own hands, and the hands usually
appear within the visual field during the interaction, an egocentric hand
detection step is involved in tasks such as gesture recognition, action
recognition and social interaction understanding. In this
work, we propose a dynamic region growing approach for hand region detection in
egocentric videos, by jointly considering hand-related motion and egocentric
cues. We first determine seed regions that most likely belong to the hand, by
analyzing the motion patterns across successive frames. The hand regions can
then be located by extending from the seed regions, according to the scores
computed for the adjacent superpixels. These scores are derived from four
egocentric cues: contrast, location, position consistency and appearance
continuity. We discuss how to apply the proposed method in real-life scenarios,
where multiple hands irregularly appear in and disappear from the videos.
Experimental results on public datasets show that the proposed method achieves
superior performance compared with the state-of-the-art methods, especially in
complicated scenarios.
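To make the growing step concrete, the following is a minimal Python sketch of score-driven region growing over a superpixel graph. The cue weights, threshold, and all function and variable names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of score-driven region growing over superpixels.
    # Weights, threshold, and names are assumptions for illustration.
    from collections import deque

    def grow_hand_region(seeds, adjacency, cue_scores, threshold=0.5,
                         weights=(0.25, 0.25, 0.25, 0.25)):
        """Expand a hand region from seed superpixels.

        seeds      -- iterable of seed superpixel ids (from motion analysis)
        adjacency  -- dict: superpixel id -> set of neighbouring ids
        cue_scores -- dict: superpixel id -> (contrast, location,
                      position_consistency, appearance_continuity), each in [0, 1]
        """
        region = set(seeds)
        frontier = deque(seeds)
        while frontier:
            sp = frontier.popleft()
            for nb in adjacency[sp] - region:
                # Combine the four egocentric cues into a single score.
                score = sum(w * c for w, c in zip(weights, cue_scores[nb]))
                if score >= threshold:
                    region.add(nb)
                    frontier.append(nb)
        return region

The returned set of superpixel ids can then be rasterised into a hand mask for the frame; in practice the threshold would be tuned per dataset.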
Unsupervised Segmentation of Action Segments in Egocentric Videos using Gaze
Unsupervised segmentation of action segments in egocentric videos is a
desirable feature in tasks such as activity recognition and content-based video
retrieval. Reducing the search space to a finite set of action segments
facilitates faster and less noisy matching. However, there exists a
substantial gap in machine understanding of natural temporal cuts during a
continuous human activity. This work reports on a novel gaze-based approach for
segmenting action segments in videos captured using an egocentric camera. Gaze
is used to locate the region-of-interest inside a frame. By tracking two simple
motion-based parameters inside successive regions-of-interest, we discover a
finite set of temporal cuts. We present several results using combinations of
the two parameters on the BRISGAZE-ACTIONS dataset, which contains egocentric
videos depicting several daily-living activities. The quality of the temporal
cuts is further improved by implementing two entropy measures.
Comment: To appear in the 2017 IEEE International Conference on Signal and Image
Processing Applications.
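As an illustration of tracking motion parameters inside successive regions-of-interest, here is a hedged Python sketch. The abstract does not name the two parameters, so the mean optical-flow magnitude and direction inside the gaze ROI are assumed as stand-ins; all thresholds and names are hypothetical.

    # Illustrative sketch: detect temporal cuts from abrupt changes of two
    # assumed motion parameters measured inside the gaze region-of-interest.
    import numpy as np

    def detect_temporal_cuts(flow_per_frame, gaze_points, roi=64,
                             mag_jump=2.0, ang_jump=np.pi / 3):
        """flow_per_frame: list of HxWx2 optical-flow fields;
        gaze_points: list of (x, y) gaze fixations, one per frame."""
        cuts, prev = [], None
        for t, (flow, (gx, gy)) in enumerate(zip(flow_per_frame, gaze_points)):
            h, w = flow.shape[:2]
            y0, y1 = max(0, gy - roi), min(h, gy + roi)
            x0, x1 = max(0, gx - roi), min(w, gx + roi)
            patch = flow[y0:y1, x0:x1]                   # flow inside the gaze ROI
            mag = np.linalg.norm(patch, axis=-1).mean()  # parameter 1: mean magnitude
            ang = np.arctan2(patch[..., 1].mean(), patch[..., 0].mean())  # parameter 2
            if prev is not None:
                dmag = abs(mag - prev[0])
                dang = abs(np.arctan2(np.sin(ang - prev[1]), np.cos(ang - prev[1])))
                if dmag > mag_jump or dang > ang_jump:   # abrupt change => cut
                    cuts.append(t)
            prev = (mag, ang)
        return cuts

Consecutive cut indices partition the video into candidate action segments, which is the finite search space the abstract refers to.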
Egocentric Scene Understanding via Multimodal Spatial Rectifier
In this paper, we study a problem of egocentric scene understanding, i.e.,
predicting depths and surface normals from an egocentric image. Egocentric
scene understanding poses unprecedented challenges: (1) due to large head
movements, the images are taken from non-canonical viewpoints (i.e., tilted
images) where existing models of geometry prediction do not apply; (2) dynamic
foreground objects including hands constitute a large proportion of visual
scenes. These challenges limit the performance of the existing models learned
from large indoor datasets, such as ScanNet and NYUv2, which comprise
predominantly upright images of static scenes. We present a multimodal spatial
rectifier that stabilizes the egocentric images to a set of reference
directions, which allows learning a coherent visual representation. Unlike a
unimodal spatial rectifier, which often produces excessive perspective warp for
egocentric images, the multimodal spatial rectifier learns from multiple
directions to minimize the impact of the perspective warp. To learn
visual representations of the dynamic foreground objects, we present a new
dataset called EDINA (Egocentric Depth on everyday INdoor Activities) that
comprises more than 500K synchronized RGBD frames and gravity directions.
Equipped with the multimodal spatial rectifier and the EDINA dataset, our
proposed method on single-view depth and surface normal estimation
significantly outperforms the baselines not only on our EDINA dataset, but also
on other popular egocentric datasets, such as First Person Hand Action (FPHA)
and EPIC-KITCHENS.
Comment: Appearing in the Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), 2022.
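A rough sketch of what a spatial-rectifier-style warp could look like is given below: the image is rotated so that the measured gravity direction aligns with the closest of a set of reference directions, and the induced homography is applied. The reference-direction set, camera intrinsics K, and OpenCV-based warping are assumptions for illustration, not the paper's actual model.

    # Hedged sketch: homography warp that aligns the gravity direction with the
    # nearest reference direction. All inputs and choices are assumptions.
    import cv2
    import numpy as np

    def rectify(image, K, gravity, references):
        """Warp image toward the reference direction nearest to `gravity`."""
        g = gravity / np.linalg.norm(gravity)
        refs = [r / np.linalg.norm(r) for r in references]
        r = max(refs, key=lambda v: float(g @ v))        # nearest reference direction
        axis = np.cross(g, r)
        s, c = np.linalg.norm(axis), float(g @ r)        # sin and cos of rotation angle
        if s < 1e-8:                                     # already aligned
            return image
        axis = axis / s
        Kx = np.array([[0, -axis[2], axis[1]],
                       [axis[2], 0, -axis[0]],
                       [-axis[1], axis[0], 0]])
        R = np.eye(3) + s * Kx + (1 - c) * (Kx @ Kx)     # Rodrigues' rotation formula
        H = K @ R @ np.linalg.inv(K)                     # rotation-induced homography
        h, w = image.shape[:2]
        return cv2.warpPerspective(image, H, (w, h))

Depth or surface-normal prediction would then run on the rectified image, and the output can be rotated back by the inverse rotation.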
Scaling Egocentric Vision: The EPIC-KITCHENS Dataset
First-person vision is gaining interest as it offers a unique viewpoint on
people's interaction with objects, their attention, and even intention.
However, progress in this challenging domain has been relatively slow due to
the lack of sufficiently large datasets. In this paper, we introduce
EPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32
participants in their native kitchen environments. Our videos depict
non-scripted daily activities: we simply asked each participant to start
recording every time they entered their kitchen. Recording took place in 4
cities (in North America and Europe) by participants belonging to 10 different
nationalities, resulting in highly diverse cooking styles. Our dataset features
55 hours of video consisting of 11.5M frames, which we densely labeled for a
total of 39.6K action segments and 454.3K object bounding boxes. Our annotation
is unique in that we had the participants narrate their own videos (after
recording), thus reflecting true intention, and we crowd-sourced ground-truths
based on these. We describe our object, action and anticipation challenges, and
evaluate several baselines over two test splits, seen and unseen kitchens.
Dataset and Project page: http://epic-kitchens.github.io
Comment: European Conference on Computer Vision (ECCV) 2018.