Co-interest Person Detection from Multiple Wearable Camera Videos
Wearable cameras, such as Google Glass and GoPro, enable video data
collection over larger areas and from different views. In this paper, we tackle
a new problem of locating the co-interest person (CIP), i.e., the one who draws
attention from most camera wearers, from temporally synchronized videos taken
by multiple wearable cameras. Our basic idea is to exploit the motion patterns
of people and use them to correlate the persons across different videos,
instead of performing appearance-based matching as in traditional video
co-segmentation/localization. This way, we can identify CIP even if a group of
people with similar appearance are present in the view. More specifically, we
detect a set of persons on each frame as the candidates of the CIP and then
build a Conditional Random Field (CRF) model to select the one with consistent
motion patterns in different videos and high spatio-temporal consistency in
each video. We collect three sets of wearable-camera videos for testing the
proposed algorithm. All the involved people have similar appearances in the
collected videos and the experiments demonstrate the effectiveness of the
proposed algorithm.
Comment: ICCV 201
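Selecting one candidate per frame under both a per-frame score and a cross-frame consistency term, as the chain-structured part of such a CRF requires, can be solved exactly by Viterbi dynamic programming. The following is a minimal sketch of that idea only; the function name, score inputs, and the simple distance penalty are illustrative assumptions, not the authors' model:

```python
# Hypothetical sketch: pick one CIP candidate per frame with a chain CRF
# solved by Viterbi. Unary scores stand in for the cross-video motion
# correlation; the pairwise term penalizes large spatial jumps between frames.
def viterbi_select(unary, positions, jump_penalty=1.0):
    """unary: per-frame lists of candidate scores (higher = better).
    positions: matching per-frame lists of candidate (x, y) centers.
    Returns the index of the chosen candidate in each frame."""
    n_frames = len(unary)
    # best[t][j] = best total score of a path ending at candidate j in frame t
    best = [list(unary[0])]
    back = []
    for t in range(1, n_frames):
        row, ptr = [], []
        for score, (xj, yj) in zip(unary[t], positions[t]):
            # transition from every candidate in the previous frame
            cands = []
            for i, (xi, yi) in enumerate(positions[t - 1]):
                dist = ((xj - xi) ** 2 + (yj - yi) ** 2) ** 0.5
                cands.append((best[t - 1][i] - jump_penalty * dist, i))
            prev_score, prev_i = max(cands)
            row.append(prev_score + score)
            ptr.append(prev_i)
        best.append(row)
        back.append(ptr)
    # backtrack the highest-scoring path
    j = max(range(len(best[-1])), key=lambda k: best[-1][k])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return list(reversed(path))
```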
Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition
We present a unified framework for understanding human social behaviors in
raw image sequences. Our model jointly detects multiple individuals, infers
their social actions, and estimates the collective actions with a single
feed-forward pass through a neural network. We propose a single architecture
that does not rely on external detection algorithms but rather is trained
end-to-end to generate dense proposal maps that are refined via a novel
inference scheme. The temporal consistency is handled via a person-level
matching Recurrent Neural Network. The complete model takes as input a sequence
of frames and outputs detections along with the estimates of individual actions
and collective activities. We demonstrate state-of-the-art performance of our
algorithm on multiple publicly available benchmarks.
Generic Tubelet Proposals for Action Localization
We develop a novel framework for action localization in videos. We propose
the Tube Proposal Network (TPN), which can generate generic, class-independent,
video-level tubelet proposals in videos. The generated tubelet proposals can be
utilized in various video analysis tasks, including recognizing and localizing
actions in videos. In particular, we integrate these generic tubelet proposals
into a unified temporal deep network for action classification. Compared with
other methods, our generic tubelet proposal method is accurate, general, and
fully differentiable under a smooth L1 loss function. We demonstrate the
performance of our algorithm on the standard UCF-Sports, J-HMDB21, and UCF-101
datasets. Our class-independent TPN outperforms other tubelet generation
methods, and our unified temporal deep network achieves state-of-the-art
localization results on all three datasets.
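The smooth L1 loss mentioned above has a standard form (popularized by Fast R-CNN): quadratic near zero, linear beyond a threshold, which keeps gradients bounded for large regression errors and makes the proposal network end-to-end trainable. A minimal sketch, assuming the common threshold parameter `beta`:

```python
def smooth_l1(x, beta=1.0):
    """Smooth L1 loss on a scalar residual x: quadratic for |x| < beta,
    linear beyond it, continuous and differentiable at the boundary."""
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta
    return ax - 0.5 * beta
```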