Semi-Supervised First-Person Activity Recognition in Body-Worn Video

By Honglin Chen, Hao Li, Alexander Song, Matt Haberland, Osman Akar, Adam Dhillon, Tiankuang Zhou, Andrea L. Bertozzi and P. Jeffrey Brantingham

Abstract

Body-worn cameras are now commonly used for logging daily life, sports, and law enforcement activities, creating a large volume of archived footage. This paper studies the problem of classifying frames of footage according to the activity of the camera-wearer, with an emphasis on application to real-world police body-worn video. Real-world datasets pose a different set of challenges from existing egocentric vision datasets: the amount of footage of different activities is unbalanced, the data contains personally identifiable information, and in practice it is difficult to provide substantial training footage for a supervised approach. We address these challenges by extracting features based exclusively on motion information and then segmenting the video footage using a semi-supervised classification algorithm. On publicly available datasets, our method achieves results comparable to, if not better than, supervised and/or deep learning methods using a fraction of the training data. It also shows promising results on real-world police body-worn video.
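The abstract describes a pipeline of motion-only feature extraction followed by semi-supervised frame classification. The sketch below is only an illustration of that general idea, not the authors' implementation: it assumes OpenCV dense optical flow as the motion feature and scikit-learn's LabelSpreading as a stand-in semi-supervised classifier, with a hypothetical video file and placeholder sparse labels.

```python
# Minimal sketch: motion-only per-frame features + semi-supervised labeling.
# Assumptions (not from the paper): Farneback optical flow features,
# LabelSpreading classifier, file name "bodycam_clip.mp4", placeholder labels.
import cv2
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def motion_features(video_path, bins=16):
    """Per-frame histograms of optical-flow magnitude and angle (no appearance info)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h_mag, _ = np.histogram(mag, bins=bins, range=(0, 20), density=True)
        h_ang, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), density=True)
        feats.append(np.concatenate([h_mag, h_ang]))
        prev = gray
    cap.release()
    return np.array(feats)

# X: motion features for every frame; y: activity labels with -1 marking the
# (majority of) unlabeled frames, so only a small fraction needs annotation.
X = motion_features("bodycam_clip.mp4")
y = -np.ones(len(X), dtype=int)
y[::50] = 0  # hypothetical sparse annotations; replace with real activity labels
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)
frame_activities = model.transduction_  # inferred activity for every frame
```

Because the features are computed from optical flow alone, no appearance information (faces, license plates, text) enters the classifier, which is one way to sidestep the personally identifiable information in real-world footage.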

Topics: eess.IV, stat.ML
Publisher: eScholarship, University of California
Year: 2019
OAI identifier: oai:escholarship.org/ark:/13030/qt548494p5
