Person Recognition in Personal Photo Collections
Recognising people in everyday photos presents major challenges for machine
vision: occluded faces, changes of clothing, varying locations, and so on. We
propose a convnet-based person recognition system, for which we provide an
in-depth analysis of the informativeness of different body cues, the impact of
training data, and the system's common failure modes. In addition, we discuss
the limitations of existing benchmarks and propose more challenging ones. Our
method is simple and built on open source and open data, yet it improves on
state-of-the-art results on a large dataset of social media photos (PIPA).
Comment: Accepted to ICCV 2015, revise
Social Relation Recognition in Egocentric Photostreams
This paper proposes an approach to automatically categorize the social
interactions of a user wearing a photo-camera (2 fpm), relying solely on what
the camera is seeing. The problem is challenging due to the overwhelming
complexity of social life and the extreme intra-class variability of social
interactions captured under unconstrained conditions. We adopt the
formalization proposed in Bugental's social theory, which groups human relations
into five social domains with related categories. Our method is a new deep
learning architecture that exploits the hierarchical structure of the label
space and relies on a set of social attributes estimated at frame level to
provide a semantic representation of social interactions. Experimental results
on the new EgoSocialRelation dataset demonstrate the effectiveness of our
proposal.
Comment: Accepted at ICIP 201
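The hierarchical label space described above can be sketched as a two-level prediction scheme: a domain head scores the five social domains, per-domain category heads score categories within each domain, and the joint probability p(domain) * p(category | domain) picks the final label. The taxonomy entries and logit values below are illustrative assumptions, not the dataset's exact labels or the paper's architecture.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical domain -> category taxonomy (names are illustrative).
TAXONOMY = {
    "attachment": ["parent-offspring"],
    "reciprocity": ["friends", "colleagues"],
    "mating": ["couple"],
    "hierarchical_power": ["boss-subordinate"],
    "coalitional_group": ["group-members"],
}

def predict(domain_logits, category_logits):
    # Combine the domain head with per-domain category heads:
    # p(category) = p(domain) * p(category | domain).
    domains = list(TAXONOMY)
    p_dom = dict(zip(domains, softmax(domain_logits)))
    best, best_p = None, -1.0
    for d in domains:
        for cat, p in zip(TAXONOMY[d], softmax(category_logits[d])):
            joint = p_dom[d] * p
            if joint > best_p:
                best, best_p = (d, cat), joint
    return best

pred = predict(
    domain_logits=[2.0, 0.1, -1.0, 0.0, 0.5],
    category_logits={d: [0.0] * len(cats) for d, cats in TAXONOMY.items()},
)
```

Factoring the prediction this way constrains categories to their domain, so a confident domain decision narrows the category search; this is one common way to exploit a hierarchical label structure.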