Inferring User Gender from User Generated Visual Content on a Deep Semantic Space
In this paper we address the task of gender classification on picture-sharing
social media networks such as Instagram and Flickr. We aim to infer the gender
of a user given only a small set of the images shared in their profile. We
assume that a user's images contain a collection of visual elements that
implicitly encode discriminative patterns, allowing their gender to be
inferred in a language-independent way. This information can then be used in
personalisation and recommendation. Our main hypothesis is that semantic
visual features are better suited to discriminating high-level classes.
The gender detection task is formalised as follows: given a user's profile,
represented as a bag of images, we want to infer the gender of the user.
Social media profiles can be noisy and contain confounding factors, so we
classify bags of profile images to obtain a more robust prediction.
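The bag-of-images idea can be illustrated with a minimal sketch (this is an
illustration only, not the paper's actual model): assume some image classifier
produces a hypothetical per-image gender score, and aggregate those scores
over a user's bag of images so that individual noisy pictures matter less.

```python
import numpy as np

def predict_profile_gender(image_probs):
    """Aggregate hypothetical per-image P(female) scores for one profile.

    image_probs: sequence of per-image probabilities produced by some
    image-level classifier (assumed to exist; not defined in the paper's
    abstract). Averaging over the whole bag smooths out confounding or
    unrepresentative individual images.
    """
    mean_p = float(np.mean(image_probs))
    label = "female" if mean_p >= 0.5 else "male"
    return label, mean_p

# A profile where most images carry a discriminative signal,
# with one outlier image (0.3) that the bag-level average absorbs:
label, score = predict_profile_gender([0.9, 0.8, 0.3, 0.7])
```

Aggregating before thresholding (rather than majority-voting hard per-image
labels) keeps the per-image confidence information, which is one common way
to make multi-instance predictions more robust.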
Experiments using a dataset from the picture-sharing social network Instagram
show that the use of multiple images is key to improving detection
performance. Moreover, we verify that deep semantic features are better
suited to gender detection than low-level image representations. The proposed
methods infer gender with precision scores higher than 0.825, with the
best-performing method achieving 0.911 precision.

Comment: To appear in EUSIPCO 201