What Twitter Profile and Posted Images Reveal About Depression and Anxiety
Previous work has found strong links between the choice of social media
images and users' emotions, demographics and personality traits. In this study,
we examine which attributes of profile and posted images are associated with
depression and anxiety of Twitter users. We used a sample of 28,749 Facebook
users to build a language prediction model of survey-reported depression and
anxiety, and validated it on Twitter on a sample of 887 users who had taken
anxiety and depression surveys. We then applied it to a different set of 4,132
Twitter users to impute language-based depression and anxiety labels, and
extracted interpretable features of posted and profile pictures to uncover the
associations with users' depression and anxiety, controlling for demographics.
For depression, we find that profile pictures suppress positive emotions rather
than display more negative emotions, likely because of social media
self-presentation biases. They also tend to show the single face of the user
(rather than the user among groups of friends), marking an increased focus on
the self that is emblematic of depression. Posted images are dominated by grayscale and
low aesthetic cohesion across a variety of image features. Profile images of
anxious users are similarly marked by grayscale and low aesthetic cohesion, but
less so than those of depressed users. Finally, we show that image features can
be used to predict depression and anxiety, and that multitask learning that
includes a joint modeling of demographics improves prediction performance.
Overall, we find that the image attributes that mark depression and anxiety
offer a rich lens into these conditions largely congruent with the
psychological literature, and that images on Twitter allow inferences about the
mental health status of users.
Comment: ICWSM 201
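The abstract's last finding, that multitask learning with joint modeling of demographics improves prediction, can be illustrated with a minimal sketch. This is not the paper's model: the data below are synthetic, and `MultiTaskLasso` merely stands in for the general idea of sharing structure across the depression, anxiety, and demographic prediction tasks.

```python
# Hedged sketch: jointly predicting depression, anxiety, and a demographic
# attribute (here, age) from image features with a multi-task learner.
# All feature values and targets are synthetic illustrations.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_users, n_feats = 200, 10
X = rng.normal(size=(n_users, n_feats))           # interpretable image features
W = rng.normal(size=(n_feats, 3))                 # shared structure across tasks
Y = X @ W + 0.1 * rng.normal(size=(n_users, 3))   # [depression, anxiety, age]

# MultiTaskLasso enforces a shared sparsity pattern across all three tasks,
# so a feature is either used by every task or by none.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)
preds = model.predict(X)
print(preds.shape)  # (200, 3)
```

The shared sparsity pattern is one simple way joint modeling can help: evidence about which features matter is pooled across tasks instead of estimated separately per task.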
Beautiful and damned. Combined effect of content quality and social ties on user engagement
User participation in online communities is driven by the intertwinement of
the social network structure with the crowd-generated content that flows along
its links. These aspects are rarely explored jointly and at scale. By looking
at how users generate and access pictures of varying beauty on Flickr, we
investigate how the production of quality impacts the dynamics of online social
systems. We develop a deep learning computer vision model to score images
according to their aesthetic value and we validate its output through
crowdsourcing. By applying it to over 15B Flickr photos, we study for the first
time how image beauty is distributed over a large-scale social system.
Beautiful images are evenly distributed in the network, although only a small
core of people get social recognition for them. To study the impact of exposure
to quality on user engagement, we set up matching experiments aimed at
detecting causality from observational data. Exposure to beauty is
double-edged: following people who produce high-quality content increases one's
probability of uploading better photos; however, an excessive imbalance between
the quality generated by a user and the user's neighbors leads to a decline in
engagement. Our analysis has practical implications for improving link
recommender systems.
Comment: 13 pages, 12 figures, final version published in IEEE Transactions on
Knowledge and Data Engineering (Volume: PP, Issue: 99)
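The "matching experiments aimed at detecting causality from observational data" mentioned above can be sketched with propensity-score matching: pair each exposed user (one who follows high-quality producers) with a comparable unexposed user, then compare outcomes. The covariates, exposure, and outcome below are synthetic stand-ins, not the paper's actual variables.

```python
# Hedged sketch of matching on observational data: estimate each user's
# propensity to be "exposed" from covariates, match exposed users to their
# nearest unexposed neighbor by propensity score, and compare outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 500
covs = rng.normal(size=(n, 4))                    # e.g. activity, tenure, followers
exposed = rng.integers(0, 2, size=n).astype(bool) # follows high-quality producers?
outcome = covs[:, 0] + 0.3 * exposed + rng.normal(scale=0.5, size=n)

# Propensity score: P(exposed | covariates)
ps = LogisticRegression().fit(covs, exposed).predict_proba(covs)[:, 1]

# Match each exposed user to the unexposed user with the closest propensity
nn = NearestNeighbors(n_neighbors=1).fit(ps[~exposed].reshape(-1, 1))
_, idx = nn.kneighbors(ps[exposed].reshape(-1, 1))
effect = outcome[exposed].mean() - outcome[~exposed][idx.ravel()].mean()
print(f"matched effect estimate: {effect:.2f}")
```

Matching like this approximates a randomized comparison by balancing observed covariates between the two groups; it cannot, of course, control for unobserved confounders.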
Aesthetics Assessment of Images Containing Faces
Recent research has widely explored the problem of aesthetics assessment of
images with generic content. However, few approaches have been specifically
designed to predict the aesthetic quality of images containing human faces,
which make up a massive portion of photos on the web. This paper introduces a
method for aesthetic quality assessment of images with faces. We exploit three
different Convolutional Neural Networks to encode information regarding
perceptual quality, global image aesthetics, and facial attributes; then, a
model is trained to combine these features to explicitly predict the aesthetics
of images containing faces. Experimental results show that our approach
outperforms state-of-the-art methods for both binary (i.e., low/high) and
continuous aesthetic score prediction on four different databases.
Comment: Accepted by ICIP 201
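The fusion step described above, three networks each encoding one aspect (perceptual quality, global aesthetics, facial attributes) whose features are combined by a trained model, can be sketched as simple feature concatenation plus a learned combiner. The "CNN" outputs below are random stand-in vectors and the combiner is a plain ridge regressor, not the paper's architecture.

```python
# Hedged sketch of late fusion: concatenate per-aspect feature vectors and
# train one model on top to predict an aesthetic score. Dimensions and data
# are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_imgs = 100
quality_feats = rng.normal(size=(n_imgs, 64))     # stand-in: perceptual-quality CNN
aesthetic_feats = rng.normal(size=(n_imgs, 128))  # stand-in: global-aesthetics CNN
face_feats = rng.normal(size=(n_imgs, 32))        # stand-in: facial-attributes CNN

fused = np.hstack([quality_feats, aesthetic_feats, face_feats])
scores = rng.uniform(1, 10, size=n_imgs)          # ground-truth aesthetic scores
combiner = Ridge(alpha=1.0).fit(fused, scores)    # learned combination of aspects
print(fused.shape)  # (100, 224)
```

Concatenation lets the combiner weight each aspect per task; swapping the ridge model for a small neural head would follow the same pattern.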
Content-based Image Understanding with Applications to Affective Computing and Person Recognition in Natural Settings
Understanding the visual content of images is one of the most important topics in computer vision. Many researchers have tried to teach machines to see and perceive like humans. In this dissertation, we develop several new approaches to image understanding, with applications to affective computing and to person detection and recognition. Our proposed method, applied to fashion photo analysis, can assess the aesthetic quality of photos. Further, a bilinear model that takes into account the relative confidence of region proposals and the mutual relationships between multiple labels is developed to boost multi-label classification; it is evaluated on both object recognition and aesthetic attribute learning. We also develop a person detection and recognition system for natural settings that robustly handles varied poses, viewpoints, and lighting conditions. The system is then deployed in several real scenarios with different amounts of labelled data. Our algorithm, which utilizes unlabelled data, reduces the effort needed for data annotation while achieving results similar to those obtained with labelled data.
The pictures we like are our image: continuous mapping of favorite pictures into self-assessed and attributed personality traits
Flickr allows its users to tag the pictures they like as “favorite”. As a result, many users of the popular photo-sharing platform produce galleries of favorite pictures. This article proposes new approaches, based on Computational Aesthetics, capable of inferring the personality traits of Flickr users from these galleries. In particular, the approaches map low-level features extracted from the pictures into numerical scores corresponding to the Big-Five traits, both self-assessed and attributed. The experiments were performed over 60,000 pictures tagged as favorite by 300 users (the PsychoFlickr Corpus). The results show that it is possible to predict both self-assessed and attributed traits beyond chance. In line with the state-of-the-art of Personality Computing, the latter are predicted with higher effectiveness (correlation up to 0.68 between actual and predicted traits).
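The mapping described above, from low-level picture features to numerical trait scores evaluated by the correlation between actual and predicted traits, can be sketched as a regularized regression. The features and trait values below are synthetic; the abstract does not specify the regression model, so ridge regression is an assumption.

```python
# Hedged sketch: regress one Big-Five trait score on per-user aggregated
# low-level picture features, and evaluate with the actual-vs-predicted
# correlation (the metric the abstract reports). Data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_users, n_feats = 300, 20
X = rng.normal(size=(n_users, n_feats))           # per-user aggregated features
w = rng.normal(size=n_feats)
y = X @ w + rng.normal(scale=2.0, size=n_users)   # one trait, e.g. Openness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
r = np.corrcoef(y_te, pred)[0, 1]                 # actual-vs-predicted correlation
print(f"held-out correlation: {r:.2f}")
```

In practice, one model per trait would be fit (five for self-assessed traits and five for attributed ones), each evaluated the same way on held-out users.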
What the brain 'Likes': neural correlates of providing feedback on social media.
Evidence increasingly suggests that neural structures that respond to primary and secondary rewards are also implicated in the processing of social rewards. The 'Like', a popular feature on social media, shares features with both monetary and social rewards as a means of feedback that shapes reinforcement learning. Despite the ubiquity of the Like, little is known about the neural correlates of providing this feedback to others. In this study, we mapped the neural correlates of providing Likes to others on social media. Fifty-eight adolescents and young adults completed a task in the MRI scanner designed to mimic the social photo-sharing app Instagram. We examined neural responses when participants provided positive feedback to others. The experience of providing Likes to others on social media related to activation in brain circuitry implicated in reward, including the striatum and ventral tegmental area, regions also implicated in the experience of receiving Likes from others. Providing Likes was also associated with activation in brain regions involved in salience processing and executive function. We discuss the implications of these findings for our understanding of the neural processing of social rewards, as well as the neural processes underlying social media use.