
    What Twitter Profile and Posted Images Reveal About Depression and Anxiety

    Previous work has found strong links between the choice of social media images and users' emotions, demographics, and personality traits. In this study, we examine which attributes of profile and posted images are associated with the depression and anxiety of Twitter users. We used a sample of 28,749 Facebook users to build a language prediction model of survey-reported depression and anxiety, and validated it on Twitter on a sample of 887 users who had taken anxiety and depression surveys. We then applied it to a different set of 4,132 Twitter users to impute language-based depression and anxiety labels, and extracted interpretable features of posted and profile pictures to uncover associations with users' depression and anxiety, controlling for demographics. For depression, we find that profile pictures suppress positive emotions rather than display more negative ones, likely because of social media self-presentation biases. They also tend to show a single face, the user's own (rather than showing the user in groups of friends), marking an increased focus on the self that is emblematic of depression. Posted images are dominated by grayscale and low aesthetic cohesion across a variety of image features. Profile images of anxious users are similarly marked by grayscale and low aesthetic cohesion, but less so than those of depressed users. Finally, we show that image features can be used to predict depression and anxiety, and that multitask learning with joint modeling of demographics improves prediction performance. Overall, we find that the image attributes that mark depression and anxiety offer a rich lens into these conditions, largely congruent with the psychological literature, and that images on Twitter allow inferences about the mental health status of users.
    Comment: ICWSM 201
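    The "interpretable features" this abstract mentions (grayscale-ness, brightness, colorfulness) can be computed directly from pixel values. The sketch below is a minimal, hypothetical illustration, not the paper's actual feature set: `image_features` is an assumed helper name, brightness uses Rec. 601 luminance weights, and colorfulness follows the commonly used Hasler–Süsstrunk opponent-channel statistic, which the paper may or may not have used.

    ```python
    import numpy as np

    def image_features(img: np.ndarray) -> dict:
        """Simple interpretable features for an RGB image array (H, W, 3), values in [0, 255].
        Illustrative only; not the feature extractor used in the cited study."""
        img = img.astype(float)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        # Brightness: mean luminance (Rec. 601 channel weights).
        brightness = (0.299 * r + 0.587 * g + 0.114 * b).mean()
        # Colorfulness: Hasler & Suesstrunk (2003) opponent-channel statistic.
        rg, yb = r - g, 0.5 * (r + g) - b
        colorfulness = (np.hypot(rg.std(), yb.std())
                        + 0.3 * np.hypot(rg.mean(), yb.mean()))
        # Grayscale-ness: small per-pixel channel disagreement means a near-monochrome image.
        grayness = 1.0 - min(1.0, np.abs(img - img.mean(axis=-1, keepdims=True)).mean() / 128.0)
        return {"brightness": brightness, "colorfulness": colorfulness, "grayness": grayness}

    # A uniform gray image scores zero colorfulness and maximal grayness.
    gray = np.full((32, 32, 3), 128.0)
    features = image_features(gray)
    print(features)
    ```

    Features like these can then be correlated with the imputed depression/anxiety labels while controlling for demographics, as the abstract describes.
    
    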

    Profiling with Big Data: Identifying Privacy Implications for Individuals, Groups and Society

    User profiling using big data raises critical issues regarding personal data and privacy. Until recently, privacy studies focused on the control of personal data; with big data analysis, however, new privacy issues have emerged with as-yet-unidentified implications. This paper identifies and investigates privacy threats that stem from data-driven profiling using a multi-level approach (individual, group, and society) to analyze the privacy implications stemming from the generation of new knowledge used for automated predictions and decisions. We also argue that mechanisms are required to protect the privacy interests of groups as entities, independently of the interests of their individual members. Finally, this paper discusses privacy threats resulting from the cumulative effect of big data profiling.

    The Emerging Trends of Multi-Label Learning

    Exabytes of data are generated daily by humans, leading to a growing need for new efforts to deal with the grand challenges that big data brings to multi-label learning. For example, extreme multi-label classification is an active and rapidly growing research area that deals with classification tasks involving an extremely large number of classes or labels, and utilizing massive data with limited supervision to build multi-label classification models is becoming valuable for practical applications. Beyond these, tremendous effort has gone into harnessing the strong learning capability of deep learning to better capture label dependencies in multi-label learning, which is key for deep learning to address real-world classification tasks. However, there has been a lack of systematic studies that focus explicitly on analyzing the emerging trends and new challenges of multi-label learning in the era of big data. It is imperative to call for a comprehensive survey to fulfill this mission and delineate future research directions and new applications.
    Comment: Accepted to TPAMI 202
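    The simplest multi-label baseline, against which the deep methods this survey covers are typically compared, is binary relevance: one independent binary classifier per label, which by construction ignores the label dependencies the abstract emphasizes. A minimal NumPy sketch on toy data (all names and hyperparameters here are illustrative assumptions, not from the survey):

    ```python
    import numpy as np

    def train_binary_relevance(X, Y, lr=0.5, epochs=200):
        """Binary relevance: one logistic regression per label, trained jointly
        as a matrix of independent models. Ignores label dependencies."""
        n, d = X.shape
        _, num_labels = Y.shape
        W = np.zeros((d, num_labels))
        b = np.zeros(num_labels)
        for _ in range(epochs):
            P = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # per-label sigmoid probabilities
            grad = P - Y                            # gradient of the per-label log-loss
            W -= lr * X.T @ grad / n
            b -= lr * grad.mean(axis=0)
        return W, b

    def predict(X, W, b, threshold=0.5):
        return (1.0 / (1.0 + np.exp(-(X @ W + b))) >= threshold).astype(int)

    # Toy data: two features, three overlapping labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    Y = np.column_stack([X[:, 0] > 0, X[:, 1] > 0, (X[:, 0] + X[:, 1]) > 0]).astype(int)
    W, b = train_binary_relevance(X, Y)
    acc = (predict(X, W, b) == Y).mean()
    print(f"mean per-label accuracy: {acc:.2f}")
    ```

    Methods that model label dependencies (classifier chains, label embeddings, deep encoders) aim to beat exactly this kind of independent-per-label baseline, especially in the extreme multi-label regime.
    
    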

    Human Perception Under Uncertainty: Visual, Auditory and Embodied Responses to Ambiguous Stimuli

    In order to orient ourselves in the environment, our senses have evolved to acquire optimal information. This optimization, however, incurs mistakes. To avoid costly ones, an over-perception of patterns (in humans) augments decision making. I tested decision making in two modalities, acoustic and visual. A set of stimuli was produced using computer-generated graphics, based on output from a high-quality pseudo-random number generator: masks with a random pattern of varying transparency were laid over geometrical figures, followed by a similar task involving black-and-white high-contrast patterns. In both cases, using a Bayesian statistical approach, I found that the ability to detect the correct pattern presence (or lack thereof) was related to respondents' thinking styles, specifically Rationality and Intuition. Furthermore, I used ambiguous facial expressions, and accompanying vocalizations, of high-intensity affects (pain, pleasure and fear) and low-intensity affects (neutral and smile/laughter). My findings showed that the high-intensity facial expressions and vocalizations were rated with a low probability of correct response. Differences in the consistency of the ratings were detected, as well as the range of probabilities of responses being due to chance (guessing). When...
    Department of Philosophy and History of Science, Faculty of Science
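    One standard Bayesian way to ask whether a respondent's ratings are "due to chance (guessing)", as the abstract puts it, is a Beta-Binomial posterior over the respondent's accuracy. The sketch below is a generic illustration of that idea under a uniform Beta(1,1) prior, not the thesis's actual model; the function name and the 14/20 example are assumptions.

    ```python
    import numpy as np

    def posterior_above_chance(correct, total, chance=0.5, samples=100_000, seed=0):
        """Beta(1,1) prior on a respondent's true accuracy; returns the posterior
        probability that accuracy exceeds the guessing rate, via Monte Carlo draws
        from the Beta(1 + correct, 1 + total - correct) posterior."""
        rng = np.random.default_rng(seed)
        draws = rng.beta(1 + correct, 1 + total - correct, size=samples)
        return (draws > chance).mean()

    # 14/20 correct on a two-alternative task: fairly strong evidence of
    # above-chance performance. 10/20 is indistinguishable from guessing.
    print(posterior_above_chance(14, 20))
    print(posterior_above_chance(10, 20))
    ```

    The same posterior also yields credible intervals on accuracy per stimulus type, which is one way the consistency differences mentioned above could be quantified.
    
    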

    User Profiling through Deep Multimodal Fusion

    © 2018 Association for Computing Machinery. User profiling in social media has gained a lot of attention due to its varied set of applications in advertising, marketing, recruiting, and law enforcement. Among the various techniques for user modeling, there is fairly limited work on how to merge multiple sources or modalities of user data (such as text, images, and relations) to arrive at more accurate user profiles. In this paper, we propose a deep learning approach that extracts and fuses information across different modalities. Our hybrid user profiling framework uses a shared representation between modalities to integrate three sources of data at the feature level, and combines the decisions of separate networks that operate on each combination of data sources at the decision level. Our experimental results on more than 5K Facebook users demonstrate that our approach outperforms competing approaches for inferring the age, gender, and personality traits of social media users. We obtain highly accurate results, with AUC values of more than 0.9 for age prediction and 0.95 for gender prediction.
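    The hybrid scheme the abstract describes combines two fusion levels: feature-level fusion (concatenate modality features, then a shared model) and decision-level fusion (a separate model per modality, then combine their outputs). A toy forward-pass sketch of both levels with random weights, purely to show the data flow — the feature sizes, the linear heads, and the final averaging are all illustrative assumptions, not the paper's architecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-user feature vectors for three modalities.
    text = rng.normal(size=16)     # e.g. averaged word embeddings
    image = rng.normal(size=8)     # e.g. pooled CNN features
    relation = rng.normal(size=4)  # e.g. a graph/relational embedding

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Feature-level fusion: concatenate modalities, then one shared linear head.
    fused = np.concatenate([text, image, relation])
    w_shared = rng.normal(size=fused.size)
    p_feature = sigmoid(fused @ w_shared)

    # Decision-level fusion: a separate head per modality, then average the decisions.
    heads = [rng.normal(size=m.size) for m in (text, image, relation)]
    p_decision = np.mean([sigmoid(m @ w) for m, w in zip((text, image, relation), heads)])

    # Hybrid: combine the two fusion levels (here, a simple average of the scores).
    p_hybrid = 0.5 * (p_feature + p_decision)
    print(round(float(p_hybrid), 3))
    ```

    In a real system the random weights would be learned end to end, and the decision-level branch would run one network per combination of data sources rather than one per single modality, as the abstract specifies.
    
    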
