1,936 research outputs found

    Learning Representations of Social Media Users

    User representations are routinely used in recommendation systems by platform developers, in targeted advertising by marketers, and by public policy researchers to gauge public opinion across demographic groups. Computer scientists consider the problem of inferring user representations more abstractly: how does one extract a stable user representation, effective for many downstream tasks, from a medium as noisy and complicated as social media? The quality of a user representation is ultimately task-dependent (e.g., does it improve classifier performance, or yield more accurate recommendations?), but there are proxies that are less sensitive to the specific task. Is the representation predictive of latent properties such as a person's demographic features, socioeconomic class, or mental health state? Is it predictive of the user's future behavior? In this thesis, we begin by showing how user representations can be learned from multiple types of user behavior on social media. We apply several extensions of generalized canonical correlation analysis to learn these representations and evaluate them on three tasks: predicting future hashtag mentions, friending behavior, and demographic features. We then show how user features can be employed as distant supervision to improve topic model fit. Finally, we show how user features can be integrated into existing classifiers, improving them within the multitask learning framework. We treat user representations (ground-truth gender and mental health features) as auxiliary tasks to improve mental health state prediction. We also use the distributed user representations learned in the first chapter to improve tweet-level stance classifiers, showing that distant user information can inform classification tasks at the granularity of a single message.
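    The abstract's core technique, generalized canonical correlation analysis, can be sketched as follows. This is a minimal MAX-VAR GCCA implementation, not the thesis's actual pipeline: the random matrices below stand in for the multiple behavioral "views" of a user (text, friend network, hashtags), and the shared representation G is the top-k eigenvectors of the summed ridge-regularized projection matrices.

```python
import numpy as np

def gcca(views, k, reg=1e-8):
    """MAX-VAR generalized CCA: learn a shared representation G (n x k)
    maximally correlated with a linear projection of each view."""
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        Xc = X - X.mean(axis=0)  # center each view
        # Accumulate the (ridge-regularized) projection onto col-space of Xc
        M += Xc @ np.linalg.solve(Xc.T @ Xc + reg * np.eye(X.shape[1]), Xc.T)
    # Shared representation: top-k eigenvectors of the summed projections
    vals, vecs = np.linalg.eigh(M)       # eigh returns ascending order
    return vecs[:, ::-1][:, :k]

# Synthetic stand-in: two noisy views of the same 5-dim latent user factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 5))
views = [latent @ rng.normal(size=(5, d)) + 0.1 * rng.normal(size=(100, d))
         for d in (20, 30)]
G = gcca(views, k=5)
print(G.shape)  # (100, 5): one 5-dim embedding per user
```

    Each row of G is then a user embedding that can be fed to the downstream prediction tasks the abstract lists.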

    360-MAM-Affect: Sentiment Analysis with the Google Prediction API and EmoSenticNet

    Online recommender systems are useful for media asset management, where they select the best content from a set of media assets. We have developed an architecture for 360-MAM-Select, a recommender system for educational video content. 360-MAM-Select will utilise sentiment analysis and gamification techniques to recommend media assets, increasing user participation with digital content through improved video recommendations. Here, we discuss the architecture of 360-MAM-Select and the use of the Google Prediction API and EmoSenticNet for 360-MAM-Affect, 360-MAM-Select's sentiment analysis module. Results from testing two models for sentiment analysis, Sentiment Classifier (Google Prediction API) and EmoSenticNetClassifier (Google Prediction API + EmoSenticNet), are promising. Future work includes the implementation and testing of 360-MAM-Select on video data from YouTube EDU and Head Squeeze.
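    The lexicon-augmented classifier described above can be illustrated with a toy sketch. The entries below are hypothetical stand-ins for EmoSenticNet's word-to-emotion mappings (the real resource is far larger), and the majority vote replaces the Google Prediction API call, which is not reproduced here:

```python
from collections import Counter

# Toy emotion lexicon standing in for EmoSenticNet (hypothetical entries)
LEXICON = {
    "great": "joy", "love": "joy", "boring": "sadness",
    "awful": "disgust", "scary": "fear",
}

def classify(comment):
    """Tag each lexicon word with its emotion and take a majority vote."""
    hits = [LEXICON[w] for w in comment.lower().split() if w in LEXICON]
    return Counter(hits).most_common(1)[0][0] if hits else "neutral"

print(classify("I love this great lecture"))  # joy
```

    A recommender along the lines of 360-MAM-Select could then rank videos whose comment streams skew toward positive emotions.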

    Image Understanding by Socializing the Semantic Gap

    Several technological developments, such as the Internet, mobile devices, and social networks, have spurred the sharing of images in unprecedented volumes, making tagging and commenting a common habit. Despite recent progress in image analysis, the Semantic Gap still prevents machines from fully understanding the rich semantics of a shared photo. In this book, we tackle this problem by exploiting social network contributions. A comprehensive treatment of three linked problems in image annotation is presented, with a novel experimental protocol used to test eleven state-of-the-art methods. Three novel approaches are presented: annotating an image, understanding its sentiment, and predicting its popularity. We conclude with the many challenges and opportunities ahead for the multimedia community.

    Emotion Quantification Using Variational Quantum State Fidelity Estimation

    Sentiment analysis has been instrumental in developing artificial intelligence when applied to various domains. However, most sentiments and emotions are temporal and often exist in a complex manner; several emotions can be experienced at the same time. Instead of recognizing only categorical information about emotions, there is a need to understand and quantify their intensity. The proposed research investigates a quantum-inspired approach for quantifying emotional intensities at runtime. The inspiration comes from human cognition and decision-making capabilities, which may admit a concise explanation through quantum theory. Quantum state fidelity was used to characterize states and estimate the intensities of emotions rendered by subjects from the Amsterdam Dynamic Facial Expression Set (ADFES) dataset. A variational quantum classifier was used to perform this experiment on the IBM Quantum Experience platform. The proposed method successfully quantifies the intensities of the joy, sadness, contempt, anger, surprise, and fear emotions of labelled subjects from the ADFES dataset.
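    The fidelity measure at the heart of the abstract can be shown in a classical simulation. For pure states, fidelity is |⟨ψ|φ⟩|²; the sketch below amplitude-encodes facial-feature vectors as normalized state vectors and scores an observation against emotion "prototype" states. The feature values and prototypes are invented for illustration, and no variational circuit or IBM hardware is involved:

```python
import numpy as np

def state(vec):
    """Amplitude-encode a real feature vector as a normalized pure state."""
    v = np.asarray(vec, dtype=float)
    return v / np.linalg.norm(v)

def fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two pure states."""
    return abs(np.dot(np.conj(psi), phi)) ** 2

# Hypothetical prototype states for two emotions, plus one observation
joy = state([1.0, 0.2, 0.1, 0.0])
fear = state([0.0, 0.1, 0.3, 1.0])
obs = state([0.9, 0.3, 0.2, 0.1])

scores = {"joy": fidelity(obs, joy), "fear": fidelity(obs, fear)}
print(max(scores, key=scores.get))  # joy: obs overlaps the joy prototype most
```

    In the quantum setting, the same overlap would be estimated on hardware (e.g., via a swap test) rather than computed by an inner product, with the fidelity value read off as the emotion's intensity.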