
    The Phenomenological Exploration of Animated GIF Use in Computer-Mediated Communication

    The current study seeks to remedy the lack of scholarly investigation into the use of animated GIFs in computer-mediated communication (CMC). Through phenomenological analysis of in-depth one-on-one interviews with individuals engaging in the behavior, one overarching theme was found, with four underlying sub-themes: Choice, Meaning, Use, and Gratification. Individuals using animated GIFs in their CMC seem to formulate a mental image of an expression they wish to convey and select a GIF that fits a particular context, within a specific conversation, with a specific person. Individuals seem to construct the meaning of animated GIFs by reading social cues such as facial expressions and body language presented by the actors in the GIF and combining them with the context of the conversation and the person or persons they are communicating with. Individuals seem to use animated GIFs to actively compensate for the lack of social-cue transmission in CMC, and seem to do so for the purposes of humor, clarification of the message, and increased salience. Lastly, this whole process seems to be lubricated by a feedback loop of gratification wherein individuals feel their communication is improved and more enjoyable than with words alone. The current findings are relevant to theories of communication as well as to online education. Recommendations for future research into the effectiveness of animated GIFs for educational purposes are provided.

    Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction

    Visual media are powerful means of expressing emotions and sentiments. The constant generation of new content in social networks highlights the need for automated visual sentiment analysis tools. While Convolutional Neural Networks (CNNs) have established a new state of the art in several vision problems, their application to the task of sentiment analysis is mostly unexplored, and there are few studies on how to design CNNs for this purpose. In this work, we study the suitability of fine-tuning a CNN for visual sentiment prediction and explore performance-boosting techniques within this deep learning setting. Finally, we provide a deep-dive analysis of a benchmark, state-of-the-art network architecture to gain insight into design patterns for CNNs on the task of visual sentiment prediction.
    Comment: Preprint of the paper accepted at the 1st Workshop on Affect and Sentiment in Multimedia (ASM), ACM Multimedia 2015, Brisbane, Australia.
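    The general fine-tuning recipe the abstract refers to can be illustrated with a minimal PyTorch sketch, assuming an AlexNet-style backbone and binary (positive/negative) sentiment labels. The layer choice, learning rates, and optimizer below are illustrative assumptions, not the authors' configuration.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Start from a network pre-trained on ImageNet and swap its final
        # classifier layer for a new 2-way (positive/negative) sentiment head.
        model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        model.classifier[6] = nn.Linear(4096, 2)  # randomly initialized head

        # A common fine-tuning choice: a smaller learning rate for the
        # pre-trained layers than for the fresh head (values illustrative).
        optimizer = torch.optim.SGD(
            [
                {"params": [p for n, p in model.named_parameters()
                            if not n.startswith("classifier.6")], "lr": 1e-4},
                {"params": model.classifier[6].parameters(), "lr": 1e-3},
            ],
            momentum=0.9,
        )
        criterion = nn.CrossEntropyLoss()

        def train_step(images, labels):
            # One optimization step on a batch of images and 0/1 labels.
            model.train()
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()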

    Automatic Understanding of Image and Video Advertisements

    There is more to images than their objective physical content: for example, advertisements are created to persuade a viewer to take a certain action. We propose the novel problem of automatic advertisement understanding. To enable research on this problem, we create two datasets: an image dataset of 64,832 image ads and a video dataset of 3,477 ads. Our data contain rich annotations encompassing the topic and sentiment of the ads; questions and answers describing what actions the viewer is prompted to take and the reasoning the ad presents to persuade the viewer ("What should I do according to this ad, and why should I do it?"); and the symbolic references ads make (e.g., a dove symbolizes peace). We also analyze the most common persuasive strategies ads use, and the capabilities computer vision systems should have to understand these strategies. We present baseline classification results for several prediction tasks, including automatically answering questions about the messages of the ads.
    Comment: To appear in CVPR 2017; data available on http://cs.pitt.edu/~kovashka/ad

    Understanding the Emotional Impact of GIFs on Instagram through Consumer Neuroscience

    This work analyzes the ability of GIFs to generate emotionality in social media marketing strategies. The aim is to show how neuroscience research techniques can be integrated into the analysis of emotions, improving results and helping to guide actions in social networks. The research is structured in two phases: an experimental study using automated biometric analysis (facial coding, GSR, and eye tracking) and an analysis of the feelings declared in the comments of Instagram users. Explicit valence, type of emotion, comment length, and the proportion of emojis are extracted. The results indicate that the explicit measure of emotional valence shows a higher and more positive emotional level than the implicit one, and that this difference is influenced in different ways by engagement and by the proportion of emojis in the comment. This work takes a further step in the measurement of user emotionality in social media campaigns, not only through content analysis but also by providing new insights thanks to neuromarketing techniques.
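    As a rough illustration of the comment-level features listed above, the Python sketch below computes comment length, emoji proportion, and a crude explicit-valence score. The valence lexicon is a hypothetical stand-in for whatever sentiment resource the study used, and the emoji pattern only approximates the full set of emoji code points.

        import re

        # Rough emoji match over common Unicode emoji blocks; an
        # approximation, not a complete emoji specification.
        EMOJI = re.compile(
            "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]"
        )

        def comment_features(text, valence_lexicon):
            # `valence_lexicon`: hypothetical word -> score mapping.
            emojis = EMOJI.findall(text)
            tokens = text.split()
            scores = [valence_lexicon[t.lower()] for t in tokens
                      if t.lower() in valence_lexicon]
            return {
                "length": len(tokens),
                "emoji_proportion": len(emojis) / max(len(text), 1),
                "explicit_valence": sum(scores) / len(scores) if scores else 0.0,
            }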

    Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology

    Every culture and language is unique. Our work expressly focuses on the uniqueness of culture and language in relation to human affect, specifically sentiment and emotion semantics, and how they manifest in social multimedia. We develop sets of sentiment- and emotion-polarized visual concepts by adapting semantic structures called adjective-noun pairs, originally introduced by Borth et al. (2013), to a multilingual context. We propose a new language-dependent method for the automatic discovery of these adjective-noun constructs. We show how this pipeline can be applied to a social multimedia platform to create a large-scale Multilingual Visual Sentiment Ontology (MVSO). Unlike the flat structure of Borth et al. (2013), our unified ontology is organized hierarchically into multilingual clusters of visually detectable nouns and subclusters of emotionally biased versions of these nouns. In addition, we present an image-based prediction task to show how generalizable language-specific models are in a multilingual context. A new, publicly available dataset of more than 15.6K sentiment-biased visual concepts across 12 languages, with language-specific detector banks, more than 7.36M images, and their metadata is also released.
    Comment: 11 pages, to appear at ACM MM'15.
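    The adjective-noun constructs at the heart of the ontology can be illustrated with a simplified, English-only extractor; the sketch below uses NLTK part-of-speech tags to find adjacent adjective-noun pairs. The paper's actual discovery pipeline is language-dependent and operates on social-multimedia metadata, so this is only a sketch of the underlying idea.

        import nltk
        # Assumes nltk.download("punkt") and
        # nltk.download("averaged_perceptron_tagger") have been run.

        def adjective_noun_pairs(text):
            # Return adjacent (adjective, noun) pairs, e.g. "beautiful sunset".
            tagged = nltk.pos_tag(nltk.word_tokenize(text))
            return [(w1, w2)
                    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:])
                    if t1.startswith("JJ") and t2.startswith("NN")]

        # adjective_noun_pairs("a beautiful sunset over the calm sea")
        # -> [('beautiful', 'sunset'), ('calm', 'sea')]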