111 research outputs found

    A Survey on Classification of Photo Aesthetics Based on Emotion

    Recognition of human facial expressions and computation of exact emotions by computer vision is an interesting and challenging problem. Emotion in natural scenery images plays a vital role in the way humans perceive an image. Depending on a viewer's emotional state, such as happiness, sadness, fear, or anger, the same images may be perceived in different ways, yet it is still possible to build a universal classification over these various emotions. The paper surveys techniques for recognizing emotion on the basis of how humans perceive an image; it also aims to classify the aesthetics of photographic images and to determine wallpaper suitability (scene or non-scene images) according to human emotions.

    Social Media Advertisement Outreach: Learning the Role of Aesthetics

    Corporations spend millions of dollars on developing creative image-based promotional content to advertise to their user base on platforms like Twitter. Our paper is an initial study in which we propose a novel method to evaluate and improve the outreach of promotional images from corporations on Twitter, based purely on their describable aesthetic attributes. Existing works in aesthetics-based image analysis focus exclusively on the attributes of digital photographs and are not applicable to advertisements because of inherent content- and context-based biases on outreach. Our paper identifies broad categories of biases affecting such images, describes a normalization method to eliminate the effects of those biases and score images based on their outreach, and examines the effects of certain handcrafted describable aesthetic features on image outreach. Optimizing the describable aesthetic features resulting from this research is a simple way for corporations to complement their existing marketing strategy and gain significant improvements in user engagement for promotional images on social media.
    Comment: Accepted to SIGIR 201
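    The abstract describes normalizing away content and context biases before scoring images by outreach. A minimal sketch of one plausible reading, in which engagement counts are z-scored within bias groups so that images are compared only against peers sharing the same biases (the grouping keys and the z-score normalization are assumptions, not the paper's actual taxonomy or method):

```python
# Hedged sketch: score outreach after removing group-level bias.
# The bias groups (e.g. "account/content-category") are hypothetical.
from collections import defaultdict
from statistics import mean, pstdev

def normalized_outreach(records):
    """records: list of (bias_group, engagement_count).
    Returns engagement z-scored within each bias group, so an image's
    score reflects how it performed relative to similarly biased peers."""
    groups = defaultdict(list)
    for group, count in records:
        groups[group].append(count)
    stats = {g: (mean(v), pstdev(v) or 1.0) for g, v in groups.items()}
    return [(count - stats[g][0]) / stats[g][1] for g, count in records]

scores = normalized_outreach([
    ("brand_A/product", 120), ("brand_A/product", 80),
    ("brand_B/meme", 5000), ("brand_B/meme", 3000),
])
# Within each group the better-performing image scores positive,
# even though brand_B's raw counts dwarf brand_A's.
```

After this normalization, a high score means an image out-performed its own bias group, which is the property needed to study aesthetic features in isolation.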

    6 Seconds of Sound and Vision: Creativity in Micro-Videos

    The notion of creativity, as opposed to related concepts such as beauty or interestingness, has not been studied from the perspective of automatic analysis of multimedia content. Meanwhile, short online videos shared on social media platforms, or micro-videos, have arisen as a new medium for creative expression. In this paper we study creative micro-videos in an effort to understand the features that make a video creative, and to address the problem of automatic detection of creative content. Defining creative videos as those that are novel and have aesthetic value, we conduct a crowdsourcing experiment to create a dataset of over 3,800 micro-videos labelled as creative and non-creative. We propose a set of computational features that we map to the components of our definition of creativity, and conduct an analysis to determine which of these features correlate most with creative videos. Finally, we evaluate a supervised approach to automatically detect creative videos, with promising results, showing that it is necessary to model both aesthetic value and novelty to achieve optimal classification accuracy.
    Comment: 8 pages, 1 figure, IEEE CVPR 201

    Collecting, Analyzing and Predicting Socially-Driven Image Interestingness

    Interestingness has recently become an emerging concept for visual content assessment. However, understanding and predicting image interestingness remains challenging, as its judgment is highly subjective and usually context-dependent. In addition, existing datasets are quite small for in-depth analysis. To push forward research on this topic, a large-scale interestingness dataset (images and their associated metadata) is described in this paper and released for public use. We then propose computational models based on deep learning to predict image interestingness. We show that exploiting relevant contextual information derived from social metadata can greatly improve the prediction results. Finally we discuss some key findings and potential research directions for this emerging topic.
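    One common way to exploit social metadata alongside deep image features, consistent with (but not confirmed by) the abstract, is late fusion: standardize the raw metadata values, which live on very different scales, and concatenate them with the image descriptor before the predictor. A minimal sketch under those assumptions (the metadata fields and dimensions are hypothetical):

```python
# Hedged sketch: late fusion of deep image features with social metadata.
# The specific metadata (e.g. uploader follower count, group memberships)
# and the fusion strategy are assumptions, not the paper's architecture.
import numpy as np

def fuse(image_feats, meta_feats):
    """image_feats: (n, d_img) deep image descriptors.
    meta_feats: (n, d_meta) raw social-metadata values on disparate scales.
    Returns standardized metadata concatenated onto the image features."""
    mu = meta_feats.mean(axis=0)
    sigma = meta_feats.std(axis=0) + 1e-8   # guard against zero variance
    return np.concatenate([image_feats, (meta_feats - mu) / sigma], axis=1)

rng = np.random.default_rng(0)
fused = fuse(rng.normal(size=(4, 8)), rng.normal(size=(4, 3)))
```

The fused vectors can then feed any downstream regressor or classifier; the standardization step matters because raw counts such as views or favourites would otherwise dominate the learned weights.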

    Automatic prediction of text aesthetics and interestingness

    This paper investigates the problem of automated text aesthetics prediction. The availability of user-generated content and ratings, e.g. on Flickr, has induced research in aesthetics prediction for non-text domains, particularly for photographic images. This problem, however, has not yet been explored for the text domain. Due to the very subjective nature of text aesthetics, it is difficult to compile human-annotated data with a fair degree of inter-annotator agreement by methods such as crowdsourcing. The availability of the Kindle "popular highlights" data has motivated us to compile a dataset of human-annotated aesthetically pleasing and interesting text passages. We then undertake a supervised classification approach to predict text aesthetics by constructing real-valued feature vectors from each text passage. In particular, the features that we use for this classification task are word length, repetitions, polarity, part-of-speech, semantic distances, and topic generality and diversity. A traditional binary classification approach is not effective in this case because the non-highlighted passages surrounding the highlighted ones do not necessarily represent the other extreme of unpleasant-quality text. Due to the absence of real negative-class samples, we employ the mapping convergence (MC) algorithm, in which training can be initiated with instances from the positive class only. On each successive iteration the algorithm selects new strong negative samples from the unlabelled data and retrains itself. The results show that the MC algorithm, with a Gaussian and a linear kernel used for the mapping and convergence phases respectively, yields the best results, achieving satisfactory accuracy, precision and recall values of about 74%, 42% and 54% respectively.
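    The key idea of the mapping phase, training from positives only and harvesting "strong negatives" from the unlabelled pool, can be sketched with a simple distance heuristic. The paper uses SVMs with Gaussian and linear kernels; the positive-centroid distance below is a deliberately simplified stand-in for the one-class boundary, and the feature vectors are toy data:

```python
# Hedged sketch of the mapping phase of mapping convergence (MC):
# with no labelled negatives, rank unlabelled passages by how far
# they fall from the positive class and take the farthest as the
# initial "strong negative" set. A standard binary classifier is
# then retrained iteratively. (Centroid distance replaces the
# paper's one-class SVM boundary for illustration only.)
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def harvest_strong_negatives(positives, unlabeled, k):
    c = centroid(positives)
    # Passages far from the positive centroid are least likely to be
    # unlabelled positives, so they seed the negative class.
    ranked = sorted(unlabeled, key=lambda v: dist(v, c), reverse=True)
    return ranked[:k]

pos = [[0.9, 0.8], [1.0, 1.1]]          # aesthetically pleasing passages
unl = [[0.95, 0.9], [4.0, 4.2], [0.1, 5.0]]  # unlabelled surrounding text
negs = harvest_strong_negatives(pos, unl, 1)
```

In the full algorithm the convergence phase repeats this harvesting with the retrained classifier until the negative set stabilizes, which is why only positive labels are needed to start.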