
    Image or Text: Which One is More Influential? A Deep-learning Approach for Visual and Textual Data Analysis in the Digital Economy

    In a digital economy, different types of information about products communicate their quality and characteristics to prospective consumers. However, it remains unclear which type of information plays the most important role in individuals’ decision-making processes. In this study, we explore the effect that unstructured data has on consumers’ purchase decisions and the importance of congruence between textual and visual data. We apply a deep neural network model to rank the importance of different information types and use a regression model to investigate the impact that information consistency has on sales predictions. Based on our empirical analysis, we found that both image-based and text-based information influenced consumers’ purchase decisions, but that the former influenced purchase decisions about “search goods” more, while the latter influenced purchase decisions about “experience goods” more. Furthermore, congruence between image- and text-based information was positively associated with purchase decisions, which indicates that information congruence affects products’ sales performance in the digital economy. In this study, we also demonstrate how to apply advanced deep-learning techniques to measure the congruence between different information types.
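    The paper's own model is not reproduced here, but the core idea of measuring image-text congruence can be illustrated with a shared embedding space. Below is a minimal sketch assuming a pretrained CLIP model as that space; the model choice, the `congruence_score` helper, and the downstream regression hint are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: scoring image-text congruence with a pretrained
# CLIP model (an assumed stand-in, not the paper's own architecture).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def congruence_score(image_path: str, description: str) -> float:
    """Cosine similarity between image and text embeddings, in [-1, 1]."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[description], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    return torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()

# A score like this could then enter a sales regression as a predictor,
# e.g. sales ~ congruence + other product controls.
```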

    What your Facebook Profile Picture Reveals about your Personality

    People spend considerable effort managing the impressions they give others. Social psychologists have shown that people manage these impressions differently depending upon their personality. Facebook and other social media provide a new forum for this fundamental process; hence, understanding people's behaviour on social media could provide interesting insights into their personality. In this paper we investigate automatic personality recognition from Facebook profile pictures. We analyze the effectiveness of four families of visual features and discuss some human-interpretable patterns that explain the personality traits of individuals. For example, extroverts and agreeable individuals tend to have warm-colored pictures and to exhibit many faces in their portraits, mirroring their inclination to socialize, while neurotic individuals show a prevalence of pictures of indoor places. We then propose a classification approach to automatically recognize personality traits from these visual features. Finally, we compare the performance of our classification approach to that obtained by human raters, and we show that computer-based classifications are significantly more accurate than averaged human-based classifications for Extraversion and Neuroticism.
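    As an illustration of the kind of human-interpretable visual cues discussed above, the sketch below computes a warm-color ratio and a face count with OpenCV. The hue thresholds and the Haar-cascade detector are assumptions for demonstration; the paper's four feature families are considerably richer.

```python
# Illustrative sketch: two simple cues (color warmth, number of faces)
# of the sort the abstract links to personality traits.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def profile_picture_features(path: str) -> dict:
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    # OpenCV hue lies in [0, 180); treat reds/oranges/yellows as "warm"
    # (the cutoffs here are assumed, not taken from the paper).
    warm_ratio = float(np.mean((hue < 30) | (hue > 150)))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    return {"warm_ratio": warm_ratio, "n_faces": len(faces)}
```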

    Deep Convolutional Neural Networks for Sentiment Analysis of Cultural Heritage

    Abstract. The promotion of Cultural Heritage (CH) goods has become a major challenge over the last years. CH goods promote economic development, notably through cultural and creative industries and tourism. Thus, effective planning of archaeological, cultural, artistic and architectural sites within a territory makes CH goods easily accessible. One way of adding value to these services is to make them capable of providing, through new technologies, a more immersive and stimulating experience of the information. In this light, an effective contribution can be provided by sentiment analysis. The sentiment related to a monument can be used for its evaluation, considering that positive sentiment influences the monument's public image by increasing its value. This work introduces an approach to estimate the sentiment of CH-related Social Media pictures. The sentiment of a picture is identified by a specially trained Deep Convolutional Neural Network (DCNN); afterwards, we compare the performance of three DCNNs: VGG16, ResNet and InceptionResNet. It is interesting to observe how these three different architectures are able to correctly evaluate the sentiment of images of ancient monuments, historical buildings, archaeological sites, museum objects, and more. Our approach has been applied to a newly collected dataset of pictures from Instagram showing CH goods included in the UNESCO list of World Heritage properties.
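    A minimal sketch of the comparison setup described here, assuming Keras with ImageNet-pretrained backbones: each architecture gets a frozen feature extractor plus a small binary (positive/negative) sentiment head. The specific variants (ResNet50, InceptionResNetV2), the input size, and the head are assumptions; the paper names only the architecture families.

```python
# Hedged sketch: fine-tuning three pretrained backbones for binary
# image sentiment, to compare them on the same data.
import tensorflow as tf

def sentiment_model(backbone_name: str) -> tf.keras.Model:
    backbones = {
        "vgg16": tf.keras.applications.VGG16,
        "resnet": tf.keras.applications.ResNet50,
        "inceptionresnet": tf.keras.applications.InceptionResNetV2,
    }
    base = backbones[backbone_name](include_top=False, weights="imagenet",
                                    input_shape=(224, 224, 3))
    base.trainable = False  # train only the new head first
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Build one model per architecture for a like-for-like comparison.
models = {name: sentiment_model(name)
          for name in ("vgg16", "resnet", "inceptionresnet")}
```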

    Opinion mining and sentiment analysis in marketing communications: a science mapping analysis in Web of Science (1998–2018)

    Opinion mining and sentiment analysis have become ubiquitous in our society, with applications in online searching, computer vision, image understanding, artificial intelligence and marketing communications (MarCom). Within this context, opinion mining and sentiment analysis in marketing communications (OMSAMC) plays a strong role in the development of the field by allowing us to understand whether people are satisfied or dissatisfied with a service or product, in order to subsequently analyze the strengths and weaknesses of those consumer experiences. To the best of our knowledge, there is no science mapping analysis covering the research on opinion mining and sentiment analysis in the MarCom ecosystem. In this study, we perform a science mapping analysis of OMSAMC research in order to provide an overview of the scientific work in this interdisciplinary area over the last two decades and to show trends that could be the basis for future developments in the field. The study was carried out using VOSviewer, CitNetExplorer and InCites, based on results from Web of Science (WoS). The results of this analysis show the evolution of the field by highlighting the most notable authors, institutions, keywords, publications, countries, categories and journals. The research was funded by Programa Operativo FEDER Andalucía 2014‐2020, grant number “La reputación de las organizaciones en una sociedad digital. Elaboración de una Plataforma Inteligente para la Localización, Identificación y Clasificación de Influenciadores en los Medios Sociales Digitales (UMA18‐FEDERJA‐148)”, and the APC was funded by the same research grant.

    Visual analytics and artificial intelligence for marketing

    In today’s online environments, such as social media platforms and e-commerce websites, consumers are overloaded with information and firms are competing for their attention. Most of the data on these platforms comes in the form of text, images, or other unstructured data sources. It is important to understand which information on company websites and social media platforms consumers find enticing and/or likeable. The impact of online visual content, in particular, remains largely unknown. Finding the drivers behind likes and clicks can help (1) understand how consumers interact with the information that is presented to them and (2) leverage this knowledge to improve marketing content. The main goal of this dissertation is to learn more about why consumers like and click on visual content online. To reach this goal, visual analytics are used to automatically extract relevant information from visual content. This information can then be related, at scale, to consumers and their decisions.

    Understanding, Categorizing and Predicting Semantic Image-Text Relations

    Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images, as well as of their interplay, has great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and, inspired by research in visual communication, investigate semantic image-text relations useful for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., "illustration" or "anchorage") and show how they can be systematically characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system that predicts these classes using multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach. (Comment: 8 pages, 8 figures, 5 tables)
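    As a hedged sketch of the prediction step, the following PyTorch module fuses precomputed image and text embeddings and classifies them into the eight relation classes. The embedding dimensions, the fusion head, and the class count wiring are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: late fusion of image and text embeddings for
# eight-way image-text relation classification.
import torch
import torch.nn as nn

class ImageTextRelationClassifier(nn.Module):
    def __init__(self, img_dim: int = 2048, txt_dim: int = 768,
                 n_classes: int = 8):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, n_classes),
        )

    def forward(self, img_emb: torch.Tensor,
                txt_emb: torch.Tensor) -> torch.Tensor:
        # Concatenate the two modalities, then score the eight classes.
        return self.fusion(torch.cat([img_emb, txt_emb], dim=-1))

# Usage with a dummy batch of four embedding pairs:
clf = ImageTextRelationClassifier()
logits = clf(torch.randn(4, 2048), torch.randn(4, 768))  # shape (4, 8)
```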