Opinion mining and sentiment analysis in marketing communications: a science mapping analysis in Web of Science (1998–2018)
Opinion mining and sentiment analysis have become ubiquitous in our society, with
applications in online searching, computer vision, image understanding, artificial intelligence and
marketing communications (MarCom). Within this context, opinion mining and sentiment analysis
in marketing communications (OMSAMC) plays a strong role in the development of the field by
allowing us to understand whether people are satisfied or dissatisfied with a service or product,
and subsequently to analyze the strengths and weaknesses of those consumer experiences. To
the best of our knowledge, there is no science mapping analysis covering the research about opinion
mining and sentiment analysis in the MarCom ecosystem. In this study, we perform a science
mapping analysis on the OMSAMC research, in order to provide an overview of the scientific work
during the last two decades in this interdisciplinary area and to show trends that could be the basis
for future developments in the field. This study was carried out using VOSviewer, CitNetExplorer
and InCites based on results from Web of Science (WoS). The results of this analysis show the
evolution of the field, by highlighting the most notable authors, institutions, keywords,
publications, countries, categories and journals.

The research was funded by Programa Operativo FEDER Andalucía 2014-2020, grant number "La
reputación de las organizaciones en una sociedad digital. Elaboración de una Plataforma Inteligente para la
Localización, Identificación y Clasificación de Influenciadores en los Medios Sociales Digitales (UMA18-
FEDERJA-148)", and the APC was funded by the same research grant.
Are Black Friday deals worth it? Mining Twitter users' sentiment and behavior response
The Black Friday event has become a global opportunity for marketing and for companies'
strategies aimed at increasing sales. The present study aims to understand consumer behavior
through the analysis of user-generated content (UGC) on social media with respect to the Black Friday
2018 offers published by the 23 largest technology companies in Spain. To this end, we analyzed
Twitter-based UGC about the companies' offers using a three-step text mining process. First, a Latent
Dirichlet Allocation (LDA) model was used to divide the sample into topics related to Black Friday.
In the next step, sentiment analysis (SA) using Python was carried out to determine the feelings
towards the identified topics and offers published by the companies on Twitter. Thirdly and finally,
a text mining process called textual analysis (TA) was performed to identify insights that could
help companies improve their promotion and marketing strategies as well as better understand
customer behavior on social media. The results show that consumers had positive perceptions of
such topics as exclusive promotions (EP) and smartphones (SM); by contrast, topics such as fraud (FA),
insults and noise (IN), and customer support (CS) were negatively perceived by customers. Based on
these results, we offer guidelines to practitioners to improve their social media communication.
Our results also have theoretical implications that can promote further research in this area.
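The abstract does not disclose the study's actual Python pipeline, but the sentiment-analysis (SA) step it describes can be sketched with a minimal lexicon-based classifier. The word lists and tweets below are invented for illustration; they are a simplified stand-in, not the study's method.

```python
# Minimal lexicon-based sketch of the sentiment-analysis (SA) step.
# POSITIVE/NEGATIVE word lists and the sample tweets are invented.

POSITIVE = {"great", "love", "deal", "discount", "amazing", "cheap"}
NEGATIVE = {"fraud", "scam", "fake", "noise", "slow", "rude"}

def tweet_polarity(tweet: str) -> str:
    """Classify a tweet as positive/negative/neutral by lexicon hit counts."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

tweets = [
    "Amazing Black Friday deal on this smartphone, love the discount!",
    "This offer is a scam, total fraud.",
    "Store opens at 9am.",
]
print([tweet_polarity(t) for t in tweets])
# → ['positive', 'negative', 'neutral']
```

A production pipeline would replace the hand-written lexicon with a trained model or an off-the-shelf sentiment tool, but the per-topic aggregation of polarities would work the same way.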
Sentiment analysis of health care tweets: review of the methods used.
BACKGROUND: Twitter is a microblogging service where users can send and read short 140-character messages called "tweets." Many unstructured, free-text tweets relating to health care are shared on Twitter, which is becoming a popular area for health care research. Sentiment is a metric commonly used to investigate the positive or negative opinion within these messages. Exploring the methods used for sentiment analysis in Twitter health care research may allow us to better understand the options available for future research in this growing field.
OBJECTIVE: The first objective of this study was to understand which tools are available for sentiment analysis of Twitter health care research, by reviewing existing studies in this area and the methods they used. The second objective was to determine which method would work best in health care settings, by analyzing how the methods were used to answer specific health care questions, how the tools were produced, and how their accuracy was analyzed.
METHODS: A review of the literature was conducted on Twitter and health care research that used a quantitative method of sentiment analysis for the free-text messages (tweets). The study compared the types of tools used in each case and examined the methods for tool production, tool training, and analysis of accuracy.
RESULTS: A total of 12 papers studying the quantitative measurement of sentiment in the health care setting were found. More than half of these studies produced tools specifically for their research, 4 used freely available open source tools, and 2 used commercially available software. Moreover, 4 of the 12 tools were trained using a smaller sample of the study's final data. On average, the sentiment method was trained against 0.45% (2816/627,024) of the total sample data. Only 1 of the 12 papers commented on the accuracy of the tool used.
CONCLUSIONS: Multiple methods are used for sentiment analysis of tweets in the health care setting. These range from self-produced basic categorizations to more complex and expensive commercial software. The open source and commercial methods were developed on product reviews and generic social media messages, and none of them has been extensively tested against a corpus of health care messages to check its accuracy. This study suggests that there is a need for an accurate and tested tool for sentiment analysis of tweets, first trained on a health care-specific corpus of manually annotated tweets.
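As a rough illustration of how the reviewed studies trained tools on small hand-annotated samples of their own data, the sketch below fits a naive Bayes classifier to a tiny invented set of labeled health-related tweets. The training data, labels, and model choice are all hypothetical stand-ins; no reviewed study's actual method is reproduced here.

```python
# Hedged sketch: training a sentiment tool on a small manually annotated
# sample of domain tweets. The labeled examples are invented; real studies
# annotated a fraction (~0.45% on average) of their full datasets.
import math
from collections import Counter

train = [
    ("feeling much better after the new treatment", "positive"),
    ("great support from the clinic staff", "positive"),
    ("side effects are awful terrible week", "negative"),
    ("waiting times at the hospital are awful", "negative"),
]

def tokenize(text):
    return text.lower().split()

# Per-class word counts, later smoothed with add-one (Laplace) counts.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    counts[label].update(tokenize(text))
vocab = set(counts["positive"]) | set(counts["negative"])

def classify(text):
    """Return the class with the highest smoothed log-likelihood."""
    scores = {}
    for label, wc in counts.items():
        total = sum(wc.values()) + len(vocab)
        scores[label] = sum(
            math.log((wc[w] + 1) / total) for w in tokenize(text)
        )
    return max(scores, key=scores.get)

print(classify("the staff were great"))       # words seen in positive tweets
print(classify("awful side effects again"))   # words seen in negative tweets
```

The review's point is that such tools need evaluation against a held-out, manually annotated health care corpus before their accuracy claims can be trusted.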
A study on text-score disagreement in online reviews
In this paper, we focus on online reviews and employ artificial intelligence
tools, taken from the cognitive computing field, to help understand the
relationships between the textual part of a review and the assigned numerical
score. We start from two intuitions: 1) a set of textual reviews expressing
different sentiments may feature the same score (and vice versa); and 2)
detecting and analyzing the mismatches between the review content and the
actual score may benefit both service providers and consumers, by highlighting
specific factors of satisfaction (and dissatisfaction) in the texts.
To test these intuitions, we adopt sentiment analysis techniques and
concentrate on hotel reviews, to find polarity mismatches therein. In
particular, we first train a text classifier on a set of annotated hotel
reviews taken from the Booking website. Then, we analyze a large dataset of
around 160k hotel reviews collected from Tripadvisor, with the aim of detecting
polarity mismatches, i.e., whether the textual content of a review is in
line with its associated score.
Using well-established artificial intelligence techniques and analyzing in
depth the reviews featuring a mismatch between text polarity and score,
we find that, on a five-star scale, reviews with middle scores
include a mixture of positive and negative aspects.
The approach proposed here, besides acting as a polarity detector, provides an
effective selection of reviews from an initially very large dataset, which may
allow both consumers and providers to focus directly on the review subset
featuring a text/score disagreement, conveniently conveying to the user a
summary of the positive and negative features of the review target.

Comment: This is the accepted version of the paper. The final version will be
published in the Journal of Cognitive Computation, available at Springer via
http://dx.doi.org/10.1007/s12559-017-9496-
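The mismatch-detection step described in this abstract can be sketched as follows. The lexicon-based polarity function is a hypothetical stub standing in for the paper's classifier trained on annotated Booking reviews, and the sample reviews are invented.

```python
# Sketch of text/score polarity-mismatch detection on a 1-5 star scale.
# text_polarity() is a toy stand-in for a trained text classifier;
# the lexicons and reviews below are invented for illustration.

POSITIVE = {"clean", "friendly", "comfortable", "excellent"}
NEGATIVE = {"dirty", "noisy", "rude", "broken"}

def text_polarity(text: str) -> str:
    """Toy polarity classifier based on lexicon hit counts."""
    words = set(text.lower().replace(",", " ").split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

def mismatch(text: str, stars: int) -> bool:
    """Flag a review whose text polarity disagrees with its star score."""
    polarity = text_polarity(text)
    if stars >= 4 and polarity == "negative":
        return True
    if stars <= 2 and polarity == "positive":
        return True
    return False

reviews = [
    ("Room was dirty and the staff were rude", 5),  # text/score disagreement
    ("Clean, comfortable, excellent location", 5),  # text and score agree
]
print([mismatch(t, s) for t, s in reviews])
# → [True, False]
```

Applied over a large dataset, such a filter yields the review subset with text/score disagreement that the paper highlights for closer inspection by consumers and providers.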