
    A study on text-score disagreement in online reviews

    In this paper, we focus on online reviews and employ artificial intelligence tools, taken from the cognitive computing field, to help understand the relationship between the textual part of a review and the assigned numerical score. We start from two intuitions: 1) a set of textual reviews expressing different sentiments may feature the same score (and vice versa); and 2) detecting and analyzing the mismatches between the review content and the actual score may benefit both service providers and consumers, by highlighting specific factors of satisfaction (and dissatisfaction) in the texts. To test these intuitions, we adopt sentiment analysis techniques and concentrate on hotel reviews, to find polarity mismatches therein. In particular, we first train a text classifier with a set of annotated hotel reviews taken from the Booking website. Then, we analyze a large dataset of around 160k hotel reviews collected from Tripadvisor, with the aim of detecting a polarity mismatch, i.e., whether the textual content of a review is in line, or not, with the associated score. Using well-established artificial intelligence techniques and analyzing in depth the reviews featuring a mismatch between the text polarity and the score, we find that, on a scale of five stars, reviews ranked with middle scores include a mixture of positive and negative aspects. The approach proposed here, besides acting as a polarity detector, provides an effective selection of reviews, from an initially very large dataset, that may allow both consumers and providers to focus directly on the review subset featuring a text/score disagreement, which conveniently conveys to the user a summary of the positive and negative features of the review target.
    Comment: This is the accepted version of the paper. The final version will be published in the Journal of Cognitive Computation, available at Springer via http://dx.doi.org/10.1007/s12559-017-9496-
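
    As a rough illustration of the polarity-mismatch idea, the sketch below trains a small text classifier and flags reviews whose predicted polarity disagrees with their star score. The TF-IDF plus logistic regression pipeline, the toy training sentences and the 3-star threshold are assumptions made for illustration; they are not the paper's actual classifier or data.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical annotated training reviews with a binary polarity label.
        train_texts = ["Spotless room and friendly staff", "Dirty bathroom and rude reception"]
        train_labels = [1, 0]  # 1 = positive, 0 = negative

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(train_texts, train_labels)

        def polarity_mismatch(text, score, threshold=3):
            """Return True when the predicted text polarity disagrees with the 1-5 star score."""
            predicted_positive = clf.predict([text])[0] == 1
            score_positive = score > threshold
            return predicted_positive != score_positive

        # A glowing text attached to a 2-star score would be flagged for closer inspection.
        print(polarity_mismatch("Wonderful stay and a great breakfast", 2))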

    Measuring patient-perceived quality of care in US hospitals using Twitter

    BACKGROUND: Patients routinely use Twitter to share feedback about their experience receiving healthcare. Identifying and analysing the content of posts sent to hospitals may provide a novel real-time measure of quality, supplementing traditional, survey-based approaches. OBJECTIVE: To assess the use of Twitter as a supplemental data stream for measuring patient-perceived quality of care in US hospitals and compare patient sentiments about hospitals with established quality measures. DESIGN: 404 065 tweets directed to 2349 US hospitals over a 1-year period were classified as having to do with patient experience using a machine learning approach. Sentiment was calculated for these tweets using natural language processing. 11 602 tweets were manually categorised into patient experience topics. Finally, hospitals with ≥50 patient experience tweets were surveyed to understand how they use Twitter to interact with patients. KEY RESULTS: Roughly half of the hospitals in the US have a presence on Twitter. Of the tweets directed toward these hospitals, 34 725 (9.4%) were related to patient experience and covered diverse topics. Analyses limited to hospitals with ≥50 patient experience tweets revealed that they were more active on Twitter, more likely to be below the national median of Medicare patients (p<0.001) and above the national median for nurse/patient ratio (p=0.006), and to be a non-profit hospital (p<0.001). After adjusting for hospital characteristics, we found that Twitter sentiment was not associated with Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) ratings (but having a Twitter account was), although there was a weak association with 30-day hospital readmission rates (p=0.003). CONCLUSIONS: Tweets describing patient experiences in hospitals cover a wide range of patient care aspects and can be identified using automated approaches. These tweets represent a potentially untapped indicator of quality and may be valuable to patients, researchers, policy makers and hospital administrators
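
    A minimal sketch of the kind of pipeline the abstract describes: score the sentiment of patient-experience tweets, average it per hospital and correlate the result with a survey-based quality measure. The VADER scorer, the toy tweets and the invented HCAHPS-style numbers are assumptions for illustration; the study used its own machine-learning classifier and sentiment model.

        import nltk
        from nltk.sentiment.vader import SentimentIntensityAnalyzer
        from scipy.stats import spearmanr
        from collections import defaultdict

        nltk.download("vader_lexicon", quiet=True)
        sia = SentimentIntensityAnalyzer()

        # Hypothetical patient-experience tweets already matched to their hospitals.
        tweets = [("hospital_a", "The nurses were attentive and kind"),
                  ("hospital_a", "Waited four hours in the ER, nobody updated us"),
                  ("hospital_b", "Billing was a nightmare"),
                  ("hospital_c", "Clean rooms and a caring discharge team")]

        scores = defaultdict(list)
        for hospital, text in tweets:
            scores[hospital].append(sia.polarity_scores(text)["compound"])
        mean_sentiment = {h: sum(v) / len(v) for h, v in scores.items()}

        # Invented HCAHPS-style ratings for the same hospitals (illustration only).
        hcahps = {"hospital_a": 3.8, "hospital_b": 2.9, "hospital_c": 4.2}
        hospitals = sorted(mean_sentiment)
        rho, p = spearmanr([mean_sentiment[h] for h in hospitals],
                           [hcahps[h] for h in hospitals])
        print(rho, p)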

    Second Screen User Profiling and Multi-level Smart Recommendations in the context of Social TVs

    In the context of Social TV, the increasing number of first- and second-screen users who interact and post content online creates new business opportunities and related technical challenges around enriching the user experience in such environments. The SAM (Socializing Around Media) project uses a Social Media-connected infrastructure to address these challenges, providing intelligent user context management models and mechanisms that capture social patterns, and applying collaborative filtering techniques and personalized recommendations to this end. This paper presents the Context Management mechanism of SAM, running in a Social TV environment to provide smart recommendations for first and second screen content. The work presented is evaluated using a real movie rating dataset found online, to validate SAM's approach in terms of effectiveness as well as efficiency.
    Comment: In: Wu TT., Gennari R., Huang YM., Xie H., Cao Y. (eds) Emerging Technologies for Education. SETE 201
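
    The collaborative-filtering step could look roughly like the item-based sketch below, which predicts a missing rating from cosine similarities between columns of a toy user-item matrix. The similarity measure and the weighted-average prediction are generic textbook choices, not SAM's actual context-management models.

        import numpy as np

        # Hypothetical user x item rating matrix (0 = unrated).
        R = np.array([[5, 4, 0, 1],
                      [4, 0, 4, 1],
                      [1, 1, 5, 0],
                      [0, 1, 5, 4]], dtype=float)

        def cosine_sim(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return a @ b / denom if denom else 0.0

        def predict(user, item):
            """Predict a rating as a similarity-weighted average over the user's rated items."""
            sims, ratings = [], []
            for j in range(R.shape[1]):
                if j != item and R[user, j] > 0:
                    sims.append(cosine_sim(R[:, item], R[:, j]))
                    ratings.append(R[user, j])
            return np.dot(sims, ratings) / sum(sims) if sims else 0.0

        print(predict(user=0, item=2))  # recommend item 2 if the predicted rating is high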

    Feature extraction and classification of movie reviews


    Hotel online reviews: creating a multi-source aggregated index

    Purpose: This paper aims to develop a model to predict online review ratings from multiple sources, which can be used to detect fraudulent reviews, to create proprietary rating indexes, or as a measure of selection in recommender systems.
    Design/methodology/approach: This study applies machine learning and natural language processing approaches to combine features derived from the qualitative component of a review with the corresponding quantitative component and, therefore, generate a richer review rating.
    Findings: Experiments were performed over a collection of hotel online reviews written in English, Spanish and Portuguese. The results show a significant improvement over those previously reported, which not only demonstrates the scientific value of the approach but also strengthens the value of review prediction applications in the business environment.
    Originality/value: This study shows the importance of building predictive models for revenue management and the application of the index generated by the model. It also demonstrates that, although difficult and challenging, it is possible to achieve valuable results in the application of text analysis across multiple languages.
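
    A minimal sketch of the qualitative-plus-quantitative combination, assuming TF-IDF text features concatenated with a single numeric sub-score and fed to a ridge regressor; the column names, the "cleanliness" sub-score and the model choice are illustrative assumptions, not the paper's feature set.

        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import Pipeline

        # Hypothetical multilingual hotel reviews with a numeric sub-score alongside the text.
        df = pd.DataFrame({
            "text": ["Great location, tiny room", "Habitacion limpia y tranquila", "Quarto barulhento"],
            "cleanliness": [4.0, 5.0, 2.0],
            "rating": [3.5, 4.8, 2.1],
        })

        features = ColumnTransformer([
            ("text", TfidfVectorizer(), "text"),        # qualitative component
            ("num", "passthrough", ["cleanliness"]),    # quantitative component
        ])
        model = Pipeline([("features", features), ("reg", Ridge())])
        model.fit(df[["text", "cleanliness"]], df["rating"])
        print(model.predict(df[["text", "cleanliness"]]))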

    Word of Mouth, the Importance of Reviews and Ratings in Tourism Marketing

    The Internet and social media have given rise to what is commonly known as the democratization of content, and this phenomenon is changing the way consumers and companies interact. Business strategies are shifting from influencing consumers directly to induce sales, toward mediating the influence that Internet users have on each other. A consumer review is “a mixture of fact and opinion, impression and sentiment, found and unfound tidbits, experiences, and even rumor” (Blackshaw & Nazarro, 2006). Consumers' comments are seen as honest and transparent, but it is their subjective perception that shapes the behavior of other potential consumers. With the emergence of the Internet, tourists search for information and reviews of destinations, hotels or services. Several studies have highlighted the great influence of online reputation, built through reviews and ratings, and how it affects the purchasing decisions of others (Schuckert, Liu, & Law, 2015). These reviews are seen as unbiased and trustworthy, and are considered to reduce uncertainty and perceived risks (Gretzel & Yoo, 2008; Park & Nicolau, 2015). Before choosing a destination, tourists are likely to spend a significant amount of time searching for information, including reviews posted by other tourists on the Internet. The average traveler browses 38 websites prior to purchasing vacation packages (Schaal, 2013), which may include tourism forums, online reviews on booking sites and other generic social media websites such as Facebook and Twitter.

    How Do Consumers Evaluate the Identical Product on Competing Online Retailers? A Big Data Analysis Approach Using Consumer Reviews

    For big data analysis practice, this study collected both types of consumer review data, a structured form (i.e., review ratings) and an unstructured form (i.e., review text), on a fashion item from two different online retailers. Using the collected data, this study aims to identify 1) consumers' evaluation criteria for a fashion product, 2) positive or negative sentiment toward the product, and 3) the impact of these identified variables on consumers' ratings. Amazon.com and Macys.com were selected as the online retailers for comparison. The results identified six evaluation criteria from consumer reviews, such as authenticity and inside design, and revealed that Macy's online consumers are, in general, more satisfied with the fashion product and have more specific evaluation criteria satisfying themselves, compared to Amazon.com's consumers. The results suggest that the product and service attributes influencing consumers' satisfaction and evaluation differ across online retailers and their consumers, even for the same product.
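
    As a toy version of the criterion analysis, the sketch below tags each review with hand-picked keyword lists for two of the criteria and fits a linear model of the rating on those tags. The keyword lists, example reviews and linear regression are assumptions made for illustration; the study derived its criteria and sentiment from the review data itself.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        criteria = {"authenticity": ["authentic", "genuine", "fake"],
                    "inside_design": ["lining", "pocket", "interior"]}

        # Hypothetical (review text, rating) pairs from one retailer.
        reviews = [("Looks genuine, nice interior pockets", 5),
                   ("Fake leather, fell apart", 1),
                   ("Pretty but the lining ripped", 3)]

        X = np.array([[any(k in text.lower() for k in words) for words in criteria.values()]
                      for text, _ in reviews], dtype=float)
        y = np.array([rating for _, rating in reviews], dtype=float)

        model = LinearRegression().fit(X, y)
        for name, coef in zip(criteria, model.coef_):
            print(name, round(coef, 2))  # sign hints whether mentioning the criterion tracks higher ratings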

    Assessing gender inequality from large scale online student reviews

    Career growth in academia is often dependent on student reviews of university professors. A growing concern is how evaluation of teaching has been affected by gender biases throughout the reviewing process. However, pinpointing the exact causes and consequential effects of this form of gender inequality has been a hard task. Current work focuses on university-wide student reviewing systems that depend on objective responses on a Likert scale to measure various aspects of an instructor's quality. Through our work, we access online student review data that are not limited by geographies, universities, or disciplines. We then develop a systematic approach to assess the various ways in which gender inequality is apparent in the student reviews. We also suggest a possible way in which bias related to the gender of a professor could be detected from both objective numerical measures and subjective opinions in reviews. Finally, we assess a logistic regression learning algorithm to find the most important factors that can help in identifying gender inequality.
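
    A minimal sketch of the final step, assuming a logistic regression that separates the two gender groups from review-derived features and whose coefficients point to the most discriminative factors. The features and numbers are invented for illustration; the paper constructs its own objective and subjective measures.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        # Hypothetical per-review features: [numeric rating, appearance mentions, competence mentions]
        X = np.array([[4.5, 0, 2], [3.0, 1, 0], [4.8, 0, 3],
                      [2.5, 2, 0], [4.0, 1, 1], [3.2, 2, 0]])
        y = np.array([0, 1, 0, 1, 0, 1])  # 0 / 1 encode the two gender groups

        X_scaled = StandardScaler().fit_transform(X)
        clf = LogisticRegression().fit(X_scaled, y)

        for name, coef in zip(["rating", "appearance_mentions", "competence_mentions"], clf.coef_[0]):
            print(name, round(coef, 2))  # larger |coef| = feature more useful for separating the groups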

    Numeric Forced Rank: A Lightweight Method for Comparison and Decision-making

    Comparing products, features, brands, or ideas relative to one another is a common goal in user experience (UX) and market research. While Likert-type scales and ordinal stack ranks are often employed as prioritization methods, they are subject to several psychometric shortcomings. We introduce the numeric forced rank, a lightweight approach that overcomes some of the limitations of standard methods and allows researchers to collect absolute ratings, relative preferences, and subjective comments using a single scale. The approach is optimal for UX and market research, but is also easily employed as a structured decision-making exercise outside of consumer research. We describe how the numeric forced rank was used to determine the name of a new Google Cloud Platform (GCP) feature, present the findings, and make recommendations for future research
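
    One way a numeric forced rank response could be captured and checked programmatically is sketched below: a single numeric scale with ties disallowed, yielding absolute ratings and a relative ordering from the same answer. The 1-10 scale, the no-ties rule and the candidate names are assumptions about how the exercise might be run, not details from the paper.

        def forced_rank(ratings, scale=(1, 10)):
            """ratings: dict of item -> numeric score on a single scale, ties disallowed."""
            lo, hi = scale
            values = list(ratings.values())
            if len(set(values)) != len(values):
                raise ValueError("Forced rank requires a strict ordering: no tied scores.")
            if not all(lo <= v <= hi for v in values):
                raise ValueError(f"Scores must stay within {lo}-{hi}.")
            order = sorted(ratings, key=ratings.get, reverse=True)
            return {"absolute": ratings, "rank_order": order}

        # One participant rating four candidate feature names on the same 1-10 scale.
        print(forced_rank({"Name A": 8, "Name B": 3, "Name C": 9, "Name D": 5}))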