
    The effects of online customer reviews and online customer ratings on purchasing intentions in West Java marketplaces

    Technology is developing rapidly, and the internet is among the fastest-growing technologies. Advances in the internet have changed consumer lifestyles: consumers are increasingly drawn to online shopping. Online shopping differs significantly from offline shopping, however, because consumers cannot directly inspect the product before purchase, and this difference poses a risk for prospective buyers. To learn about a product before buying it, prospective consumers therefore rely on online customer reviews and online customer ratings in the marketplace. This study aims to determine whether online customer reviews and online customer ratings affect purchase intention in West Java marketplaces, either partially or simultaneously. The sample of 140 respondents was selected by non-probability sampling, and the data were analyzed with multiple regression. The results show that online customer reviews and online customer ratings together explain 56.8% of purchase intention.
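
    For illustration, a minimal sketch of the kind of multiple regression analysis described above, assuming a hypothetical survey file and column names (the study's own variables and data are not public here):

```python
# Minimal sketch: purchase intention regressed on review and rating scores.
# File name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical survey data

X = sm.add_constant(df[["online_customer_reviews", "online_customer_ratings"]])
y = df["purchase_intention"]

model = sm.OLS(y, X).fit()
print(model.summary())    # t-tests give the partial effects
print(model.rsquared)     # joint explanatory power, ~0.568 in the study
print(model.f_pvalue)     # F-test for the simultaneous effect
```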

    An Empirical Examination of Factors Influencing the Intention to Use Physician Rating Websites

    Physician rating websites (PRWs) are social media platforms that enable patients to submit ratings and reviews of physicians. Although numerous PRWs are available on the Internet and millions of physician reviews are posted on them, many people still do not use them when making clinical decisions. This study seeks to understand what factors affect the intention to use PRWs. A sample of 109 students was employed; each subject was randomly assigned to RateMDs, Vitals, or Brigham and Women's Hospital's website, asked to choose a primary care doctor based on the reviews posted on the assigned website, and then surveyed. Regression analysis revealed that the perceived credibility of reviewers and general use of online reviews influenced the intention to use PRWs, whereas the perceived integrity of website providers only moderated the relation between perceived credibility of reviewers and intention to use PRWs.
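
    A moderation effect like the one reported is typically tested with an interaction term in a regression. The sketch below assumes hypothetical variable names for the survey measures:

```python
# Hedged sketch of a moderation test: the credibility x integrity
# interaction predicts intention to use PRWs. Variable and file names
# are assumptions, not the study's instrument.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prw_survey.csv")  # hypothetical survey data

# "credibility * integrity" expands to both main effects plus their
# interaction; a significant interaction coefficient indicates moderation.
model = smf.ols(
    "intention ~ credibility * integrity + general_review_use",
    data=df,
).fit()
print(model.summary())
```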

    Don’t Lie To Me: Integrating Client-Side Web Scraping And Review Behavior Analysis To Detect Fake Reviews

    User reviews are widespread across the Internet as an indicator of product quality. However, review systems are vulnerable to attack: malicious parties can manipulate item ratings by soliciting fake reviews in exchange for small payments, and sellers can use these paid reviews to hurt competitors or promote their own products, artificially lowering or raising ratings. Building on previous work that uses crowdsourcing-website postings to find fake reviews, we have a trained model that detects fraudulent reviews from the time and rating features of a product's reviews (Kaghazgaran et al., "TOMCAT", ICWSM'19); that work also provides a web-based demo for validating the reliability of a product's reviews. We encapsulate this model in a browser application that, when activated on an Amazon product page, crawls the reviews associated with that product and issues a review manipulation score to the user. We also store the crawled reviews, with the intention of building a dataset of reviews over time that can support further study of review manipulation and of ways to improve review systems. Finally, we analyzed the behavioral features of reviews using the dataset provided by the previous work (Kaghazgaran et al., "TOMCAT", ICWSM'19), which contains a set of random products as well as reviews from products known to be targets of manipulation. This analysis uncovered additional signals of review manipulation: the average number of helpfulness votes a product's reviews receive, the average title length of a product's reviews, and the average length of a product's reviews. Of these, the average review length was the feature most correlated with review manipulation in our dataset. We expect that future work adding these features will increase the overall effectiveness of the detection model.
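
    A rough sketch of the behavioral-feature analysis described above, assuming hypothetical field names for the crawled reviews and for the known-manipulation label:

```python
# Sketch: per-product averages of helpfulness votes, title length, and
# review length, then their correlation with a manipulation label.
# File and column names are assumptions, not the paper's schema.
import pandas as pd

reviews = pd.read_json("crawled_reviews.json")  # hypothetical crawled data

features = reviews.assign(
    title_len=reviews["title"].str.len(),
    body_len=reviews["body"].str.len(),
).groupby("product_id").agg(
    avg_helpful_votes=("helpful_votes", "mean"),
    avg_title_len=("title_len", "mean"),
    avg_body_len=("body_len", "mean"),
    manipulated=("manipulated", "first"),  # 1 if a known manipulation target
)

# The abstract reports average review length as the most correlated feature.
print(features.corr(numeric_only=True)["manipulated"].sort_values())
```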

    The impact of utilitarian product reviews on brand perception

    The impact of online reviews on consumer behavior has been increasingly studied as online retail platforms have grown exponentially and researching products online before purchase has become more common. However, limited research has examined the impact of those product reviews on the overall perception of the brands selling the products. This study looked exclusively at product reviews for high- and low-involvement utilitarian products and analyzed how those reviews affect consumers' perception of a brand. In a sample of 301 participants, findings showed that star ratings had a drastic effect on consumers' perception of a brand: a low star rating was associated with poor brand perception and vice versa. The research also found that low-involvement utilitarian products were highly affected by star ratings, especially concerning purchases of future products from that brand. These findings suggest that for products associated with a low-involvement thought process, consumers are willing to purchase different products from a brand purely on the basis of a high-rated star review; for products associated with a higher-involvement thought process, consumers will conduct more research before deciding to purchase other products from that brand. Additionally, the findings underscore the importance of a brand building its image and following, as they show how a single visible review can deter consumers from buying not only a specific product but any other products from that brand.

    Good advice is rarer than rubies: A study on online Tripadvisor hotel reviews

    User-generated content websites, such as review sites and travel communities, have become a major source of information for travelers with the advent of Web 2.0. A recent study [1] showed that more than 40% of travelers use the reviews and comments of other consumers as information sources when planning trips. While many studies have investigated the use and influence of online reviews on consumers, less is known about what motivates travelers to write online reviews. According to the results of Yoo & Gretzel's survey [2], based on a panel of TripAdvisor reviewers, the motivation to write online travel reviews is accounted for by four dimensions: enjoyment/positive self-enhancement, venting negative feelings, concern for other consumers, and helping the company. Consequently, the motivation to review should be high after extremely good outcomes (due to enjoyment in sharing a good experience and helping the company that provided a good travel service) and after extremely bad experiences (owing to engagement in negative word-of-mouth to warn others; see also [3]), and low after intermediate/neutral experiences. In this study we adopted a data-driven approach to test the empirical robustness of this expectation. Specifically, we investigated the motivation to write online travel reviews by analyzing a publicly available worldwide dataset of 246,399 user-generated hotel reviews posted on TripAdvisor [4]. Following the expectation arising from the four-dimension model of motivation proposed by [2], if reviewers are highly motivated to write after extremely good and extremely bad experiences, the distribution of overall hotel ratings (ranging from 1 "bubble", labeled "terrible", to 5 "bubbles", labeled "excellent") should be U-shaped, with central ratings less represented than the extreme values (1 and 5 "bubbles"). The empirical distribution of ratings showed instead that the most frequent ratings were 5 and 4 "bubbles" (accounting for 75% of all ratings), reflecting the fact that the majority of reviewers judged their experience on a monotonic continuum from very good to excellent. The same pattern emerged in the sub-ratings (business service 63%, cleanliness 81%, front desk 75%, location 84%, rooms 73%, service 75%, and value 73%). The monotonic pattern of responses revealed by our study suggests that engaging in negative word-of-mouth to vent negative feelings and warn others may not be an important motivation for writing online reviews. We speculate that the monotonicity could result from a positivity bias in remembering and evaluating hedonic experiences [5]. Acknowledgment: This research was supported by the University of Trieste FRA 2013 grant to CF.
    1. Xiang Z, Wang D, O'Leary JT, Fesenmaier DR. (2015). Adapting to the internet: Trends in travelers' use of the web for trip planning. Journal of Travel Research, 54: 511-527.
    2. Yoo KH, Gretzel U. (2008). What motivates consumers to write online travel reviews? Information Technology & Tourism, 10: 283-295.
    3. Wetzer IM, Zeelenberg M, Pieters R. (2007). "Never eat in that restaurant, I did!": Exploring why people engage in negative word-of-mouth communication. Psychology & Marketing, 24: 661-680.
    4. Wang H, Lu Y, Zhai C. (2011). Latent aspect rating analysis without aspect keyword supervision. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 618-626).
    5. Wirtz D, Kruger J, Scollon CN, Diener E. (2003). What to do on spring break? The role of predicted, on-line, and remembered experience in future choice. Psychological Science, 14: 520-524.
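
    The U-shape check described above amounts to a simple frequency count. The sketch below assumes a hypothetical flat file of overall ratings standing in for the dataset of [4]:

```python
# Sketch: if extreme experiences drive reviewing, ratings should be
# U-shaped (1 and 5 dominate); a rise toward 5 "bubbles" argues otherwise.
# The input file is a hypothetical stand-in, one integer rating per line.
from collections import Counter

with open("hotel_ratings.txt") as f:
    counts = Counter(int(line) for line in f)

total = sum(counts.values())
for bubbles in range(1, 6):
    print(f"{bubbles} bubbles: {counts[bubbles] / total:.1%}")

extremes = (counts[1] + counts[5]) / total   # large under a U-shape
top_two = (counts[4] + counts[5]) / total    # ~75% in the study
print(f"extremes share: {extremes:.1%} | 4-5 bubble share: {top_two:.1%}")
```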

    The Market for Fake Reviews

    We study the market for fake product reviews on Amazon.com. These reviews are purchased in large private internet groups on Facebook and other sites. We hand-collect data on these markets to characterize the types of products that buy fake reviews, and we then collect large amounts of data on the ratings and reviews posted on Amazon for these products, as well as their sales rank, advertising, and pricing behavior. We use these data to assess the costs and benefits of fake reviews to sellers and to evaluate the degree to which they harm consumers. The theoretical literature on review fraud shows that there are conditions under which fake reviews harm consumers and other conditions where they function as simply another form of advertising. Using detailed data on product outcomes before and after sellers buy fake reviews, we can directly determine whether these are low-quality products using fake reviews to deceive and harm consumers or possibly high-quality products soliciting reviews to establish reputations. We find that a wide array of products purchase fake reviews, including products with many reviews and high average ratings. Soliciting fake reviews on Facebook leads to a significant increase in average rating and sales rank, but the effect disappears after roughly one month. After firms stop buying fake reviews, their average ratings fall significantly and the share of one-star reviews increases significantly, indicating that fake reviews are mostly used by low-quality products and are deceiving and harming consumers. We also observe that Amazon deletes large numbers of reviews, and we document its deletion policy.
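
    A before/after comparison of this kind might be sketched as follows, assuming a hypothetical product-week panel with the event week recorded per product (not the authors' actual dataset or specification):

```python
# Sketch: compare average rating and one-star share in the weeks before
# vs. after a seller stops buying fake reviews. Panel layout is assumed.
import pandas as pd

panel = pd.read_csv("amazon_panel.csv")  # hypothetical product-week panel

def window_mean(g, event_week, col, weeks=4):
    """Mean of `col` in the `weeks` weeks before vs. after `event_week`."""
    before = g[g["week"].between(event_week - weeks, event_week - 1)][col].mean()
    after = g[g["week"].between(event_week, event_week + weeks - 1)][col].mean()
    return before, after

for pid, g in panel.groupby("product_id"):
    stop = g["fake_review_stop_week"].iloc[0]
    b, a = window_mean(g, stop, "avg_rating")
    sb, sa = window_mean(g, stop, "one_star_share")
    # The paper finds ratings fall and one-star share rises after buying stops.
    print(pid, f"rating {b:.2f}->{a:.2f}", f"1-star {sb:.1%}->{sa:.1%}")
```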

    The Fundamentals And Fun Of Electronic Teamwork For Students And Their Instructors

    This paper reviews and integrates best practices for online teamwork for students and instructors from current and classical literature as well as the author's six years of online teaching experience (over 40 online courses). The study draws on a qualitative reflection of six graduate and six undergraduate courses in management, human resource management, and organizational development that used student teams via the internet. An updated model of Tuckman's (1965) team development process is offered, along with additional reflection on the use of confidential student peer ratings. Samples of student feedback on the team experience in these courses are summarized, together with lessons learned for the instructor and the student.

    Two Essays on Consumer-Generated Reviews: Reviewer Expertise and Mobile Reviews

    Over the past few decades, the internet has risen to prominence, enabling consumers not only to quickly access large amounts of information but also to openly share content (e.g., blogs, videos, reviews) with a substantially large number of fellow consumers. Given the vast presence of consumers in the online space, it has become increasingly critical for marketers to better understand the way consumers share, and learn from, consumer-generated content, a research area known as electronic word-of-mouth. In this dissertation, I advance our understanding of the shared content consumers generate on online review platforms. In Essay 1, I study why and how reviewers' expertise in generating reviews systematically shapes their rating evaluations, and the downstream consequences this has on the aggregate valence metric. I theorize, and provide empirical evidence, that greater expertise in generating reviews leads to greater restraint from extremes in evaluations, driven by the number of attributes reviewers consider. Further, I demonstrate two major consequences of this restraint-of-expertise effect: (i) expert (vs. novice) reviewers have less impact on the aggregate valence metric, which is known to affect page rank and consumer consideration; and (ii) experts systematically benefit and harm service providers with their ratings: for service providers that generally deliver mediocre (excellent) experiences, experts assign significantly higher (lower) ratings than novices. Building on my investigation of expert reviewers, in Essay 2 I investigate the differential effects of generating reviews on mobile devices for expert and novice reviewers. I argue, based on schema theory, that expert and novice reviewers adopt different "strategies" in generating mobile reviews. Because of their review-writing experience, experts develop a review-writing schema and, compared to novices, place greater emphasis on the consistency of various review aspects, including emotionality of language and attribute coverage, in their mobile reviews. Accordingly, although mobile (vs. desktop) reviews are shorter for both experts and novices, I show that experts (novices) generate mobile reviews that contain a slight (large) increase in emotional language and are more (less) attribute dense. Drawing on these findings, I advance managerial strategies for review platforms and service providers and provide avenues for future research.
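
    The restraint-from-extremes effect in Essay 1 could be checked descriptively as in the sketch below; the expertise cutoff and column names are assumptions, not the dissertation's measures:

```python
# Sketch: do expert reviewers use extreme ratings (1 or 5 stars) less
# often than novices? The 50-review expertise cutoff is an assumption.
import pandas as pd

reviews = pd.read_csv("platform_reviews.csv")  # hypothetical review data

reviews["expert"] = reviews["reviewer_review_count"] >= 50
reviews["extreme"] = reviews["rating"].isin([1, 5])

print(reviews.groupby("expert")["extreme"].mean())  # extremity rate by group
print(reviews.groupby("expert")["rating"].std())    # spread of evaluations
```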

    Examining employer-brand benefits through online employer reviews

    Social media is rising in popularity as a credible source of information for consumers worldwide. Access to online product reviews appears limitless, and consumer voices now influence purchasing behavior far beyond the reach of traditional marketing campaigns. Joining the internet influencers is a relatively new platform for sharing opinions: employer-review websites. Comments from current and former staff on employer-review sites such as Glassdoor and Indeed offer a glimpse into company culture and the employer brand (Ambler & Barrow, 1996). This qualitative, phenomenological study explored the lived experiences of hotel/casino resort employees through an examination of employer reviews posted on the Glassdoor and Indeed pages of four Las Vegas gaming corporations. A thematic analysis of 1,063 employer reviews was conducted to identify the trio of employer-brand benefits (i.e., functional, economic, and psychological) drawn from Ambler and Barrow's (1996) employer-brand equity theory. Themes related to social identity theory (Tajfel, 1974), signaling theory (Spence, 1973), and the instrumental-symbolic framework (e.g., Lievens & Highhouse, 2003) were also examined. Two questions guided the research: (1) Which employer-brand benefits, if any, cited in the employer reviews of hotel/casino resorts are most frequently associated with positive and negative employee sentiment? (2) What is the relationship between employer-brand benefits (i.e., functional, psychological, and economic) and the overall employee rating given by the reviewer? The results revealed that all three of Ambler and Barrow's (1996) employer-brand benefits appeared in the employer reviews as both positive and negative attributes of employment, with psychological and economic benefits most frequently referenced. Specific to employment in the Las Vegas hotel/casino resort industry, reviewers who gave high employer ratings were quite positive about economic benefits (i.e., salary and wages, unspecified benefits, and the free meal in the EDR) and psychological benefits (i.e., co-worker interactions and company atmosphere), while reviewers who gave their employer low ratings were disappointed with their position's economic (i.e., salary and wages), psychological (i.e., management behaviors, work schedule, and company atmosphere), and functional (i.e., promotional opportunities) benefits. The findings have implications for both marketing and HR practitioners, and this study contributes to the growing body of employer-branding literature.
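
    A keyword-tagging pass in the spirit of this thematic analysis might look like the sketch below; the keyword lists and input file are illustrative assumptions, not the study's codebook:

```python
# Sketch: tag reviews with Ambler and Barrow's (1996) three employer-brand
# benefits via keyword matching, then relate each tag to the overall rating.
# Keyword lists, file, and column names are hypothetical.
import pandas as pd

BENEFIT_KEYWORDS = {
    "economic": ["salary", "wages", "pay", "benefits", "free meal"],
    "psychological": ["management", "atmosphere", "co-worker", "schedule"],
    "functional": ["promotion", "training", "career", "advancement"],
}

reviews = pd.read_csv("glassdoor_indeed_reviews.csv")  # hypothetical export

for benefit, words in BENEFIT_KEYWORDS.items():
    hit = reviews["text"].str.lower().str.contains("|".join(words))
    # Mean overall rating for reviews that do vs. do not cite this benefit.
    print(benefit, reviews.groupby(hit)["overall_rating"].mean().to_dict())
```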