3,591 research outputs found
A Location-Sentiment-Aware Recommender System for Both Home-Town and Out-of-Town Users
Spatial item recommendation has become an important means of helping people
discover interesting locations, especially when they visit unfamiliar regions.
Some current research focuses on modelling individual and collective
geographical preferences for spatial item recommendation based on users'
check-in records, but it fails to explore the phenomenon of user interest
drift across geographical regions, i.e., users show different interests when
they travel to different regions. Moreover, it ignores the influence of public
comments on subsequent users' check-in behaviors: intuitively, users will
refuse to check in to a spatial item whose historical reviews seem negative
overall, even if the item would otherwise fit their interests. Therefore, it
is necessary to recommend the right item to the right user at the right
location. In this paper, we propose a latent probabilistic generative model
called LSARS to mimic the decision-making process of users' check-in
activities in both home-town and out-of-town scenarios by adapting to user
interest drift and crowd sentiments; the model learns location-aware and
sentiment-aware individual interests from the contents of spatial items and
user reviews. Because user activities in out-of-town regions are sparse, LSARS
is further designed to incorporate the public preferences learned from local
users' check-in behaviors. Finally, we deploy LSARS in two practical
application scenarios: spatial item recommendation and target user discovery.
Extensive experiments on two large-scale location-based social network (LBSN)
datasets show that LSARS achieves better performance than existing
state-of-the-art methods.
Comment: Accepted by KDD 201
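As a rough illustration of the out-of-town intuition above, one can picture a score that trusts the local crowd's preferences more when a visitor has little check-in history in a region. The following is a minimal sketch, not the LSARS model itself; the function name, the linear blend, and the smoothing constant `k` are all invented for illustration:

```python
# Toy illustration (not the actual LSARS generative model): blend a user's
# own topic preferences with local crowd preferences, weighting the crowd
# more heavily when the user has few check-ins in the region.
def recommend_score(user_pref, crowd_pref, n_checkins_in_region, k=5.0):
    """Return blended preference scores per topic.

    user_pref / crowd_pref: dicts mapping topic -> probability.
    k: hypothetical smoothing constant controlling how quickly we trust
       the user's own history over the local crowd.
    """
    alpha = n_checkins_in_region / (n_checkins_in_region + k)  # trust in own history
    topics = set(user_pref) | set(crowd_pref)
    return {t: alpha * user_pref.get(t, 0.0) + (1 - alpha) * crowd_pref.get(t, 0.0)
            for t in topics}

# A visitor with no local check-ins is scored almost entirely by crowd tastes.
scores = recommend_score({"nightlife": 0.8, "museums": 0.2},
                         {"nightlife": 0.1, "museums": 0.9},
                         n_checkins_in_region=0)
```

With zero local check-ins the blend collapses to the crowd preference, which mirrors the abstract's point that out-of-town sparsity forces the model to lean on local users' behaviors.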
Identifying Consumer Preferences From User-generated Content On Amazon.com By Leveraging Machine Learning
Inexperienced consumers may have high uncertainty about experience goods that require technical knowledge and skills to operate effectively; therefore, experienced consumers' prior reviews can be useful for inexperienced consumers. However, one-sided review systems (e.g., Amazon) only provide the opportunity for consumers to write a review as a buyer and contain no feedback from the seller's side, so the information displayed about individual buyers is limited. Therefore, this study analyzes consumers' digital footprints (DFs) for programmable thermostats to identify and predict unobserved consumer preferences, using a dataset of 141 million Amazon reviews. This paper proposes novel approaches (1) to identify unobserved consumer characteristics and preferences by analyzing the target consumers' and other prior reviewers' DFs; (2) to extract product-specific product content dimensions (PCDs) from review text data; (3) to predict individual consumers' sentiment before they make a purchase or write a review; and (4) to classify consumers' sentiment toward a specific PCD by using context-based word embedding and deep learning models. Overall, the approach developed in this paper is applicable, scalable, and interpretable for distinguishing important drivers of consumer reviews for different goods in a specific industry, and it can be used by industry to design customer-oriented marketing strategies.
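Step (4) above can be caricatured without any deep learning machinery: locate an aspect (PCD) keyword in a review and count sentiment cues near it. This is only a hedged stand-in for the paper's context-based word embedding and deep learning models; the aspect and lexicon dictionaries are invented examples for a thermostat-like product:

```python
# Hypothetical, simplified sketch of aspect-level sentiment classification.
# Aspect keywords and sentiment lexicons below are invented for illustration.
ASPECTS = {"installation": {"install", "wiring", "setup"},
           "scheduling":  {"schedule", "program", "timer"}}
POS = {"easy", "great", "reliable", "simple"}
NEG = {"difficult", "broken", "confusing", "failed"}

def aspect_sentiment(review, aspect, window=3):
    tokens = review.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        if tok in ASPECTS[aspect]:
            # count sentiment words within a small window around the aspect term
            for near in tokens[max(0, i - window): i + window + 1]:
                score += (near in POS) - (near in NEG)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

label = aspect_sentiment("the wiring was difficult but the timer is great",
                         "installation")
```

The example review is negative about installation yet positive about scheduling, which is exactly the per-dimension distinction the PCD-level classification aims for.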
TripleSent: a triple store of events associated with their prototypical sentiment
The current generation of sentiment analysis
systems is limited in real-world applicability because these systems
cannot detect utterances that implicitly carry positive or negative
sentiment. We present early-stage research ideas to address this
limitation through the development of a dynamic triple store of events
associated with their prototypical sentiment.
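The core idea can be pictured as a lookup table from event triples to the polarity the event prototypically carries, so implicit sentiment is detectable even when no opinion word appears in the utterance. Everything below (the triples, the key shape) is an invented illustration, not the paper's actual schema:

```python
# Sketch of a triple store mapping (agent, event, object) to prototypical
# sentiment. Entries are illustrative examples, not data from the paper.
triples = {
    ("person", "loses", "wallet"):  "negative",
    ("person", "wins", "lottery"):  "positive",
    ("person", "misses", "flight"): "negative",
}

def implicit_sentiment(agent, event, obj):
    # fall back to "unknown" when the event is not in the store
    return triples.get((agent, event, obj), "unknown")

s = implicit_sentiment("person", "misses", "flight")
```

"I missed my flight" contains no explicit opinion word, yet the store resolves it to a negative prototypical sentiment, which is the gap in current systems the abstract points out.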
Modeling Crowd Feedback in the Mobile App Market
Mobile application (app) stores, such as Google Play and the Apple App Store, have recently emerged as a new model of online distribution platform. These stores have expanded in size in the past five years to host millions of apps, offering end-users of mobile software virtually unlimited options to choose from. In such a competitive market, no app is too big to fail. In fact, recent evidence has shown that most apps lose their users within the first 90 days after initial release. Therefore, app developers have to remain up-to-date with their end-users’ needs in order to survive. Staying close to the user not only minimizes the risk of failure, but also serves as a key factor in achieving market competitiveness as well as managing and sustaining innovation. However, establishing effective communication channels with app users can be a very challenging and demanding process. Specifically, users' needs are often tacit, embedded in the complex interplay between the user, system, and market components of the mobile app ecosystem. Furthermore, such needs are scattered over multiple channels of feedback, such as app store reviews and social media platforms. To address these challenges, in this dissertation, we incorporate methods of requirements modeling, data mining, domain engineering, and market analysis to develop a novel set of algorithms and tools for automatically classifying, synthesizing, and modeling the crowd's feedback in the mobile app market. Our analysis includes a set of empirical investigations and case studies, utilizing multiple large-scale datasets of mobile user data, in order to devise, calibrate, and validate our algorithms and tools. The main objective is to introduce a new form of crowd-driven software models that can be used by app developers to effectively identify and prioritize their end-users' concerns, develop apps to meet these concerns, and uncover optimized pathways of survival in the mobile app ecosystem.
Automated Crowdturfing Attacks and Defenses in Online Review Systems
Malicious crowdsourcing forums are gaining traction as sources of spreading
misinformation online, but are limited by the costs of hiring and managing
human workers. In this paper, we identify a new class of attacks that leverage
deep learning language models (Recurrent Neural Networks or RNNs) to automate
the generation of fake online reviews for products and services. Not only are
these attacks cheap and therefore more scalable, but they can control the rate of
content output to eliminate the signature burstiness that makes crowdsourced
campaigns easy to detect.
Using Yelp reviews as an example platform, we show how a two-phase review
generation and customization attack can produce reviews that are
indistinguishable by state-of-the-art statistical detectors. We conduct a
survey-based user study to show these reviews not only evade human detection,
but also score high on "usefulness" metrics by users. Finally, we develop novel
automated defenses against these attacks, by leveraging the lossy
transformation introduced by the RNN training and generation cycle. We consider
countermeasures against our mechanisms, show that they produce unattractive
cost-benefit tradeoffs for attackers, and that they can be further curtailed by
simple constraints imposed by online service providers.
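The defense intuition, that the lossy RNN training and generation cycle leaves a statistical fingerprint at the character level, can be sketched as a divergence test against a human-written baseline. This is a simplified stand-in for the paper's actual detector; the alphabet, smoothing constant, and threshold are arbitrary choices for illustration:

```python
import math
from collections import Counter

def char_distribution(text, alpha=1e-6):
    """Smoothed character frequency distribution over a fixed alphabet."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    counts = Counter(c for c in text.lower() if c in alphabet)
    total = sum(counts.values()) + alpha * len(alphabet)
    return {c: (counts[c] + alpha) / total for c in alphabet}

def kl_divergence(p, q):
    """KL(p || q) between two character distributions over the same alphabet."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in p)

def looks_machine_generated(review, human_baseline, threshold=0.05):
    # flag reviews whose character statistics drift far from the human baseline
    return kl_divergence(char_distribution(review), human_baseline) > threshold
```

In practice the baseline would be estimated from a large corpus of trusted human reviews, and the threshold tuned on labeled data; the point is only that low-level distributional drift is measurable without reading the review's content.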
Harnessing the power of the general public for crowdsourced business intelligence: a survey
Crowdsourced business intelligence (CrowdBI), which leverages crowdsourced user-generated data to extract useful knowledge about business and create marketing intelligence to excel in the business environment, has become a surging research topic in recent years. Compared with traditional business intelligence, which is based on firm-owned data and survey data, CrowdBI faces numerous unique issues and opportunities, such as customer behavior analysis, brand tracking and product improvement, demand forecasting and trend analysis, competitive intelligence, business popularity analysis and site recommendation, and urban commercial analysis. This paper first characterizes the concept model and unique features of CrowdBI and presents a generic framework for it. It also investigates novel application areas as well as the key challenges and techniques of CrowdBI. Furthermore, we discuss future research directions for CrowdBI.
User Review-Based Change File Localization for Mobile Applications
In current mobile app development, novel and emerging DevOps practices
(e.g., Continuous Delivery, Integration, and user feedback analysis) and tools
are becoming more widespread. For instance, the integration of user feedback
(provided in the form of user reviews) in the software release cycle represents
a valuable asset for the maintenance and evolution of mobile apps. To fully
make use of these assets, it is highly desirable for developers to establish
semantic links between the user reviews and the software artefacts to be
changed (e.g., source code and documentation), and thus to localize the
potential files to change for addressing the user feedback. In this paper, we
propose RISING (Review Integration via claSsification, clusterIng, and
linkiNG), an automated approach to support the continuous integration of user
feedback via classification, clustering, and linking of user reviews. RISING
leverages domain-specific constraint information and semi-supervised learning
to group user reviews into multiple fine-grained clusters concerning similar
users' requests. Then, by combining the textual information from both commit
messages and source code, it automatically localizes potential change files to
accommodate the users' requests. Our empirical studies demonstrate that the
proposed approach outperforms the state-of-the-art baseline work in terms of
clustering and localization accuracy, and thus produces more reliable results.
Comment: 15 pages, 3 figures, 8 tables
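The localization step, matching a cluster of user reviews against the textual content of candidate files, can be illustrated with a plain bag-of-words cosine ranking. This is a hedged sketch of the general technique, not RISING's actual pipeline; the file names and texts are invented, and real systems would add TF-IDF weighting and preprocessing:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def localize(review_text, files):
    """files: dict mapping filename -> concatenated commit messages and code terms."""
    query = Counter(review_text.lower().split())
    return sorted(files,
                  key=lambda f: cosine(query, Counter(files[f].lower().split())),
                  reverse=True)  # most likely change files first

files = {"LoginActivity.java": "fix login crash on resume login token",
         "PlayerService.java": "audio playback buffer underrun"}
top = localize("app crashes at login every time", files)[0]
```

Combining commit messages with source code terms, as the abstract describes, enriches each file's vocabulary so that user-facing words in reviews can match developer-facing artifacts.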
How Do Crowd-Users Express Their Opinions Against Software Applications in Social Media? A Fine-Grained Classification Approach
© 2024 The Author(s). This is an open access article under the Creative Commons Attribution-Non Commercial-No Derivatives CC BY-NC-ND licence, https://creativecommons.org/licenses/by-nc-nd/4.0/
App stores allow users to search for, download, and purchase software applications to accomplish daily tasks. They also enable crowd-users to submit textual feedback or star ratings for the downloaded apps based on their satisfaction. Such crowd-user feedback contains critical information for software developers, including new features, issues, non-functional requirements, etc. Identifying software bugs in low-star software applications has previously been overlooked in the literature. To address this, we propose a natural language processing (NLP) based approach to recover frequently occurring software issues in the Amazon Software App (ASA) store. The proposed approach identifies prevalent issues using NLP part-of-speech (POS) analytics. In addition, to better understand the implications of these issues for end-user satisfaction, different machine learning (ML) algorithms are used to identify crowd-user emotions, such as anger, fear, sadness, and disgust, associated with the identified issues. To this end, we shortlisted 45 software apps with comparatively low ratings from the ASA store. We investigated how crowd-users reported their grievances and opinions against the software applications using grounded theory and content analysis approaches, and we prepared a ground truth for the ML experiments. ML algorithms, such as MNB, LR, RF, MLP, KNN, AdaBoost, and a Voting Classifier, are used to identify the emotions associated with each captured issue by processing the annotated end-user data set. We obtained satisfactory classification results, with the MLP and RF classifiers achieving average accuracies of 82% and 80%, respectively.
Furthermore, the ROC curves for the better-performing ML classifiers are plotted to identify the best-performing under- or oversampling classifier to be selected as the final classifier. To the best of our knowledge, the proposed approach is the first step toward identifying frequently occurring issues and the corresponding end-user emotions for low-ranked software applications. Software vendors can utilize the proposed approach to improve the performance of low-ranked software apps by incorporating it into the software evolution process.
Peer reviewed
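The hard-voting step at the end of such a pipeline reduces to a majority count over the base classifiers' predicted labels. A minimal sketch follows; the base predictions are made up, and a production system would use a fitted ensemble such as scikit-learn's VotingClassifier rather than this toy:

```python
from collections import Counter

def hard_vote(predictions):
    """predictions: list of emotion labels, one per base classifier."""
    # the most common label across base classifiers wins
    return Counter(predictions).most_common(1)[0][0]

# e.g. MNB and RF predict "anger" for a review, KNN predicts "fear"
emotion = hard_vote(["anger", "fear", "anger"])
```

Majority voting tends to smooth out the idiosyncratic errors of any single base model, which is why ensembles like this often match or beat their best individual member.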