    Trust and Reputation Modelling for Tourism Recommendations Supported by Crowdsourcing

    Tourism crowdsourcing platforms have a profound influence on tourist behaviour, particularly in travel planning. Not only do they hold the opinions shared by other tourists concerning tourism resources but, with the help of recommendation engines, they are also the pillar of personalised resource recommendation. However, since prospective tourists are unaware of the trustworthiness or reputation of crowd publishers, they are in fact taking a leap of faith when they rely on the wisdom of the crowd. In this paper, we argue that modelling publisher Trust & Reputation improves the quality of tourism recommendations supported by crowdsourced information. Therefore, we present a tourism recommendation system which integrates: (i) user profiling using multi-criteria ratings; (ii) k-Nearest Neighbours (k-NN) prediction of user ratings; (iii) Trust & Reputation modelling; and (iv) incremental model updating, i.e., providing near real-time recommendations. In terms of contributions, this paper provides two different Trust & Reputation approaches: (i) general reputation, employing the pairwise trust values of all users; and (ii) neighbour-based reputation, employing the pairwise trust values of the common neighbours. The proposed method was evaluated using crowdsourced datasets from the Expedia and TripAdvisor platforms.
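    The two reputation approaches named in the abstract can be sketched roughly as follows. This is a hedged illustration only: the function names, the [0, 1] rating scale, and the "1 minus mean absolute difference over co-rated items" trust formula are assumptions, not the paper's exact model.

```python
def pairwise_trust(ratings_a, ratings_b):
    """Trust between two users: 1 - mean absolute rating difference
    over co-rated items (ratings assumed scaled to [0, 1])."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    diff = sum(abs(ratings_a[i] - ratings_b[i]) for i in common) / len(common)
    return 1.0 - diff

def general_reputation(publisher, all_ratings):
    """(i) General reputation: average pairwise trust placed in
    `publisher` by every other user."""
    others = [u for u in all_ratings if u != publisher]
    trusts = [pairwise_trust(all_ratings[u], all_ratings[publisher]) for u in others]
    return sum(trusts) / len(trusts) if trusts else 0.0

def neighbour_reputation(publisher, user, all_ratings, neighbours):
    """(ii) Neighbour-based reputation: average pairwise trust in
    `publisher` held by the common neighbours of `user` and `publisher`."""
    common = set(neighbours.get(user, ())) & set(neighbours.get(publisher, ()))
    trusts = [pairwise_trust(all_ratings[n], all_ratings[publisher]) for n in common]
    return sum(trusts) / len(trusts) if trusts else 0.0
```

    The neighbour-based variant restricts the evidence to raters both parties already interact with, which is what makes it cheap to maintain incrementally in a streaming setting.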

    Profiling and Rating Prediction from Multi-Criteria Crowd-Sourced Hotel Ratings

    ECMS 2017, 31st European Conference on Modelling and Simulation, May 23rd–26th, 2017, Budapest, Hungary. Based on historical user information, collaborative filters predict, for a given user, the classification of unknown items, typically using a single criterion. However, a crowd typically rates tourism resources using multiple criteria, i.e., each user provides multiple ratings per item. In order to apply standard collaborative filtering, it is necessary to have a unique classification per user and item. This unique classification can be based on a single rating – single-criterion (SC) profiling – or on the multiple ratings available – multi-criteria (MC) profiling. Exploring both SC and MC profiling, this work proposes: (i) the selection of the most representative crowd-sourced rating; and (ii) the combination of the different user ratings per item, using the average of the non-null ratings or a personalised weighted average based on the user rating profile. Having employed matrix factorisation to predict unknown ratings, we argue that the personalised combination of multi-criteria item ratings improves the tourist profile and, consequently, the quality of the collaborative predictions. Thus, this paper contributes a novel approach for guest profiling based on multi-criteria hotel ratings and for the prediction of hotel guest ratings based on the Alternating Least Squares algorithm. Our experiments with crowd-sourced Expedia and TripAdvisor data show that the proposed method improves the accuracy of the hotel rating predictions.
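    The MC-profiling step described above — collapsing a user's multi-criteria ratings for one hotel into a single value — can be sketched as below. The criterion names and the weight scheme are illustrative assumptions; the paper's personalised weights are derived from the user's rating profile, which this sketch simply takes as given.

```python
def average_non_null(ratings):
    """Plain average of the criteria the user actually rated (non-null)."""
    vals = [r for r in ratings.values() if r is not None]
    return sum(vals) / len(vals) if vals else None

def personalised_weighted_average(ratings, user_weights):
    """Weighted average of non-null criteria; weights reflect how much
    this user emphasises each criterion in their rating profile."""
    pairs = [(user_weights[c], r) for c, r in ratings.items() if r is not None]
    total = sum(w for w, _ in pairs)
    return sum(w * r for w, r in pairs) / total if total else None
```

    The resulting single rating per (user, item) pair is what the matrix-factorisation predictor then consumes.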

    Why We Need New Evaluation Metrics for NLG

    The majority of NLG evaluation relies on automatic metrics, such as BLEU. In this paper, we motivate the need for novel, system- and data-independent automatic evaluation methods: we investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is data- and system-specific. Nevertheless, our results also suggest that automatic metrics perform reliably at system level and can support system development by finding cases where a system performs poorly. Comment: accepted to EMNLP 201
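    For readers unfamiliar with the word-overlap metrics being critiqued, here is a minimal sentence-level BLEU sketch (modified n-gram precision with a brevity penalty). It is deliberately simplified — single reference, uniform weights up to bigrams, no smoothing — and is not the corpus-level BLEU used in the paper's experiments.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions,
    scaled by a brevity penalty for short candidates."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())       # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    bp = 1.0 if len(candidate) > len(reference) else exp(1 - len(reference) / max(len(candidate), 1))
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

    Because the score is driven entirely by surface n-gram overlap, a fluent paraphrase with different wording scores near zero — one intuition behind the weak correlation with human judgements reported above.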

    Harnessing the power of the general public for crowdsourced business intelligence: a survey

    Crowdsourced business intelligence (CrowdBI), which leverages crowdsourced user-generated data to extract useful knowledge about business and create marketing intelligence to excel in the business environment, has become a surging research topic in recent years. Compared with traditional business intelligence, which is based on firm-owned data and survey data, CrowdBI faces numerous unique issues in areas such as customer behaviour analysis, brand tracking, product improvement, demand forecasting and trend analysis, competitive intelligence, business popularity analysis and site recommendation, and urban commercial analysis. This paper first characterises the concept model and unique features of CrowdBI and presents a generic framework for it. It also investigates novel application areas as well as the key challenges and techniques of CrowdBI. Furthermore, we discuss future research directions of CrowdBI.

    Profiling users' behavior, and identifying important features of review 'helpfulness'

    The increasing volume of online reviews and the use of review platforms leave tracks that can be used to explore interesting patterns. It is in the primary interest of businesses to retain and improve their reputation. Reviewers, on the other hand, tend to write reviews that can influence and attract people's attention, which often leads to deliberate deviations from past rating behavior. Until now, very few studies have attempted to explore the impact of user rating behavior on review helpfulness, and further perspectives of user behavior in selecting and rating businesses still need to be investigated. Moreover, previous studies gave more attention to review features and reported inconsistent findings on the importance of those features. To fill this gap, we introduce new, and modify existing, business and reviewer features and propose a user-focused mechanism for review selection. This study aims to investigate and report changes in business reputation, user choice, and rating behavior through descriptive and comparative analysis. Furthermore, the relevance of various features for review helpfulness is identified by correlation, linear regression, and negative binomial regression. The analysis performed on the Yelp dataset shows that the reputation of businesses has changed slightly over time. Moreover, 46% of users chose a business with a minimum of 4 stars. The majority of users give 4-star ratings, and 60% of reviewers adopt irregular rating behavior. Our results show a slight improvement from using user rating behavior and choice features, whereas the significant increase in R² indicates the importance of reviewer popularity and experience features. Overall, the most significant features of review helpfulness are average user helpfulness, number of user reviews, average business helpfulness, and review length. The outcomes of this study provide important theoretical and practical implications for researchers, businesses, and reviewers.
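    The correlation step of the feature-relevance analysis above can be sketched with a plain Pearson coefficient between one candidate feature and helpfulness votes. The feature choice (review length) and the data are made up for illustration; the study also fits linear and negative binomial regressions, which this sketch does not cover.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical example: review lengths (words) vs. helpfulness votes.
review_length = [40, 120, 300, 520, 800]
helpful_votes = [0, 2, 5, 9, 14]
r = pearson(review_length, helpful_votes)  # close to +1 for this toy data
```

    A coefficient near ±1 flags a feature worth carrying into the regression models; near 0, the feature adds little on its own.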

    Explanation plug-in for stream-based collaborative filtering

    Collaborative filtering is a widely used recommendation technique, which often relies on rating information shared by users, i.e., crowdsourced data. These filters rely on predictive algorithms, such as memory- or model-based predictors, to build direct or latent user and item profiles from crowdsourced data. To predict unknown ratings, memory-based approaches rely on the similarity between users or items, whereas model-based mechanisms explore user and item latent profiles. However, many of these filters are opaque by design, leaving users with unexplained recommendations. To overcome this drawback, this paper introduces Explug, a local model-agnostic plug-in that works alongside stream-based collaborative filters to reorder and explain recommendations. The explanations are based on incremental user Trust & Reputation profiling and co-rater relationships. Experiments performed with crowdsourced data from TripAdvisor show that Explug explains and improves the quality of stream-based collaborative filter recommendations.
    Xunta de Galicia | Ref. ED481B-2021-118; Fundação para a Ciência e a Tecnologia | Ref. UIDB/50014/202
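    The memory-based prediction that a plug-in like Explug sits alongside can be sketched as a user-based k-NN predictor: find the most similar co-raters of an item and average their ratings, weighted by similarity. The similarity choice (cosine over co-rated items) and names are assumptions for illustration, not Explug's actual host filter.

```python
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two users, restricted to co-rated items."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(a[i] ** 2 for i in common)) * sqrt(sum(b[i] ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item, ratings, k=2):
    """Predict `user`'s rating of `item` as the similarity-weighted mean
    of the k most similar users who rated that item."""
    neigh = [(cosine_sim(ratings[user], ratings[v]), ratings[v][item])
             for v in ratings if v != user and item in ratings[v]]
    top = sorted(neigh, reverse=True)[:k]
    wsum = sum(s for s, _ in top)
    return sum(s * r for s, r in top) / wsum if wsum else None
```

    The co-rater pairs surfaced here are exactly the relationships an explanation layer can point at ("recommended because raters similar to you gave it high marks"), which is the hook Explug exploits.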

    Survey on Evaluation Methods for Dialogue Systems

    In this paper, we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires; however, this tends to be very cost- and time-intensive. Thus, much work has been put into finding methods which reduce the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for the dialogue systems and then by presenting the evaluation methods regarding this class.