
    Modeling User Preferences in Recommender Systems: A Classification Framework for Explicit and Implicit User Feedback

    Recommender systems are firmly established as a standard technology for assisting users with their choices; however, little attention has been paid to the application of the user model in recommender systems, particularly the variability and noise that are an intrinsic part of human behavior and activity. To enable recommender systems to suggest items that are useful to a particular user, it can be essential to understand the user and his or her interactions with the system. These interactions typically manifest themselves as explicit and implicit user feedback that provides the key indicators for modeling users' preferences for items and essential information for personalizing recommendations. In this article, we propose a classification framework for the use of explicit and implicit user feedback in recommender systems based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance. We develop a set of comparison criteria for explicit and implicit user feedback to emphasize the key properties. Using our framework, we provide a classification of recommender systems that have addressed questions about user feedback, and we review state-of-the-art techniques to improve such user feedback and thereby improve the performance of the recommender system. Finally, we formulate challenges for future research on the improvement of user feedback. © 2014 ACM
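    The distinction the abstract draws between explicit and implicit feedback can be illustrated with a minimal sketch. The concrete property values below (e.g. "high" effort for ratings, "binary" scale for clicks) are illustrative assumptions, not taken from the article's framework:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class FeedbackKind(Enum):
        EXPLICIT = "explicit"   # e.g. star ratings the user deliberately gives
        IMPLICIT = "implicit"   # e.g. clicks or dwell time observed in passing

    # Hypothetical record of the framework's comparison properties;
    # field names mirror the properties named in the abstract.
    @dataclass
    class FeedbackProperty:
        kind: FeedbackKind
        cognitive_effort: str      # effort the user spends producing the signal
        scale_of_measurement: str  # e.g. "ordinal" (1-5 stars), "binary" (click)
        domain_relevance: str      # how directly the signal reflects preference

    rating = FeedbackProperty(FeedbackKind.EXPLICIT, "high", "ordinal", "direct")
    click = FeedbackProperty(FeedbackKind.IMPLICIT, "low", "binary", "indirect")

    print(rating.kind.value, click.kind.value)  # explicit implicit
    ```

    Such a record makes the trade-off visible: explicit signals cost the user effort but measure preference directly, while implicit signals are cheap but noisy.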

    Presentation Bias in movie recommendation algorithms

    Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization Information Analysis and Management. The emergence of video on demand (VOD) has transformed the way content finds its audience. Several improvements have been made to algorithms to provide better movie recommendations to individuals. Given the huge variety of elements that characterize a film (such as casting, genre, and soundtrack, amongst other artistic and technical aspects) and that characterize individuals, most of the improvements have relied on exploiting those characteristics to better match potential clients to each product. However, little attention has been given to evaluating how the algorithms' result selection is affected by presentation bias. Understanding bias is key to choosing which algorithms companies will use. The existence of a system with presentation bias and a feedback loop is a problem already stated by Netflix. In this sense, this research fills that gap by providing a comparative analysis of the bias of the major movie recommendation algorithms.

    Evaluating Rank-Coherence of Crowd Rating in Customer Satisfaction

    Crowd rating is a continuous and public process of data gathering that allows the display of general quantitative opinions on a topic from anonymous online networks acting as crowds. Online platforms have leveraged these technologies to improve predictive tasks in marketing. However, we argue for a different employment of crowd rating as a tool of public utility to support social contexts suffering from adverse selection, like tourism. This aim requires dealing with issues in both the method of measurement and the analysis of data, and with common biases associated with the public disclosure of rating information. We propose an evaluative method to investigate the fairness of common measures of rating procedures, with the particular perspective of assessing the linearity of the ranked outcomes. This is tested on a longitudinal observational case of 7 years of customer satisfaction ratings, for a total of 26,888 reviews. According to the results obtained from the sampled dataset, analysed with the proposed evaluative method, there is a trade-off between loss of (potentially) biased information on ratings and fairness of the resulting rankings. However, when an ad hoc unbiased ranking is computed, the ranking outcome produced by the time-weighted measure is not significantly different from that ad hoc unbiased case.
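    The time-weighted measure mentioned in the abstract can be sketched in minimal form. The exponential-decay weighting, decay rate, and input format below are illustrative assumptions, not the specific measure used in the paper:

    ```python
    from math import exp

    def time_weighted_rating(reviews, decay=0.1):
        """Weighted mean rating where older reviews count less.

        reviews: list of (rating, age_in_years) pairs.
        decay: assumed exponential decay rate per year (illustrative).
        """
        weights = [exp(-decay * age) for _, age in reviews]
        total = sum(w * r for (r, _), w in zip(reviews, weights))
        return total / sum(weights)

    # Recent reviews dominate: a fresh 5-star review outweighs an old 1-star.
    score = time_weighted_rating([(5.0, 0.5), (1.0, 6.0)])
    print(round(score, 2))  # 3.54, pulled above the unweighted mean of 3.0
    ```

    Down-weighting older ratings discards some (potentially biased) information, which is one concrete way the trade-off between information loss and ranking fairness described above can arise.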

    Innovation Tournaments: Improving Ideas through Process Models

    Innovation tournaments have a long history of driving progress, especially in the fields of engineering and design, and are once again gaining popularity thanks to advances in technology. Stripped to its essence, an innovation tournament is a process that uncovers exceptionally good opportunities by considering many raw opportunities at the outset and selecting the best to survive. Both the host of the tournament (the administrator) and the participants (the agents) face many decisions throughout this process. In the following papers, we answer a series of questions about innovation tournaments, addressing the specific managerial challenges of how to provide in-process feedback, how to moderate entry visibility, and how to understand and affect leaps in innovation. We report on two sets of field experiments using web-based platforms for graphic design contests and a unique dataset from an online platform dedicated to data prediction tournaments. The answers to these questions contribute new understanding to the literature on innovation tournaments and offer managers guidance on improving outcomes.

    From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences

    We describe the state of the art in performance modeling and prediction for Information Retrieval (IR), Natural Language Processing (NLP), and Recommender Systems (RecSys), along with its shortcomings and strengths. We present a framework for further research, identifying five major problem areas: understanding measures, performance analysis, making underlying assumptions explicit, identifying application features that determine performance, and the development of prediction models describing the relationship between assumptions, features, and resulting performance.

    Big Data, Patents, and the Future of Medicine

    Big data has tremendous potential to improve health care. Unfortunately, intellectual property law isn't ready to support that leap. In the next wave of data-driven medicine, black-box medicine, researchers use sophisticated algorithms to examine huge troves of health data, finding complex, implicit relationships and making individualized assessments for patients. Black-box medicine offers potentially immense benefits, but also requires substantial investment. Firms must develop new datasets, models, and validations, which are all nonrivalrous information goods with significant spillovers, requiring incentives for welfare-optimizing investment. Current intellectual property law fails to provide adequate incentives for black-box medicine. The Supreme Court has sharply restricted patentable subject matter in the recent Prometheus, Myriad, and Alice cases, and what might still be patentable is limited by the statutory requirements of written description and enablement. Other incentives for investment, such as trade secrecy or prizes, fail to fill the gaps. These limits push firms away from using big data in medicine to solve big problems, and toward small-scale incremental innovation. Small tweaks to doctrine will help, but are not enough. Instead, the big data needed to support transformative medical innovation should be treated as infrastructure for innovation and should be the focus of substantial public effort.