Context-aware movie recommendations: An empirical comparison of pre-filtering, post-filtering and contextual modeling approaches
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-39878-0_13. Proceedings of the 14th International Conference, EC-Web 2013, Prague, Czech Republic, August 27-28, 2013.
Context-aware recommender systems have been proven to improve the performance of recommendations in a wide array of domains and applications. Despite these individual improvements, little work has been done on comparing different approaches to determine which of them outperforms the others, and under what circumstances. In this paper we address this issue by conducting an empirical comparison of several pre-filtering, post-filtering and contextual modeling approaches in the movie recommendation domain. To acquire reliable contextual information, we performed a user study in which participants were asked to rate movies, stating the time and social companion with which they preferred to watch them. The results of our evaluation show that there is neither a clearly superior contextualization approach nor a single best contextual signal, and that the achieved improvements depend on the recommendation algorithm used with each contextualization approach. Nonetheless, we conclude with a number of cues and suggestions about which particular combinations of contextualization approaches and recommendation algorithms could be better suited to the movie recommendation domain.
This work was supported by the Spanish Government (TIN2011-28538-C02) and the Regional Government of Madrid (S2009TIC-1542).
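To make the pre-/post-filtering distinction the abstract draws concrete, here is a minimal Python sketch built around a toy popularity recommender; the data, function names, and weighting scheme are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch contrasting contextual pre-filtering and post-filtering,
# assuming toy (user, item, context, rating) tuples and a trivial
# popularity-based recommender; names are illustrative only.
from collections import namedtuple, defaultdict

Rating = namedtuple("Rating", "user item context value")

ratings = [
    Rating("u1", "m1", "weekend", 5), Rating("u1", "m2", "weekday", 3),
    Rating("u2", "m1", "weekend", 4), Rating("u2", "m3", "weekend", 5),
]

def train_popularity(data):
    """Context-free recommender: average rating per item."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in data:
        sums[r.item] += r.value
        counts[r.item] += 1
    return {i: sums[i] / counts[i] for i in sums}

def prefilter_scores(data, ctx):
    # Pre-filtering: restrict the training data to the target context first.
    return train_popularity([r for r in data if r.context == ctx])

def postfilter_scores(data, ctx):
    # Post-filtering: train on everything, then down-weight items rarely
    # consumed in the target context.
    scores = train_popularity(data)
    ctx_counts, tot_counts = defaultdict(int), defaultdict(int)
    for r in data:
        tot_counts[r.item] += 1
        if r.context == ctx:
            ctx_counts[r.item] += 1
    return {i: s * ctx_counts[i] / tot_counts[i] for i, s in scores.items()}

print(prefilter_scores(ratings, "weekend"))   # {'m1': 4.5, 'm3': 5.0}
print(postfilter_scores(ratings, "weekend"))  # 'm2' is zeroed out
```

Contextual modeling, the third family compared in the paper, would instead feed the context variable into the recommender itself rather than filtering around a context-free model.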
Some discussions about MOGAs: individual relations, non-dominated set, and application on automatic negotiation
Semi-Supervised Self-Training for Sentence Subjectivity Classification
Recent natural language processing (NLP) research shows that identifying and extracting subjective information from texts can benefit many NLP applications. In this paper, we investigate a semi-supervised learning approach, self-training, for sentence subjectivity classification. In self-training, a confidence degree derived from the ranking of class membership probabilities is commonly used as the selection metric that ranks the unlabeled instances and selects those used to retrain the underlying classifier. Naive Bayes (NB) is often used as the underlying classifier because its class membership probability estimates rank instances well. The first contribution of this paper is a study of the performance of self-training using decision tree models, such as C4.5, C4.4, and the naive Bayes tree (NBTree), as the underlying classifiers. The second contribution is an adapted Value Difference Metric (VDM) proposed as the selection metric in self-training, which does not depend on class membership probabilities. Based on the Multi-Perspective Question Answering (MPQA) corpus, a set of experiments was designed to compare the performance of self-training with different underlying classifiers and different selection metrics under various conditions. The experimental results show that the performance of self-training improves when VDM is used instead of the confidence degree, and that self-training with NBTree and VDM outperforms self-training with other combinations of underlying classifiers and selection metrics. The results also show that the self-training approach can achieve performance comparable to supervised learning models.
NRC publication: Yes
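As a concrete reference for the loop the abstract describes, the following is a minimal sketch of generic self-training with a pluggable selection metric, using scikit-learn's GaussianNB as a stand-in underlying classifier; all names and parameter values are illustrative assumptions, and the paper's adapted VDM would be substituted for `confidence_metric`.

```python
# A minimal sketch of the generic self-training loop with a pluggable
# selection metric; illustrative only, not the paper's implementation.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def confidence_metric(clf, X):
    """Confidence degree: the highest class membership probability."""
    return clf.predict_proba(X).max(axis=1)

def self_train(clf, X_lab, y_lab, X_unlab, metric, n_per_round=10, rounds=5):
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        clf.fit(X_lab, y_lab)
        scores = metric(clf, X_unlab)
        # Pick the instances the metric ranks highest, pseudo-label them,
        # and move them into the labeled set for the next iteration.
        top = np.argsort(scores)[-n_per_round:]
        X_lab = np.vstack([X_lab, X_unlab[top]])
        y_lab = np.concatenate([y_lab, clf.predict(X_unlab[top])])
        X_unlab = np.delete(X_unlab, top, axis=0)
    return clf.fit(X_lab, y_lab)
```

The design point the abstract turns on is that the selection metric, not the classifier, decides which pseudo-labeled instances enter the training set, which is why swapping the confidence degree for VDM can change behavior even with the same underlying model.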
