    Layered evaluation of multi-criteria collaborative filtering for scientific paper recommendation

    Recommendation algorithms have been researched extensively to help people deal with the abundance of information. In recent years, the incorporation of multiple relevance criteria has attracted increased interest. Such multi-criteria recommendation approaches are researched as a paradigm for building intelligent systems that can be tailored to multiple interest indicators of end-users, such as combinations of implicit and explicit interest indicators in the form of ratings, or ratings on multiple relevance dimensions. Nevertheless, the evaluation of these recommendation techniques in the context of real-life applications still remains rather limited. Previous studies dealing with the evaluation of recommender systems have shown that the performance of such algorithms is often dependent on the dataset, and they indicate the importance of carrying out careful testing and parameterization. Especially when looking at large-scale datasets, it becomes very difficult to deploy evaluation methods that may help in assessing the effect that different system components have on the overall design. In this paper, we study how layered evaluation can be applied to a multi-criteria recommendation service that we plan to deploy for paper recommendation using the Mendeley dataset. The paper introduces layered evaluation and suggests two experiments that may help assess the components of the envisaged system separately. Keywords: Recommender systems; Multi-Criteria Decision Making (MCDM); Evaluation
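    To make the multi-criteria idea above concrete, the sketch below predicts a rating by computing user-user similarity separately on each relevance criterion and then averaging those similarities before weighting the neighbours' ratings. This is an illustrative sketch only, not the service or algorithm evaluated in the paper; the toy `ratings` array, the cosine similarity, and the simple averaging across criteria are all assumptions made for the example.

```python
# Illustrative multi-criteria collaborative filtering (not the paper's algorithm).
# Users rate items on several criteria; similarity is computed per criterion
# and aggregated before making a prediction.
import numpy as np

# ratings[u, i, c] = rating of user u for item i on criterion c (0 = unrated)
ratings = np.array([
    [[4, 5, 3], [0, 0, 0], [2, 1, 3]],
    [[5, 5, 4], [3, 2, 4], [0, 0, 0]],
    [[4, 4, 3], [3, 3, 4], [1, 2, 2]],
], dtype=float)

def cosine(a, b):
    mask = (a > 0) & (b > 0)                     # co-rated items only
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(user, item, criterion):
    """Similarity-weighted average of the other users' ratings for (item, criterion)."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item, criterion] == 0:
            continue
        # aggregate per-criterion similarities into one user-user similarity
        sim = np.mean([cosine(ratings[user, :, c], ratings[other, :, c])
                       for c in range(ratings.shape[2])])
        sims.append(sim)
        vals.append(ratings[other, item, criterion])
    if not sims:
        return 0.0
    return float(np.average(vals, weights=np.clip(sims, 1e-6, None)))

print(predict(user=0, item=1, criterion=0))
```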

    Design and evaluation issues for user-centric online product search

    Nowadays more and more people are looking for products online, and a massive number of products are sold through e-commerce systems. It is crucial to develop effective online product search tools that assist users in finding their desired products and making sound purchase decisions. Currently, most existing online product search tools are not very effective in helping users because they ignore the fact that users have only limited knowledge and computational capacity to process product information. For example, a search tool may ask users to fill in a form with too many detailed questions, and the search results may be either too sparse or too vast to consider. Such system-centric designs of online product search tools can cause serious problems for end-users. Most of the time users are unable to state all their preferences at once, so the search results may not be very accurate. In addition, users may be unwilling to view too much product information, or may feel lost when no product appears in the search results during the interaction process. User-centric online product search tools can be developed to solve these problems and to help users make buying decisions effectively. The search tool should be able to recommend suitable products that meet the user's various preferences. In addition, it should help the user navigate the product space and reach the final target product without too much effort. Furthermore, according to behavioural decision theory, users are likely to construct their preferences during the decision process, so the tool should be designed in an interactive way that elicits users' preferences gradually. Moreover, it should provide decision support so that users can make accurate purchase decisions even if they lack detailed domain knowledge of the specific products. To develop effective user-centric online product search tools, one important task is to evaluate their performance so that system designers can obtain prompt feedback. Another crucial task is to design new algorithms and new user interfaces so that the tools can help users find the desired products more efficiently. In this thesis, we first consider the evaluation issue by developing a simulation environment to analyze the performance of generic product search tools. Compared to earlier evaluation methods that are mainly based on real-user studies, this simulation environment is faster and less expensive. Then we implement the CritiqueShop system, an online product search tool based on the well-known critiquing technique, with two novel aspects: a user-centric compound-critique generation algorithm that produces search results efficiently, and a visual user interface that enhances user satisfaction. Both the algorithm and the user interface are validated by large-scale comparative real-user studies. Moreover, the collaborative filtering approach is widely used to help people find low-risk products in domains such as movies or books. Here we further propose a recursive collaborative filtering approach that generates search results more accurately without requiring additional effort from the users.
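    As a rough illustration of the critiquing technique mentioned above, the sketch below runs a simple unit-critiquing loop: the user repeatedly critiques an attribute of the currently shown product ("cheaper", "lighter") and the candidate set is filtered accordingly. It is not the CritiqueShop compound-critique generation algorithm or its interface; the `Laptop` catalogue, the two critique types, and the price-based re-ranking are assumptions made for the example.

```python
# Illustrative unit-critiquing loop (not the CritiqueShop algorithm).
from dataclasses import dataclass

@dataclass
class Laptop:
    name: str
    price: float
    weight_kg: float

catalog = [
    Laptop("A", 1200, 2.1),
    Laptop("B", 900, 2.4),
    Laptop("C", 1500, 1.3),
    Laptop("D", 800, 2.8),
]

def apply_critique(candidates, current, critique):
    """Keep only products that satisfy a unit critique on the current item."""
    if critique == "cheaper":
        return [p for p in candidates if p.price < current.price]
    if critique == "lighter":
        return [p for p in candidates if p.weight_kg < current.weight_kg]
    return candidates

# Start from a reference product and refine it step by step.
current = catalog[0]
remaining = [p for p in catalog if p is not current]
for critique in ["cheaper", "lighter"]:              # simulated user feedback
    remaining = apply_critique(remaining, current, critique)
    if remaining:
        current = min(remaining, key=lambda p: p.price)   # show the best match
print(current.name)
```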

    All you need is ratings: A clustering approach to synthetic rating datasets generation

    The public availability of collections containing user preferences is of vital importance for performing offline evaluations in the field of recommender systems. However, the number of rating datasets is limited because of the costs required for their creation and the fear of violating the privacy of the users by sharing them. For this reason, numerous research efforts have investigated the creation of synthetic collections of ratings using generative approaches. Nevertheless, these datasets are usually not reliable enough for conducting an evaluation campaign. In this paper, we propose a method for creating synthetic datasets with a configurable number of users that mimic the characteristics of already existing ones. We empirically validated the proposed approach by exploiting the synthetic datasets for evaluating different recommenders and by comparing the results with the ones obtained using real datasets.
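    A minimal sketch of the clustering idea behind such a generator is given below: real users are grouped into clusters, per-cluster rating statistics are estimated for each item, and synthetic users are sampled cluster by cluster. This is only an assumed illustration of the general approach, not the method proposed in the paper; the toy rating matrix, the use of scikit-learn's KMeans, and the Gaussian per-item sampling are choices made for the example.

```python
# Illustrative clustering-based synthetic rating generation (assumed sketch,
# not the paper's generator).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-in for a real rating matrix (rows = users, columns = items, ratings 0-5).
real = rng.integers(0, 6, size=(50, 20)).astype(float)

# 1. Group real users into behaviourally similar clusters.
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(real)

# 2. Estimate per-cluster rating statistics for every item.
means = np.stack([real[labels == c].mean(axis=0) for c in range(k)])
stds = np.stack([real[labels == c].std(axis=0) + 1e-6 for c in range(k)])
sizes = np.bincount(labels, minlength=k) / len(labels)

# 3. Sample a configurable number of synthetic users cluster by cluster.
def synth_users(n_users):
    clusters = rng.choice(k, size=n_users, p=sizes)
    users = rng.normal(means[clusters], stds[clusters])
    return np.clip(np.rint(users), 0, 5)            # back to the 0-5 rating scale

print(synth_users(100).shape)   # (100, 20)
```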

    A Robust and Optimal Multidisciplinary Approach For Space Systems Conceptual Design

    The abstract is provided in the attachment.

    Evaluating Recommender Systems for Technology Enhanced Learning: A Quantitative Survey

    The increasing number of publications on recommender systems for Technology Enhanced Learning (TEL) evidences a growing interest in their development and deployment. In order to support learning, recommender systems for TEL need to consider specific requirements, which differ from the requirements for recommender systems in other domains such as e-commerce. Consequently, these particular requirements motivate the incorporation of specific goals and methods in the evaluation process for TEL recommender systems. In this article, the diverse evaluation methods that have been applied to evaluate TEL recommender systems are investigated. A total of 235 articles are selected from major conferences, workshops, journals, and books where relevant work has been published between 2000 and 2014. These articles are quantitatively analysed and classified according to the following criteria: type of evaluation methodology, subject of evaluation, and effects measured by the evaluation. Results from the survey suggest that there is a growing awareness in the research community of the necessity for more elaborate evaluations. At the same time, there is still substantial potential for further improvements. This survey highlights trends and discusses strengths and shortcomings of the evaluation of TEL recommender systems thus far, thereby aiming to stimulate researchers to contemplate novel evaluation approaches.

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday life are expanding beyond their restricted interaction capabilities and provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects.

    Evaluating sets of multi-attribute alternatives with uncertain preferences

    In a decision-making problem, there can be uncertainty regarding the user's preferences concerning the available alternatives. Thus, for a decision support system, it is essential to analyse the user's preferences in order to make personalised recommendations. In this thesis we focus on Multiattribute Utility Theory (MAUT), which aims to define user preference models and elicitation procedures for alternatives evaluated with a vector of a fixed number of conflicting criteria. In this context, a preference model is usually represented by a real-valued function over the criteria used to evaluate alternatives, and an elicitation procedure is a process for defining such a value function. The most preferred alternative is then the one that maximises the value function. With MAUT models, it is common to represent the uncertainty of the user preferences with a parameterised value function. Each instantiation of this parameterisation then represents a user preference model compatible with the preference information collected so far. For example, a common linear value function is the weighted sum of the criteria evaluating an alternative, which is parameterised with respect to the set of weights. We focus on this type of preference model and in particular on value functions evaluating sets of alternatives rather than single alternatives. These value functions can be used, for example, to decide whether one set of alternatives is preferred to another, or to quantify the worst-case loss, in utility units, of recommending a set of alternatives. We define the concept of setwise minimal equivalent subset (SME) and algorithms for its computation. Briefly, the SME is the minimum-cardinality subset of an input set of alternatives whose value function is equivalent to that of the full set. We generalise standard preference relations used to compare single alternatives so that they can compare sets of alternatives. We provide computational procedures to compute the SME and evaluate preference relations, with particular focus on linear value functions. We make extensive use of the Minimax Regret criterion, which is a common method to evaluate alternatives for potential questions and recommendations under uncertain value functions. It prescribes an outcome that minimises the worst-case loss with respect to all possible parameterisations of the value function. In particular, we focus on its setwise generalisation, the Setwise Minimax Regret (SMR), which is the worst-case loss of recommending a set of alternatives. We provide a novel and efficient procedure for the computation of the SMR when assuming a linear value function. We also present a novel incremental preference elicitation framework for a supplier selection process, where a realistic medium-size factory inspires the constraints and objectives of the underlying optimization problem. This preference elicitation framework applies to generic multiattribute combinatorial problems based on a linear preference model, and it is particularly useful when the computation of the set of Pareto optimal alternatives is practically infeasible.
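    To make the setwise notion concrete: for a linear value function v_w(x) = w · x, the setwise max regret of a recommendation set A is the largest gap, over all feasible weight vectors w, between the best achievable value among all alternatives and the best value attainable inside A; the SMR recommendation is the set minimising that gap. The sketch below approximates this by sampling weight vectors, whereas the thesis computes it exactly with an efficient optimisation procedure; the random alternatives, the Dirichlet-sampled weight set, and the brute-force search over pairs are assumptions made for the example.

```python
# Approximate setwise minimax regret (SMR) for a linear value function,
# using sampled weight vectors (illustrative; the thesis computes SMR exactly).
import itertools
import numpy as np

rng = np.random.default_rng(0)

alternatives = rng.random((8, 3))              # 8 alternatives, 3 criteria
W = rng.dirichlet(np.ones(3), size=500)        # sampled feasible weight vectors

def setwise_max_regret(subset, X, W):
    """Worst-case utility loss of recommending `subset` instead of the best in X."""
    best_overall = (W @ X.T).max(axis=1)       # best achievable value per weight vector
    best_in_set = (W @ X[list(subset)].T).max(axis=1)
    return float((best_overall - best_in_set).max())

# SMR recommendation of size 2: pick the pair with the lowest setwise max regret.
pairs = itertools.combinations(range(len(alternatives)), 2)
best_pair = min(pairs, key=lambda s: setwise_max_regret(s, alternatives, W))
print(best_pair, setwise_max_regret(best_pair, alternatives, W))
```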