
    Evaluating Recommender Systems for Technology Enhanced Learning: A Quantitative Survey

    The increasing number of publications on recommender systems for Technology Enhanced Learning (TEL) evidences a growing interest in their development and deployment. To support learning, recommender systems for TEL must consider specific requirements that differ from those of recommender systems in other domains, such as e-commerce. These particular requirements motivate the incorporation of specific goals and methods into the evaluation process for TEL recommender systems. In this article, the diverse evaluation methods that have been applied to TEL recommender systems are investigated. A total of 235 articles are selected from major conferences, workshops, journals, and books where relevant work was published between 2000 and 2014. These articles are quantitatively analysed and classified according to the following criteria: type of evaluation methodology, subject of evaluation, and effects measured by the evaluation. Results from the survey suggest a growing awareness in the research community of the need for more elaborate evaluations. At the same time, there is still substantial potential for improvement. This survey highlights trends and discusses strengths and shortcomings of the evaluation of TEL recommender systems to date, aiming to stimulate researchers to consider novel evaluation approaches.
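    The kind of quantitative classification the survey describes can be sketched as a simple tally over tagged articles. The records and category names below are hypothetical, purely to illustrate counting articles along the three criteria (methodology, subject, measured effect):

```python
from collections import Counter

# Hypothetical records: each surveyed article tagged along the three
# classification criteria used in the survey.
articles = [
    {"methodology": "user study", "subject": "algorithm", "effect": "accuracy"},
    {"methodology": "offline", "subject": "algorithm", "effect": "accuracy"},
    {"methodology": "offline", "subject": "system", "effect": "satisfaction"},
]

# Tally articles per evaluation methodology, as a quantitative survey might.
by_methodology = Counter(a["methodology"] for a in articles)
print(by_methodology)  # Counter({'offline': 2, 'user study': 1})
```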


    Share and reuse of context metadata resulting from interactions between users and heterogeneous web-based learning environments

    Interest in the observation, instrumentation, and evaluation of online educational systems has grown considerably within the Technology Enhanced Learning community in recent years. The conception and development of Adaptive Web-based Learning Environments (AdWLE), which facilitate the process of re-engineering, help understand users' behaviour, or support the creation of Intelligent Tutoring Systems, represent a major concern today. These systems drive their adaptation process on the basis of detailed information reflecting the context in which students evolve while learning: consulted resources, mouse clicks, chat messages, forum discussions, visited URLs, quiz selections, and so on.
The work presented in this document is intended to overcome some shortcomings of current systems by providing a privacy-enabled framework dedicated to the collection, sharing, and reuse of context represented at two abstraction levels: raw context (resulting from direct interactions between users and applications) and inferred context (calculated on the basis of raw context). The framework is based on an open standard dedicated to system, network, and application management, where the context specific to heterogeneous tools is represented as a unified and extensible structure and stored in a central repository. To facilitate access to this repository, we introduced a middleware layer composed of a set of tools. Some of them allow users and applications to define, collect, share, and search for the context data they are interested in, while others are dedicated to the design, calculation, and delivery of inferred context. To validate our approach, an implementation of the suggested framework manages context data provided by three systems: two Moodle servers (one running at the Paul Sabatier University of Toulouse, and the other hosting the CONTINT project funded by the French National Research Agency) and a local instantiation of the Ariadne Finder. Based on the collected context, relevant indicators have been calculated for each of these environments. Furthermore, two applications that reuse the collected context have been developed on top of the framework: a personalized system for recommending learning objects to students, and a visualization application that uses multi-touch technologies to facilitate navigation among collected context entities.
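    The two abstraction levels described above can be sketched with a minimal data model. This is an illustrative assumption, not the framework's actual API: raw entries record direct user/application interactions, and inferred context is computed on demand from them.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical unified context structure: raw entries come from direct
# user/application interactions; inferred context is derived from them.
@dataclass
class ContextEntry:
    source: str      # e.g. "moodle", "ariadne-finder"
    user: str
    event: str       # e.g. "resource_view", "forum_post"

@dataclass
class ContextRepository:
    raw: list = field(default_factory=list)

    def collect(self, entry: ContextEntry) -> None:
        self.raw.append(entry)

    def infer(self, indicator: Callable[[list], int]) -> int:
        # Inferred context is calculated from the raw context entries.
        return indicator(self.raw)

repo = ContextRepository()
repo.collect(ContextEntry("moodle", "alice", "resource_view"))
repo.collect(ContextEntry("moodle", "alice", "forum_post"))
repo.collect(ContextEntry("ariadne-finder", "bob", "search"))

# Example indicator: number of events for one user (a simple inferred value).
views = repo.infer(lambda raw: sum(1 for e in raw if e.user == "alice"))
print(views)  # 2
```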

    Recommender Systems for Online and Mobile Social Networks: A survey

    Recommender Systems (RS) are currently a fundamental tool in online services, especially with the advent of Online Social Networks (OSN). In this setting, users generate huge amounts of content and can quickly be overloaded by useless information. At the same time, social media represent an important source of information for characterizing content and users' interests. RS can exploit this information to further personalize suggestions and improve the recommendation process. In this paper we present a survey of Recommender Systems designed and implemented for Online and Mobile Social Networks, highlighting how the use of social context information improves the recommendation task, and how standard algorithms must be enhanced and optimized to run in fully distributed environments such as opportunistic networks. We describe the advantages and drawbacks of these systems in terms of algorithms, target domains, evaluation metrics, and performance. Finally, we present some open research challenges in this area.
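    One common way social context is folded into recommendation, of the kind this survey covers, is to blend friends' opinions with a global baseline. The function, names, and weight below are hypothetical, not taken from any surveyed system:

```python
# Illustrative sketch: mix the average rating from a user's friends with
# the global average for an item. `alpha` (hypothetical) controls how much
# the social signal dominates.
def social_score(item, target_user, ratings, friends, alpha=0.7):
    """Weighted mix of friends' ratings and the global average rating."""
    all_r = [r for (u, i), r in ratings.items() if i == item]
    friend_r = [r for (u, i), r in ratings.items()
                if i == item and u in friends.get(target_user, set())]
    global_avg = sum(all_r) / len(all_r) if all_r else 0.0
    friend_avg = sum(friend_r) / len(friend_r) if friend_r else global_avg
    return alpha * friend_avg + (1 - alpha) * global_avg

ratings = {("bob", "movie1"): 5.0, ("carol", "movie1"): 3.0, ("dan", "movie1"): 1.0}
friends = {"alice": {"bob", "carol"}}
print(social_score("movie1", "alice", ratings, friends))  # 3.7
```

    With `alpha=0.7` the score leans towards the friends' average (4.0) rather than the global average (3.0), which is the personalization effect the surveyed systems exploit.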

    DRIVER Technology Watch Report

    This report is part of the Discovery Workpackage (WP4) and is the third of four deliverables. Its objective is to give an overview of the latest technical developments in the world of digital repositories, digital libraries, and beyond, in order to serve as theoretical and practical input for the technical DRIVER developments, especially those focused on enhanced publications. The report consists of two main parts: one focuses on interoperability standards for enhanced publications; the other consists of three subchapters that give a landscape picture of current and emerging technologies and communities crucial to DRIVER. These three subchapters cover the GRID, CRIS, and LTP communities and technologies. Every chapter contains a theoretical explanation, followed by case studies and the outcomes and opportunities for DRIVER in this field.

    Predictive statistical user models under the collaborative approach

    User models and recommender systems, given their similarity, can be considered the same thing except for the use we make of them. Both have their roots in multiple disciplines, such as information retrieval and machine learning. Their impact has grown rapidly with the importance of data in systems and applications. Most big companies employ one or the other for different reasons, such as attracting more customers, boosting sales, or increasing revenue. Well-known companies like Amazon, EBay, and Google use such models to improve their businesses. In fact, as data becomes more and more important for companies, universities, and people, user models are crucial for making decisions over large amounts of data. Although user models can provide accurate predictions on large populations, their application is not restricted to predictions; it can be extended to the selection of dialogue strategies or the detection of communities within complex domains. After a deep review of the existing literature, it was found that there is a lack of statistical user models based on experience; moreover, the existing models in the area are content-based models that suffer from major problems such as scalability, cold start, or the new-user problem. Furthermore, researchers in the area of user modelling usually develop their own models and then perform ad-hoc evaluations that are not replicable and therefore not comparable. The lack of a complete evaluation framework makes it very difficult to compare results across models and domains. There are two main approaches to building a user model or recommender system: the content-based approach, where predictions are based on the same user's past behaviour, and the collaborative approach, where predictions rely on like-minded people. Both approaches have advantages but also downsides that have to be considered before building a model.
The main goal of this thesis is to develop a hybrid user model that takes the strengths of both approaches and mitigates the downsides by combining both methods. The proposed hybrid model is based on an R-Tree structure. The selection of this structure is backed by the fact that the rectangle tree is specifically designed to effectively store and manipulate multidimensional data. This data structure, introduced by Guttman in 1984, is a height-balanced tree that only requires visiting a few nodes to perform a search. As a result, it can manage large populations of data efficiently, as only a few nodes are visited during inference. The R-Tree has two types of node: leaf nodes, which contain the whole universe of users, and non-leaf nodes, which are somewhat redundant and contain summaries of their child nodes. Throughout this thesis two statistical user models based on experience are proposed. The first, a knowledge-based user model (KLUM), is a classical approach that summarizes and removes data in order to keep performance within reasonable margins. The second, an R-Tree user model (RTUM), is an innovative model based on an R-Tree structure. This new model solves not only the problem of removing data but also the scalability problem, which turns out to be one of the major problems in the area of user modelling. Both models have been developed and tested with equivalent formulations to make comparisons relevant. Both can create their own knowledge base from scratch, but they can also be fed with expert knowledge, thus alleviating another major problem in the area of user modelling: the start-up problem.
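    The R-Tree property described above, that non-leaf summaries let a search visit only a few branches, can be sketched as follows. This is a minimal illustrative toy (boxes as ranges, users as strings), not the thesis's RTUM implementation:

```python
# Minimal sketch of the R-Tree idea: non-leaf nodes hold summaries (here,
# bounding boxes) of their children, so a search only descends into
# branches whose box covers the query point.
class Node:
    def __init__(self, box, children=None, users=None):
        self.box = box                   # ((xmin, xmax), (ymin, ymax))
        self.children = children or []   # non-leaf: child nodes
        self.users = users or []         # leaf: actual user profiles

def contains(box, point):
    return all(lo <= p <= hi for (lo, hi), p in zip(box, point))

def search(node, point):
    """Return users stored in leaves whose bounding box covers the point."""
    if not contains(node.box, point):
        return []                        # prune this whole branch
    if not node.children:                # leaf node: return its users
        return list(node.users)
    found = []
    for child in node.children:          # only matching branches are visited
        found.extend(search(child, point))
    return found

leaf_a = Node(((0, 5), (0, 5)), users=["u1", "u2"])
leaf_b = Node(((5, 10), (0, 5)), users=["u3"])
root = Node(((0, 10), (0, 5)), children=[leaf_a, leaf_b])
print(search(root, (2, 3)))  # ['u1', 'u2']
```

    Because whole subtrees are pruned whenever a bounding box misses the query, only a few nodes are visited, which is what makes the structure attractive for large user populations.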
    In addition, a refinement of the RTUM user model is proposed: while RTUM performs node partitions based on the centroids of the users in a node, the refinement implements a new partition based on privileged features. Hence, the new approach takes advantage of the most discriminatory features of the domain to perform the partition. This approach provides not only accurate inferences but also an excellent clustering that can be useful in many different scenarios. For instance, this clustering can be employed in the area of social networks to detect communities, a tough task that has been one of the goals of many researchers during the last few years. This thesis also provides a complete evaluation of the models with a great diversity of parameterizations and domains. The models are tested in four different domains, and the evaluation shows that the RTUM user model provides a massive gain over classical user models such as KLUM. During the evaluation, RTUM reached success rates of 85%, while the analogous KLUM could only reach 65%, leaving a 20-point gain for the proposed model. The evaluation not only compares models and success rates but also provides a broad analysis of how every parameter of the models impacts performance, plus a complete study of database sizes and inference times for the models. The main conclusion is that, after a complete evaluation with a wide diversity of parameters and domains, RTUM outperforms KLUM in every scenario tested. As previously mentioned, the literature review also revealed a lack of evaluation frameworks for user modelling. This thesis therefore provides a complete evaluation framework for user modelling, filling a gap in the literature and making evaluations replicable and therefore comparable.
Over the years, researchers and developers have found it difficult to compare evaluations and measure the quality of their models in different domains due to the lack of an evaluation standard. The evaluation framework presented in this thesis covers data samples, including training and test sets, plus different sets of experiments, alongside a statistical analysis of the domain, confidence intervals, and confidence levels to guarantee that each experiment is statistically significant. The evaluation framework can be downloaded and then used to complete evaluations and cross-validate results across different models. This thesis would not have been possible without the financial support of the research projects Cadooh (TSI-020302-2011-21) and Thuban (TIN2008-02711), which funded part of this research.
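    One statistical check of the kind such an evaluation framework relies on is a confidence interval around a measured success rate. The sketch below assumes a normal-approximation (Wald) interval; the function name and the 85% figure used in the example are illustrative, not the framework's code:

```python
import math

# 95% Wald confidence interval for a success rate: a hypothetical sketch of
# how an evaluation framework can check that a reported rate is significant.
def success_rate_ci(successes, trials, z=1.96):
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

# e.g. an 85% success rate measured over 1000 trials
lo, hi = success_rate_ci(850, 1000)
print(f"{lo:.3f} .. {hi:.3f}")
```

    With enough trials the interval is narrow, so an 85% vs 65% gap like the one reported above would not be attributable to sampling noise.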

    Adaptive hypertext and hypermedia : workshop : proceedings, 3rd, Sonthofen, Germany, July 14, 2001 and Aarhus, Denmark, August 15, 2001

    This paper presents two empirical usability studies based on techniques from Human-Computer Interaction (HCI) and software engineering, which were used to elicit requirements for the design of a hypertext generation system. Here we discuss the findings of these studies, which were used to motivate the choice of adaptivity techniques. The results showed dependencies between the different ways of adapting the explanation content and the document length and formatting. The system's architecture therefore had to be modified to cope with this requirement. In addition, the system had to be made adaptable, as well as adaptive, in order to satisfy the elicited user preferences.



    Analyzing repetitiveness in big code to support software maintenance and evolution

    Software systems inevitably contain a large amount of repeated artifacts at different levels of abstraction, from ideas, requirements, designs, and algorithms to implementations. This dissertation focuses on analyzing software repetitiveness at the implementation level and leveraging the derived knowledge to ease tasks in software maintenance and evolution such as program comprehension, API use, change understanding, API adaptation, and bug fixing. The guiding philosophy of this work is that, in a large corpus, code that conforms to specifications appears more frequently than code that does not; similar code is changed similarly; and similar code can have similar bugs that can be fixed similarly. We have developed different representations for software artifacts at the source-code level, along with the corresponding algorithms for measuring code similarity and mining repeated code. Our mining techniques are based on the key insight that code conforming to programming patterns and specifications appears more frequently than code that does not; thus, correct patterns and specifications can be mined from a large code corpus. We have also built program-differencing techniques for analyzing changes in software evolution, based on the insight that similar code is likely changed in similar ways and likely has similar bugs that can be fixed similarly. Therefore, learning changes and fixes from the past can help automatically detect and suggest changes and fixes to repeated code in software development. Our empirical evaluation shows that our techniques can accurately and efficiently detect repeated code, mine useful programming patterns and API specifications, and recommend changes. They can also detect bugs, suggest fixes, and provide actionable insights that ease maintenance tasks. Specifically, our code clone detection tool detects more meaningful clones than other tools, and our mining tools recover high-quality programming patterns and API preconditions.
The mined results have been used to successfully detect many bugs violating patterns and specifications in mature open-source systems. The mined API preconditions help API specification writers identify missing preconditions in already-specified APIs and start building preconditions for not-yet-specified ones. The tools are scalable, analyzing large systems in reasonable time. Our study of repeated changes gives useful insights for program auto-repair tools. Our automated change-suggestion approach achieves a top-1 accuracy of 45%-51%, a relative improvement of more than 200% over the base approach. For a special type of change suggestion, API adaptation, our tool is highly correct and useful.
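    A minimal illustration of the similarity measurement that clone detection builds on is token-set overlap between code fragments. This toy Jaccard measure is an assumption for illustration, not the dissertation's actual clone-detection algorithm:

```python
import re

# Tokenize a code fragment into identifiers, numbers, and punctuation,
# then compare two fragments by Jaccard similarity of their token sets.
def tokens(code):
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code))

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

# Two fragments that differ only in identifier names score highly,
# which is exactly the repetitiveness a clone detector looks for.
frag1 = "for i in range(n): total += prices[i]"
frag2 = "for j in range(m): total += costs[j]"
sim = jaccard(frag1, frag2)
print(round(sim, 2))
```

    Real clone detectors use far richer representations (token sequences, ASTs, program dependence graphs), but the principle of scoring structural overlap between fragments is the same.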