
    Towards a Living Lab to support evidence-based educational research and innovation

    Living Labs represent a promising approach to bridging the gap between evidence-based educational research and sustained innovation. This position paper presents our initial work on educational Living Labs. It describes a model of the research and innovation processes that we aim to support, and presents the preliminary results of a pilot study in which a Living Lab supported a researcher and two teachers in introducing Learning Analytics in their classrooms.

    Improving the expressiveness of black-box models for predicting student performance

    Early prediction systems for student performance can be very useful in guiding student learning. For a prediction model to serve as an effective aid for learning, it must provide tools to adequately interpret progress, detect trends and behaviour patterns, and identify the causes of learning problems. Both white-box and black-box techniques have been described in the literature for implementing prediction models. White-box techniques require a priori models to explore, which makes them easy to interpret but difficult to generalize and unable to detect unexpected relationships in the data. Black-box techniques are easier to generalize and suitable for discovering unsuspected relationships, but they are cryptic and difficult for most teachers to interpret. In this paper, a black-box technique is proposed that takes advantage of the power and versatility of these methods while making decisions about the input data and the design of the classifier that yield a rich output data set. A set of graphical tools is also proposed to exploit the output information and provide meaningful guidance to teachers and students. Based on our experience, a set of tips on how to design a prediction system and represent the output information is also provided.
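The pairing the abstract describes — an opaque classifier combined with tools that make its outputs interpretable — can be sketched in a few lines. The features, data, and model below are hypothetical (the paper's own classifier design is not reproduced here); the example only illustrates how a generic technique such as permutation importance can surface which inputs drive a black-box prediction.

```python
import random

# Hypothetical per-student activity features: (logins, quiz_avg, forum_posts) -> pass/fail
train = [
    ((30, 0.90, 12), 1), ((25, 0.80, 9), 1), ((28, 0.85, 11), 1),
    ((5, 0.30, 1), 0), ((8, 0.40, 2), 0), ((6, 0.35, 0), 0),
]
FEATURES = ["logins", "quiz_avg", "forum_posts"]

def predict(x, data):
    """1-nearest-neighbour: a simple stand-in for an opaque black-box model."""
    nearest = min(data, key=lambda row: sum((a - b) ** 2 for a, b in zip(x, row[0])))
    return nearest[1]

def accuracy(data, permute=None):
    """Leave-one-out accuracy; optionally scramble one feature column."""
    rng = random.Random(0)
    correct = 0
    for i, (x, y) in enumerate(data):
        x = list(x)
        if permute is not None:
            x[permute] = rng.choice(data)[0][permute]  # break feature-label link
        if predict(x, data[:i] + data[i + 1:]) == y:
            correct += 1
    return correct / len(data)

base = accuracy(train)
for i, name in enumerate(FEATURES):
    # Importance = accuracy drop when this feature is decoupled from the label
    print(f"{name}: importance {base - accuracy(train, permute=i):+.2f}")
```

The same loop works unchanged around any predictor, which is the point: interpretability is bolted on without constraining the model class.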

    Evaluating Recommender Systems for Technology Enhanced Learning: A Quantitative Survey

    The increasing number of publications on recommender systems for Technology Enhanced Learning (TEL) evidences a growing interest in their development and deployment. In order to support learning, recommender systems for TEL need to consider specific requirements, which differ from those for recommender systems in other domains such as e-commerce. Consequently, these particular requirements motivate the incorporation of specific goals and methods into the evaluation process for TEL recommender systems. In this article, the diverse evaluation methods that have been applied to evaluate TEL recommender systems are investigated. A total of 235 articles are selected from major conferences, workshops, journals, and books in which relevant work was published between 2000 and 2014. These articles are quantitatively analysed and classified according to the following criteria: type of evaluation methodology, subject of evaluation, and effects measured by the evaluation. Results from the survey suggest that there is a growing awareness in the research community of the necessity for more elaborate evaluations. At the same time, there is still substantial potential for further improvement. This survey highlights trends and discusses strengths and shortcomings of the evaluation of TEL recommender systems thus far, thereby aiming to stimulate researchers to contemplate novel evaluation approaches.

    Which User Interactions Predict Levels of Expertise in Work-integrated Learning?

    Predicting knowledge levels from users' implicit interactions with an adaptive system is a difficult task, particularly in learning systems that are used in the context of daily work tasks. We collected the interactions of six people working with the adaptive work-integrated learning system APOSDLE over a period of two months to find out whether naturally occurring interactions with the system can be used to predict their level of expertise. One set of interactions is based on the tasks they performed, the other on a number of additional Knowledge Indicating Events (KIE). We find that adding KIE significantly improves the prediction compared to using tasks only. Both approaches are superior to a model that uses only the frequencies of events.
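The two feature sets the abstract contrasts — task-based interactions alone versus tasks enriched with Knowledge Indicating Events — can be illustrated with a toy log. The event names below are invented; APOSDLE's actual event taxonomy and the classifier built on top of these features are not reproduced here.

```python
from collections import Counter

# Hypothetical interaction log for one user: (event_kind, detail)
log = [
    ("task", "write-spec"), ("task", "write-spec"), ("task", "review"),
    ("kie", "answered-question"), ("kie", "edited-wiki"),
    ("kie", "answered-question"),
]

def features(log, include_kie):
    """Frequency features over task events, optionally enriched with KIE counts."""
    kinds = {"task"} | ({"kie"} if include_kie else set())
    return dict(Counter(detail for kind, detail in log if kind in kinds))

task_only = features(log, include_kie=False)
enriched = features(log, include_kie=True)
print(task_only)   # {'write-spec': 2, 'review': 1}
print(enriched)    # additionally counts 'answered-question' and 'edited-wiki'
```

The enriched vector simply has more informative dimensions for the downstream expertise model, which is the intuition behind the reported improvement over tasks-only and frequency-only baselines.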

    Personalized Approaches to Supporting the Learning Needs of Lifelong Professional Learners

    Advanced learning technology research has begun to take on a complex challenge: supporting lifelong learning. Professional learning is an essential subset of lifelong learning that is more tractable than the full lifelong learning challenge. Professionals do not always have access to professional teachers to provide input to the problems they encounter, so they rely on their peers in an online learning community (OLC) to help meet their learning needs. Supporting professional learners within an OLC is a difficult problem as the learning needs of each learner continuously evolve, often in different ways from other learners. Hence, there is a need to provide personalized support to learners adapted to their individual learning needs. This thesis explores personalized approaches for detecting the unperceived learning needs and meeting the expressed learning needs of learners in an OLC. The experimental test bed for this research is Stack Overflow (SO), an OLC used by software professionals. To date, seven experiments have been carried out mining SO peer-peer interaction data. Knowing that question-answerers play a huge role in meeting the learning needs of the question-askers, the first experiment aimed to detect the learning needs of the answerers. Results from experiment 1 show that reputable answerers themselves demonstrate unperceived learning needs as revealed by a decline in quality answers in SO. Of course, a decline in quality answers could impact the help-seeking experience of question-askers; hence experiment 2 sought to understand the effects of the help-seeking experience of question-askers on their enthusiasm to continuously participate within the OLC. As expected, negative help-seeking experiences of question-askers had a large impact on their propensity to seek further help within the OLC. 
To improve the help-seeking experience of question-askers, it is important to proactively detect the learning needs of the question-answerers before they provide poor quality answers. Thus, in experiment 3 the goal was to predict whether a question-answerer would give a poor answer to a question based on their past peer-peer interactions. Under various assumptions, accuracies ranging from 84.57% to 94.54% were achieved. Next, experiment 4 attempted to detect the unperceived learning needs of question-askers even before they are aware of such needs. Using information about a learner's interactions over a 5-month period, a prediction was made as to what they would be asking about during the next month, achieving recall and precision values of 0.93 and 0.81, respectively. Knowing the learning needs of question-askers early creates an opportunity to predict prospective answerers who could provide timely and quality answers to their questions. The goal of experiment 5 was thus to predict the actual answerers for questions based only on information known at the time the question was asked. The success rate was at best 63.15%, which would be only marginally useful to inform a real-life peer recommender system. Thus, experiment 6 explored new measures in predicting the answerers, boosting the success rate to 89.64%. Of course, a peer recommender system would be deemed especially useful if it could provide prompt interventions, especially to get answers to questions that would otherwise not be answered quickly. To this end, experiment 7 attempted to predict the question-askers whose questions would be answered late or even remain unanswered, and a success rate of 68.4% was achieved. Results from these experiments suggest that modelling the activities of learners in an OLC is key to providing support to meet their learning needs.
Perhaps the most important lesson learned in this research is that lightweight approaches can be developed to help meet the evolving learning needs of professionals, even as knowledge changes within a profession. Metrics based on the experiments above are exactly such lightweight methodologies and could form the basis for useful tools to support professional learners.
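The recall and precision figures the abstract reports (0.93 and 0.81 in experiment 4) are standard set-overlap metrics over predicted versus actually observed topics. The thesis' features and models are not reproduced here; the tags below are hypothetical, and the sketch only shows how the two numbers are computed.

```python
def precision_recall(predicted, actual):
    """Precision and recall of a predicted topic set against the observed set."""
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)  # topics both predicted and actually asked about
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical Stack Overflow tags for one learner's next month
predicted = ["python", "pandas", "regex", "flask"]
actual = ["python", "pandas", "regex", "sqlalchemy", "flask"]
p, r = precision_recall(predicted, actual)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=1.00 recall=0.80
```

High recall with somewhat lower precision, as in the thesis, means most upcoming needs are anticipated at the cost of some predicted topics the learner never asks about.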

    Personalized Recommender Systems for Resource-based Learning - Hybrid Graph-based Recommender Systems for Folksonomies

    As the Web increasingly pervades our everyday lives, we are faced with an overload of information. We often learn on the job, without a teacher and without didactically prepared learning resources. We learn not only on our own but also collaboratively on social platforms, where we discuss issues, exchange information and share knowledge with others. We actively learn with resources we find on the Web, such as videos, blogs, forums or wikis. This form of self-regulated learning is called resource-based learning. An ongoing challenge in technology enhanced learning (TEL), and in resource-based learning in particular, is supporting learners in finding learning resources relevant to their current needs and learning goals. In social tagging systems, users collaboratively attach keywords called tags to resources, thereby forming a network-like structure called a folksonomy. Additional semantic information, gained for example from activity hierarchies or semantic tags, forms an extended folksonomy and provides valuable information about the context of the resources the learner has tagged, the related activities the resources could be relevant for, and the learning task the learner is currently working on. This additional semantic information can be exploited by recommender systems to generate personalized recommendations of learning resources. Thus, the first research goal of this thesis is to develop and evaluate personalized recommender algorithms for a resource-based learning scenario. To this end, the resource-based learning application scenario is analysed, taking an existing learning platform as a concrete example, in order to determine which additional semantic information could be exploited for the recommendation of learning resources. Several new hybrid graph-based recommender approaches are implemented and evaluated.
Additional semantic information gained from activities, activity hierarchies, semantic tag types, the semantic relatedness between tags and the context-specific information found in a folksonomy is thereby exploited. The proposed recommender algorithms are evaluated in offline experiments on different datasets representing diverse evaluation scenarios. The evaluation results show that incorporating additional semantic information is advantageous for providing relevant recommendations. The second goal of this thesis is to investigate alternative evaluation approaches for recommender algorithms for resource-based learning. Offline experiments are fast to conduct and easy to repeat; however, they face the so-called incompleteness problem, as datasets are limited to the historical interactions of the users. Thus, newly recommended resources in which the user had not shown an interest in the past cannot be evaluated. The recommendation of novel and diverse learning resources is, however, a requirement for TEL and needs to be evaluated. User studies complement offline experiments, as the users themselves judge the relevance or novelty of the recommendations. But user studies are expensive to conduct, and it is often difficult to recruit a large number of participants. Therefore, a gap exists between the fast, easy-to-repeat offline experiments and the more expensive user studies. Crowdsourcing is an alternative, as it offers the advantages of offline experiments whilst still retaining the advantages of a user-centric evaluation. In this thesis, a crowdsourcing evaluation approach for recommender algorithms for TEL is proposed, and a repeated evaluation of one of the proposed recommender algorithms is conducted as a proof of concept. The results of both runs of the experiment show that crowdsourcing can be used as an alternative approach to evaluate graph-based recommender algorithms for TEL.
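As a rough illustration of the folksonomy graph the abstract builds on — not the thesis' hybrid algorithms, and far simpler than FolkRank-style weight spreading — unseen resources can be scored by the tag links they share with a target user. The users, tags and resources below are entirely made up.

```python
from collections import defaultdict

# (user, tag, resource) assignments forming a small hypothetical folksonomy
assignments = [
    ("alice", "python", "r1"), ("alice", "testing", "r1"),
    ("alice", "python", "r2"),
    ("bob", "python", "r3"), ("bob", "web", "r4"),
    ("carol", "testing", "r3"), ("carol", "web", "r5"),
]

def recommend(user, assignments, top_n=3):
    """Rank resources the user has not tagged by shared-tag 'activation'."""
    user_tags = {t for u, t, r in assignments if u == user}
    seen = {r for u, t, r in assignments if u == user}
    scores = defaultdict(float)
    for u, t, r in assignments:
        if u != user and t in user_tags and r not in seen:
            scores[r] += 1.0  # one unit of activation per shared tag edge
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", assignments))  # ['r3'] — reached via both 'python' and 'testing'
```

The hybrid approaches evaluated in the thesis would additionally weight such edges with activity hierarchies, semantic tag types and tag relatedness rather than counting them uniformly.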