
    Simulating Light-Weight Personalised Recommender Systems in Learning Networks: A Case for Pedagogy-Oriented and Rating-Based Hybrid Recommendation Strategies

    Recommender systems for e-learning demand specific pedagogy-oriented and hybrid recommendation strategies. Current systems are often based on time-consuming, top-down information provisioning combined with intensive data-mining collaborative filtering approaches. However, such systems do not seem appropriate for Learning Networks, where distributed information can often not be identified beforehand. Providing sound way-finding support for lifelong learners in Learning Networks requires dedicated personalised recommender systems (PRS) that offer learners customised advice on which learning actions or programs to study next. Such systems should also be practically feasible and developed with minimal effort. Currently, such so-called light-weight PRS systems are scarcely available. This study shows that simulation studies can support the analysis and optimisation of PRS requirements prior to starting the costly process of their development and practical implementation (including testing and revision) during field experiments in real-life learning situations. The simulation study confirms that providing recommendations leads to more effective, more satisfying, and faster goal achievement. Furthermore, it reveals that a light-weight hybrid PRS system based on ratings is a good alternative to an ontology-based system, in particular for low-level goal achievement. Finally, rating-based light-weight hybrid PRS systems are found to enable more effective, more satisfying, and faster goal attainment than peer-based light-weight hybrid PRS systems (incorporating collaborative techniques without rating).
    Keywords: Recommendation Strategy; Simulation Study; Way-Finding; Collaborative Filtering; Rating
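
    The abstract contrasts a rating-based hybrid strategy with ontology-based and purely peer-based ones but does not give a concrete formula. The following minimal sketch shows one plausible (invented) way such a light-weight hybrid could combine a content signal with peer ratings; the weighting scheme, tag-overlap measure, and all data are illustrative assumptions, not the authors' method.

```python
# Hypothetical light-weight hybrid scoring: blend tag overlap with peer ratings.
# All names, weights, and data below are illustrative, not from the paper.

def hybrid_score(candidate, completed, ratings, tags, w_content=0.5, w_rating=0.5):
    """Score a candidate learning action for one learner."""
    # Content part: average Jaccard tag overlap with already-completed actions.
    overlaps = [
        len(tags[candidate] & tags[done]) / max(len(tags[candidate] | tags[done]), 1)
        for done in completed
    ]
    content = sum(overlaps) / len(overlaps) if overlaps else 0.0
    # Rating part: mean peer rating, rescaled from a 1-5 scale to [0, 1].
    rs = ratings.get(candidate, [])
    rating = (sum(rs) / len(rs) - 1) / 4 if rs else 0.0
    return w_content * content + w_rating * rating

tags = {"A": {"math", "stats"}, "B": {"stats", "python"}, "C": {"history"}}
ratings = {"B": [5, 4], "C": [2]}
best = max(["B", "C"], key=lambda c: hybrid_score(c, ["A"], ratings, tags))
print(best)  # expect "B": it overlaps with completed "A" and has better ratings
```

    A peer-based variant without rating would drop the rating term and rank purely by what similar peers studied, which is the comparison the study reports as less effective.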

    'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
    Comment: 14 pages, 3 figures, ACM Conference on Human Factors in Computing Systems (CHI'18), April 21--26, Montreal, Canada

    Web 2.0 technologies for learning: the current landscape – opportunities, challenges and tensions

    This is the first report from research commissioned by Becta into Web 2.0 technologies for learning at Key Stages 3 and 4. The report describes findings from an additional literature review of the then-current landscape concerning learner use of Web 2.0 technologies, and the implications for teachers, schools, local authorities and policy makers.

    'It's Reducing a Human Being to a Percentage': Perceptions of Procedural Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

    Meaningful XAI Based on User-Centric Design Methodology

    This report explores the concept of explainability in AI-based systems, distinguishing between "local" and "global" explanations. "Local" explanations refer to specific algorithmic outputs in their operational context, while "global" explanations encompass the system as a whole. The need to tailor explanations to users and tasks is emphasised, acknowledging that explanations are not universal solutions and can have unintended consequences. Two use cases illustrate the application of explainability techniques: an educational recommender system, and explainable AI for scientific discoveries. The report discusses the subjective nature of meaningfulness in explanations and proposes cognitive metrics for its evaluation. It concludes by providing recommendations, including the inclusion of "local" explainability guidelines in the EU AI proposal, the adoption of a user-centric design methodology, and the harmonisation of explainable AI requirements across different EU legislation and case law. Overall, this report delves into the framework and use cases surrounding explainability in AI-based systems, emphasising the need for "local" and "global" explanations, and ensuring they are tailored toward users of AI-based systems and their tasks.
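
    The local/global distinction drawn above can be made concrete with a toy example. In this sketch (not taken from the report), a linear scoring model is explained globally by its overall weights and locally by each feature's contribution to one specific prediction; the feature names and weights are invented for illustration.

```python
# Toy linear model: score = sum(weight[f] * value[f]).
# "Global" explanation: which features the system weights most overall.
# "Local" explanation: how each feature contributed to one concrete output.
# Feature names and weights below are illustrative assumptions.

weights = {"grades": 0.6, "engagement": 0.3, "quiz_attempts": 0.1}

def global_explanation():
    # System-level view: features ranked by their learned weight.
    return sorted(weights, key=weights.get, reverse=True)

def local_explanation(instance):
    # Output-level view: per-feature contribution to this one prediction.
    return {f: weights[f] * instance[f] for f in weights}

student = {"grades": 0.9, "engagement": 0.2, "quiz_attempts": 0.5}
print(global_explanation())        # ['grades', 'engagement', 'quiz_attempts']
print(local_explanation(student))  # per-feature contributions for this student
```

    The same split carries over to non-linear models, where local attributions must be approximated rather than read off the weights, which is one reason the report argues explanations need tailoring to the user and task.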

    Simulating light-weight Personalised Recommender Systems in Learning Networks: A case for Pedagogy-Oriented and Rating-based Hybrid Recommendation Strategies

    Nadolski, R. J., Van den Berg, B., Berlanga, A. J., Drachsler, H., Hummel, H. G. K., Koper, R., & Sloep, P. B. (2009). Simulating Light-Weight Personalised Recommender Systems in Learning Networks: A Case for Pedagogy-Oriented and Rating-Based Hybrid Recommendation Strategies. Journal of Artificial Societies and Social Simulation, 12(1), 4. <http://jasss.soc.surrey.ac.uk/12/1/4.html>
    Recommender systems for e-learning demand specific pedagogy-oriented and hybrid recommendation strategies. Current systems are often based on time-consuming, top-down information provisioning combined with intensive data-mining collaborative filtering approaches. However, such systems do not seem appropriate for Learning Networks, where distributed information can often not be identified beforehand. Sound way-finding for lifelong learners in Learning Networks requires dedicated personalised recommender systems (PRS), which should also be practically feasible with minimal effort. Currently, such light-weight PRS systems are scarcely available. This study shows that simulations can support defining PRS requirements prior to starting the costly process of development, implementation, testing, and revision, and before conducting field experiments with real learners. The study confirms that providing recommendations leads to more effective, more satisfying, and faster goal achievement. Furthermore, the simulation study reveals that a rating-based light-weight hybrid PRS system is a good alternative to ontology-based recommendations, in particular for low-level goal achievement. Finally, rating-based light-weight hybrid PRS systems are found to enable more effective, more satisfying, and faster goal attainment than peer-based light-weight hybrid PRS systems (incorporating collaborative techniques without rating).

    Effect of emotions and personalisation on cancer website reuse intentions

    The effect of emotions and personalisation on continuance use intentions in online health services is underexplored. Accordingly, we propose a research model for examining the impact of emotion- and personalisation-based factors on cancer website reuse intentions. We conducted a study using a real-world NGO cancer-support website, which was evaluated by 98 participants via an online questionnaire. Model relations were estimated using the PLS-SEM method. Our findings indicated that pre-use emotions did not significantly influence perceived personalisation. However, satisfaction with personalisation, as well as perceived usefulness mediated by satisfaction, increased reuse intentions. In addition, post-use positive emotions potentially influenced reuse intentions. Our paper therefore illustrates the applicability of theory regarding continuance use intentions to cancer-support websites and highlights the importance of personalisation for these purposes.
