    INSPIRAL: investigating portals for information resources and learning. Final project report

    INSPIRAL's aims were to identify and analyse, from the perspective of the UK HE learner, the non-technical, institutional and end-user issues involved in linking VLEs and digital libraries, and to make recommendations for JISC strategic planning and investment. INSPIRAL's objectives were:
    - To identify key stakeholders with regard to the linkage of VLEs, MLEs and digital libraries
    - To identify key stakeholder forum points and dissemination routes
    - To identify the relevant issues, according to the stakeholders and to previous research, pertaining to the interaction (both possible and potential) between VLEs/MLEs and digital libraries
    - To critically analyse the identified issues, based on stakeholder experience and practice, the output of previous and current projects, and prior and current research
    - To report back to JISC and to the stakeholder communities, with results situated firmly within the context of JISC's strategic aims and objectives

    Whole-Chain Recommendations

    With the recent prevalence of Reinforcement Learning (RL), there has been tremendous interest in developing RL-based recommender systems. In practical recommendation sessions, users sequentially access multiple scenarios, such as the entrance pages and the item detail pages, and each scenario has its own characteristics. However, the majority of existing RL-based recommender systems either optimize one strategy for all scenarios or optimize each strategy separately, which can lead to sub-optimal overall performance. In this paper, we study the recommendation problem with multiple (consecutive) scenarios, i.e., whole-chain recommendations. We propose a multi-agent RL-based approach (DeepChain), which can capture the sequential correlation among different scenarios and jointly optimize multiple recommendation strategies. To be specific, all recommender agents (RAs) share the same memory of users' historical behaviors, and they work collaboratively to maximize the overall reward of a session. Note that jointly optimizing multiple recommendation strategies faces two challenges in existing model-free RL models: (i) it requires huge amounts of user behavior data, and (ii) the distribution of rewards (users' feedback) is extremely imbalanced. In this paper, we introduce model-based RL techniques to reduce the training data requirement and execute more accurate strategy updates. The experimental results based on a real e-commerce platform demonstrate the effectiveness of the proposed framework.
    Comment: 29th ACM International Conference on Information and Knowledge Management
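    The setting the abstract describes — one agent per scenario, all agents reading and writing a shared memory of user behaviors, with the session reward summed across scenarios — can be sketched in miniature. This is an illustrative toy, not DeepChain itself: the paper's model-based multi-agent method is not detailed in the abstract, so the sketch below substitutes simple tabular value updates, and the scenario names, item sets, and `user_pref` signal are all hypothetical.

    ```python
    import random
    from dataclasses import dataclass, field

    @dataclass
    class SharedMemory:
        """Users' historical behaviors, shared by all recommender agents."""
        history: list = field(default_factory=list)

        def record(self, scenario, item, reward):
            self.history.append((scenario, item, reward))

    class RecommenderAgent:
        """One agent per scenario (e.g. entrance page, item detail page)."""
        def __init__(self, scenario, items):
            self.scenario = scenario
            self.items = items
            self.q = {item: 0.0 for item in items}  # per-item value estimates

        def act(self, memory, epsilon=0.1):
            # Epsilon-greedy over value estimates; in a full implementation
            # the shared memory would condition a learned policy.
            if random.random() < epsilon:
                return random.choice(self.items)
            return max(self.items, key=self.q.get)

        def update(self, item, reward, lr=0.1):
            # Move the estimate toward the observed reward.
            self.q[item] += lr * (reward - self.q[item])

    def run_session(agents, memory, user_pref):
        """A user traverses the scenarios in order; agents share one memory,
        and the session reward is the sum of per-scenario rewards."""
        total = 0.0
        for agent in agents:
            item = agent.act(memory)
            # Hypothetical feedback: reward 1 if the user's preferred
            # item for this scenario was recommended, else 0.
            reward = 1.0 if item == user_pref[agent.scenario] else 0.0
            agent.update(item, reward)
            memory.record(agent.scenario, item, reward)
            total += reward
        return total
    ```

    Running many sessions drives each agent's value estimate for the preferred item above the others, while the shared memory accumulates the whole chain of (scenario, item, feedback) events that a joint optimizer could exploit.
    
    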