
    Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation

    Multi-modal recommendation systems, which integrate diverse types of information, have gained widespread attention in recent years. However, compared to traditional collaborative filtering-based multi-modal recommendation systems, research on multi-modal sequential recommendation is still in its nascent stages. Unlike traditional sequential recommendation models that rely solely on item identifier (ID) information and focus on network structure design, multi-modal recommendation models need to emphasize item representation learning and the fusion of heterogeneous data sources. This paper investigates the impact of item representation learning on downstream recommendation tasks and examines the disparities in information fusion at different stages. Empirical experiments are conducted to demonstrate the need for a framework suited to the collaborative learning and fusion of diverse information. Based on this, we propose a new model-agnostic framework for multi-modal sequential recommendation tasks, called Online Distillation-enhanced Multi-modal Transformer (ODMT), to enhance feature interaction and mutual learning among multi-source inputs (ID, text, and image) while avoiding conflicts among different features during training, thereby improving recommendation accuracy. Specifically, we first introduce an ID-aware Multi-modal Transformer module in the item representation learning stage to facilitate information interaction among the different features. Second, we employ an online distillation training strategy in the prediction optimization stage so that the multi-source inputs learn from each other and prediction robustness improves. Experimental results on a video content recommendation dataset and three e-commerce recommendation datasets demonstrate the effectiveness of the two proposed modules, which yield an approximately 10% improvement in performance over baseline models.
    Comment: 11 pages, 7 figures
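    The online distillation stage described in this abstract lends itself to a short illustration. The following is a minimal sketch, not the ODMT authors' implementation, of how three modality-specific prediction heads (ID, text, image) could be trained with per-branch task losses plus a mutual-learning term that distills each branch toward their averaged prediction; all function and variable names are hypothetical.

```python
# Hypothetical sketch of an online-distillation objective over three
# modality branches (ID, text, image); not the paper's actual code.
import torch
import torch.nn.functional as F


def online_distillation_loss(id_logits, text_logits, image_logits,
                             targets, temperature=2.0, alpha=0.5):
    """Per-branch cross-entropy plus KL distillation toward the branch ensemble."""
    branch_logits = [id_logits, text_logits, image_logits]

    # Softened per-branch distributions and their (detached) ensemble average,
    # which acts as the shared "teacher" in online distillation.
    soft = [F.softmax(logits / temperature, dim=-1) for logits in branch_logits]
    ensemble = torch.stack(soft).mean(dim=0).detach()

    loss = 0.0
    for logits, probs in zip(branch_logits, soft):
        task = F.cross_entropy(logits, targets)          # recommendation task loss
        distill = F.kl_div(torch.log(probs + 1e-8),      # pull branch toward ensemble
                           ensemble, reduction="batchmean") * temperature ** 2
        loss = loss + task + alpha * distill
    return loss
```

    In a full model, each branch's logits would come from the corresponding head of the item representation module; here they are simply assumed to be tensors of shape (batch, num_items) with integer class targets.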

    Enhancing Word Representation Learning with Linguistic Knowledge

    Representation learning, the process whereby representations are modelled from data, has recently become a central part of Natural Language Processing (NLP). Among the most widely used learned representations are word embeddings trained on large corpora of unannotated text, where the learned embeddings are treated as general representations that can be used across multiple NLP tasks. Despite their empirical successes, word embeddings learned entirely from data can only capture patterns of language usage from the particular linguistic domain of the training data. Linguistic knowledge, which does not vary among linguistic domains, can potentially be used to address this limitation. The vast sources of linguistic knowledge that are readily available nowadays can help train more general word embeddings (i.e. less affected by distance between linguistic domains) by providing them with information such as semantic relations, syntactic structure, word morphology, etc. In this research, I investigate the different ways in which word embedding models capture and encode words’ semantic and contextual information. To this end, I propose two approaches to integrate linguistic knowledge into the statistical learning of word embeddings. The first approach is based on augmenting the training data for the well-known Skip-gram word embedding model, where synonym information is extracted from a lexical knowledge base and incorporated into the training data in the form of additional training examples. This data augmentation approach seeks to enforce synonym relations in the learned embeddings. The second approach exploits structural information in text by transforming every sentence in the data into its corresponding dependency parse tree and training an autoencoder to recover the original sentence. While learning a mapping from a dependency parse tree to its originating sentence, this novel Structure-to-Sequence (Struct2Seq) model produces word embeddings that contain information about a word’s structural context. Given that the combination of knowledge and statistical methods can often be unpredictable, a central focus of this thesis is on understanding the effects of incorporating linguistic knowledge into word representation learning. Through the use of intrinsic (geometric characteristics) and extrinsic (performance on downstream tasks) evaluation metrics, I aim to measure the specific influence that the injected knowledge can have on different aspects of the informational composition of word embeddings.
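    The first approach described above, augmenting Skip-gram training data with synonym pairs drawn from a lexical knowledge base, can be illustrated with a small sketch. This is a simplified assumption of how such augmentation might work rather than the thesis's actual procedure; the toy synonym lexicon and helper functions are hypothetical.

```python
# Hypothetical sketch of synonym-based data augmentation for Skip-gram
# training pairs; not the thesis's actual implementation.

# Toy synonym lexicon; in practice this would be extracted from a lexical
# knowledge base such as WordNet.
SYNONYMS = {
    "quick": {"fast", "rapid"},
    "happy": {"glad", "joyful"},
}


def skipgram_pairs(tokens, window=2):
    """Standard (target, context) pairs from a tokenized sentence."""
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield target, tokens[j]


def augmented_pairs(tokens, window=2):
    """Original Skip-gram pairs plus extra (word, synonym) examples, so that
    synonym relations are enforced directly in the training data."""
    pairs = list(skipgram_pairs(tokens, window))
    pairs += [(word, syn) for word in tokens for syn in SYNONYMS.get(word, ())]
    return pairs


if __name__ == "__main__":
    print(augmented_pairs("the quick fox is happy".split()))
```

    The augmented pairs would then be fed to a standard Skip-gram trainer unchanged, which is what makes this a data-level rather than a model-level integration of linguistic knowledge.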

    Teachers Know Best: Making Data Work For Teachers and Students

    The Teachers Know Best research project seeks to encourage innovation in K-12 education by helping product developers and those who procure resources for teachers better understand teachers' views. The intent of Making Data Work is to drill down to help educators, school leaders, and product developers better understand the challenges teachers face when working with this critical segment of digital instructional tools. More than 4,600 teachers from a nationally representative sample were surveyed about their use of data to drive instruction and their use of these tools. This study focuses on the potential of a specific subset of digital instructional tools: those that help teachers collect and make use of student data to tailor and improve instruction for individual students. The use of data is a crucial component in personalized learning, which ensures that student learning experiences -- what they learn and how, when, and where they learn it -- are tailored to their individual needs, skills, and interests and enable them to take ownership of their learning. Personalized learning is critical to meeting all students where they are, so they are neither bored with assignments that are too easy nor overwhelmed by work that is too hard.

    An information literacy integration model and its application in higher education

    Purpose - The purpose of this paper is to present a model for curricular integration of information literacy for undergraduate programs in higher education. Design/methodology/approach - Data are drawn from individual interviews at three universities in Australia and from working experience of curricular integration at a New Zealand university. Sociocultural theories are adopted in the research process and in the development of the model. Findings - Key characteristics of the curricular integration of information literacy were identified and an information literacy integration model was developed. The S2J2 key behaviours for campus-wide multi-partner collaboration in information literacy integration were also identified. Research limitations/implications - The model was developed without including employer needs. Through further research, the employer's point of view on how to provide information literacy education needs to be explored in order to strengthen the model in curricular design. Practical implications - The information literacy integration model was developed based on practical experience in higher education and has been applied in different undergraduate curricular programs. The model could be used or adapted by both librarians and academics when they integrate information literacy into an undergraduate curriculum, from a lower level to a higher level. Originality/value - The information literacy integration model was developed based on recent PhD research. The model integrates curriculum, pedagogy and learning theories, information literacy theories, information literacy guidelines, people, and collaboration. The model provides a framework for how information literacy can be integrated into multiple courses across an undergraduate academic degree in higher education.

    Student-Centered Learning: Functional Requirements for Integrated Systems to Optimize Learning

    The realities of the 21st-century learner require that schools and educators fundamentally change their practice. "Educators must produce college- and career-ready graduates that reflect the future these students will face. And, they must facilitate learning through means that align with the defining attributes of this generation of learners." Today, we know more than ever about how students learn, acknowledging that the process isn't the same for every student and doesn't remain the same for each individual, depending upon maturation and the content being learned. We know that students want to progress at a pace that allows them to master new concepts and skills, to access a variety of resources, to receive timely feedback on their progress, to demonstrate their knowledge in multiple ways and to get direction, support and feedback from—as well as collaborate with—experts, teachers, tutors and other students. The result is a growing demand for student-centered, transformative digital learning using competency education as an underpinning. iNACOL released this paper to illustrate the technical requirements and functionalities that learning management systems need in order to shift toward student-centered instructional models. This comprehensive framework will help districts and schools determine what systems to use and integrate as they begin their journey toward student-centered learning, as well as how systems integration aligns with their organizational vision, educational goals and strategic plans. Educators can use this report to optimize student learning and promote innovation in their own student-centered learning environments. The report will help school leaders understand the complex technologies needed to optimize personalized learning and how to use data and analytics to improve practices, and it can assist technology leaders in re-engineering systems to support the key nuances of student-centered learning.