
    A Hybrid Recommender Strategy on an Expanded Content Manager in Formal Learning

    The main topic of this paper is finding ways to improve learning in a formal Higher Education setting. In this environment, the teacher publishes or suggests contents that support learners in a given course, as a supplement to classroom training. Generally, these materials are pre-stored and not changeable. They are typically published in learning management systems (the Moodle platform emerges as one of the main choices) or on websites created and maintained by the teachers themselves. These scenarios typically involve a specific group of students (a class) and a given period of time (a semester or school year). Reusing contents often requires replication, and updating them requires new editing and resubmission by teachers. Normally, these systems do not allow learners to add new materials or to edit existing ones. The paper presents our motivations and some related concepts and works. We describe the concepts of sequencing and navigation in adaptive learning systems, followed by a short presentation of some of these systems. We then discuss the effects of social interaction on learners’ choices. Finally, we refer to some related recommender systems and their applicability in supporting learning. One central idea of our proposal is that students with the same goals and similar formal study time can benefit from content assessments made by learners who have already completed the same courses and studied the same contents. We present a model for personalized recommendation of learning activities to learners in a formal learning context that comprises two systems. In the extended content management system, learners can add new materials, select materials from teachers and from other learners, evaluate them, and record the time spent studying them. Based on learner profiles and a hybrid recommendation strategy combining conditional and collaborative filtering, the second system predicts learning activity scores and offers adaptive, suitable sequencing of learning contents to learners. We propose that similarities between learners can be based on their evaluation interests and their recent learning history. The recommender support subsystem aims to assist learners at each step by suggesting a suitably ordered list of LOs (learning objects), in decreasing order of relevance. The proposed model has been implemented in the Moodle Learning Management System (LMS), and we present the system’s architecture and design. We will evaluate it in a real formal higher education course and intend to present experimental results in the near future.
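
    The hybrid prediction step described above can be pictured with a short sketch. It is a minimal illustration under stated assumptions rather than the paper's implementation: the rating vectors, the prerequisite-based reading of the "conditional" filter, and the 0.7/0.3 weighting are all illustrative.

```python
# Illustrative hybrid score predictor for learning objects (LOs): user-based
# collaborative filtering over past learners' ratings, blended with a simple
# "conditional" filter on the learner's profile. Weights and rules are assumed.
import numpy as np

def cosine(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def rank_learning_objects(ratings, target, profiles, lo_prereq, alpha=0.7):
    """ratings: {learner: np.array of LO ratings, 0 = unrated}
    profiles: {learner: set of completed courses}
    lo_prereq: prerequisite course per LO, or None."""
    sims = {u: cosine(ratings[u], ratings[target]) for u in ratings if u != target}
    scores = []
    for i in range(len(ratings[target])):
        num = sum(s * ratings[u][i] for u, s in sims.items() if ratings[u][i] > 0)
        den = sum(abs(s) for u, s in sims.items() if ratings[u][i] > 0)
        collab = num / den if den else 0.0
        # conditional part: does the learner's profile satisfy the LO's condition?
        ok = lo_prereq[i] is None or lo_prereq[i] in profiles[target]
        scores.append(alpha * collab + (1 - alpha) * (1.0 if ok else 0.0))
    return list(np.argsort(-np.array(scores)))  # LO indices, decreasing relevance
```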

    LEVERAGING TEXT MINING FOR THE DESIGN OF A LEGAL KNOWLEDGE MANAGEMENT SYSTEM

    In today’s globalized world, companies are faced with numerous and continuously changing legal requirements. To ensure that these companies are compliant with legal regulations, law and consulting firms use open legal data published by governments worldwide. With this data pool growing rapidly, the complexity of legal research is strongly increasing. Despite this fact, only a few research papers consider the application of information systems in the legal domain. Against this backdrop, we propose a knowledge management (KM) system that aims at supporting legal research processes. To this end, we leverage the potential of text mining techniques to extract valuable information from legal documents. This information is stored in a graph database, which enables us to capture the relationships between these documents and the users of the system. These relationships and the information from the documents are then fed into a recommendation system which aims at facilitating knowledge transfer within companies. The prototypical implementation of the proposed KM system is based on 20,000 legal documents and is currently being evaluated in cooperation with a Big 4 accounting company.
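
    As a rough sketch of the pipeline outlined above, the snippet below extracts named entities from a legal document with spaCy and stores document-entity relationships in a Neo4j property graph. The specific libraries, the Cypher schema, and the connection details are assumptions, not details given in the paper.

```python
# Sketch: spaCy named-entity extraction feeding a Neo4j graph of documents,
# entities and their MENTIONS relationships (assumed tooling and schema).
import spacy
from neo4j import GraphDatabase

nlp = spacy.load("en_core_web_sm")                       # assumed NER model
driver = GraphDatabase.driver("bolt://localhost:7687",   # assumed connection
                              auth=("neo4j", "password"))

def ingest(doc_id: str, text: str) -> None:
    entities = {(ent.text, ent.label_) for ent in nlp(text).ents}
    with driver.session() as session:
        session.run("MERGE (:Document {id: $id})", id=doc_id)
        for name, label in entities:
            session.run(
                "MATCH (d:Document {id: $id}) "
                "MERGE (e:Entity {name: $name, label: $label}) "
                "MERGE (d)-[:MENTIONS]->(e)",
                id=doc_id, name=name, label=label)

# Shared Entity nodes (and, later, User nodes) give the recommendation
# component the document-user relationships described in the abstract.
```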

    Cumulative growth in user-generated content production : evidence from Wikipedia

    Open content production platforms typically allow users to gradually create content and react to previous contributions. Using detailed edit-level data across a large number of Wikipedia articles, we investigate how past edits shape current editing activity. We find that cumulative past contributions, embodied by the current article length, lead to significantly more editing activity, while controlling for a host of factors such as the popularity of the topic and platform-level growth trends. The magnitude of the effect is large; content growth over an eight-year period would have been 45% lower in its absence. Our findings suggest that other open content production environments are likely to benefit from similar cumulative growth effects. In the presence of such effects, managerial interventions that increase content are amplified because they trigger further contributions.
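
    The abstract does not give the exact specification, but the estimation idea can be hedged into a minimal fixed-effects sketch: regress current editing activity on lagged article length, with article and time fixed effects standing in for topic popularity and platform-level trends. All variable and file names below are assumed.

```python
# Minimal fixed-effects sketch (assumed variable names, not the paper's code):
# does the article's accumulated length predict this period's editing activity,
# net of article and time effects?
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical panel: one row per article-week with columns
#   article_id, week, n_edits, length_lag (length at the start of the week)
df = pd.read_csv("wikipedia_edit_panel.csv")

model = smf.ols(
    "n_edits ~ length_lag + C(article_id) + C(week)",  # article & time fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["article_id"]})

print(model.params["length_lag"])  # estimated effect of cumulative past content
```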

    A Framework for Personalized Content Recommendations to Support Informal Learning in Massively Diverse Information WIKIS

    Personalization has been shown to achieve better learning outcomes by adapting to specific learners’ needs, interests, and/or preferences. Traditionally, most personalized learning software systems have focused on formal learning. However, learning personalization is not only desirable for formal learning; it is also required for informal learning, which is self-directed, does not follow a specified curriculum, and does not lead to formal qualifications. Wikis, among other informal learning platforms, are attracting increasing attention for informal learning, especially Wikipedia. The nature of wikis enables learners to freely navigate the learning environment and independently construct knowledge without being forced to follow a predefined learning path, in accordance with constructivist learning theory. Nevertheless, navigation on information wikis suffers from several limitations. To support informal learning on Wikipedia and similar environments, it is important to provide easy and fast access to relevant content. Recommendation systems (RSs) have long been used to provide useful recommendations in different technology enhanced learning (TEL) contexts. However, the massive diversity of unstructured content, as well as of the user base, on such information-oriented websites poses major challenges when designing recommendation models for these environments. In addition, evaluating TEL recommender systems for informal learning is a rather challenging activity due to the inherent difficulty of measuring the impact of recommendations on informal learning in the absence of formal assessment and commonly used learning analytics. In this research, a personalized content recommendation framework (PCRF) for information wikis is proposed, together with an evaluation framework that can be used to assess the impact of personalized content recommendations on informal learning from wikis. The recommendation framework models learners’ interests by continuously extrapolating topical navigation graphs from learners’ free navigation and applying graph structural analysis algorithms to extract interesting topics for individual users. It then integrates learners’ interest models with fuzzy thesauri for personalized content recommendations. Our evaluation approach encompasses two main activities. First, the impact of personalized recommendations on informal learning is evaluated by assessing conceptual knowledge in users’ feedback. Second, web analytics data is analyzed to gain insight into users’ progress and focus throughout the test session. Our evaluation revealed that PCRF generates highly relevant recommendations that are adaptive to changes in the user’s interest, using the HARD model, with rank-based mean average precision (MAP@k) scores ranging between 86.4% and 100%. In addition, the evaluation of informal learning revealed that users who used Wikipedia with personalized support achieved higher scores on the conceptual knowledge assessment, with an average score of 14.9 compared to 10.0 for students who used the encyclopedia without any recommendations. The analysis of web analytics data shows that users who used Wikipedia with personalized recommendations visited a larger number of relevant pages compared to the control group (644 vs 226, respectively). They were also able to make use of a larger number of concepts and to make comparisons and state relations between concepts.
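
    A minimal sketch of the interest-modelling and recommendation steps follows. PageRank stands in for the unspecified graph structural analysis, and the fuzzy thesaurus is reduced to a plain mapping; both are assumptions for illustration, not the PCRF implementation.

```python
# Sketch of interest modelling from free navigation: fold visited topics into
# a directed graph, use PageRank as the (assumed) structural measure, then
# expand the top topics through a fuzzy thesaurus represented as a plain dict.
import networkx as nx

def interest_topics(navigation, top_n=5):
    """navigation: ordered list of visited topic/page titles."""
    graph = nx.DiGraph()
    for src, dst in zip(navigation, navigation[1:]):
        if src != dst:
            graph.add_edge(src, dst)
    scores = nx.pagerank(graph) if graph.number_of_nodes() else {}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

def recommend(interests, fuzzy_thesaurus, k=10):
    """fuzzy_thesaurus: {topic: [(related_topic, membership_degree), ...]}."""
    candidates = {}
    for topic in interests:
        for related, degree in fuzzy_thesaurus.get(topic, []):
            candidates[related] = max(candidates.get(related, 0.0), degree)
    return sorted(candidates, key=candidates.get, reverse=True)[:k]
```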

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
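
    One of the approaches discussed, using a blog's RSS feed to guide extraction from its HTML pages, can be hedged into a small sketch: the feed item's summary text is used as an anchor to locate the full post body in the page. The libraries and the shortest-matching-block heuristic are illustrative assumptions, not the BlogForever methodology itself.

```python
# Sketch of RSS-assisted extraction: the feed's (often truncated) summary is
# used as anchor text to find the HTML block that holds the full post body.
import feedparser
import requests
from bs4 import BeautifulSoup

def extract_post_bodies(feed_url):
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        page = BeautifulSoup(requests.get(entry.link, timeout=10).text,
                             "html.parser")
        # plain-text anchor taken from the feed item's summary
        anchor = BeautifulSoup(entry.get("summary", ""),
                               "html.parser").get_text(" ", strip=True)[:60]
        blocks = [node for node in page.find_all(["article", "div", "section"])
                  if anchor and anchor in node.get_text(" ", strip=True)]
        if blocks:
            # the shortest block containing the anchor approximates the post body
            body = min(blocks, key=lambda n: len(n.get_text()))
            yield entry.title, body.get_text(" ", strip=True)
```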

    Realising context-oriented information filtering.

    The notion of information overload is an increasing factor in modern information service environments where information is ‘pushed’ to the user. As increasing volumes of information are presented to computing users in the form of email, web sites, instant messaging and news feeds, there is a growing need to filter and prioritise the importance of this information. ‘Information management’ needs to be undertaken in a manner that not only prioritises the information we do need, but also disposes of information that is sent to us and is of no (or little) use. The development of a model to aid information filtering in a context-aware way is an objective of this thesis. A key concern in the conceptualisation of a single concept is understanding the context under which that concept exists (or can exist). An example of a concept is a concrete object, for instance a book. This contextual understanding should provide us with clear conceptual identification of a concept, including implicit situational information and detail of surrounding concepts. Existing solutions to filtering information suffer from their own unique flaws: text-based filtering suffers from problems of inaccuracy; ontology-based solutions suffer from scalability challenges; taxonomies suffer from problems with collaboration. A major objective of this thesis is to explore the use of an evolving, community-maintained knowledge-base (that of Wikipedia) in order to populate the context model with prioritised concepts that are semantically relevant to the user’s interest space. Wikipedia can be classified as a weak knowledge-base due to its simple TBox schema and implicit predicates; therefore, part of this objective is to validate the claim that a weak knowledge-base is fit for this purpose. The proposed and developed solution therefore provides the benefits of high-recall filtering with low fallout and a dependency on a scalable and collaborative knowledge-base. A simple web feed aggregator, which we call DAVe’s Rss Organisation System (DAVROS-2), has been built using the Java programming language as a testbed environment to demonstrate the specific tests used within this investigation. The motivation behind the experiments is to demonstrate that the concept framework instantiated through Wikipedia can aid concept comparison, and can therefore be used in a news filtering scenario as an example of information overload. In order to evaluate the effectiveness of the method, well-understood measures of information retrieval are used. This thesis demonstrates that the utilisation of the developed contextual concept expansion framework (instantiated using Wikipedia) improved the quality of concept filtering over a baseline based on string matching. This has been demonstrated through the analysis of recall and fallout measures.
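
    A hedged sketch of the concept-expansion idea: a concept is expanded with titles linked from its Wikipedia article via the public MediaWiki API, incoming items are filtered against the expanded term set, and the outcome is scored with the recall and fallout measures mentioned above. The threshold-free matching rule and helper names are assumptions, not the thesis's DAVROS-2 code.

```python
# Sketch: expand a concept with Wikipedia link titles, filter text items by the
# expanded term set, and evaluate with standard recall/fallout measures.
import requests

API = "https://en.wikipedia.org/w/api.php"

def expand_concept(title, limit=100):
    params = {"action": "query", "titles": title, "prop": "links",
              "pllimit": limit, "format": "json"}
    pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
    links = next(iter(pages.values())).get("links", [])
    return {title.lower()} | {link["title"].lower() for link in links}

def matches(item_text, expanded_terms):
    text = item_text.lower()
    return any(term in text for term in expanded_terms)

def recall_and_fallout(retrieved, relevant, collection):
    """retrieved/relevant/collection: sets of item identifiers."""
    nonrelevant = collection - relevant
    recall = len(retrieved & relevant) / len(relevant) if relevant else 0.0
    fallout = len(retrieved & nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
    return recall, fallout
```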

    SWKM 2008: Social Web and Knowledge Management, Proceedings: CEUR Workshop Proceedings
