
    Creating, Doing, and Sustaining OER: Lessons from Six Open Educational Resource Projects

    The development of free-to-use open educational resources (OER) has generated a dynamic field of widespread interest and study regarding methods for creating and sustaining OER. To help foster a thriving OER movement with potential for knowledge-sharing across program, organizational, and national boundaries, the Institute for the Study of Knowledge Management in Education (ISKME) developed and conducted case study research programs in collaboration with six OER projects from around the world. Embodying a range of challenges and opportunities among a diverse set of OER projects, the case studies were intended to track, analyze, and share key developments in the creation, use, and reuse of OER. The specific cases include: CurriculumNet, Curriki, Free High School Science Texts (FHSST), Training Commons, Stanford Encyclopedia of Philosophy (SEP), and Teachers' Domain.

    Collaborative Creation of Teaching-Learning Sequences and an Atlas of Knowledge

    This article introduces a new online resource, a collaborative portal for teachers, which publishes a network of prerequisites for teaching or learning any concept or activity. A simple and effective method of collaboratively constructing teaching-learning sequences is presented. The special emergent properties of the dependency network and their didactic and epistemic implications are pointed out. The article ends with an appeal to the global teaching community to contribute prerequisites for any subject, completing the global roadmap for an atlas of knowledge built along similar lines to Wikipedia. The portal is launched and awaiting community participation at http://www.gnowledge.org.
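The prerequisite network described above is essentially a directed graph of concepts, and a teaching-learning sequence corresponds to a topological ordering of that graph. A minimal sketch of the idea (the concept names and dependencies are invented for illustration; the portal's real network is community-contributed):

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite network: each concept maps to the set of
# concepts that must be learned before it (names invented for illustration).
prerequisites = {
    "fractions": {"division"},
    "division": {"multiplication"},
    "multiplication": {"addition"},
    "addition": set(),
}

# Any topological order of the dependency network is a valid
# teaching-learning sequence: prerequisites always come first.
sequence = list(TopologicalSorter(prerequisites).static_order())
print(sequence)  # → ['addition', 'multiplication', 'division', 'fractions']
```

A cycle in the contributed prerequisites would raise `graphlib.CycleError` here, which is one reason the emergent structure of the network matters didactically.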

    Web 2.0 @ BU – Use of Wikis within the School of Health & Social Care

    The aim of the Web 2.0 @ BU project is to investigate current good practice and to map the use of Web 2.0 technologies within Bournemouth University. This paper communicates the findings of the School of Health & Social Care project team during the academic year 2007/2008 concerning the use of wikis in three distinct areas.
    Reviewing the Literature Wiki - A teaching session on reviewing the literature is included as part of the Masters Research Unit (Principles of Enquiry Unit 1). This case study concerns using a wiki as a replacement for PowerPoint and as a separate study guide.
    LIMBIC Project Wiki - The aim of the LIMBIC project is to evaluate an inter-professional approach linking practice-based learning with the principles and methods of healthcare improvement. This case study examines how an external project group wiki can be used to enable collaboration between non-technical healthcare users.
    Teamworking and Communication in Health and Social Care Unit Wiki - The purpose of this third-year unit is to provide students with the opportunity to undertake interprofessional project work and, through this, to develop their skills in working collaboratively in teams and in communicating and functioning more effectively within their roles. This case study looks at how effective small student-group wikis are as part of a long, thin unit in which students sometimes vary their contribution according to the time they have available.
    The paper aims to share knowledge and experience of using wikis, putting teachers and practitioners in a stronger position to respond to the changing demands of innovative new learning technologies.

    Collaboratively Patching Linked Data

    Today's Web of Data is noisy. Linked Data often needs extensive preprocessing to enable efficient use of heterogeneous resources. While consistent and valid data provides the key to efficient data processing and aggregation, we face two main challenges: (1) identifying erroneous facts and tracking their origins in dynamically connected datasets is difficult, and (2) efforts spent curating deficient facts in Linked Data are rarely exchanged. Since erroneous data is often duplicated and (re-)distributed by mashup applications, keeping the data tidy is not only the responsibility of a few original publishers but becomes a mission for all distributors and consumers of Linked Data as well. We present a new approach to exposing and reusing patches on erroneous data in order to enhance the Web of Data and add quality information to it. The feasibility of our approach is demonstrated by the example of a collaborative game that patches statements in DBpedia data and provides notifications for relevant changes.
    Comment: 2nd International Workshop on Usage Analysis and the Web of Data (USEWOD2012) at the 21st International World Wide Web Conference (WWW2012), Lyon, France, April 17th, 201
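The core of the patching idea is that a fix to one erroneous statement can be recorded with provenance and reapplied by any consumer of the data. A minimal sketch, assuming a patch is a delete/insert pair over RDF-style triples (the schema, entity names, and values below are invented for illustration; the paper's actual patch vocabulary may differ):

```python
from dataclasses import dataclass

# An RDF-style statement as a (subject, predicate, object) tuple.
Triple = tuple[str, str, str]

@dataclass(frozen=True)
class Patch:
    """A shareable fix for one erroneous statement (illustrative schema)."""
    delete: Triple   # the deficient triple to remove
    insert: Triple   # the corrected triple to add
    source: str      # provenance: who proposed the fix

def apply_patch(graph: set, patch: Patch) -> set:
    """Return a new graph with the patch applied; a no-op when the
    erroneous triple is absent from this copy of the data."""
    if patch.delete not in graph:
        return graph
    return (graph - {patch.delete}) | {patch.insert}

# Example: a data consumer reuses a patch proposed elsewhere.
graph = {("dbr:Example_City", "dbo:populationTotal", "5")}
fix = Patch(delete=("dbr:Example_City", "dbo:populationTotal", "5"),
            insert=("dbr:Example_City", "dbo:populationTotal", "500000"),
            source="game-player-42")
graph = apply_patch(graph, fix)
```

Because the patch carries its own provenance, consumers can decide whose fixes to trust and distributors can forward patches along with the data they redistribute.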

    Extracting corpus specific knowledge bases from Wikipedia

    Thesauri are useful knowledge structures for assisting information retrieval, yet their production is labor-intensive, and few domains have comprehensive thesauri that cover domain-specific concepts and contemporary usage. One approach, attempted without much success for decades, is to seek statistical natural language processing algorithms that work on free text. Instead, we propose to replace costly professional indexers with thousands of dedicated amateur volunteers: namely, those who are producing Wikipedia. This vast, open encyclopedia represents a rich tapestry of topics and semantics and a huge investment of human effort and judgment. We show how this can be directly exploited to provide WikiSauri: manually defined yet inexpensive thesaurus structures that are specifically tailored to expose the topics, terminology, and semantics of individual document collections. We also offer concrete evidence of the effectiveness of WikiSauri for assisting information retrieval.
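The classic way such a thesaurus assists retrieval is query expansion: mapping each query term to related terms the collection actually uses. A toy sketch under that assumption (the thesaurus entries below are invented for illustration; WikiSauri derives such structures from Wikipedia rather than listing them by hand):

```python
# Toy corpus-specific thesaurus: each concept maps to alternative terms
# used in the collection (entries invented for illustration).
thesaurus = {
    "myocardial infarction": ["heart attack", "MI"],
    "hypertension": ["high blood pressure"],
}

def expand_query(terms, thesaurus):
    """Expand each query term with its thesaurus alternatives, keeping order."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(thesaurus.get(term.lower(), []))
    return expanded

print(expand_query(["hypertension", "treatment"], thesaurus))
# → ['hypertension', 'high blood pressure', 'treatment']
```

Tailoring the entries to one document collection, rather than to a whole domain, is what keeps the structure small and its terminology contemporary.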

    Automatic detection of accommodation steps as an indicator of knowledge maturing

    Jointly working on shared digital artifacts, such as wikis, is a well-tried method of developing knowledge collectively within a group or organization. Our assumption is that such knowledge maturing is an accommodation process that can be measured by taking the writing process itself into account. This paper describes the development of a tool that detects accommodation automatically with the help of machine learning algorithms. We applied a software framework for task detection to the automatic identification of accommodation processes within a wiki. To set up the learning algorithms and test their performance, we conducted an empirical study in which participants had to contribute to a wiki and, at the same time, identify their own tasks. Two domain experts evaluated the participants' micro-tasks with regard to accommodation. We then applied an ontology-based task detection approach that identified accommodation at a rate of 79.12%. The potential use of our tool for measuring knowledge maturing online is discussed.
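The underlying measurement idea, distinguishing edits that restructure existing text (accommodation) from edits that merely add to it, can be caricatured with a single diff-based feature. The heuristic and threshold below are invented for illustration only; the actual tool uses an ontology-based task-detection framework with machine-learned models:

```python
import difflib

def looks_like_accommodation(before, after, threshold=0.3):
    """Heuristic sketch: flag an edit as a candidate accommodation step when
    a substantial fraction of the existing text is rewritten or removed,
    rather than new text merely being appended (threshold is illustrative)."""
    if not before:
        return False
    opcodes = difflib.SequenceMatcher(a=before, b=after).get_opcodes()
    # Characters of the original text that were replaced or deleted.
    changed = sum(i2 - i1 for tag, i1, i2, _j1, _j2 in opcodes
                  if tag in ("replace", "delete"))
    return changed / len(before) >= threshold

# Appending leaves the original intact: not accommodation.
print(looks_like_accommodation("shared text", "shared text plus a new sentence"))  # → False
# A wholesale rewrite restructures existing content: accommodation.
print(looks_like_accommodation("1234567890", "abcdefghij"))  # → True
```

A real detector would combine many such features over the edit history rather than thresholding one ratio, but the sketch shows why the writing process itself carries the signal.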