
    A first exploration of an inductive analysis approach for detecting learning design patterns

    Please cite as: Francis Brouns, Rob Koper, Jocelyn Manderveld, Jan van Bruggen, Peter Sloep, Peter van Rosmalen, Colin Tattersall and Hubert Vogten (2005). A first exploration of an inductive analysis approach for detecting learning design patterns. Journal of Interactive Media in Education (Advances in Learning Design. Special Issue, eds. Colin Tattersall, Rob Koper), 2005/03. ISSN: 1365-893X [http://jime.open.ac.uk/2005/03]

    One way to develop effective online courses is to use learning design patterns, since patterns capture successful solutions. Pedagogical patterns are commonly created by human cognitive processing in "writer's workshops". We explore two ideas: first, whether IMS Learning Design is suitable for detecting patterns in existing courses, and second, whether inductive analysis is a suitable approach. We expect patterns to occur in the method section of a learning design, because this is where the process of teaching and learning is defined. We suggest inductive techniques that could be applied to existing learning designs in order to detect patterns, and discuss how the detected patterns could be used to create new learning designs. None of the suggested approaches has been validated yet; they are intended as input for the ongoing discussion on patterns.
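
    The abstract's premise is that IMS Learning Design makes the teaching-learning process machine-interpretable, so patterns can be looked for in the method section. As a minimal sketch of that first step, the code below extracts the method/play/act structure from an IMS LD manifest with Python's standard library; the namespace URI and the `imsmanifest.xml` file name are assumptions about the packaging, not details from the article.

    ```python
    # A minimal sketch: extract the method/play/act structure from an IMS LD
    # manifest using only the standard library. The namespace URI and the
    # manifest file name are assumptions; adjust them to the package at hand.
    import xml.etree.ElementTree as ET

    LD_NS = {"ld": "http://www.imsglobal.org/xsd/imsld_v1p0"}  # assumed IMS LD Level A namespace

    def act_sequences(manifest_path: str) -> list[list[str]]:
        """Return, for each play in the method section, its ordered list of act titles."""
        root = ET.parse(manifest_path).getroot()
        sequences = []
        for play in root.iter("{http://www.imsglobal.org/xsd/imsld_v1p0}play"):
            acts = [
                act.findtext("ld:title", default="(untitled act)", namespaces=LD_NS)
                for act in play.findall("ld:act", LD_NS)
            ]
            sequences.append(acts)
        return sequences

    if __name__ == "__main__":
        for seq in act_sequences("imsmanifest.xml"):  # hypothetical file name
            print(" -> ".join(seq))
    ```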

    Learning Design Patterns: Exploring an inductive analysis approach

    Preprint of an article submitted to the joint Unfold/Prolearn Workshop, September 2005; to be published in a Special Issue on Learning Design of the IEEE journal Educational Technology & Society.

    Learning design patterns assist the development of effective courses because patterns capture successful solutions. Pedagogical patterns are commonly created by human cognitive processing in "writer's workshops". Inductive techniques could instead be used to detect or determine patterns in existing data, i.e. in learning designs. This assumes that the learning designs are available in a machine-interpretable format; the IMS Learning Design specification enables such formal coding of learning designs. We explain why we expect patterns to occur in the method section of a learning design, and in particular in its acts. We explore several inductive techniques that could be applied to existing learning designs in order to detect and determine patterns, and discuss how these could be applied to create new learning designs.
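
    As one concrete reading of the "inductive techniques" this abstract mentions, the sketch below counts act n-grams that recur across several learning designs, i.e. plain frequent-sequence counting with document-level support. This is an illustrative technique, not the authors' own algorithm, and the input format (one list of act labels per design) is assumed.

    ```python
    # A sketch of one inductive technique: count length-n runs of acts across
    # designs and keep those recurring in at least `min_support` designs.
    # Plain frequent n-gram mining, used here only for illustration.
    from collections import Counter

    def frequent_act_patterns(designs: list[list[str]], n: int = 2, min_support: int = 2):
        """Return the act n-grams that occur in at least min_support designs."""
        counts = Counter()
        for acts in designs:
            # count each distinct n-gram once per design (document frequency)
            grams = {tuple(acts[i:i + n]) for i in range(len(acts) - n + 1)}
            counts.update(grams)
        return {gram: c for gram, c in counts.items() if c >= min_support}

    designs = [  # hypothetical act sequences extracted from three learning designs
        ["orientation", "group discussion", "individual report", "peer review"],
        ["orientation", "group discussion", "individual report", "exam"],
        ["lecture", "group discussion", "individual report", "peer review"],
    ]
    print(frequent_act_patterns(designs))
    # three recurring bigrams, e.g. ('group discussion', 'individual report') with support 3
    ```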

    A first exploration of an inductive analysis approach for detecting learning design patterns

    Commentary on: Chapter 1: An Introduction to Learning Design (Koper, 2005).

    Abstract: One way to develop effective online courses is to use learning design patterns, since patterns capture successful solutions. Pedagogical patterns are commonly created by human cognitive processing in "writer's workshops". We explore two ideas: first, whether IMS Learning Design is suitable for detecting patterns in existing courses, and second, whether inductive analysis is a suitable approach. We expect patterns to occur in the method section of a learning design, because this is where the process of teaching and learning is defined. We suggest inductive techniques that could be applied to existing learning designs in order to detect patterns, and discuss how the detected patterns could be used to create new learning designs. None of the suggested approaches has been validated yet; they are intended as input for the ongoing discussion on patterns.

    Editors: Colin Tattersall and Rob Koper

    End-to-End Entity Resolution for Big Data: A Survey

    One of the most important tasks for improving data quality and the reliability of data analytics results is Entity Resolution (ER). ER aims to identify different descriptions that refer to the same real-world entity, and it remains a challenging problem. While previous works have studied specific aspects of ER, mostly in traditional settings, this survey provides for the first time an end-to-end view of modern ER workflows and of novel entity indexing and matching methods that cope with more than one of the Big Data characteristics simultaneously. We present the basic concepts, processing steps and execution strategies that have been proposed by different communities, namely databases, the Semantic Web and machine learning, in order to cope with the loose structuredness, extreme diversity, high speed and large scale of the entity descriptions used by real-world applications. Finally, we provide a synthetic discussion of the existing approaches, and conclude with a detailed presentation of open research directions.
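
    To make the workflow concrete, here is a minimal sketch of the two core steps such surveys organize ER around: indexing (blocking) to prune the quadratic comparison space, then matching within blocks. Token blocking and the Jaccard threshold of 0.3 are illustrative choices, not recommendations from the survey.

    ```python
    # A minimal sketch of a two-step ER pipeline: token blocking (indexing),
    # then Jaccard matching within blocks. Blocking key and threshold are
    # illustrative choices only.
    from collections import defaultdict
    from itertools import combinations

    def tokens(record: dict) -> set[str]:
        """All lower-cased word tokens appearing in a record's values."""
        return {t.lower() for v in record.values() for t in str(v).split()}

    def token_blocking(records: dict[str, dict]) -> dict[str, set[str]]:
        """Place each record id into one block per token it contains."""
        blocks = defaultdict(set)
        for rid, rec in records.items():
            for tok in tokens(rec):
                blocks[tok].add(rid)
        return blocks

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b)

    def resolve(records: dict[str, dict], threshold: float = 0.3):
        """Compare only pairs that share a block; link those above the threshold."""
        candidates = set()
        for block in token_blocking(records).values():
            candidates.update(combinations(sorted(block), 2))
        return [(r1, r2) for r1, r2 in candidates
                if jaccard(tokens(records[r1]), tokens(records[r2])) >= threshold]

    records = {  # hypothetical entity descriptions
        "a": {"name": "Alan Turing", "affil": "Cambridge"},
        "b": {"name": "A. Turing", "affil": "University of Cambridge"},
        "c": {"name": "Grace Hopper", "affil": "Yale"},
    }
    print(resolve(records))  # [('a', 'b')]
    ```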

    Schema decision trees for heterogeneous JSON arrays

    Due to the popularity of the JavaScript Object Notation (JSON), a need has arisen to create schema documents for validating the content of other JSON documents. Existing automatic schema generation tools, however, have not adequately considered the scenario of an array of JSON objects with different types of structures. These tools work on the assumption that all objects have the same structure, and thus generate only a single schema combining them all. To address this problem, this thesis improves schema generation for heterogeneous JSON arrays. We develop an algorithm to determine a set of keys that identifies which type of structure each element has. These keys then serve as the basis for a schema decision tree. The objective of this tree is to aid the validation process by allowing each element to be compared against a single, more tailored schema.
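
    As a rough illustration of the idea, the sketch below groups array elements by their key signatures and reports the keys that discriminate between structural variants, which is the information a schema decision tree would branch on. This grouping strategy is one simple reading of the abstract, not the thesis's exact algorithm.

    ```python
    # A sketch: partition array elements by key signature and report the keys
    # that tell the structural variants apart. Illustrative only.
    from collections import defaultdict

    def group_by_signature(array: list[dict]) -> dict[frozenset, list[dict]]:
        """Group array elements by the set of keys they carry."""
        groups = defaultdict(list)
        for obj in array:
            groups[frozenset(obj)].append(obj)
        return groups

    def discriminating_keys(groups: dict[frozenset, list[dict]]) -> frozenset:
        """Keys present in some signatures but not all of them."""
        signatures = list(groups)
        return frozenset.union(*signatures) - frozenset.intersection(*signatures)

    array = [  # hypothetical heterogeneous JSON array
        {"type": "circle", "radius": 2.0},
        {"type": "rect", "width": 3, "height": 4},
        {"type": "circle", "radius": 5.5},
    ]
    groups = group_by_signature(array)
    print(discriminating_keys(groups))  # e.g. frozenset({'radius', 'width', 'height'})
    for signature, members in groups.items():
        print(sorted(signature), "->", len(members), "element(s)")
    ```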

    Algoritmos de pré-processamento para uniformização de instâncias XML heterogêneas [Preprocessing algorithms for the uniformization of heterogeneous XML instances]

    Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science.

    The growing volume of information available on the Web calls for ever more practical and efficient systems for collecting and integrating that information for querying. One of the most widely used formats for publishing information on the Web is XML. Given its dynamic nature, XML allows complete and adequate representations of the most varied data domains. At the same time, this dynamic nature gives it characteristics that make integrating data in this format complex. This work addresses that problem by providing a set of preprocessing techniques for making heterogeneous XML data structures uniform. This uniformization, which seeks to respect the semantics of the data, aims to ease comparison and subsequent integration by existing approaches for data comparison and integration. Through case studies and experiments, we demonstrate how the suggested preprocessing steps positively influence the results of existing approaches.
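
    As a rough illustration of such preprocessing, the sketch below rewrites attributes as child elements, so two XML instances that encode the same field either way end up with one common structure before comparison. This particular transformation is illustrative; it is not claimed to be one of the dissertation's specific algorithms.

    ```python
    # A sketch of one plausible uniformization step: rewrite attributes as
    # child elements so structurally divergent encodings of the same data
    # become identical. Illustrative only.
    import xml.etree.ElementTree as ET

    def attributes_to_elements(elem: ET.Element) -> None:
        """Recursively replace every attribute with an equivalent child element."""
        for child in list(elem):
            attributes_to_elements(child)
        for name, value in sorted(elem.attrib.items()):
            ET.SubElement(elem, name).text = value
        elem.attrib.clear()

    a = ET.fromstring('<book title="XML" year="2005"/>')
    b = ET.fromstring('<book><title>XML</title><year>2005</year></book>')
    attributes_to_elements(a)
    print(ET.tostring(a) == ET.tostring(b))  # True: one uniform structure
    ```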

    Handling metadata in the scope of coreference detection in data collections


    Algorithmic Foundations of Heuristic Search using Higher-Order Polygon Inequalities

    The shortest path problem in graphs is both a classic combinatorial optimization problem and a practical problem that admits many applications. Techniques for preprocessing a graph are useful for reducing shortest path query times. This dissertation studies the foundations of a class of algorithms that use preprocessed landmark information and the triangle inequality to guide A* search in graphs. A new heuristic is presented for solving shortest path queries that enables the use of higher-order polygon inequalities. We demonstrate this capability by leveraging distance information from two landmarks when visiting a vertex, as opposed to the common single-landmark paradigm. The new heuristic's novel feature is that it computes and stores a reduced amount of preprocessed information (in comparison to previous landmark-based algorithms) while enabling more informed search decisions. We demonstrate that whether this heuristic dominates its predecessor depends on landmark selection and that, in general, the denser the landmark set, the better the heuristic performs. Due to its reduced memory requirement, the new heuristic admits much denser landmark sets. We conduct experiments to characterize the impact that landmark configurations have on the new heuristic, demonstrating that centrality-based landmark selection offers the best tradeoff between preprocessing and runtime. Using a purpose-built graph library and static information from benchmark road map datasets, the algorithm is compared experimentally with previous landmark-based shortest path techniques in a fixed-memory environment, demonstrating a reduction in overall computational time and memory requirements. The experimental results are evaluated to detail the significance of landmark selection and density, the tradeoffs of performing preprocessing, and the practical use cases of the algorithm.
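
    For context, the sketch below shows the classic single-landmark (ALT) lower bound that such work builds on: by the triangle inequality, |d(L, v) - d(L, t)| <= d(v, t) for any landmark L, so the maximum over a landmark set is an admissible A* heuristic. The dissertation's two-landmark, polygon-inequality heuristic goes beyond this; only the standard baseline is shown here.

    ```python
    # A sketch of the classic single-landmark (ALT) lower bound used to guide
    # A* search. The dissertation's two-landmark heuristic extends this.
    def landmark_heuristic(dist_from: dict, landmarks: list, v, t) -> float:
        """Lower bound on d(v, t); dist_from[L][u] holds the precomputed
        shortest-path distance from landmark L to vertex u (e.g. one
        Dijkstra run per landmark on an undirected graph)."""
        return max(abs(dist_from[L][v] - dist_from[L][t]) for L in landmarks)

    # Tiny worked example on the unit-weight path graph 0 - 1 - 2 - 3:
    dist_from = {0: {0: 0, 1: 1, 2: 2, 3: 3},
                 3: {0: 3, 1: 2, 2: 1, 3: 0}}
    print(landmark_heuristic(dist_from, [0, 3], v=1, t=3))  # 2 (exact here)
    ```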