276 research outputs found

    Constraint-based validation of e-learning courseware


    Semantics-Enhanced E-learning Courses


    Knowledge extraction from unstructured data and classification through distributed ontologies

    The World Wide Web has changed the way humans use and share any kind of information. The Web removed several access barriers to published information and has become an enormous space where users can easily navigate through heterogeneous resources (such as linked documents) and can easily edit, modify, or produce them. Documents implicitly enclose information and relationships that are accessible only to human beings. Indeed, the Web of documents evolved towards a space of data silos, linked to each other only through untyped references (such as hypertext links) that only humans were able to understand. A growing desire to programmatically access pieces of data implicitly enclosed in documents has characterized the recent efforts of the Web research community. Direct access means structured data, thus enabling computing machinery to easily exploit the linking of different data sources. It has become crucial for the Web community to provide a technology stack for easing data integration at large scale, first structuring the data using standard ontologies and afterwards linking them to external data. Ontologies became the best practice for defining axioms and relationships among classes, and the Resource Description Framework (RDF) became the basic data model chosen to represent ontology instances (i.e. an instance is a value of an axiom, class or attribute). Data has become the new oil; in particular, extracting information from semi-structured textual documents on the Web is key to realizing the Linked Data vision. In the literature these problems have been addressed with several proposals and standards that mainly focus on technologies to access the data and on formats to represent the semantics of the data and their relationships. With the increasing volume of interconnected and serialized RDF data, RDF repositories may suffer from data overload and may become a single point of failure for the overall Linked Data vision.
    One of the goals of this dissertation is to propose a thorough approach to manage large-scale RDF repositories and to distribute them in a redundant and reliable peer-to-peer RDF architecture. The architecture consists of a logic to distribute and mine the knowledge, and of a set of physical peer nodes organized in a ring topology based on a Distributed Hash Table (DHT). Each node shares the same logic and provides an entry point that enables clients to query the knowledge base using atomic, disjunctive and conjunctive SPARQL queries. The consistency of the results is increased by a data redundancy algorithm that replicates each RDF triple on multiple nodes so that, in the case of a peer failure, other peers can retrieve the data needed to resolve the queries (see the sketch after this abstract). Additionally, a distributed load balancing algorithm is used to maintain a uniform distribution of the data among the participating peers by dynamically changing the key space assigned to each node in the DHT.
    Recently, the process of data structuring has gained more and more attention when applied to the large volume of textual information spread on the Web, such as legacy data, newspapers, scientific papers or (micro-)blog posts. This process mainly consists of three steps: i) the extraction of atomic pieces of information, called named entities, from the text; ii) the classification of these pieces of information through ontologies; and iii) their disambiguation through Uniform Resource Identifiers (URIs) identifying real-world objects.
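    The three steps above can be made concrete with a deliberately naive sketch: a hypothetical extractor picks candidate entities, a small alignment table classifies them against an assumed ontology, and a tiny gazetteer disambiguates them with DBpedia URIs. The extractor, the class names and the gazetteer are illustrative assumptions, not the technologies evaluated in the dissertation.

```python
import re

# Step ii: a minimal alignment of surface forms to assumed ontology classes.
ONTOLOGY_CLASS = {"Turin": "Location", "W3C": "Organization"}
# Step iii: a tiny gazetteer mapping surface forms to real-world URIs.
URI_GAZETTEER = {
    "Turin": "http://dbpedia.org/resource/Turin",
    "W3C": "http://dbpedia.org/resource/World_Wide_Web_Consortium",
}

def extract_entities(text):
    """Step i: naively treat capitalised tokens as candidate named entities."""
    return re.findall(r"\b[A-Z][A-Za-z0-9]+\b", text)

def annotate(text):
    """Run the three steps and return one annotation per candidate entity."""
    return [
        {
            "surface": surface,
            "class": ONTOLOGY_CLASS.get(surface, "Thing"),
            "uri": URI_GAZETTEER.get(surface),  # None when not disambiguated
        }
        for surface in extract_entities(text)
    ]

print(annotate("The W3C meeting on Linked Data was held in Turin."))
```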
    As a step towards interconnecting the Web to real-world objects via named entities, different techniques have been proposed. The second objective of this work is to propose a comparison of these approaches in order to highlight their strengths and weaknesses in different scenarios, such as scientific articles, newspapers, or user-generated content. We created the Named Entity Recognition and Disambiguation (NERD) web framework, publicly accessible on the Web (through a REST API and a web User Interface), which unifies several named entity extraction technologies. Moreover, we proposed the NERD ontology, a reference ontology for comparing the results of these technologies. Recently, the NERD ontology has been included in the NIF (Natural Language Processing Interchange Format) specification, part of the Creating Knowledge out of Interlinked Data (LOD2) project.
    Summarizing, this dissertation defines a framework for the extraction of knowledge from unstructured data and its classification via distributed ontologies. A detailed study of the Semantic Web and knowledge extraction fields is proposed to define the issues under investigation in this work. Then, it proposes an architecture to tackle the single-point-of-failure issue introduced by the RDF repositories spread across the Web. Although the use of ontologies enables a Web where data is structured and comprehensible by computing machinery, human users can also take advantage of it, especially for annotation tasks. Hence, this work describes an annotation tool for web editing and for audio and video annotation, with a web front-end User Interface built on top of a distributed ontology. Furthermore, this dissertation details a thorough comparison of the state of the art of named entity technologies. The NERD framework is presented as a technology that encompasses existing solutions in the named entity extraction field, and the NERD ontology is presented as a reference ontology for the field. Finally, this work highlights three use cases with the purpose of reducing the number of data silos spread across the Web: a Linked Data approach to augment the automatic classification task in a Systematic Literature Review, an application to lift educational data stored in Sharable Content Object Reference Model (SCORM) data silos to the Web of Data, and a scientific conference venue enhancer plugin built on top of several live data collectors.
    Significant research efforts have been devoted to combining the efficiency of a reliable data structure with the importance of data extraction techniques. This dissertation opens several research directions that mainly join two research communities: the Semantic Web and Natural Language Processing communities. The Web provides a considerable amount of data on which NLP techniques may shed light. The use of the URI as a unique identifier may provide a milestone for the materialization of entities lifted from raw text to real-world objects.
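    As flagged above, the following is a minimal sketch of the triple replication scheme of the peer-to-peer RDF repository: triples are keyed onto a DHT ring and stored on several successor peers, so that an atomic subject lookup still succeeds when one replica fails. The node names, the hash function and the replication factor are illustrative assumptions, not the dissertation's actual implementation.

```python
import hashlib
from bisect import bisect_left

def ring_position(key, space=2**32):
    """Hash an arbitrary key onto the DHT key space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % space

class TripleDHT:
    """Toy DHT ring that stores RDF triples with replication."""

    def __init__(self, node_ids, replication=3):
        self.nodes = sorted((ring_position(n), n) for n in node_ids)
        self.replication = min(replication, len(self.nodes))
        self.store = {name: set() for name in node_ids}

    def _responsible_nodes(self, key):
        # Successor of the key's ring position, plus the following replicas.
        start = bisect_left(self.nodes, (ring_position(key),))
        for i in range(self.replication):
            yield self.nodes[(start + i) % len(self.nodes)][1]

    def insert(self, subject, predicate, obj):
        for node in self._responsible_nodes(subject):
            self.store[node].add((subject, predicate, obj))

    def query_by_subject(self, subject, failed=frozenset()):
        # Any surviving replica can answer an atomic subject lookup.
        for node in self._responsible_nodes(subject):
            if node not in failed:
                return {t for t in self.store[node] if t[0] == subject}
        return set()

dht = TripleDHT(["peer-a", "peer-b", "peer-c", "peer-d"])
dht.insert("ex:alice", "foaf:knows", "ex:bob")
print(dht.query_by_subject("ex:alice", failed={"peer-a"}))
```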

    Learning objects management: theory and practice

    Although LO management is an interesting subject to study due to the current interoperability potential, it is not promoted very much because a number of issues remain to be resolved. LOs need to be designed to achieve educational goals, and the metadata schema must contain the kind of information that makes them reusable in other contexts. This paper presents a pilot project in the design, implementation and evaluation of learning objects in the field of university education, with a specific focus on the development of a metadata typology and a quality evaluation tool, concluding with a summary and analysis of the end results.

    E-Learning and Intelligent Planning: Improving Content Personalization

    Combining learning objects is a challenging topic because of its direct application to curriculum generation, tailored to the students' profiles and preferences. Intelligent planning allows us to adapt learning routes (i.e. sequences of learning objects), thus greatly improving the personalization of contents, the pedagogical requirements and the specific necessities of each student. This paper presents a general and effective approach to extract metadata information from e-learning contents, a form of reusable learning objects, to generate a planning domain in a simple, automated way. Such a domain is used by an intelligent planner that provides an integrated recommendation system, which adapts, stores and reuses the best learning routes according to the students' profiles and course objectives. If any inconsistency happens during the route execution, e.g. the student fails to pass an assessment test which prevents him/her from continuing the natural course of the route, the system…
    Garrido, A.; Morales, L. (2014). E-Learning and Intelligent Planning: Improving Content Personalization. IEEE Revista Iberoamericana de Tecnologías del Aprendizaje, 9(1), 1-7. doi:10.1109/RITA.2014.2301886
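    The core mechanism, generating a learning route from learning-object metadata, can be sketched as a tiny planning problem: each object requires some competencies and provides others, and a valid route never schedules an object before its prerequisites are met. The metadata fields and the greedy forward search below are assumptions for illustration, not the planner described in the paper.

```python
def plan_route(learning_objects, known, goals):
    """learning_objects: {name: {"requires": set, "provides": set}}.
    Returns an ordered list of objects reaching all goal competencies."""
    acquired = set(known)
    route = []
    remaining = dict(learning_objects)
    while not set(goals) <= acquired:
        applicable = [name for name, lo in remaining.items()
                      if lo["requires"] <= acquired]
        if not applicable:
            raise ValueError("No learning route reaches the goals")
        chosen = applicable[0]  # a real planner would optimize this choice
        acquired |= remaining.pop(chosen)["provides"]
        route.append(chosen)
    return route

# A hypothetical catalog of learning objects with prerequisite competencies.
catalog = {
    "intro-html":  {"requires": set(),          "provides": {"html"}},
    "css-basics":  {"requires": {"html"},       "provides": {"css"}},
    "js-basics":   {"requires": {"html"},       "provides": {"js"}},
    "web-project": {"requires": {"css", "js"},  "provides": {"web-app"}},
}
print(plan_route(catalog, known=set(), goals={"web-app"}))
```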

    Implementing OBDA for an end-user query answering service on an educational ontology

    In an age where the productivity of society is no longer defined by the amount of information generated, but by the quality and assertiveness that a set of data may potentially hold, the right questions to ask depend on the semantic awareness capability that an information system can evolve into. To address this challenge, exhaustive research has been carried out over the last decade on the Ontology-Based Data Access (OBDA) paradigm. A conspectus of the most promising technologies with data integration capabilities, and of the foundations on which they rely, is documented in this report as a point of reference for choosing tools that support the incorporation of a conceptual model under an OBDA method. The present study provides a practical approach for implementing an ontology-based data access service for educational users of a Learning Analytics initiative, by allowing them to formulate intuitive enquiries in familiar domain terminology on top of a Learning Management System. The ontology used was completely transformed to semantic Linked Data standards, and some data mappings for testing were included. The semantic Linked Data technologies presented in this document may bring modernization to environments in which the object-oriented and relational paradigms propagate heterogeneous and contradictory requirements. Finally, to validate the implementation, a set of queries was constructed emulating the most relevant dynamics of the model, given the nature of the dataset.
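    As an illustration of the kind of end-user enquiry such a service is meant to answer, the sketch below models a few facts in a small RDF graph and queries them with SPARQL using familiar domain terminology. The namespace, class and property names, and the use of rdflib, are assumptions for the example, not the ontology or toolchain adopted in the study.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EDU = Namespace("http://example.org/edu#")
g = Graph()
g.bind("edu", EDU)

# Hypothetical facts that an OBDA mapping layer could expose from the LMS.
g.add((EDU.alice, RDF.type, EDU.Student))
g.add((EDU.alice, EDU.enrolledIn, EDU.databases101))
g.add((EDU.databases101, EDU.hasTitle, Literal("Databases 101")))

# "Which courses is each student enrolled in?" expressed over the ontology.
QUERY = """
PREFIX edu: <http://example.org/edu#>
SELECT ?student ?title WHERE {
    ?student a edu:Student ;
             edu:enrolledIn ?course .
    ?course edu:hasTitle ?title .
}
"""
for student, title in g.query(QUERY):
    print(student, title)
```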

    Collaborative learning utilizing a domain-based shared data repository to enhance learning outcomes

    A number of learning paradigms have postulated that knowledge formation is a dynamic process where learners actively construct a representation of concepts integrating information from multiple sources. Current teaching strategies utilize a compartmentalized approach where individual courses contain a small subset of the knowledge required for a discipline. The intent of this research is to provide a framework to integrate the components of a discipline into a cohesive whole and accelerate the integration of concepts, enhancing the learning process. The components utilized to accomplish these goals include two new knowledge integration models: a Knowledge Weighting Model (KWM) and the Aggregate-Integrate-Master (AIM) model. Semantic Web design principles utilizing a Resource Description Framework (RDF) schema and the Web Ontology Language (OWL) are used to define concepts and relationships for this knowledge domain, which can then be extended to other domains. Lastly, a Design Research paradigm is utilized to analyze the IT artifact, the Constructivist Unifying Baccalaureate Epistemology (CUBE) knowledge repository, designed to validate this research. The prototype testing population comprised sixty students spanning five classes in fall 2007, following IRB-approved protocols. Data was gathered using a Constructivist Multimedia Learning Survey (CMLES), focus groups and semi-structured interviews. These preliminary data supported the hypotheses that, first, students using the Integrated Knowledge Repository (IKR) have a more positive perception of the learning process than those who use conventional single-course teaching paradigms and, second, that students utilizing the IKR develop a more complex understanding of the interconnected nature of the materials linking a discipline than those who take conventional single-topic courses. Learning is an active process in which learners construct new ideas or concepts based upon their current and past knowledge. The goal is to develop a knowledge structure capable of facilitating the integration of conceptual development in a field of study.
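    To make the modelling idea concrete, the sketch below shows how RDFS/OWL terms could define discipline concepts and a relationship that links them across courses. The vocabulary (a Concept class, a transitive buildsOn property, and the example topics) is hypothetical and does not reproduce the CUBE repository's actual schema.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CUBE = Namespace("http://example.org/cube#")
g = Graph()
g.bind("cube", CUBE)

# A concept class and a transitive "builds on" relation between concepts.
g.add((CUBE.Concept, RDF.type, OWL.Class))
g.add((CUBE.buildsOn, RDF.type, OWL.ObjectProperty))
g.add((CUBE.buildsOn, RDF.type, OWL.TransitiveProperty))
g.add((CUBE.buildsOn, RDFS.domain, CUBE.Concept))
g.add((CUBE.buildsOn, RDFS.range, CUBE.Concept))

# Concepts taught in different courses, linked into a single structure.
g.add((CUBE.Normalization, RDF.type, CUBE.Concept))
g.add((CUBE.RelationalModel, RDF.type, CUBE.Concept))
g.add((CUBE.SetTheory, RDF.type, CUBE.Concept))
g.add((CUBE.Normalization, RDFS.label, Literal("Database normalization")))
g.add((CUBE.Normalization, CUBE.buildsOn, CUBE.RelationalModel))
g.add((CUBE.RelationalModel, CUBE.buildsOn, CUBE.SetTheory))

print(g.serialize(format="turtle"))
```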

    Interlinking educational data to web of data

    With the proliferation of educational data on the Web, publishing and interlinking eLearning resources have become important issues. Educational resources are exposed under heterogeneous Intellectual Property Rights (IPRs), at different times and in different formats. Some resources are implicitly related to each other or to the interests and the cultural and technical environment of learners. Linking educational resources to useful knowledge on the Web improves resource seeking. This becomes crucial for moving from current isolated eLearning repositories towards an open discovery space, including distributed resources irrespective of their geographic and system boundaries. Linking resources is also useful for enriching educational content, as it provides a richer context and other related information to both educators and learners. On the other hand, the emergence of the so-called "Linked Data" brings new opportunities for interconnecting different kinds of resources on the Web of Data. Using the Linked Data approach, data providers can publish structured data and establish typed links between data from various sources. To this aim, many tools, approaches and frameworks have been built, first, to expose the data in Linked Data formats and, second, to discover the similarities between entities in the datasets. The research carried out for this PhD thesis assesses the possibilities of applying the Linked Open Data paradigm to the enrichment of educational resources. Generally speaking, we discuss the interlinking of educational objects and eLearning resources on the Web of Data, focusing on existing schemas and tools. The main goals of this thesis are thus to cover the following aspects:
    -- Exposing educational (meta)data schemas, and particularly IEEE LOM, as Linked Data (illustrated in the sketch below)
    -- Evaluating currently available interlinking tools in the Linked Data context
    -- Analyzing datasets in the Linked Open Data cloud to discover appropriate datasets for interlinking
    -- Discussing the benefits of interlinking educational (meta)data in practice
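    As a pointer to the first two goals above, the sketch below exposes a few LOM-like fields of a hypothetical learning object as RDF (reusing Dublin Core terms) and adds typed links to an external dataset and to an equivalent record assumed to exist in another repository. All URIs and vocabulary choices are illustrative assumptions rather than the mappings developed in the thesis.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, OWL

LO = Namespace("http://example.org/lo/")
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("owl", OWL)

lo = LO["algebra-101"]
# A couple of LOM "general" fields mapped onto Dublin Core terms.
g.add((lo, DCTERMS.title, Literal("Introduction to Linear Algebra", lang="en")))
g.add((lo, DCTERMS.language, Literal("en")))
# Typed links into the Web of Data: the topic of the resource, and an
# equivalent record assumed to exist in another repository.
g.add((lo, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Linear_algebra")))
g.add((lo, OWL.sameAs, URIRef("http://example.org/other-repo/lo/la-intro")))

print(g.serialize(format="turtle"))
```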