
    A Design and Development of the Learning Contents Management based on the Personalized Online Learning

    Teaching-learning methods are undergoing rapid transformation driven by new information and communication technology and the onset of the 4th Industrial Revolution. The educational environment is taking on various forms, found not only in the existing traditional classroom but also in online education and blended learning. Existing online learning systems (LMS, LCMS) operate in a limited content-transmission environment and offer only limited support for a learner's personalized learning. This study reviews existing flexible content models, identifies their problems, and attempts to solve them. An LCMS was designed and implemented on the open-source Moodle platform to offer personalized contents to learners. The LCMS comprises three functions: registration of content metadata entered by an administrator; search for contents matched to an individual learner; and automatic recommendation of personalized contents to learners. As a result of the research, we built an online learning environment that supports customized learning recommendation and self-directed learning, increasing the continuity and efficiency of learning by automatically providing customized online contents to learners. Based on these LCMS functions for personalized educational contents, students' learning in online education can be initiated more effectively.
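The abstract's third function, automatic recommendation from administrator-entered metadata, can be sketched as a simple keyword-overlap ranker. This is purely illustrative: the function and field names are assumptions, and the actual Moodle-based implementation is not described in the abstract.

```python
# Illustrative sketch: rank contents by overlap between a learner's
# profile keywords and the metadata tags entered by the administrator.
def recommend(learner_keywords, contents, top_n=3):
    """Return titles of the top_n contents sharing the most tags."""
    scored = []
    for content in contents:
        overlap = len(set(learner_keywords) & set(content["tags"]))
        if overlap > 0:
            scored.append((overlap, content["title"]))
    # highest overlap first; ties broken alphabetically
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [title for _, title in scored[:top_n]]

catalog = [
    {"title": "Intro to Python", "tags": ["python", "beginner"]},
    {"title": "Data Structures", "tags": ["python", "algorithms"]},
    {"title": "Art History", "tags": ["humanities"]},
]
print(recommend(["python", "algorithms"], catalog))
# → ['Data Structures', 'Intro to Python']
```

A real LCMS would draw these tags from stored learner activity rather than an explicit keyword list, but the matching principle is the same.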

    Hypermedia Learning Objects System - On the Way to a Semantic Educational Web

    While eLearning systems become more and more popular in daily education, available applications lack opportunities to structure, annotate and manage their contents in a high-level fashion. General efforts to address these deficits are taken by initiatives defining rich metadata sets and a semantic Web layer. In the present paper we introduce Hylos, an online learning system. Hylos is based on a cellular eLearning Object (ELO) information model encapsulating metadata conforming to the LOM standard. Content management is provisioned on this semantic metadata level and allows for variable, dynamically adaptable access structures. Context-aware multifunctional links permit systematic navigation depending on the learner's didactic needs, thereby exploring the capabilities of the semantic Web. Hylos is built upon the more general Multimedia Information Repository (MIR) and the MIR adaptive context linking environment (MIRaCLE), its linking extension. MIR is an open system supporting the XML, CORBA and JNDI standards. Hylos benefits from manageable information structures, sophisticated access logic and high-level authoring tools such as the ELO editor, which is responsible for semi-manual metadata creation and WYSIWYG-like content editing. Comment: 11 pages, 7 figures.
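The ELO model with context-aware links can be pictured as objects carrying LOM-style metadata plus a resolver that picks the next object to suit the learner. Everything below is an assumption for illustration: the field names only loosely follow the LOM "educational" category, and Hylos itself is not implemented this way.

```python
# Illustrative sketch (not Hylos code): an eLearning Object (ELO) with
# LOM-style metadata and a context-aware link resolver that selects the
# next object matching the learner's difficulty level.
from dataclasses import dataclass, field

@dataclass
class ELO:
    identifier: str
    title: str
    difficulty: str                             # cf. LOM educational.difficulty
    links: list = field(default_factory=list)   # candidate next ELOs

def next_object(current: ELO, learner_level: str):
    """Return the first linked ELO whose difficulty matches the learner."""
    for target in current.links:
        if target.difficulty == learner_level:
            return target
    return current.links[0] if current.links else None  # fallback link

easy = ELO("elo-2", "Recap", "easy")
hard = ELO("elo-3", "Advanced Topics", "difficult")
start = ELO("elo-1", "Introduction", "medium", links=[easy, hard])
print(next_object(start, "difficult").title)  # → Advanced Topics
```

The point of the sketch is the separation the abstract describes: navigation is computed over the semantic metadata layer, not hard-wired into the content.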

    On the automatic compilation of e-learning models to planning

    [EN] This paper presents a general approach to automatically compile e-learning models to planning, allowing us to easily generate plans, in the form of learning designs, using existing domain-independent planners. The idea is to compile, first, a course defined in a standard e-learning language into a planning domain and, second, a file containing students' learning information into a planning problem. We provide a common compilation and extend it to three particular approaches that cover a full spectrum of planning paradigms, which increases the possibilities of using current planners: (i) hierarchical, (ii) PDDL (Planning Domain Definition Language) actions with conditional effects, and (iii) PDDL durative actions. The learning designs are automatically generated from the plans and can be uploaded, and subsequently executed, by learning management platforms. We also provide an extensive analysis of the e-learning metadata specification required for planning, and the pros and cons of the knowledge engineering procedures used in each of the three compilations. Finally, we include qualitative and quantitative experimentation of the compilations in several domain-independent planners to measure their scalability and applicability.
    This work has been supported by the Spanish MICINN under projects TIN2008-06701-C03 and Consolider Ingenio 2010 CSD2007-00022, by the Mexican National Council of Science and Technology, and by the regional projects CCG08-UC3M/TIC-4141 and Prometeo GVA 2008/051.
    Garrido Tejero, A.; Fernandez, S.; Onaindia De La Rivaherrera, E.; Morales, L.; Borrajo, D.; Castillo, L. (2013). On the automatic compilation of e-learning models to planning. Knowledge Engineering Review, 28(2), 121-136. https://doi.org/10.1017/S0269888912000380
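The core compilation idea, turning course structure into planner input, can be shown with a toy generator that emits one PDDL-style action per learning object whose precondition encodes its prerequisites. The predicate and action names here are invented for the sketch; the paper's actual encodings (hierarchical, conditional-effect, and durative) are considerably richer.

```python
# Toy illustration of compiling a course into PDDL-style planning actions.
# Each learning object (LO) becomes an action whose precondition requires
# its prerequisite LOs to be done, and whose effect marks it done.
def compile_course(prerequisites):
    """prerequisites: dict mapping LO name -> list of prerequisite LO names."""
    actions = []
    for lo, prereqs in prerequisites.items():
        conditions = " ".join(f"(done {p})" for p in prereqs)
        pre = f"(and {conditions})" if prereqs else "()"
        actions.append(
            f"(:action study-{lo}\n"
            f"  :precondition {pre}\n"
            f"  :effect (done {lo}))"
        )
    return "\n".join(actions)

course = {"intro": [], "loops": ["intro"], "recursion": ["loops"]}
domain = compile_course(course)
print(domain)
```

A domain-independent planner given these actions and a goal such as `(done recursion)` would produce the ordered route intro, loops, recursion, which is exactly the learning-design sequencing the paper automates.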

    Knowledge extraction from unstructured data and classification through distributed ontologies

    The World Wide Web has changed the way humans use and share any kind of information. The Web removed several access barriers to published information and has become an enormous space where users can easily navigate through heterogeneous resources (such as linked documents) and can easily edit, modify, or produce them. Documents implicitly enclose information and relationships that are accessible only to human beings. Indeed, the Web of documents evolved towards a space of data silos, linked to each other only through untyped references (such as hypertext references) that only humans were able to understand. A growing desire to programmatically access pieces of data implicitly enclosed in documents has characterized the recent efforts of the Web research community. Direct access means structured data, enabling computing machinery to easily exploit the linking of different data sources. It has become crucial for the Web community to provide a technology stack for easing data integration at large scale: first structuring the data using standard ontologies, and afterwards linking it to external data. Ontologies became the best practice for defining axioms and relationships among classes, and the Resource Description Framework (RDF) became the basic data model chosen to represent ontology instances (i.e., an instance is a value of an axiom, class or attribute). Data becomes the new oil; in particular, extracting information from semi-structured textual documents on the Web is key to realizing the Linked Data vision. In the literature these problems have been addressed with several proposals and standards that mainly focus on technologies to access the data and on formats to represent the semantics of the data and their relationships. With the increasing volume of interconnected and serialized RDF data, RDF repositories may suffer from data overloading and may become a single point of failure for the overall Linked Data vision. 
One of the goals of this dissertation is to propose a thorough approach to managing large-scale RDF repositories and distributing them in a redundant and reliable peer-to-peer RDF architecture. The architecture consists of a logic to distribute and mine the knowledge and of a set of physical peer nodes organized in a ring topology based on a Distributed Hash Table (DHT). Each node shares the same logic and provides an entry point that enables clients to query the knowledge base using atomic, disjunctive and conjunctive SPARQL queries. The consistency of the results is increased using a data redundancy algorithm that replicates each RDF triple on multiple nodes so that, in the case of peer failure, other peers can retrieve the data needed to resolve the queries. Additionally, a distributed load balancing algorithm maintains a uniform distribution of the data among the participating peers by dynamically changing the key space assigned to each node in the DHT. Recently, the process of data structuring has gained more and more attention when applied to the large volume of text spread on the Web, such as legacy data, newspapers, scientific papers or (micro-)blog posts. This process mainly consists of three steps: (i) the extraction from the text of atomic pieces of information, called named entities; (ii) the classification of these pieces of information through ontologies; (iii) their disambiguation through Uniform Resource Identifiers (URIs) identifying real-world objects. As a step towards interconnecting the Web to real-world objects via named entities, different techniques have been proposed. The second objective of this work is to compare these approaches in order to highlight strengths and weaknesses in different scenarios such as scientific papers, newspapers, or user-generated content. 
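The three-step structuring process just described (extraction, ontology classification, URI disambiguation) can be mocked with a tiny gazetteer-based annotator. The gazetteer, class names, and DBpedia URIs are stand-ins chosen for illustration; the real extractors unified by this work are statistical and far more capable.

```python
# Minimal mock of the three-step process: (i) extract named entities,
# (ii) classify them against an ontology, (iii) disambiguate them to URIs.
GAZETTEER = {
    "Paris": ("Place", "http://dbpedia.org/resource/Paris"),
    "Tim Berners-Lee": ("Person", "http://dbpedia.org/resource/Tim_Berners-Lee"),
}

def annotate(text):
    """Return (surface form, ontology class, URI) triples found in text."""
    annotations = []
    for surface, (cls, uri) in GAZETTEER.items():
        if surface in text:                          # step (i): extraction
            annotations.append((surface, cls, uri))  # steps (ii) and (iii)
    return annotations

print(annotate("Tim Berners-Lee proposed the Web while living near Paris."))
```

Each emitted triple carries exactly the three layers of the process: the atomic piece of text, its ontology class, and the URI that anchors it to a real-world object.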
We created the Named Entity Recognition and Disambiguation (NERD) web framework, publicly accessible on the Web (through a REST API and a web User Interface), which unifies several named entity extraction technologies. Moreover, we proposed the NERD ontology, a reference ontology for comparing the results of these technologies. Recently, the NERD ontology has been included in the NIF (Natural language processing Interchange Format) specification, part of the Creating Knowledge out of Interlinked Data (LOD2) project. Summarizing, this dissertation defines a framework for the extraction of knowledge from unstructured data and its classification via distributed ontologies. A detailed study of the Semantic Web and knowledge extraction fields is proposed to define the issues under investigation in this work. The dissertation then proposes an architecture to tackle the single-point-of-failure issue introduced by the RDF repositories spread across the Web. Although the use of ontologies enables a Web where data is structured and comprehensible by computing machinery, human users may take advantage of it, especially for the annotation task. Hence, this work describes an annotation tool for web editing and audio and video annotation, with a web front-end User Interface powered by a distributed ontology. Furthermore, this dissertation details a thorough comparison of the state of the art of named entity technologies. The NERD framework is presented as a technology that encompasses existing solutions in the named entity extraction field, and the NERD ontology is presented as a reference ontology for the field. 
Finally, this work highlights three use cases aimed at reducing the number of data silos spread across the Web: a Linked Data approach to augment the automatic classification task in a Systematic Literature Review, an application to lift educational data stored in Sharable Content Object Reference Model (SCORM) data silos to the Web of data, and a scientific conference venue enhancer plugged on top of several live data collectors. Significant research efforts have been devoted to combining the efficiency of a reliable data structure with the importance of data extraction techniques. This dissertation opens several research doors that mainly join two research communities: the Semantic Web and the Natural Language Processing communities. The Web provides a considerable amount of data on which NLP techniques can shed light, and the use of URIs as unique identifiers provides one milestone for materializing entities lifted from raw text into real-world objects.
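The DHT ring with triple replication described in this dissertation can be sketched as follows. The ring size, hash function, and replication factor below are illustrative choices, not the dissertation's actual parameters.

```python
# Sketch of the peer-to-peer RDF storage idea: hash each triple onto a
# ring of peers and replicate it on the next peers clockwise, so that
# queries survive a single node failure.
import hashlib

def ring_position(key, ring_size=2**16):
    """Map any string key to a position on the ring."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return int(digest, 16) % ring_size

def responsible_peers(triple_key, peers, replicas=2, ring_size=2**16):
    """Return the peers storing a triple: its successor plus replicas."""
    pos = ring_position(triple_key, ring_size)
    ordered = sorted(peers, key=lambda p: ring_position(p, ring_size))
    # first peer at or after the triple's position, wrapping around the ring
    start = next((i for i, p in enumerate(ordered)
                  if ring_position(p, ring_size) >= pos), 0)
    return [ordered[(start + k) % len(ordered)] for k in range(replicas)]

peers = ["peer-a", "peer-b", "peer-c", "peer-d"]
holders = responsible_peers("<s> <p> <o>", peers, replicas=2)
print(holders)
```

With two replicas, any single peer failure still leaves one holder able to answer the SPARQL sub-query for that triple; the load-balancing step in the dissertation would additionally shift the key-space boundaries between peers, which this sketch omits.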

    On the use of case-based planning for e-learning personalization

    This is the author’s version of a work that was accepted for publication in Expert Systems with Applications. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Expert Systems with Applications, 60, 1-15, 2016. DOI:10.1016/j.eswa.2016.04.030
    In this paper we propose myPTutor, a general and effective approach which uses AI planning techniques to create fully tailored learning routes, as sequences of Learning Objects (LOs) that fit the pedagogical and students’ requirements. myPTutor is potentially applicable to support e-learning personalization by producing, and automatically solving, a planning model from (and to) e-learning standards in a vast number of real scenarios, from small to medium/large e-learning communities. Our experiments demonstrate that we can solve scenarios with large courses and a high number of students, so the approach is valid for schools, high schools and universities, especially if they already use Moodle, on top of which we have implemented myPTutor. It is also of practical significance for repairing unexpected discrepancies (while the students are executing their learning routes) by using a Case-Based Planning adaptation process that reduces the differences between the original and the new route, thus enhancing the learning process. © 2016 Elsevier Ltd. All rights reserved.
    This work has been partially funded by the Consolider AT project CSD2007-0022 INGENIO 2010 of the Spanish Ministry of Science and Innovation, the MICINN project TIN2011-27652-C03-01, the MINECO and FEDER project TIN2014-55637-C2-2-R, the Mexican National Council of Science and Technology, the Valencian Prometeo project II/2013/019 and the BW5053 research project of the Free University of Bozen-Bolzano.
    Garrido Tejero, A.; Morales, L.; Serina, I. (2016). On the use of case-based planning for e-learning personalization. Expert Systems with Applications, 60, 1-15. https://doi.org/10.1016/j.eswa.2016.04.030
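The repair goal stated in the abstract, producing a new route that minimizes differences from the original, can be illustrated with a toy repair function plus a similarity check. `difflib` here merely measures how much of the original sequence survives; it is a stand-in for, not a reimplementation of, myPTutor's Case-Based Planning adaptation.

```python
# Toy version of route repair: after a learner deviates, keep the
# original ordering for whatever goals remain, reusing as much of the
# original route as possible.
import difflib

def repair_route(original, completed, remaining_goals):
    """Append the still-pending LOs, in their original order, after
    what the learner has actually completed."""
    pending = [lo for lo in original
               if lo in remaining_goals and lo not in completed]
    return list(completed) + pending

def similarity(route_a, route_b):
    """Ratio in [0, 1]; higher means the repaired route reuses more
    of the original sequence."""
    return difflib.SequenceMatcher(None, route_a, route_b).ratio()

original = ["a", "b", "c", "d"]
# learner skipped "b" and jumped ahead to "c"
new = repair_route(original, completed=["a", "c"], remaining_goals={"b", "d"})
print(new)  # → ['a', 'c', 'b', 'd']
```

A planner-backed repair would also re-check prerequisites between the reordered LOs; this sketch only captures the "minimize the change" objective.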

    Defining a metadata schema for serious games as learning objects

    Games are increasingly recognized for their educational potential. However, when used as a learning resource, games can differ substantially from other educational media: they often combine high-fidelity audio and video content with experiential, social, or exploratory pedagogy. As educators increasingly turn to technology to support the delivery and management of content, the capability to describe and package serious games effectively as reusable learning objects (LOs) becomes increasingly vital. Doing so requires the capability to express games not in terms of technical boundaries, but as coherent and discrete LOs which can be reused and combined. Enabling this requires that metadata be attached to games, and that the metadata schema be made explicit so the metadata can be used beyond its original scope. Furthermore, standardisation of the metadata schema means that systems are able to work together and use data interchangeably. However, current standards for describing educational content cannot be directly utilised to describe serious games as educational resources, which makes it difficult to include serious games in learning object repositories and to describe them coherently across the various online repositories. This paper introduces a metadata schema for describing serious games as educational resources, based on existing standards, so that serious games content can be described within online repositories.
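A metadata schema of the kind the paper proposes can be pictured as a required-field set, LOM-style fields plus game-specific extensions, with a validator that flags incomplete records. The field names below are illustrative assumptions, not the paper's published schema.

```python
# Hypothetical sketch of a serious-games metadata schema: standard
# LOM-style fields extended with game-specific ones, plus a validator.
REQUIRED_FIELDS = {
    "title", "description", "typical_learning_time",   # LOM-style fields
    "gameplay_mode", "platform_requirements",          # game-specific extensions
}

def validate(record):
    """Return the set of required metadata fields missing from a record."""
    return REQUIRED_FIELDS - record.keys()

game = {
    "title": "Supply Chain Tycoon",
    "description": "A management game teaching logistics concepts.",
    "typical_learning_time": "PT2H",          # ISO 8601 duration, as in LOM
    "gameplay_mode": "single-player",
    "platform_requirements": "HTML5 browser",
}
print(validate(game))  # → set()  (record is complete)
```

Making the schema explicit in this way is what allows a repository to reject or flag under-described games before they are indexed, which is precisely the interoperability problem the paper targets.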

    Extending the 5S Framework of Digital Libraries to support Complex Objects, Superimposed Information, and Content-Based Image Retrieval Services

    Advanced services in digital libraries (DLs) have been developed and widely used to address the required capabilities of an assortment of systems as DLs expand into diverse application domains. These systems may require support for images (e.g., Content-Based Image Retrieval), Complex (information) Objects, and use of content at fine grain (e.g., Superimposed Information). Due to the lack of consensus on precise theoretical definitions for those services, implementation efforts often involve ad hoc development, leading to duplication and interoperability problems. This article presents a methodology to address those problems by extending a precisely specified minimal digital library (in the 5S framework) with formal definitions of the aforementioned services. The theoretical extensions of digital library functionality presented here are reinforced with practical case studies, as well as scenarios for the individual and integrative use of the services, to balance theory and practice. One implication of this methodology is that other advanced services can be integrated into the extended framework as they are identified. The theoretical definitions and case studies we present may benefit future development efforts and a wide range of digital library researchers, designers, and developers.