
    Ontology-Based Open-Corpus Personalization for E-Learning

    Conventional closed-corpus adaptive information systems control limited sets of documents in predefined domains and cannot provide access to external content. Such restrictions conflict with today's requirements, when most information systems are implemented in the open document space of the World Wide Web and are expected to operate on open-corpus content. To provide personalized access to open-corpus documents, an adaptive system should be able to model new documents in terms of domain knowledge automatically and dynamically. This dissertation explores the problem of open-corpus personalization and semantic modeling of open-corpus content in the context of e-Learning. Information on the World Wide Web is not without structure. Many collections of online instructional material (tutorials, electronic books, digital libraries, etc.) come with implicit knowledge models encoded in the form of tables of contents, indexes, chapter headers, links between pages, and different styles of text fragments. The main dissertation approach leverages this layer of hidden semantics by extracting and representing it as coarse-grained models of content collections. A central domain ontology is used to maintain overlay modeling of students' knowledge and serves as a reference point for multiple collections of external instructional material. To establish the link between the ontology and the open-corpus content models, a special ontology mapping algorithm has been developed. The proposed approach has been applied in the Ontology-based Open-corpus Personalization Service, which recommends and adaptively annotates online reading material. The domain of Java programming was chosen for the proof-of-concept implementation. A controlled experiment was organized to evaluate the developed adaptive system and the proposed approach overall.
The results of the evaluation demonstrated several significant learning effects of the implemented open-corpus personalization. The analysis of log-based data also showed that the open-corpus version of the system is capable of providing personalization of similar quality to the closed-corpus one. Such results indicate that the proposed approach successfully supports open-corpus personalization for e-Learning. Further research is required to verify whether the approach remains effective in other subject domains and with other types of instructional content.
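The overlay-modeling idea in this abstract can be illustrated with a short sketch. The following Python is our own simplification, not code from the dissertation: the class name, the exponential update rule, and the novelty-based ranking are all assumptions about how a student model keyed to domain-ontology concepts could drive open-corpus recommendation.

```python
# Illustrative sketch (not the dissertation's implementation): an overlay
# student model keeps one knowledge estimate per domain-ontology concept
# and ranks open-corpus documents by how much unknown material they cover.
class OverlayStudentModel:
    def __init__(self, ontology_concepts):
        # 0.0 = no evidence of knowledge, 1.0 = fully known.
        self.knowledge = {c: 0.0 for c in ontology_concepts}

    def observe(self, concept, success, rate=0.3):
        # Move the estimate toward 1 on success, toward 0 on failure.
        target = 1.0 if success else 0.0
        self.knowledge[concept] += rate * (target - self.knowledge[concept])

    def recommend(self, documents):
        # Each document carries a list of ontology concepts it covers
        # (in the thesis these come from the ontology mapping algorithm).
        def novelty(doc):
            return sum(1.0 - self.knowledge.get(c, 0.0) for c in doc["concepts"])
        return sorted(documents, key=novelty, reverse=True)
```

Under this sketch, a document about concepts the student has not yet mastered ranks above one covering already-known material.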

    Light-weight ontologies for scrutable user modelling

    This thesis is concerned with the ways light-weight ontologies can support scrutability for large user models and the user modelling process. It explores the role that light-weight ontologies can play, and how they can be exploited, for the purpose of creating and maintaining large, scrutable user models consisting of hundreds of components. We address problems in four key areas: ontology creation, metadata annotation, creation and maintenance of large user models, and user model visualisation, with the goal of providing a simple and adaptable approach that maintains scrutability. Each of these key areas presents a number of challenges that we address. Our solution is the development of a toolkit, LOSUM, which consists of a number of tools to support the user modelling process. It incorporates light-weight ontologies to fulfil a number of roles: aiding in metadata creation, providing structure for large user model visualisation, and serving as a means to reason across granularities in the user model. In conjunction with this, LOSUM also features a novel visualisation tool, SIV, which performs the dual role of ontology and user model visualisation, supporting the process of ontology creation, metadata annotation, and user model visualisation. We evaluated our approach at each stage with small user studies, and conducted a large-scale integrative evaluation of these approaches together in an authentic learning context with 114 students, of whom 77 had exposure to their learner models through SIV. The results showed that students could use the interface and understand the process of user model construction. The flexibility and adaptability of the toolkit have also been demonstrated by its deployment in several other application areas.
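"Reasoning across granularities" can be pictured with a small sketch. This is a hypothetical illustration, not LOSUM code: we assume the light-weight ontology is a parent-to-children tree and that a coarse-grained component's value is the average of the evidence at its fine-grained descendants.

```python
# Hypothetical sketch (our assumption, not the LOSUM algorithm): roll
# fine-grained user-model scores up a light-weight ontology tree so a
# user can scrutinise a coarse component and drill down into its parts.
def rollup(ontology, scores, node):
    """ontology: dict parent -> list of children; scores: leaf scores."""
    children = ontology.get(node, [])
    if not children:
        # Leaf component: use its directly observed score (default 0.0).
        return scores.get(node, 0.0)
    # Inner node: average the rolled-up values of its children.
    values = [rollup(ontology, scores, child) for child in children]
    return sum(values) / len(values)
```

For example, a "Java" component could summarise "loops" and "oop", while "oop" in turn summarises "classes" and "inheritance".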

    User modeling for exploratory search on the Social Web. Exploiting social bookmarking systems for user model extraction, evaluation and integration

    Exploratory search is an information seeking strategy that extends beyond the query-and-response paradigm of traditional Information Retrieval models. Users browse through information to discover novel content and to learn more about the newly discovered things. Social bookmarking systems integrate well with exploratory search, because they allow one to search, browse, and filter social bookmarks. Our contribution is an exploratory tag search engine that merges social bookmarking with exploratory search. For this purpose, we have applied collaborative filtering to recommend tags to users. User models are an important prerequisite for recommender systems. We have produced a method to algorithmically extract user models from folksonomies, and an evaluation method to measure the viability of these user models for exploratory search. According to our evaluation, web-scale user modeling, which integrates user models from various services across the Social Web, can improve exploratory search. Within this thesis we also provide a method for user model integration. Our exploratory tag search engine implements the findings of our user model extraction, evaluation, and integration methods. It facilitates exploratory search on social bookmarks from Delicious and Connotea and publishes extracted user models as Linked Data.
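The collaborative-filtering step mentioned above can be sketched as user-based filtering over tag profiles. This is a minimal illustration under our own assumptions (a folksonomy reduced to per-user tag counts, cosine similarity, and a top-k score); the thesis's actual extraction and integration methods are not reproduced here.

```python
from collections import Counter
from math import sqrt

# Illustrative sketch (our simplification): recommend tags to a user via
# user-based collaborative filtering over a folksonomy represented as
# user -> Counter of the tags that user has applied.
def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend_tags(folksonomy, user, k=3):
    profile = folksonomy[user]
    scores = Counter()
    for other, tags in folksonomy.items():
        if other == user:
            continue
        sim = cosine(profile, tags)
        if sim == 0:
            continue  # ignore users with no tag overlap
        for tag, count in tags.items():
            if tag not in profile:
                scores[tag] += sim * count  # weight by neighbour similarity
    return [tag for tag, _ in scores.most_common(k)]
```

A user who tags like another user thus receives that neighbour's tags as candidates for exploratory browsing.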

    Spatial description-based approach towards integration of biomedical atlases

    Biomedical imaging has become ubiquitous in both basic research and the clinical sciences. As technology advances, the resulting multitude of imaging modalities has led to a sharp rise in the quantity and quality of such images. Whether for epidemiological studies, educational uses, clinical monitoring, or translational science purposes, the ability to integrate and compare such image-based data has become increasingly critical in the life sciences and eHealth domain. Ontology-based solutions often lack spatial precision. Image processing-based solutions may have difficulties when the underlying morphologies are too different. This thesis proposes a compromise solution which captures location in biomedical images via spatial descriptions. Three approaches to spatial descriptions have been explored: (1) spatial descriptions based on spatial relationships between segmented regions; (2) spatial descriptions based on fiducial points and a set of spatial relations; and (3) spatial descriptions based on fiducial points and a set of spatial relations, integrated with spatial relations between segmented regions. Evaluation, particularly in the context of mouse gene expression data, a good representative of spatio-temporal biological data, suggests that the spatial description-based solution can provide good spatial precision. This dissertation discusses the need for biomedical image data integration and the shortcomings of existing solutions, and proposes new algorithms based on spatial descriptions of anatomical details in the image. Evaluation studies, particularly in the context of gene expression data analysis, were carried out to study the performance of the new algorithms.
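Approach (2), fiducial points plus a set of spatial relations, can be sketched briefly. The relation vocabulary below is our own assumption (simple directional relations in 2D image coordinates, with y increasing downward), not the relation set defined in the thesis.

```python
# Hypothetical sketch of approach (2): describe a location in an image by
# its qualitative spatial relations to named fiducial points. Coordinates
# follow the image convention: x grows rightward, y grows downward.
def spatial_relations(point, fiducials):
    """fiducials: dict name -> (x, y). Returns name -> (horizontal, vertical)."""
    px, py = point
    relations = {}
    for name, (fx, fy) in fiducials.items():
        horiz = "left-of" if px < fx else "right-of" if px > fx else "aligned-with"
        vert = "above" if py < fy else "below" if py > fy else "level-with"
        relations[name] = (horiz, vert)
    return relations
```

Two atlases with different morphologies could then be compared through such descriptions rather than through pixel-level registration.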

    Investigating Automated Student Modeling in a Java MOOC

    With the advent of the ubiquitous Web, programming is no longer the sole prerogative of computer science schools. Scripting languages are taught to wider audiences and programming has become a flagship of any technology-related program. As more and more students are exposed to coding, it is no longer a trade of the select few. As a result, students who would not have opted for a coding class a decade ago are in the position of having to learn a rather difficult subject. The problem of assisting students in learning programming has been explored in several intelligent tutoring systems. The key component of such systems is a student model that keeps track of student progress. In turn, the foundation of a student model is a domain model: a vocabulary of skills (or concepts) that structures the representation of student knowledge. Building domain models for programming is known to be a complicated task. In this paper we explore automated approaches for extracting domain models for learning programming languages and for modeling student knowledge in the process of solving programming exercises. We evaluate the validity of this approach using a large volume of student code submission data from a MOOC on introductory Java programming.
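The first step of such automated domain modeling, mapping constructs in a submission to domain concepts, can be sketched as follows. This pattern table is our own illustration; the paper's actual extraction (which would more plausibly parse the Java code properly) is not shown here.

```python
import re

# Illustrative sketch (our assumption, not the paper's extractor): detect
# which domain-model concepts a student's Java submission exercises, by
# matching language constructs against a concept-pattern table.
CONCEPT_PATTERNS = {
    "loops.for": r"\bfor\s*\(",
    "loops.while": r"\bwhile\s*\(",
    "conditionals.if": r"\bif\s*\(",
    "arrays.declaration": r"\w+\s*\[\]",
    "exceptions.try": r"\btry\s*\{",
}

def extract_concepts(java_source):
    """Return the set of concepts whose pattern occurs in the source."""
    return {c for c, pat in CONCEPT_PATTERNS.items() if re.search(pat, java_source)}
```

Aggregating these concept sets over thousands of submissions is what would let a student model estimate per-concept mastery.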

    Ontologies for automatic question generation

    Assessment is an important tool for formal learning, especially in higher education. At present, many universities use online assessment systems where questions are entered manually into a question bank system. This kind of system requires the instructor's time and effort to construct questions manually. The main aim of this thesis is, therefore, to contribute to the investigation of new question generation strategies for short/long answer questions in order to allow for the development of automatic factual question generation from an ontology for educational assessment purposes. This research is guided by four research questions: (1) How well can an ontology be used for generating factual assessment questions? (2) How can questions be generated from a course ontology? (3) Are the ontological question generation strategies able to generate acceptable assessment questions? and (4) Is topic-based indexing able to improve the feasibility of AQGen? We first conduct ontology validation to evaluate the appropriateness of concept representation using a competency question approach. We used revision questions from the textbook to match keywords (in the revision questions) against concepts (in the ontology). The results show that only half of the ontology concepts matched the keywords. We investigated the unmatched concepts further, found some incorrect concept naming, and suggest a guideline for appropriate concept naming. At the same time, we introduce validation of the ontology using revision questions as competency questions to check for ontology completeness. Furthermore, we also propose 17 short/long answer question templates for 3 question categories, namely definition, concept completion and comparison. In the subsequent part of the thesis, we develop the AQGen tool and evaluate the generated questions. Two Computer Science subjects, namely OS and CNS, are chosen to evaluate the AQGen-generated questions.
We conducted a questionnaire survey of 17 domain experts to identify the experts' agreement on the acceptability of AQGen-generated questions. The experts' agreement is favourable, and three of the four QG strategies proposed can generate acceptable questions. AQGen generated thousands of questions across the 3 question categories, so it was extended with question selection to derive a feasible question set from this large pool. We suggest topic-based indexing, which asserts knowledge about topic chapters into the ontology representation for question selection. The topic indexing shows feasible results for filtering questions by topic. Finally, our results contribute to an understanding of ontology element representation for question generation and how to automatically generate questions from an ontology for educational assessment.
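Template-based generation over an ontology, as described above, can be sketched compactly. The three templates below are our own placeholders for the three named categories; they are not among the 17 templates proposed in the thesis, and the triple format is an assumed simplification of a course ontology.

```python
# Hypothetical sketch (templates are ours, not the thesis's 17): generate
# factual questions from subClassOf triples of a course ontology, one
# template per question category named in the abstract.
TEMPLATES = {
    "definition": "What is {concept}?",
    "concept_completion": "A {concept} is a kind of {parent}. Name another kind of {parent}.",
    "comparison": "Compare {concept} with {other}.",
}

def generate_questions(triples):
    """triples: iterable of (subject, predicate, object) statements."""
    subclasses = [(s, o) for s, p, o in triples if p == "subClassOf"]
    questions = []
    for concept, parent in subclasses:
        questions.append(TEMPLATES["definition"].format(concept=concept))
        questions.append(
            TEMPLATES["concept_completion"].format(concept=concept, parent=parent))
    # Comparison questions pair sibling concepts under the same parent.
    siblings = {}
    for concept, parent in subclasses:
        siblings.setdefault(parent, []).append(concept)
    for group in siblings.values():
        for a, b in zip(group, group[1:]):
            questions.append(TEMPLATES["comparison"].format(concept=a, other=b))
    return questions
```

Question selection by topic would then filter this output using the topic-based index over the ontology.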

    Spatial ontologies for architectural heritage

    Informatics and artificial intelligence have generated new requirements for digital archiving, information, and documentation. Semantic interoperability has become fundamental for the management and sharing of information. Constraints on data interpretation enable both database interoperability, for sharing and reusing data and schemas, and information retrieval in large datasets. Another challenging issue is the exploitation of automated reasoning possibilities. The solution is the use of domain ontologies as a reference for data modelling in information systems. The architectural heritage (AH) domain is considered in this thesis. The documentation in this field, particularly complex and multifaceted, is well known to be critical for the preservation, knowledge, and promotion of monuments. For these reasons, digital inventories, also exploiting standards and new semantic technologies, are developed by international organisations (Getty Institute, ONU, European Union). Geometric and geographic information is an essential part of a monument's documentation. It comprises a number of aspects (spatial, topological, and mereological relations; accuracy; multi-scale representation; time; etc.). Currently, geomatics permits obtaining very accurate and dense 3D models (possibly enriched with textures) and derived products, in both raster and vector format. Many standards have been published for the geographic field or the cultural heritage domain. However, the former are limited in the representation scales they foresee (the maximum is achieved by OGC CityGML), and their semantics do not consider the full semantic richness of AH. The latter (especially the core ontology CIDOC-CRM, the Conceptual Reference Model of the Documentation Committee of the International Council of Museums) were employed to document museums' objects.
Even though CIDOC-CRM was recently extended to standing buildings and a spatial extension was included, the integration of complex 3D models has not yet been achieved. In this thesis, the aspects (especially spatial issues) to consider in the documentation of monuments are analysed. In light of them, OGC CityGML is extended for the management of AH complexity. An approach 'from the landscape to the detail' is used to consider the monument within a wider system, which is essential for analysis and reasoning about such complex objects. An implementation test is conducted on a case study, preferring open source applications.

    Big Data in Management Research. Exploring New Avenues
