7 research outputs found

    TaxoFolk: a hybrid taxonomy–folksonomy classification for enhanced knowledge navigation

    Get PDF
    Accepted Manuscript / Published

    Ontology modularization: principles and practice

    Get PDF
    Technological advances have provided us with the capability to build large intelligent systems capable of using knowledge, which relies on being able to represent that knowledge in a way machines can process and interpret. This is achieved using ontologies, that is, logical theories that capture the knowledge of a domain. It is widely accepted that ontology development is a non-trivial task and can be expedited through the reuse of existing ontologies. However, a developer is likely to require only a part of the original ontology; obtaining this part is the purpose of ontology modularization. In this thesis, a graph-traversal-based technique for ontology module extraction is presented. We present an extensive evaluation of the various ontology modularization techniques in the literature, including a proposal for an entropy-inspired measure. A task-based evaluation is included, which demonstrates that traversal-based ontology module extraction techniques have performance comparable to the logic-based techniques. Agents (autonomous software components) use ontologies in complex systems, with each agent having its own, possibly different, ontology. In such systems agents need to communicate, and successful communication relies on the agents' ability to reach an agreement on the terms they will use. Ontology modularization allows the agents to agree on only those terms relevant to the purpose of the communication. Thus, this thesis presents a novel application of ontology modularization as a space-reduction mechanism for the dynamic selection of ontology alignments in multi-agent systems. The evaluation of this novel application shows that ontology modularization can reduce the search space without adversely affecting the quality of the agreed ontology alignment.
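    The traversal-based module extraction this abstract refers to can be illustrated with a minimal sketch (not the thesis's actual algorithm): starting from a signature of terms the user cares about, follow the ontology's edges and keep everything reachable. The toy concept names below are hypothetical.

```python
from collections import deque

def extract_module(edges, signature):
    """Collect every concept reachable from the signature by
    following directed ontology edges (e.g. subclass links)."""
    # Build an adjacency list: concept -> related concepts.
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)

    # Breadth-first traversal from the signature concepts.
    module, queue = set(signature), deque(signature)
    while queue:
        concept = queue.popleft()
        for neighbour in graph.get(concept, []):
            if neighbour not in module:
                module.add(neighbour)
                queue.append(neighbour)
    return module

# Toy ontology: Dog -> Mammal -> Animal, plus an unrelated branch.
edges = [("Dog", "Mammal"), ("Mammal", "Animal"), ("Car", "Vehicle")]
print(sorted(extract_module(edges, {"Dog"})))  # ['Animal', 'Dog', 'Mammal']
```

    The unrelated Car/Vehicle branch is never reached, which is exactly the space reduction the thesis exploits for agent communication.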

    Adaptive intelligent personalised learning (AIPL) environment

    Get PDF
    For each of us as individuals, the ideal learning scenario would be a learning environment tailored to how we like to learn, personalised to our requirements. This has previously been almost inconceivable given the complexities of learning, the constraints within the environments in which we teach, and the need for global repositories of knowledge to facilitate the process. Whilst it is still not necessarily achievable in its full sense, this research project represents a path towards this ideal. In this thesis, findings from research into the development of a model (the Adaptive Intelligent Personalised Learning (AIPL) model), the creation of a prototype implementation of a system designed around this model (the AIPL environment) and the construction of a suite of intelligent algorithms (the Personalised Adaptive Filtering System (PAFS)) for personalised learning are presented and evaluated. A mixed-methods approach is used in the evaluation of the AIPL environment. The AIPL model is built on the premise that the ideal system is one which considers not just the individual but also groupings of like-minded individuals and their power to influence learner choice. The results show that: (1) there is a positive correlation for using group-learning paradigms; (2) using personalisation as a learning aid can help to facilitate individual learning and encourage learning on-line; (3) using learning styles as a way of identifying and categorising individuals can improve their on-line learning experience; (4) using Adaptive Information Retrieval techniques linked to group-learning paradigms can reduce the problem of mis-matching. A number of approaches for further work to extend and expand upon the work presented are highlighted at the end of the thesis.

    Bottom-up production of lightweight ontologies on the Semantic Web: an application to the indexing of course sections (Production ascendante d'ontologies légères sur le web sémantique)

    Get PDF
    ABSTRACT: Several metadata initiatives have been proposed to improve content discovery on the Web. Despite their high degree of sophistication, none of these initiatives fully adapts to the atomic nature of learning objects. The complex nature of learning objects calls for flexible metadata structures that can be tailored to the particular context of each object. Defining a common metadata structure, however, demands a significant coordination effort among different actors to define, maintain and update the description elements, and the consensus required among these stakeholders in turn makes personalising the description elements even harder. It is nevertheless possible to describe learning objects at a fine grain by inserting annotations directly into Web content. An annotation is a note, an explanation, or any other kind of external remark that can be attached to a document without necessarily being inserted into it. The semantics of an annotation can be made explicit using RDF (Resource Description Framework) descriptions. RDF is a W3C recommendation for describing Web resources. As a data model for resource description, it can be regarded as a metadata model (or meta-metadata). RDF expressions take the form of triples composed of a subject, a predicate and an object; the elements of an RDF triple state that a resource has a given property with a given value. An RDF expression can refer to ontologies to make the meaning of a Web resource precise.
An ontology formally defines the shared knowledge of a particular domain as understood by its different users. Ontologies thus play the role of a universal language, a kind of interlingua, that lets people or applications exchange information on a common basis, covering both the concepts of a domain and the relations between its elements of knowledge. Designing an ontology remains a complex task that requires considerable reflection, and ontologies built in isolation by different individuals can yield very different descriptions of the same domain. One way to reduce the structural and semantic heterogeneity of ontologies is to set up working groups that jointly select and define the elements of a common ontology. Relying on such specialised teams, however, carries the same drawbacks encountered in building the descriptions proposed by the major metadata initiatives. We believe it is nevertheless possible to produce consensual ontologies without necessarily involving a specialised design team. From our own experience, we have observed that when a course designer retrieves content that is already annotated, they are generally interested in preserving the value of the retrieved annotations, and these same annotations are often reused to produce further descriptions.
We therefore believe that ontology construction can be fostered simply by letting course designers freely exchange annotated content among themselves, while allowing each of them to add or remove the semantic descriptions attached to the retrieved annotations. We expect the ontology elements retrieved by each designer to be systematically reused, fostering the construction of increasingly substantial ontologies. Our hypothesis is that successive borrowings of annotations by different course designers always yield a positive balance between each designer's additions and removals, producing a positive leverage effect on the overall production of annotations. In other words, we believe that the number of descriptions grows as each new participant joins a sharing chain. To test this hypothesis, we ran an experiment with eight subjects to measure the reuse rate of annotations, and of the ontology classes associated with those annotations, across successive exchanges of content between course designers. We built a software prototype that supports the construction and exchange of RDF annotations associated with OWL (Web Ontology Language) ontologies. The eight subjects were instructed to exchange course content with one another and to modify, where necessary, the annotations already produced by the others. The software recorded the users' actions. By studying the log file generated by the software, we showed that the reuse rate of annotations is 88%, while that of the exchanged ontology classes reaches 99%.
We thus uncovered a substantial leverage effect in the design of annotated content, which should ease the definitive deployment of the Semantic Web. The benefits of this finding are numerous: notably, no longer depending on specialised teams to produce consensual ontologies, substantially reducing the need to resort to complex ontology alignment techniques, and capturing knowledge directly at the level of content designers. CONTENTS: Metadata initiatives -- Collaboration initiatives -- Educational metadata -- Problems of personalising metadata structures -- Annotations -- Electronic annotations -- Categorisation -- Ontologies -- The Semantic Web -- Resource Description Framework (RDF) -- OWL -- Architecture -- The content description problem -- Metadata description structure -- Manual annotations -- Heterogeneity of RDF descriptions -- Ontology alignment -- Actors -- Methodology -- Software prototype -- Analysis of results -- Duration -- Annotation reuse -- Ontologies produced -- Ontology reuse -- Annogram -- Ease of use -- Behaviour and attitude -- Contribution to the advancement of knowledge
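    The subject-predicate-object triples this abstract describes can be sketched in plain Python. This is a minimal illustration only (the thesis's prototype used real RDF/OWL tooling); the example URIs are hypothetical, though the Dublin Core predicates are real ones commonly used for course metadata.

```python
# Each triple states that a resource (subject) has a given
# property (predicate) with a given value (object).
triples = [
    ("http://example.org/course/ml-101",      # subject (hypothetical URI)
     "http://purl.org/dc/terms/title",        # predicate (Dublin Core)
     "Introduction to Machine Learning"),     # object (a literal value)
    ("http://example.org/course/ml-101",
     "http://purl.org/dc/terms/creator",
     "J. Doe"),
]

def values_of(subject, predicate):
    """Return every object asserted for (subject, predicate)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(values_of("http://example.org/course/ml-101",
                "http://purl.org/dc/terms/title"))
# ['Introduction to Machine Learning']
```

    Pointing the predicate URIs at classes of a shared OWL ontology is what lets the annotations exchanged between course designers keep a common meaning.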

    Exploiting general-purpose background knowledge for automated schema matching

    Full text link
    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming, and it is therefore, for the most part, carried out by humans. One reason for the low degree of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical base for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluations of matching systems. One of the largest structured sources of general-purpose background knowledge are knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared, and multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems which exploit general-purpose background knowledge are presented. Furthermore, exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
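    One common way to exploit knowledge-graph embeddings for matching, of the kind this dissertation surveys, is to align the elements of two schemas by cosine similarity above a threshold. The sketch below is illustrative only, not the dissertation's system; the element names, toy two-dimensional embeddings, and threshold are all made up.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match(src, tgt, threshold=0.8):
    """Greedy 1:1 matching of schema elements whose background-
    knowledge embeddings are sufficiently similar."""
    # Score every cross pair, most similar first.
    candidates = sorted(
        ((cosine(u, v), a, b) for a, u in src.items() for b, v in tgt.items()),
        reverse=True)
    used_src, used_tgt, alignment = set(), set(), []
    for sim, a, b in candidates:
        if sim >= threshold and a not in used_src and b not in used_tgt:
            alignment.append((a, b, round(sim, 2)))
            used_src.add(a)
            used_tgt.add(b)
    return alignment

# Toy embeddings in which "person" is close to "individual":
src = {"person": [1.0, 0.1], "price": [0.0, 1.0]}
tgt = {"individual": [0.9, 0.2], "cost": [0.1, 0.95]}
print(match(src, tgt))
```

    The background knowledge lives entirely in the embeddings: the strings "person" and "individual" share no characters, yet their vectors place them together.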

    Ontology mapping and merging through OntoDNA for learning object reusability

    No full text
    The issue of structural and semantic interoperability among learning objects and other resources on the Internet increasingly points towards Semantic Web technologies in general, and ontology in particular, as a solution provider. An ontology defines an explicit formal specification of the domains to which learning objects belong. However, the effectiveness of interoperating learning objects among various learning object repositories is often reduced by the use of different ontological schemes to annotate learning objects in each repository. Hence, structural differences and semantic heterogeneity between ontologies need to be resolved in order to generate a shared ontology that facilitates learning object reusability. This paper presents OntoDNA, an automated ontology mapping and merging tool. The significance of the study lies in: an algorithmic framework for mapping the attributes of concepts/learning objects and merging these concepts/learning objects from different ontologies based on the mapped attributes; the identification of a suitable threshold value for mapping and merging; an easily scalable unsupervised data mining algorithm for modelling existing concepts and predicting the cluster to which a new concept/learning object should belong; and easy indexing, retrieval and visualization of concepts and learning objects based on the merged ontology.
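    The attribute-based mapping and threshold-driven merging described above can be sketched as follows. This is an illustration of the general idea only, not the OntoDNA algorithm itself (OntoDNA additionally uses an unsupervised clustering step not shown here); the concept names, attribute sets, and Jaccard-overlap measure are assumptions made for the example.

```python
def jaccard(a, b):
    """Attribute-set overlap between two concepts (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def map_and_merge(onto_a, onto_b, threshold=0.5):
    """Map concepts across two ontologies by attribute overlap,
    merging mapped pairs; unmapped concepts are copied as-is."""
    merged, mapped_b = {}, set()
    for name_a, attrs_a in onto_a.items():
        best, best_sim = None, threshold
        for name_b, attrs_b in onto_b.items():
            sim = jaccard(attrs_a, attrs_b)
            if sim >= best_sim:
                best, best_sim = name_b, sim
        if best is not None:
            # Mapped: keep one concept with the union of attributes.
            merged[name_a] = attrs_a | onto_b[best]
            mapped_b.add(best)
        else:
            merged[name_a] = attrs_a
    for name_b, attrs_b in onto_b.items():
        if name_b not in mapped_b:
            merged[name_b] = attrs_b
    return merged

# Two repositories annotating similar learning objects differently:
onto_a = {"Lecture": {"title", "author", "duration"}}
onto_b = {"Lesson": {"title", "author", "format"}, "Quiz": {"score"}}
print(map_and_merge(onto_a, onto_b))
```

    Here "Lecture" and "Lesson" share enough attributes to clear the threshold and merge, while "Quiz" survives unchanged; choosing that threshold well is one of the contributions the abstract highlights.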