
    Learning from Digital Library Evaluations

    In this paper we analyse evaluation studies of the Europeana digital library from its launch in 2009 until today. Using Saracevic's digital library evaluation framework, the studies are categorised by their constructs, contexts, criteria, and methodologies. Concentrating on studies that evaluate Europeana services or single components, we show gaps in the evaluation of certain Europeana aspects. Finally, we derive strategies for building an evaluation archive that serves as a long-term memory and supports comparisons of evaluation results.
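    The categorisation scheme described above lends itself to a simple record structure. Below is a minimal sketch, in Python, of what an entry in such an evaluation archive could look like; the field names follow Saracevic's four dimensions as described in the abstract, while the class name and all example values are hypothetical.

```python
# Minimal sketch of an evaluation-archive record; field names follow
# Saracevic's four dimensions, everything else is hypothetical.
from dataclasses import dataclass, field


@dataclass
class EvaluationStudy:
    study_id: str                 # e.g. a DOI or internal identifier
    year: int
    constructs: list[str] = field(default_factory=list)    # what is evaluated
    contexts: list[str] = field(default_factory=list)      # evaluation perspective
    criteria: list[str] = field(default_factory=list)      # e.g. "usability"
    methodologies: list[str] = field(default_factory=list) # e.g. "survey"


# With studies categorised this way, comparisons become simple filters:
archive = [
    EvaluationStudy("europeana-2011-ux", 2011,
                    ["portal"], ["user-centred"], ["usability"], ["survey"]),
]
usability_studies = [s for s in archive if "usability" in s.criteria]
```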

    User Interfaces to the Web of Data based on Natural Language Generation

    We explore how Virtual Research Environments based on Semantic Web technologies support research interactions with RDF data at various stages of corpus-based analysis. We analyze the Web of Data in terms of human readability, derive labels from variables in SPARQL queries, and apply Natural Language Generation to improve user interfaces to the Web of Data by verbalizing SPARQL queries and RDF graphs. Finally, we present a method to automatically induce RDF graph verbalization templates via distant supervision.
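    One of the steps listed above, deriving labels from variables in SPARQL queries, can be illustrated with a short sketch. The splitting heuristics below (camelCase and snake_case) are assumptions for illustration; the paper's actual derivation rules may differ.

```python
# Sketch of deriving a human-readable label from a SPARQL variable name.
# The camelCase/snake_case heuristics are assumptions, not the paper's rules.
import re


def label_from_variable(var: str) -> str:
    name = var.lstrip("?$")                           # strip the SPARQL sigil
    name = name.replace("_", " ")                     # snake_case -> spaces
    name = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", name)  # camelCase -> spaces
    return name.lower()


assert label_from_variable("?birthDate") == "birth date"
assert label_from_variable("?population_density") == "population density"
```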

    Metadata and Semantics Research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014: Proceedings

    Based on the concept of an application profile as proposed by the Dublin Core initiative, the work presented in this manuscript proposes an application profile for Earth Observation images. The approach aims to provide an open and extensible model that facilitates the sharing and management of distributed images within decentralized architectures. It is intended to eventually cover the needs of discovery, localization, consultation, preservation and processing of data for decision support. We use the Singapore Framework recommendations to build the application profile, with a particular focus on the formalization and representation of the Description Set Profile (DSP) in RDF.
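    As a rough illustration of the kind of record such a profile would constrain, the sketch below builds a minimal Earth Observation image description with Dublin Core terms using rdflib. The properties chosen and all identifiers are assumptions, not the profile proposed in the manuscript.

```python
# Minimal Dublin Core description of an EO image (illustrative only).
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF

g = Graph()
image = URIRef("http://example.org/eo/image/42")   # hypothetical identifier

# Type the resource with the DCMI Type vocabulary, describe it with DC terms.
g.add((image, RDF.type, URIRef("http://purl.org/dc/dcmitype/Image")))
g.add((image, DCTERMS.title, Literal("Sentinel-2 scene (illustrative)")))
g.add((image, DCTERMS.created, Literal("2014-11-27")))
g.add((image, DCTERMS.spatial, Literal("Karlsruhe, Germany")))
g.add((image, DCTERMS.format, Literal("image/tiff")))

print(g.serialize(format="turtle"))
```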

    An empirically-based framework for ontology modularization

    Modularity is increasingly used as an approach to the information overload problem in ontologies: it eases cognitive complexity for humans and computational complexity for machines. The current literature on modularity focuses mainly on techniques, tools, and evaluation metrics. However, ontology developers still face difficulty in selecting the correct technique for specific applications, and the current tools for modularity are not sufficient. These issues stem from a lack of theory about the modularisation process. Several researchers have proposed that a framework for modularity would solve this, but such a framework had not been realised until now. In this article, we survey the existing literature to identify and populate dimensions of modules, experimentally evaluate and characterise 189 existing modules, and create a framework for modularity based on these results. The framework guides the ontology developer throughout the modularisation process. We evaluate the framework with a use case for the Symptom ontology.
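    As a rough illustration of the guidance such a framework can provide, the sketch below maps a developer's use case to a candidate modularisation technique. The mapping is an illustrative assumption, not the dependencies established in the article.

```python
# Illustrative use-case -> technique guidance table; the entries are
# assumptions, not the article's empirically derived dependencies.
GUIDANCE = {
    "maintenance": "partitioning into disjoint modules",
    "reasoning": "locality-based module extraction",
    "collaborative development": "partitioning with overlap allowed",
    "domain coverage": "graph traversal from a seed signature",
}


def recommend_technique(use_case: str) -> str:
    """Return a candidate technique for a use case, if one is recorded."""
    try:
        return GUIDANCE[use_case]
    except KeyError:
        raise ValueError(f"no guidance recorded for use case: {use_case!r}")


print(recommend_technique("reasoning"))   # -> locality-based module extraction
```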

    A model for the semantic integration of geolocated Earth observation data

    The Earth observation domain is evolving rapidly. The European Space Agency recently launched the Sentinel satellites, which deliver 8 to 10 TB of data per day, opening up new application opportunities for studying the environment, urban planning, the oceans, the climate, and more. To better exploit and retrieve these observation data, such applications need to associate them with data from different sources. One of the challenges is therefore to integrate these data despite their heterogeneity. Semantic Web technologies offer a solution by providing an infrastructure based on RDF and ontologies. In this article, we present an approach for enriching the usual metadata of satellite images with external data (e.g. temperature measurements recorded for the region shown in the image). We propose a semantic vocabulary and a formalisation of spatio-temporal relations that support the process of integrating the various geolocated data associated with Earth observations.
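    As a rough illustration of the enrichment described above, the sketch below links a satellite image to an external temperature observation using the W3C SOSA vocabulary and rdflib. The vocabulary choice, the linking property and all identifiers are assumptions; the paper proposes its own vocabulary and spatio-temporal relations.

```python
# Linking a satellite image to an external observation (illustrative only).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/eo/")          # hypothetical namespace

g = Graph()
image = EX["image/sentinel2-tile-31TCJ"]          # hypothetical identifiers
obs = EX["obs/temp-2019-07-01"]

g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.observedProperty, EX.airTemperature))
g.add((obs, SOSA.hasSimpleResult, Literal("31.2", datatype=XSD.decimal)))
g.add((obs, SOSA.resultTime,
       Literal("2019-07-01T12:00:00Z", datatype=XSD.dateTime)))

# Hypothetical spatio-temporal link between the image and the observation:
g.add((image, EX.hasCoincidentObservation, obs))

print(g.serialize(format="turtle"))
```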

    An ontology-based approach in health information systems: Blood Test Ontology example

    The health domain is a complex and distributed research area in which different institutions and people receive and provide services at the same time. The health data about a patient are therefore distributed among doctors, clinics, hospitals, pharmacies and insurance companies. Sharing and reusing this distributed, well-structured and semantically rich clinical data from anywhere, with the appropriate permissions, has been one of the major focuses of health information systems research in recent years. The Semantic Web provides the healthcare domain with a technological infrastructure for representing the meaning of data and reasoning new information from existing knowledge. Blood, as the life fluid, gives clinicians hints about a patient's general health status through analysis of its components, and the results of blood tests contain a great deal of information that can be used by different clinics. In the diagnostic phase, repeatedly running the same blood tests delays the start of treatment and increases cost. The Blood Test Ontology is developed to model blood tests semantically and to define information related to blood and blood tests, as well as the relationships between them. The ontology is intended to serve as a knowledge base in a health information system that supports querying, sharing and reusing patients' personalised blood test results. The Blood Test Ontology is aligned with medical information standards so that it can interoperate with other medical ontologies developed in the healthcare domain.
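    As a rough illustration of how such an ontology might be expressed, the sketch below defines a tiny blood-test fragment with owlready2. The classes and the property are illustrative assumptions, not the actual Blood Test Ontology.

```python
# Tiny illustrative ontology fragment; not the actual Blood Test Ontology.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/blood-test.owl")  # hypothetical IRI

with onto:
    class BloodComponent(Thing):              # e.g. cells, proteins, gases
        pass

    class Hemoglobin(BloodComponent):
        pass

    class BloodTest(Thing):
        pass

    class CompleteBloodCount(BloodTest):
        pass

    class measuresComponent(ObjectProperty):  # links a test to what it measures
        domain = [BloodTest]
        range = [BloodComponent]

# Serialise the fragment so other systems can load it.
onto.save(file="blood_test_fragment.owl", format="rdfxml")
```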

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications, and they have grown so large and complex that they cause cognitive overload for humans, in understanding and maintenance, and computational overload for machines, in processing and reasoning. Furthermore, building ontologies from scratch is time-consuming and not always necessary: prospective ontology developers can reuse existing ontologies of good quality. However, an entire large ontology is rarely required for a particular application; often only a subset of the knowledge is relevant. Modularity simplifies an ontology, by context or by structure, into smaller ontologies while preserving the contextual knowledge. Modularising an ontology has a number of benefits, including simplified maintenance and machine processing, and collaborative development in which work can be shared among experts.

    Modularity has been successfully applied to a number of ontologies to improve usability and manage complexity, but problems remain that have not been satisfactorily addressed. Current modularity tools generate large modules that do not exclusively represent the intended context, and partitioning tools, which ought to generate disjoint modules, sometimes create overlapping ones. These problems arise because different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. Solving them requires theoretical groundwork: determining which ontology module types are the most widely used, characterising each type by its distinguishing properties, and identifying the properties that a 'good' or 'usable' module meets.

    In this thesis, we investigate these problems systematically. We begin by identifying dimensions for modularity that define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. Classifying a set of ontologies along these dimensions reveals dependencies among them and yields an empirically based framework for modularity; this formal framework can guide the user in modularising an ontology and serve as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules revealed dependencies between the metrics and module types; these dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms, implemented in the tool NOMSA, and experimentally evaluated them. NOMSA performs as well as other tools on most performance criteria; the modules generated by two of its algorithms are of good quality when compared with the expected dependencies of the framework, and the modules of the remaining three algorithms correspond to some of the expected metric values for the ontology set in question.

    Together, these results constitute a formal foundation for modularity comprising: an exhaustive set of modularity dimensions with dependencies between them; a framework for guiding the modularisation process and annotating modules; a way to measure the quality of modules using the novel TOMM tool with its new and existing evaluation metrics; the SUGOI tool for module management, investigated for module interchangeability; and an implementation of new algorithms that fills the gaps left by insufficient tools and techniques.
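    As a rough illustration of the extraction task these tools address, the sketch below collects all axioms whose terms overlap a seed signature, iterating to closure. It is a naive approximation for illustration only; it is not one of the NOMSA algorithms, and real locality-based extractors are more precise.

```python
# Naive signature-closure module extraction over a toy axiom representation.
# Each axiom is modelled simply as the frozenset of terms it mentions.
axioms = [
    frozenset({"Symptom", "Finding"}),          # Symptom subClassOf Finding
    frozenset({"Headache", "Symptom"}),         # Headache subClassOf Symptom
    frozenset({"Drug", "ActiveIngredient"}),    # unrelated branch
]


def extract_module(axioms, seed):
    """Collect axioms whose terms overlap the seed signature, to closure."""
    signature, module = set(seed), set()
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax not in module and ax & signature:
                module.add(ax)
                signature |= ax   # grow the signature with the axiom's terms
                changed = True
    return module


print(extract_module(axioms, {"Headache"}))
# -> the two Symptom axioms; the unrelated Drug axiom stays out
```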

    Semantic Systems. The Power of AI and Knowledge Graphs

    This open access book constitutes the refereed proceedings of the 15th International Conference on Semantic Systems, SEMANTiCS 2019, held in Karlsruhe, Germany, in September 2019. The 20 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They cover topics such as web semantics and linked (open) data; machine learning and deep learning techniques; semantic information management and knowledge integration; terminology, thesaurus and ontology management; data mining and knowledge discovery; and semantics in blockchain and distributed ledger technologies.