13 research outputs found

    Multimedia Markup Tools for OpenKnowledge

    No full text
    OpenKnowledge is a peer-to-peer system for sharing knowledge, driven by interaction models that provide the context needed to map the ontological knowledge fragments required for an interaction to take place. The OpenKnowledge system is agnostic to the specific data formats used in interactions, relying on ontology mapping techniques to shim the messages. The potentially large search space for matching ontologies is reduced by the shared context of the interaction. In this paper we investigate what this means for multimedia data on the OpenKnowledge network by discussing how an existing multimedia annotation application (the Semantic Logger) can be migrated into the OpenKnowledge domain.

    Benchmarking Bottom-Up and Top-Down Strategies to SPARQL-to-SQL Query Translation

    Get PDF
    Many researchers have proposed using conventional relational databases to store and query large Semantic Web datasets. The most complex component of this approach is SPARQL-to-SQL query translation. Existing algorithms perform this translation using either a bottom-up or a top-down strategy and result in semantically equivalent but syntactically different relational queries. Do relational query optimizers always produce identical query execution plans for semantically equivalent bottom-up and top-down queries? Which of the two strategies yields faster SQL queries? To address these questions, this work studies bottom-up and top-down translations of SPARQL queries with nested optional graph patterns. This work presents: (1) a basic graph pattern translation algorithm that yields flat SQL queries, (2) a bottom-up nested optional graph pattern translation algorithm, (3) a top-down nested optional graph pattern translation algorithm, and (4) a performance study featuring SPARQL queries with nested optional graph patterns over RDF databases created in Oracle, DB2, and PostgreSQL.
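
    To make the first of these contributions concrete, the sketch below translates a basic graph pattern into a single flat SQL query over a generic three-column triple table. The table layout (Triples(s, p, o)), the function name, and the translation rules are illustrative assumptions, not the algorithms benchmarked in the work.

    # Minimal sketch of basic-graph-pattern-to-SQL translation over a single
    # triple table Triples(s, p, o); schema and naming are assumptions.
    def bgp_to_sql(triple_patterns):
        """Translate (s, p, o) patterns into one flat SQL query.
        Terms starting with '?' are variables; anything else is a constant."""
        froms, wheres, projections = [], [], {}
        for i, (s, p, o) in enumerate(triple_patterns):
            alias = f"t{i}"
            froms.append(f"Triples AS {alias}")
            for col, term in zip(("s", "p", "o"), (s, p, o)):
                ref = f"{alias}.{col}"
                if term.startswith("?"):
                    if term in projections:          # repeated variable: join condition
                        wheres.append(f"{ref} = {projections[term]}")
                    else:                            # first occurrence: project it
                        projections[term] = ref
                else:                                # constant: selection condition
                    wheres.append(f"{ref} = '{term}'")
        cols = ", ".join(f"{ref} AS {var[1:]}" for var, ref in projections.items())
        sql = f"SELECT {cols} FROM {', '.join(froms)}"
        return sql + (" WHERE " + " AND ".join(wheres) if wheres else "")

    # { ?paper rdf:type ex:Paper . ?paper ex:title ?title }
    print(bgp_to_sql([("?paper", "rdf:type", "ex:Paper"),
                      ("?paper", "ex:title", "?title")]))

    Nested OPTIONAL patterns, the focus of the study, additionally introduce left outer joins, and it is there that the bottom-up and top-down translations begin to differ syntactically.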

    The Semantic Web in support of the execution trace analyst

    Get PDF
    Execution trace analysis has become the tool of choice for debugging and optimising application code on embedded systems. These systems have complex architectures built around integrated components called SoCs (Systems-on-Chip). The analyst's work (often done by an application developer) becomes a real challenge, because the traces produced by these systems are very large and the events they contain are low-level. We propose to support this analysis work by using knowledge management tools to ease exploration of the trace. We propose a domain ontology that describes the main concepts and constraints for analysing traces produced by SoCs. The ontology follows the lightweight-ontology paradigm so that knowledge management can scale. It is exploited through RDF triple store technology and declarative SPARQL queries. We illustrate our approach by providing a higher-quality analysis of the traces of a real use case.
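
    As an illustration of this style of declarative exploration, the sketch below loads a few low-level trace events into an in-memory RDF store with rdflib and queries them with SPARQL. The class and property names (ex:Event, ex:timestamp, ex:emittedBy) are hypothetical and are not the ontology proposed in the paper.

    # Hypothetical trace vocabulary; not the paper's ontology.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/soc-trace#")
    g = Graph()

    # A couple of low-level events as they might be lifted from a raw trace.
    g.add((EX.e1, RDF.type, EX.Event))
    g.add((EX.e1, EX.timestamp, Literal(1042)))
    g.add((EX.e1, EX.emittedBy, EX.dma_controller))
    g.add((EX.e2, RDF.type, EX.Event))
    g.add((EX.e2, EX.timestamp, Literal(1378)))
    g.add((EX.e2, EX.emittedBy, EX.cpu0))

    # Declarative exploration: how many events did each component emit?
    results = g.query("""
        PREFIX ex: <http://example.org/soc-trace#>
        SELECT ?component (COUNT(?e) AS ?events)
        WHERE { ?e a ex:Event ; ex:emittedBy ?component . }
        GROUP BY ?component
    """)
    for component, events in results:
        print(component, events)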

    An Ontology-Based Approach to Automatic Generation of GUI for Data Entry

    Get PDF
    This thesis reports an ontology-based approach to the automatic generation of highly tailored GUI components that let end users make customized data requests. Using this GUI generator, a domain expert with no programming skills can browse the data schema through the ontology file of his/her own field, choose attribute fields according to the business's needs, and build a highly customized GUI for end users' data request input. The interface for the domain expert is a tree view that shows not only the domain taxonomy categories but also the relationships between classes. By ticking the checkbox associated with each class, the expert indicates which information is needed. These choices are stored in an XML metadata document. From the viewpoint of programmers, the metadata contains no ambiguity, because every class in an ontology is unique. The metadata can be put to various uses; in this work it drives the process of GUI generation. Since every class, and every attribute of each class, is formally specified in the ontology, generating the GUI is automatic. The approach has been applied to a use case scenario in the meteorological and oceanographic (METOC) area, and the resulting prototype features are reported in this thesis.
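
    A minimal sketch of the final step, turning the expert's selections into data entry widgets, is shown below. The XML metadata layout and the attribute names are assumptions for illustration; the thesis's actual metadata schema may differ, and the "GUI" is emitted here as a plain HTML form fragment for brevity.

    import xml.etree.ElementTree as ET

    # Hypothetical metadata recording the domain expert's checkbox selections.
    metadata = """
    <selection ontology="metoc.owl">
      <class name="Observation">
        <attribute name="windSpeed" type="float"/>
        <attribute name="seaState" type="string"/>
      </class>
    </selection>
    """

    # Every selected attribute becomes one labelled input field.
    root = ET.fromstring(metadata)
    for cls in root.findall("class"):
        print(f"<fieldset><legend>{cls.get('name')}</legend>")
        for attr in cls.findall("attribute"):
            name, typ = attr.get("name"), attr.get("type")
            widget = "number" if typ == "float" else "text"
            print(f'  <label>{name} <input name="{name}" type="{widget}"/></label>')
        print("</fieldset>")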

    The Application of Advanced Knowledge Technologies for Emergency Response

    Get PDF
    Making sense of the current state of an emergency and of the response to it is vital if appropriate decisions are to be made. This task involves the acquisition, interpretation and management of information. In this paper we present an integrated system that applies recent ideas and technologies from the fields of Artificial Intelligence and Semantic Web research to support sense- and decision-making at the tactical response level, and demonstrate it with reference to a hypothetical large-scale emergency scenario. We offer no end-user evaluation of this system; rather, we intend it to serve as a visionary demonstration of the potential of these technologies for emergency response.

    ThesaurusAPI: an API for manipulating thesauri

    Get PDF
    ThesaurusAPI is an API for manipulating thesauri. Internally, thesauri are represented with SKOS and persisted in RDF stores. The paper presents the API and discusses its functionality, its handling of integrity, and the advantages and disadvantages of using SKOS. The API is aimed specifically at thesaurus manipulation, which distinguishes it from other, SKOS-oriented APIs. It is intended for applications that manage thesauri stored on the same server on which the application runs. A freely distributable implementation is available for download.
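
    For orientation, the sketch below shows the kind of SKOS-level operation such an API wraps, including one plausible integrity rule (keeping skos:broader and skos:narrower reciprocal). It uses rdflib directly; the function name and namespace are illustrative and are not part of ThesaurusAPI.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/thesaurus#")
    g = Graph()

    def add_term(concept, label, broader=None):
        """Add a thesaurus term as a skos:Concept, keeping the
        broader/narrower pair consistent."""
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
        if broader is not None:
            g.add((concept, SKOS.broader, broader))
            g.add((broader, SKOS.narrower, concept))  # integrity: keep the inverse link

    add_term(EX.databases, "Databases")
    add_term(EX.rdf_stores, "RDF stores", broader=EX.databases)
    print(g.serialize(format="turtle"))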

    S2ST: A Relational RDF Database Management System

    Get PDF
    The explosive growth of RDF data on the Semantic Web drives the need for novel database systems that can efficiently store and query large RDF datasets. To achieve good performance and scalability of query processing, most existing RDF storage systems use a relational database management system as a backend to manage RDF data. In this paper, we describe the design and implementation of a Relational RDF Database Management System. Our main research contributions are: (1) We propose a formal model of a Relational RDF Database Management System (RRDBMS), (2) We propose generic algorithms for schema, data and query mapping, and (3) We implement the first and only RRDBMS, S2ST, that supports multiple relational database management systems, user-customizable schema mapping, schema-independent data mapping, and semantics-preserving query translation.
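
    A toy version of the schema, data and query mapping pipeline is sketched below on SQLite, using the simplest possible layout (one generic triple table). S2ST's actual mappings are user-customizable and backend-specific, so none of the names or SQL here should be read as its implementation.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Schema mapping: one generic three-column triple table.
    conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")

    # Data mapping: each RDF triple becomes one row.
    rdf_data = [
        ("ex:alice", "rdf:type", "foaf:Person"),
        ("ex:alice", "foaf:name", "Alice"),
    ]
    conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", rdf_data)

    # Query mapping: SELECT ?name WHERE { ex:alice foaf:name ?name }
    # becomes a selection and projection on the triple table.
    for (name,) in conn.execute(
            "SELECT o FROM triples WHERE s = 'ex:alice' AND p = 'foaf:name'"):
        print(name)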

    A Nine Month Progress Report on an Investigation into Mechanisms for Improving Triple Store Performance

    No full text
    This report considers the requirement for fast, efficient, and scalable triple stores as part of the effort to produce the Semantic Web. It summarises relevant information in the major background field of Database Management Systems (DBMS), and provides an overview of the techniques currently in use amongst the triple store community. The report concludes that for individuals and organisations to be willing to provide large amounts of information as openly-accessible nodes on the Semantic Web, storage and querying of the data must be cheaper and faster than they currently are. Experiences from the DBMS field can be used to maximise triple store performance, and suggestions are provided for lines of investigation in areas of storage, indexing, and query optimisation. Finally, work packages are provided describing expected timetables for further study of these topics.
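
    One concrete DBMS-inspired idea in the indexing area is sketched below: composite (covering) indexes over a triple table, one per common triple-pattern access path. The table layout, index choice, and use of SQLite are illustrative assumptions rather than recommendations taken from the report.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")

    # One composite index per frequent access pattern: (s p ?), (p o ?), (o s ?).
    conn.execute("CREATE INDEX idx_spo ON triples (s, p, o)")
    conn.execute("CREATE INDEX idx_pos ON triples (p, o, s)")
    conn.execute("CREATE INDEX idx_osp ON triples (o, s, p)")

    # The optimiser can answer a pattern with a bound predicate and object,
    # e.g. { ?s rdf:type foaf:Person }, from the covering index idx_pos alone.
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT s FROM triples "
        "WHERE p = 'rdf:type' AND o = 'foaf:Person'").fetchall()
    print(plan)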

    SPARQL query processing with conventional relational database systems

    No full text
    This paper describes an evolution of the 3store RDF storage system, extended to provide a SPARQL query interface and informed by lessons learned in the area of scalable RDF storage.
