
    Towards Semi-automatic Generation of R2R Mappings

    Translating data from linked data sources into the vocabulary expected by a linked data application requires a large number of mappings and may involve substantial structural transformations as well as complex property value transformations. The R2R mapping language is a SPARQL-based language for publishing expressive mappings on the web. However, specifying R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. We first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping patterns.
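    The paper's R2R syntax is not reproduced here, but the kind of vocabulary translation such mappings express can be sketched as a SPARQL CONSTRUCT query (the FOAF and schema.org terms below are chosen for illustration, not taken from the paper):

    ```sparql
    PREFIX foaf:   <http://xmlns.com/foaf/0.1/>
    PREFIX schema: <https://schema.org/>

    # Translate FOAF person descriptions into schema.org terms,
    # concatenating given and family name into a single schema:name
    # (a structural change plus a property value transformation).
    CONSTRUCT {
      ?p a schema:Person ;
         schema:name ?fullName .
    }
    WHERE {
      ?p a foaf:Person ;
         foaf:givenName  ?first ;
         foaf:familyName ?last .
      BIND (CONCAT(?first, " ", ?last) AS ?fullName)
    }
    ```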

    Generating SPARQL Executable Mappings to Integrate Ontologies

    Data translation is an integration task that aims at populating a target model with data of a source model by means of mappings. Generating them automatically is appealing insofar as it may reduce integration costs. Matching techniques automatically generate uninterpreted mappings, a.k.a. correspondences, that must be interpreted to perform the data translation task. Other techniques automatically generate executable mappings, which encode an interpretation of these correspondences in a given query language. Unfortunately, current techniques to automatically generate executable mappings are based on instance examples of the target model, which usually contains no data, or on nested relational models, which cannot be straightforwardly applied to semantic-web ontologies. In this paper, we present a technique to automatically generate SPARQL executable mappings between OWL ontologies. The original contributions of our technique are as follows: 1) it is not based on instance examples but on restrictions and correspondences; 2) we have devised an algorithm to make restrictions and correspondences explicit over a number of language-independent executable mappings; and 3) we have devised an algorithm to transform language-independent executable mappings into SPARQL executable mappings. Finally, we evaluate our technique over ten scenarios and check that the interpretation of correspondences that it assumes is coherent with the expected results.
    Funding: Ministerio de Educación y Ciencia TIN2007-64119; Junta de Andalucía P07-TIC-2602; Junta de Andalucía P08-TIC-4100; Ministerio de Ciencia e Innovación TIN2008-04718-E; Ministerio de Ciencia e Innovación TIN2010-09809-E; Ministerio de Ciencia e Innovación TIN2010-10811-E; Ministerio de Ciencia e Innovación TIN2010-09988-
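    As a hedged sketch of what "interpreting a correspondence as a SPARQL executable mapping" can mean (the namespaces, class names, and the restriction are hypothetical, not the paper's examples), a class correspondence src:Professor → tgt:FacultyMember together with a restriction requiring src:name might be encoded as:

    ```sparql
    PREFIX src: <http://example.org/source#>
    PREFIX tgt: <http://example.org/target#>

    # Executable interpretation of the correspondence
    # src:Professor -> tgt:FacultyMember, with the source
    # restriction that instances must carry a src:name.
    CONSTRUCT {
      ?x a tgt:FacultyMember ;
         tgt:fullName ?name .
    }
    WHERE {
      ?x a src:Professor ;
         src:name ?name .
    }
    ```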

    Linked Data based Health Information Representation, Visualization and Retrieval System on the Semantic Web

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. To better facilitate health information dissemination, flexible ways to represent, query and visualize health data become increasingly important. Semantic Web technologies, which provide a common framework that allows data to be shared and reused across applications, can be applied to the management of health data. Linked open data, a semantic-web standard for publishing and linking heterogeneous data, allows not only humans but also machines to browse data without restriction. Through a use case on World Health Organization HIV data for sub-Saharan Africa, a region severely affected by the HIV epidemic, this thesis built a linked-data-based health information representation, querying and visualization system. All the data was represented in RDF and interlinked with other related datasets already on the linked data cloud. Overall, the system holds more than 21,000 triples and offers: a SPARQL endpoint where users can download and reuse the data; a SPARQL query interface where users can pose different types of queries and retrieve the results; a visualization interface where users can visualize SPARQL results with a tool of their preference; and, for users not familiar with SPARQL queries, a linked data search engine interface for searching and browsing the data. The system shows that current linked open data technologies have great potential to represent heterogeneous health data in a flexible and reusable manner, and that they can serve intelligent queries that support decision-making. However, to get the best from these technologies, improvements are needed both in triple-store performance and in domain-specific ontological vocabularies.
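    A hedged example of the kind of query such an endpoint could answer (the ex: vocabulary and property names are invented for illustration; the thesis's actual schema is not shown in the abstract):

    ```sparql
    PREFIX ex:   <http://example.org/health#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    # Hypothetical query: HIV prevalence per country, highest first,
    # joining observations with country labels from an interlinked dataset.
    SELECT ?country ?prevalence
    WHERE {
      ?obs a ex:HIVObservation ;
           ex:country    ?c ;
           ex:prevalence ?prevalence .
      ?c rdfs:label ?country .
    }
    ORDER BY DESC(?prevalence)
    LIMIT 10
    ```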

    Linked Open Data - Creating Knowledge Out of Interlinked Data: Results of the LOD2 Project

    Database Management; Artificial Intelligence (incl. Robotics); Information Systems and Communication Services

    VooDooM : support for understanding and re-engineering of VDM-SL specifications

    Master's thesis in Informatics. The main purpose of this work is to define steady ground for supporting the understanding and re-engineering of VDM-SL specifications. Understanding and re-engineering are justified by Lehman's laws of software evolution, which state, for instance, that systems must be continually adapted and that as a program evolves its complexity increases unless specific work is done to reduce it. This thesis reports the implementation of understanding and re-engineering techniques in a tool called VooDooM, which was built in three well-defined steps. First, development of the language front-end to recognize the VDM-SL language, using a grammar-centered approach supported by the SDF formalism, in which a wide variety of components are automatically generated from a single grammar. Second, development of understanding support, in which graphs are extracted and derived and subsequently used as input to strongly connected components, formal concept analysis and metrication. Last, development of re-engineering support, through a relational calculator that transforms a formal specification into an equivalent relational model which can be translated to SQL. In all steps of the work we thoroughly document the path from theory to practice, and we conclude by reporting successful results obtained in two test cases.

    Adjustable Robust Two-Stage Polynomial Optimization with Application to AC Optimal Power Flow

    In this work, we consider two-stage polynomial optimization problems under uncertainty. In the first stage, one needs to decide upon the values of a subset of the optimization variables (control variables). In the second stage, the uncertainty is revealed and the rest of the optimization variables (state variables) are set as a solution to a known system of possibly non-linear equations. This type of problem occurs, for instance, in optimization for dynamical systems, such as electric power systems. We combine tools from polynomial and robust optimization to provide a framework for general adjustable robust polynomial optimization problems. In particular, we propose an iterative algorithm to build a sequence of (approximately) robustly feasible solutions with an improving objective value and to verify robust feasibility or infeasibility of the resulting solution under a semialgebraic uncertainty set. At each iteration, the algorithm optimizes over a subset of the feasible set and uses affine approximations of the second-stage equations while preserving the non-linearity of other constraints. The algorithm allows for additional simplifications in the case of possibly non-convex quadratic problems under ellipsoidal uncertainty. We implement our approach for AC Optimal Power Flow and demonstrate the performance of our proposed method on Matpower instances.
    Comment: 28 pages, 3 tables
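    The two-stage adjustable robust structure described above can be written in generic form (the symbols are ours for illustration, not necessarily the paper's notation):

    ```latex
    % x: first-stage control variables, y: second-stage state variables,
    % u: uncertain parameters ranging over a semialgebraic set U.
    % The state equations g fix y once x is chosen and u is revealed;
    % h collects the remaining (possibly polynomial) constraints.
    \min_{x \in X} \; f(x)
    \quad \text{s.t.} \quad
    \forall u \in U \;\; \exists y :\;
    g(x, y, u) = 0, \quad h(x, y, u) \le 0
    ```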

    Semi-automatic Generation of Vocabulary Mappings between Datasets on the Web of Data Using SPARQL

    Today, the web has been evolving from a global space of interlinked documents (the Web of Documents) into a global space of linked data (the Web of Data), so that both humans and software agents can understand and extract useful information from that data. However, before a Web of Data can exist, the data must first be given semantics. To this end, a new approach emerged, called the Semantic Web, whose main goal is to ease the interpretation and integration of data on the web. On the Semantic Web, ontologies are commonly used to describe the semantics of data formally. However, as the number of ontologies grows, heterogeneity between them becomes common, since each ontology may use a different vocabulary to represent data about the same domain. This heterogeneity prevents software agents from retrieving information without human intervention. To deal with the problems of heterogeneity, it is very common to define mappings between ontologies. Several languages exist for translating and mapping ontologies, among which we highlight the SPARQL Protocol and RDF Query Language (SPARQL 1.1) and R2R. In this work we adopted SPARQL 1.1 as the ontology mapping language, since it is a standard recommended by the World Wide Web Consortium (W3C) and widely used by the community. Because this language is complex and requires experience in defining and creating mappings, we propose a tool, called SPARQL Mapping with Assertions (SMA), that assists users in the process of generating SPARQL 1.1 mappings between ontologies.
    The SMA tool consists of four parts: (1) initial ontology configuration: the user indicates which ontologies to map, as well as the language in which their files are written; (2) creation of Mapping Assertions (MAs): through the graphical interface, the user specifies the mappings to define, including any transformations or filters that need to be applied to the data; (3) configuration for mapping generation: the user supplies the dataset file of the source ontology, identifies its serialization language, and chooses the serialization language for the generated triples; (4) triple generation through the SPARQL 1.1 mappings: from the previous steps, the tool returns a file with all results in the chosen serialization language. The tool also allows all created mappings to be exported, either as formal languages (mapping assertions or mapping rules) or as SPARQL 1.1 mappings.
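    The output of step (4) would be SPARQL 1.1 CONSTRUCT mappings of roughly this shape (the vocabulary, filter, and transformation are invented for illustration; SMA's exact output is not shown in the abstract):

    ```sparql
    PREFIX src: <http://example.org/source#>
    PREFIX tgt: <http://example.org/target#>

    # Sketch of a generated mapping: rename the property, normalize
    # the value to upper case, and keep only non-empty values.
    CONSTRUCT {
      ?s tgt:countryCode ?code .
    }
    WHERE {
      ?s src:country ?c .
      FILTER (STRLEN(STR(?c)) > 0)      # filter from a Mapping Assertion
      BIND (UCASE(STR(?c)) AS ?code)    # value transformation
    }
    ```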