464 research outputs found

    Active Ontology: An Information Integration Approach for Dynamic Information Sources

    Get PDF
    In this paper we describe an ontology-based information integration approach that is suitable for highly dynamic distributed information sources, such as those available in Grid systems. The main challenges addressed are: 1) information changes frequently and information requests have to be answered quickly in order to provide up-to-date information; and 2) the most suitable information sources have to be selected from a set of different distributed sources that can provide the information needed. To deal with the first challenge, we use an information cache that works with an update-on-demand policy. To deal with the second, we add an information source selection step to the usual architecture used for ontology-based information integration. To illustrate our approach, we have developed an information service that aggregates metadata available in hundreds of information services of the EGEE Grid infrastructure.
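    The details below are not from the paper; as a rough illustration of the two mechanisms it describes, the following Python sketch combines an update-on-demand cache with a simple source-selection step (class names, the freshness limit and the latency-based ranking are assumptions):

```python
import time
from typing import Callable, Dict, List

class MetadataCache:
    """Hypothetical update-on-demand cache: an entry is refreshed only when
    a request arrives and the cached value is older than its freshness limit."""

    def __init__(self, max_age_seconds: float = 60.0):
        self.max_age = max_age_seconds
        self._store: Dict[str, tuple] = {}          # key -> (timestamp, value)

    def get(self, key: str, fetch: Callable[[], object]):
        entry = self._store.get(key)
        if entry is None or time.time() - entry[0] > self.max_age:
            value = fetch()                          # update on demand
            self._store[key] = (time.time(), value)
            return value
        return entry[1]

def select_source(sources: List[dict], wanted: str) -> dict:
    """Toy source-selection step: pick the source that advertises the wanted
    metadata and reports the lowest latency."""
    candidates = [s for s in sources if wanted in s["provides"]]
    return min(candidates, key=lambda s: s["latency"])

# Usage sketch: answer a query for 'site_load' from whichever source is best right now.
sources = [
    {"name": "info-a", "provides": {"site_load"}, "latency": 0.4,
     "fetch": lambda: {"site_load": 0.7}},
    {"name": "info-b", "provides": {"site_load", "storage"}, "latency": 0.1,
     "fetch": lambda: {"site_load": 0.6}},
]
cache = MetadataCache(max_age_seconds=30)
best = select_source(sources, "site_load")
print(cache.get("site_load", best["fetch"]))
```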

    An ActOn-based Semantic Information Service for EGEE

    Get PDF
    We describe a semantic information service that aggregates metadata from a large number of information sources of a large-scale Grid infrastructure. It uses an ontology-based information integration architecture (ActOn) suitable for the highly dynamic distributed information sources available in Grid systems, where information changes frequently and where the information of distributed sources has to be aggregated in order to solve complex queries. These two challenges are addressed by a Metadata Cache that works with an update-on-demand policy and by an information source selection module that selects the most suitable source at a given point in time. We have evaluated the quality of this information service and compared it with other similar services from the EGEE production testbed, with promising results.

    ActOn: A Semantic Information Service for EGEE

    Full text link
    We describe an information service that aggregates metadata available in hundreds of information sources of the EGEE Grid infrastructure. It uses an ontology-based information integration architecture (ActOn), which is suitable for the highly dynamic distributed information sources available in Grid systems, where information changes frequently and where the information of distributed sources has to be aggregated in order to solve complex queries. These two challenges are addressed by a metadata cache that works with an update-on-demand policy and by an information source selection module that selects the most suitable source at a given point in time, respectively. We have evaluated the quality of this information service and compared it with other similar services from the EGEE production testbed, with promising results.

    Design and Validation of a Light Inference System to Support Embedded Context Reasoning

    Full text link
    Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) that aims at simplifying embedded inference processes, offering a set of functionalities to avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications; it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. To test LIS features in a real application scenario, an ‘Activity Monitor’ has been designed and implemented: a personal health-persuasive application that provides feedback on the user’s lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user’s activity level in a timely manner, to decide whether to trigger notifications and to determine the best interface or channel to deliver these context-aware alerts.
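    As a rough illustration of this kind of embedded rule-based reasoning, the following Python sketch runs a few hand-written rules over context facts to classify activity and pick a notification channel (the fact names, rule format and thresholds are invented for illustration and do not reflect the actual LIS API or ontology):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Context facts as simple attribute/value pairs (a stand-in for ontology individuals).
Context = Dict[str, object]

@dataclass
class Rule:
    """A forward-chaining rule: if condition(context) holds, apply action(context)."""
    name: str
    condition: Callable[[Context], bool]
    action: Callable[[Context], None]

def run_rules(rules: List[Rule], ctx: Context) -> Context:
    """Single-pass rule evaluation; a real engine would iterate to a fixpoint."""
    for rule in rules:
        if rule.condition(ctx):
            rule.action(ctx)
    return ctx

# Hypothetical 'Activity Monitor' rules: classify activity and choose a delivery channel.
rules = [
    Rule("low_activity",
         lambda c: c["steps_last_hour"] < 200,
         lambda c: c.update(activity_level="low")),
    Rule("notify_when_idle",
         lambda c: c.get("activity_level") == "low" and not c["in_meeting"],
         lambda c: c.update(notification="take a short walk", channel="phone_popup")),
]

ctx = run_rules(rules, {"steps_last_hour": 120, "in_meeting": False})
print(ctx.get("notification"), "via", ctx.get("channel"))
```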

    Investigating semantic similarity for biomedical ontology alignment

    Get PDF
    Master's thesis in Bioinformatics and Computational Biology (Bioinformatics), Universidade de Lisboa, Faculdade de Ciências, 2017.
    The heterogeneity of biomedical data and the exponential growth of information within this domain have led to the use of ontologies, which encode knowledge in a computationally tractable way. An ontology is usually developed according to the requirements of the research team that builds it, which means that ontologies of the same domain produced by different teams can be different and potentially incompatible. This implies that the various existing ontologies encoding biomedical knowledge can suffer from heterogeneity among themselves: even when the encoded domain is identical, concepts may be represented in different ways, with different specificity and/or granularity. To minimize these differences and to create representations that are more standard and accepted by the community, algorithms (known as matchers) were developed to search for bridges of knowledge (known as mappings) between ontologies in order to align them. The most commonly used matchers in Ontology Matching (OM) are those that take advantage of lexical information (names, synonyms and textual descriptions of the concepts) to calculate similarities between the concepts to be mapped. A complementary approach is the use of Background Knowledge (BK) to increase the number of synonyms used and thus the coverage of the produced alignment. An alternative to lexical algorithms are structural ones, which assume that the ontologies were developed from similar points of view, an unusual situation in practice. The theme of this dissertation is to take advantage of Semantic Similarity (SS) for the development of new OM algorithms; until now, the use of SS in ontology alignment has been limited to the verification of mappings rather than to their discovery. The dissertation presents the development, implementation and evaluation of two algorithms that use SS, both employed to extend previously produced alignments: one searches for equivalence mappings and the other for subsumption mappings (where a concept of one ontology is mapped as a descendant of a concept from another ontology). The proposed algorithms were implemented in AML, a top-performing Ontology Matching system. The equivalence algorithm showed an improvement in F-measure of up to 0.2% compared with the anchor alignment, and an increase of up to 11.3% when compared with another top-performing system (LogMapLt) that does not use BK; within the algorithm's search space, recall ranged from 66.7% to 100%. The subsumption algorithm achieved a precision between 75.9% and 95% (manually evaluated).
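    As a rough illustration of using semantic similarity to extend an anchor alignment, the following Python sketch scores candidate equivalence mappings with a Wu-Palmer-style measure over a toy hierarchy merged through an anchor mapping (the hierarchy, threshold and measure are illustrative assumptions, not AML's actual algorithm):

```python
from typing import Dict, List, Tuple

def ancestors(cls: str, parent: Dict[str, str]) -> List[str]:
    """Chain of ancestors up to the root (single-parent hierarchy for simplicity)."""
    chain = [cls]
    while cls in parent:
        cls = parent[cls]
        chain.append(cls)
    return chain

def wu_palmer(c1: str, c2: str, parent: Dict[str, str]) -> float:
    """Wu-Palmer-style similarity: 2*depth(LCS) / (depth(c1) + depth(c2))."""
    a1, a2 = ancestors(c1, parent), ancestors(c2, parent)
    lcs = next((a for a in a1 if a in set(a2)), None)
    if lcs is None:
        return 0.0
    depth = lambda c: len(ancestors(c, parent))
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

def extend_alignment(unmapped_pairs: List[Tuple[str, str]],
                     parent: Dict[str, str],
                     threshold: float = 0.6) -> List[Tuple[str, str, float]]:
    """Propose equivalence mappings for pairs whose similarity, computed over a
    hierarchy merged through the anchor alignment, reaches the threshold."""
    scored = [(a, b, wu_palmer(a, b, parent)) for a, b in unmapped_pairs]
    return [(a, b, s) for a, b, s in scored if s >= threshold]

# Toy merged hierarchy: an anchor mapping identified O1:Disease with O2:Disorder,
# represented here as the single node 'Disease/Disorder'.
parent = {
    "O1:Carcinoma": "Disease/Disorder",
    "O2:Cancer": "Disease/Disorder",
    "Disease/Disorder": "Thing",
}
print(extend_alignment([("O1:Carcinoma", "O2:Cancer")], parent))
```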

    Container description ontology for CaaS

    Full text link
    Besides the three classical cloud service models (IaaS, PaaS, and SaaS), Container as a Service (CaaS) has gained significant acceptance. It offers deployable applications without the performance overhead of traditional hypervisors. As the adoption of containers becomes increasingly widespread, tools to manage them across the infrastructure become a vital necessity. In this paper, we propose the conceptualisation of a domain ontology for container description called CDO. CDO presents, in a detailed and uniform manner, the functional and non-functional capabilities of containers, Docker and container orchestration systems. In addition, we provide a framework that aims at simplifying container management not only for users but also for cloud providers. This framework serves to populate CDO, helps users deploy their applications on a container orchestration system, and enhances interoperability between cloud providers by offering a migration service for deploying applications across different host platforms. Finally, the effectiveness of CDO is demonstrated through a real case study on the deployment of a micro-service application over a containerised environment under a set of functional and non-functional requirements.
    K. Boukadi, M. Rekik, J. Bernal Bernabe, J. Lloret (2020). Container description ontology for CaaS. International Journal of Web and Grid Services, 16(4):341-363. https://doi.org/10.1504/IJWGS.2020.110944
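    As a rough illustration of the kind of description such an ontology enables, the following rdflib sketch states a container's functional and non-functional capabilities as triples and queries them (the namespace, class and property names are invented for illustration; they are not the published CDO vocabulary):

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

# Hypothetical namespace standing in for the CDO vocabulary.
CDO = Namespace("http://example.org/cdo#")

g = Graph()
g.bind("cdo", CDO)

c = CDO["nginx-frontend"]
g.add((c, RDF.type, CDO.Container))
g.add((c, CDO.image, Literal("nginx:1.25")))                     # functional capability
g.add((c, CDO.exposesPort, Literal(80, datatype=XSD.integer)))
g.add((c, CDO.orchestratedBy, CDO.Kubernetes))
g.add((c, CDO.cpuLimit, Literal("500m")))                        # non-functional requirement
g.add((c, CDO.memoryLimit, Literal("256Mi")))

# Simple SPARQL query: which containers run under Kubernetes within a 256Mi memory limit?
q = """
SELECT ?c WHERE {
  ?c a cdo:Container ;
     cdo:orchestratedBy cdo:Kubernetes ;
     cdo:memoryLimit "256Mi" .
}
"""
for row in g.query(q, initNs={"cdo": CDO}):
    print(row.c)
```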

    DIN Spec 91345 RAMI 4.0 compliant data pipelining: An approach to support data understanding and data acquisition in smart manufacturing environments

    Get PDF
    Today, data scientists in the manufacturing domain are confronted with a set of challenges associated with data acquisition as well as data processing, including the extraction of valuable information to support both the work of the manufacturing equipment and the manufacturing processes behind it. One essential aspect related to data acquisition is the pipelining, which involves various communication standards, protocols and technologies to store and transfer heterogeneous data. These circumstances make it hard to understand, find, access and extract data from the sources, depending on the use cases and applications. In order to support this data pipelining process, this thesis proposes the use of a semantic model. The selected semantic model should be able to describe the smart manufacturing assets themselves as well as to give access to their data along their life cycle. Many research contributions in smart manufacturing have already produced reference architectures or standards for semantic-based metadata description or asset classification. This research builds upon these outcomes and introduces a novel semantic model-based data pipelining approach that uses the Reference Architecture Model for Industry 4.0 (RAMI 4.0) as its basis, enabling easy exploration, understanding, discovery, selection and extraction of data in smart manufacturing environments.
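    As a rough illustration of semantic asset descriptions supporting data discovery, the following Python sketch lets a consumer locate data sources by measurement and life-cycle phase without knowing protocol details up front (the fields and endpoints are illustrative assumptions, not the RAMI 4.0 / DIN SPEC 91345 schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssetDescription:
    """Simplified semantic description of a manufacturing asset and its data access point."""
    asset_id: str
    asset_type: str                  # e.g. "milling_machine"
    life_cycle_phase: str            # e.g. "operation" (RAMI 4.0 life-cycle axis)
    protocol: str                    # e.g. "OPC-UA", "MQTT"
    endpoint: str
    measurements: List[str] = field(default_factory=list)

def find_sources(catalogue: List[AssetDescription], measurement: str,
                 phase: str = "operation") -> List[AssetDescription]:
    """Select assets that expose the requested measurement in the requested life-cycle phase."""
    return [a for a in catalogue
            if measurement in a.measurements and a.life_cycle_phase == phase]

catalogue = [
    AssetDescription("m-001", "milling_machine", "operation",
                     "OPC-UA", "opc.tcp://m-001:4840", ["spindle_speed", "temperature"]),
    AssetDescription("m-002", "oven", "operation",
                     "MQTT", "mqtt://broker:1883/oven/m-002", ["temperature"]),
]

for asset in find_sources(catalogue, "temperature"):
    print(asset.asset_id, asset.protocol, asset.endpoint)
```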