12 research outputs found

    Automatically changing modules in modular ontology development and management

    Modularity has been proposed as a way to deal with large ontologies. It entails various module management tasks, such as swapping an outdated module for a new one, or a computationally costly module for a leaner fragment. No mechanism exists to exchange an arbitrary module automatically. To automate this manual task, we extended the SUGOI algorithm into SUGOI-Gen, with which one can swap any module within a modular system; we implemented it and wrapped a GUI around it. We carried out an experimental evaluation with six ontologies covering three different use cases to determine whether arbitrary interchangeability is practically feasible, and to what extent such changes affect the quality of the module and automated reasoning over it. The results are positive, with the success rate varying between 22% and 100% depending on the number of mappings between the source and target module. The evaluation also revealed that interchangeability does indeed affect a module's metrics. Regarding reasoning, when comparing an original ontology to one in which a module has been swapped, the processing time improves greatly for all but one of the swapped modules in the set.
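SUGOI-Gen itself operates on real OWL ontologies and is not reproduced here; as a minimal sketch of the kind of swap the abstract describes, the following treats each module as a set of triples and uses a hypothetical term mapping between the source and target module. All names are illustrative.

```python
# Hypothetical sketch of swapping one module for another inside a
# modular ontology, treating each module as a set of (s, p, o) triples.

def swap_module(system, old_module, new_module, mapping):
    """Remove old_module's triples from the system, rewrite cross-module
    references via `mapping` (old term -> new term), and add new_module."""
    remaining = system - old_module
    def rewrite(term):
        return mapping.get(term, term)
    relinked = {(rewrite(s), rewrite(p), rewrite(o)) for (s, p, o) in remaining}
    return relinked | new_module

# Toy modular system: a core module referencing a units module.
core = {("Measurement", "hasUnit", "old:Metre")}
old_units = {("old:Metre", "type", "Unit")}
new_units = {("new:Metre", "type", "Unit"), ("new:Metre", "symbol", "m")}

system = core | old_units
swapped = swap_module(system, old_units, new_units, {"old:Metre": "new:Metre"})
# The core axiom now points at the new module's term.
assert ("Measurement", "hasUnit", "new:Metre") in swapped
```

The success rates reported above depend on how complete this source-to-target mapping is: terms without a counterpart in the new module cannot be relinked.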

    An ontology-based recommender system using scholar's background knowledge

    Scholars' recommender systems recommend scientific articles based on the similarity of the articles to scholars' profiles, which are collections of keywords the scholars are interested in. Recent profiling approaches extract keywords from scholars' information, such as publications, search keywords, and homepages, and train a reference ontology, often a general-purpose one, to profile the scholars' interests. However, such approaches do not consider the scholars' knowledge, because the recommender system only recommends articles that are syntactically similar to articles the scholars have already visited, while scholars are interested in articles that contain comparatively new knowledge. In addition, these systems do not support the multi-area property of scholars' knowledge, even though researchers usually work on multiple topics simultaneously and expect to receive focused-topic articles in each recommendation. To address these problems, this study develops a domain-specific reference ontology by merging six Web taxonomies, exploiting Wikipedia to resolve conflicts between the ontologies. The knowledge items in the scholars' information are then extracted, transformed via DBpedia, and clustered into relevant topics to model the multi-area property of scholars' knowledge. Finally, the clustered knowledge items are mapped to the reference ontology using DBpedia to create clustered profiles. In addition, a semantic similarity algorithm is adapted to the clustered profiles, enabling the recommendation of focused-topic articles that contain new knowledge. To evaluate the performance of the proposed approach, three data sets are created from scholars' information in the Computer Science domain, and precision is measured in different settings. Compared with the baseline methods, the proposed method improves average precision by 6% when the new reference ontology is used with the scholars' full knowledge, by a further 7.2% when the scholars' knowledge is transformed via DBpedia, and by a further 8.9% when the clustered profiles are applied. The experimental results confirm that using knowledge items instead of keywords for profiling, as well as transforming the knowledge items via DBpedia, significantly improves recommendation performance. Moreover, the domain-specific reference ontology effectively captures the scholars' full knowledge, which results in more accurate profiling.
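The paper's own similarity algorithm is not given here; as an illustration of scoring an article against clustered profiles so that each recommendation stays focused on one topic cluster, a minimal sketch using cosine similarity over term counts (cluster names and keywords are invented):

```python
# Illustrative sketch (not the paper's implementation): score an article
# against each topic cluster of a scholar's profile and pick the
# best-matching cluster, so recommendations stay topic-focused.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two topic clusters from a hypothetical scholar's profile.
clusters = {
    "semantic_web": Counter({"ontology": 3, "rdf": 2, "dbpedia": 2}),
    "machine_learning": Counter({"neural": 3, "training": 2, "gradient": 1}),
}
article = Counter({"ontology": 2, "dbpedia": 1, "alignment": 1})

scores = {name: cosine(article, prof) for name, prof in clusters.items()}
best = max(scores, key=scores.get)
assert best == "semantic_web"
```

In the approach above, the counted items would be DBpedia-transformed knowledge items rather than raw keywords, which is what drives the reported precision gains.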

    Complex question answering on semi-structured repositories: a user centric process enhanced with context

    The World Wide Web (WWW) was envisioned as a network of interlinked hypertext documents, creating an information space where humans and machines could communicate. However, information published on the traditional WWW was, and still is, stored in unstructured form and is therefore (mostly) consumable by humans only. As a consequence, searching for information on this syntactic, ever-evolving WWW is a task mainly performed by humans, and it is not always easy to accomplish. In this sense, the evolution to a more structured and meaningful Web, where information is given well-defined meaning so as to enable cooperation between humans and machines, became essential. This Web is usually referred to as the Semantic Web. Moreover, the Semantic Web is only fully achievable if data from different sources are connected, creating a Linked Open Data (LOD) repository. This new Web of Linked (Open) Data (i.e. the Semantic Web) has opened new opportunities but also new challenges. Question Answering (QA) over semantic information is now an active research field that tries to take advantage of Semantic Web technologies to improve the question-answering task. The main goal of the World Search project is to exploit the Semantic Web to create mechanisms that support users in specific application domains in answering complex questions based on data from different repositories. However, an evaluation of the state of the art shows that existing applications do not support users in answering complex questions. Accordingly, the work presented in this document focuses on studying and developing methodologies and processes that help users find accurate answers to complex questions that cannot be answered using traditional systems. This includes: (i) overcoming users' difficulty in visualizing the schema underlying the knowledge repositories; (ii) bridging the gap between the natural language expressed by users and the (formal) language understood by the repositories; (iii) processing and returning relevant information that appropriately answers users' questions. To this end, a set of functionalities considered necessary to support users in answering complex questions is identified and formally described. The proposal is materialized in a prototype implementing these functionalities as a user-centric process in which they are exploited iteratively, incrementally and interactively to help users build complex queries against semi-structured repositories (e.g. LOD repositories). The experiments carried out with the prototype show that users effectively benefit from the proposed functionalities: they can navigate efficiently over the information repositories; the gap between the conceptualizations of the different actors is minimized; and users are able to answer complex questions that they could not answer with traditional systems.
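The kind of incremental query building this process supports can be sketched as joining triple patterns with variables against a store standing in for a LOD repository. The matcher below is a toy stand-in for SPARQL-style basic graph pattern matching; the data and entity names are made up.

```python
# Minimal sketch of incremental query building: triple patterns with
# variables ("?x") are matched, one at a time, against an in-memory
# triple store standing in for a LOD repository.

def match(store, patterns):
    """Return variable bindings satisfying all triple patterns."""
    solutions = [{}]
    for pattern in patterns:
        next_solutions = []
        for binding in solutions:
            for triple in store:
                b = dict(binding)
                ok = True
                for p_term, t_term in zip(pattern, triple):
                    if p_term.startswith("?"):
                        if b.get(p_term, t_term) != t_term:
                            ok = False
                            break
                        b[p_term] = t_term
                    elif p_term != t_term:
                        ok = False
                        break
                if ok:
                    next_solutions.append(b)
        solutions = next_solutions
    return solutions

store = {
    ("Lisbon", "capitalOf", "Portugal"),
    ("Portugal", "partOf", "Europe"),
    ("Oslo", "capitalOf", "Norway"),
}
# A complex question built up as two joined patterns:
# "Which city is the capital of a country in Europe?"
answers = match(store, [("?city", "capitalOf", "?country"),
                        ("?country", "partOf", "Europe")])
assert [a["?city"] for a in answers] == ["Lisbon"]
```

Each added pattern narrows the solution set, which mirrors how a user would refine a complex question step by step against the repository's schema.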

    Using application events for real-time business analysis

    Final project submitted for the Master's degree in Engenharia Informática e de Computadores (Informatics and Computer Engineering). The data collected in NOS Inovação's systems for analytical processing have limitations in the structure and vocabulary of the log messages. This work characterizes the format and content of the log messages that currently exist and improves their structure and vocabulary, simplifying and homogenizing the analytical process. The identification and definition of the message content are the basis for a knowledge representation of application events, consisting of a TBox and an ABox in the organization's domain. The TBox is the domain ontology of this work, which formally specifies the structure and vocabulary of the new log messages. This formal definition allows the messages to be used for complex event processing (CEP), supporting composition, aggregation and querying operations; CEP provides mechanisms to detect complex patterns in recorded events and automatically generate responses when those patterns are detected. To validate the structure and vocabulary of the ontology, an application library was developed. The library comprises a domain model that represents the ontology, together with mechanisms for its instantiation, formatting and delivery for real-time analytical processing. The messages generated by this library represent the ABox of the application-event knowledge domain. Because the library is independent of NOS Inovação's services, any system can use it, and since the knowledge representation is split between the ontology (TBox) and the library (ABox), other libraries can be built on the same ontology, generating log messages with a vocabulary shared across the organization and a homogeneous relation structure. Using the library allows every compatible NOS Inovação service to produce log messages that are interpreted in the same way by humans and by automated processes, and it serves as a basis for creating libraries for other execution environments, such as mobile devices or CPEs.
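A hedged sketch of what such an ontology-backed logging library might look like: events draw on a fixed shared vocabulary (a stand-in for the TBox), and each emitted record (the ABox side) is validated against it before serialization. The field and event-type names are illustrative, not NOS Inovação's actual schema.

```python
# Sketch of an event library whose vocabulary is constrained by a
# shared ontology: unknown event types are rejected, and records are
# serialized in one homogeneous format for real-time processing.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Stand-in for the TBox-defined vocabulary (illustrative names).
ALLOWED_EVENT_TYPES = {"SessionStarted", "PlaybackError", "ChannelChanged"}

@dataclass
class ApplicationEvent:
    event_type: str
    service: str
    attributes: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self):
        # Reject vocabulary the shared ontology does not define.
        if self.event_type not in ALLOWED_EVENT_TYPES:
            raise ValueError(f"unknown event type: {self.event_type}")

    def to_json(self) -> str:
        """Serialize in a homogeneous format for analytical processing."""
        return json.dumps(asdict(self), sort_keys=True)

evt = ApplicationEvent("ChannelChanged", "tv-backend", {"channel": "42"})
record = json.loads(evt.to_json())
assert record["event_type"] == "ChannelChanged"
```

Because every service emits records with the same structure and vocabulary, downstream CEP rules can compose, aggregate and query them without per-service parsing logic.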

    Ontology-guided construction of temporal relational data models

    Within an organization, as well as between organizations, many stakeholders must make decisions based on their vision of the organization, its environment, and the interactions between the two. In most cases, the data are fragmented across several uncoordinated sources, which makes it difficult, in particular, to trace their chronological evolution. These sources are heterogeneous in their structure, in the semantics of the data they contain, in the computer technologies that manipulate them, and in the governance rules that control them. In this context, a Learning Health System aims to unify health care, biomedical research and knowledge transfer by providing tools and services that enhance collaboration among stakeholders, the underlying aim being to provide better, personalized services to the individual. Implementing such a system requires a common data model with consistent semantics, structure, and temporal traceability that ensures data integrity. Traditional data-model design methods are based on practice rules that are often imprecise, ad hoc and non-automatable, so extracting the data of interest requires substantial human resources. The reconciliation and aggregation of sources must constantly be redone, because not all needs are known in advance, needs vary as processes evolve, and the data are often incomplete. To obtain interoperability, an automated data-model construction method that jointly maintains the raw source data and their semantics is required. This thesis presents a method that, once a knowledge model has been chosen, builds a data model according to fundamental criteria derived from an ontological model and a temporal relational model based on interval logic. The method is semi-automated by a prototype, OntoRelα. On the one hand, using ontologies to define the semantics of data is an effective way to achieve better semantic interoperability, since an ontology can express, in an automatically exploitable form, the logical axioms that describe the data and their links. On the other hand, using a temporal relational model standardizes the structure of the data model and integrates the temporal constraints as well as the domain constraints defined in the ontologies.
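One way to picture the ontology-to-relational step is generating a temporalized table per ontology class, with valid-time interval columns so row history can be queried with interval logic. This is an illustrative sketch, not OntoRelα itself; the class and property names are invented.

```python
# Illustrative sketch: map an ontology class to a CREATE TABLE statement
# carrying a validity interval (valid_from, valid_to), so that each
# row's period of truth is explicit and interval queries are possible.

def class_to_ddl(cls: str, properties: dict) -> str:
    """Map an ontology class and its data properties to temporalized DDL."""
    cols = [f"  {name} {sql_type}" for name, sql_type in properties.items()]
    cols += ["  valid_from TIMESTAMP NOT NULL",
             "  valid_to   TIMESTAMP NOT NULL",
             "  CHECK (valid_from < valid_to)"]
    return f"CREATE TABLE {cls} (\n" + ",\n".join(cols) + "\n);"

ddl = class_to_ddl("Patient", {"patient_id": "INTEGER", "name": "TEXT"})
assert "CREATE TABLE Patient" in ddl
assert "valid_from TIMESTAMP NOT NULL" in ddl
```

Domain constraints from the ontology (e.g. cardinalities or disjointness axioms) would similarly be translated into relational constraints alongside the temporal ones.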

    Engineering Background Knowledge for Social Robots

    Social robots are embodied agents that continuously perform knowledge-intensive tasks involving several kinds of information coming from different heterogeneous sources. Providing a framework for engineering robots' knowledge raises several problems, such as identifying sources of information and modeling solutions suitable for robots' activities, integrating knowledge coming from different sources, evolving this knowledge with information learned during robots' activities, grounding perceptions in robots' knowledge, assessing robots' knowledge against humans', and so on. In this thesis we investigated the feasibility and benefits of engineering the background knowledge of social robots with a framework based on Semantic Web technologies and Linked Data. The research was supported and guided by a case study that provided a proof of concept through a prototype tested in a real socially assistive context.
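The grounding task mentioned above can be pictured with a toy example: a label reported by perception is looked up in a small Linked-Data-style graph to retrieve facts the robot can act on. The entities and relations are invented for illustration and do not come from the thesis.

```python
# Toy sketch of grounding a perception on background knowledge: a
# recognized label is looked up in a set of (s, p, o) triples standing
# in for a Linked Data knowledge base.

knowledge = {
    ("Cup", "subClassOf", "Container"),
    ("Cup", "usedFor", "Drinking"),
    ("Container", "subClassOf", "PhysicalObject"),
}

def facts_about(entity):
    """Return all (predicate, object) facts known about an entity."""
    return {(p, o) for (s, p, o) in knowledge if s == entity}

# The vision system (hypothetically) reports a "Cup"; the robot grounds
# the percept by querying its background knowledge for usable facts.
perceived = "Cup"
grounded = facts_about(perceived)
assert ("usedFor", "Drinking") in grounded
```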