76 research outputs found

    Generate FAIR Literature Surveys with Scholarly Knowledge Graphs

    Reviewing scientific literature is a cumbersome, time-consuming but crucial activity in research. Leveraging a scholarly knowledge graph, we present a methodology and a system for comparing scholarly literature, in particular research contributions describing the addressed problem, utilized materials, employed methods, and yielded results. The system can be used by researchers to quickly become familiar with existing work in a specific research domain (e.g., a concrete research question or hypothesis). Additionally, it can be used to publish literature surveys following the FAIR Data Principles. The methodology for creating a research contribution comparison consists of multiple tasks, specifically: (a) finding similar contributions, (b) aligning contribution descriptions, (c) visualizing, and finally (d) publishing the comparison. The methodology is implemented within the Open Research Knowledge Graph (ORKG), a scholarly infrastructure that enables researchers to collaboratively describe, find, and compare research contributions. We evaluate the implementation using data extracted from published review articles. The evaluation also addresses the FAIRness of comparisons published with the ORKG.
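The four tasks (a)-(d) can be sketched in miniature: retrieve contributions similar to a target, align property labels across descriptions, and assemble the property-by-contribution table that a comparison visualizes and publishes. This is a minimal illustration under invented data; the contribution IDs, property names, and similarity measure are assumptions, not actual ORKG content or its actual algorithm.

```python
# Hypothetical sketch of the comparison steps: (a) similar-contribution
# retrieval, (b) description alignment, (c/d) the tabular comparison that
# gets visualized and published. All names and data are illustrative.

contributions = {
    "contrib/1": {"research problem": "question answering",
                  "method": "BERT", "result (F1)": "0.83"},
    "contrib/2": {"research problem": "question answering",
                  "approach": "BiLSTM", "result (F1)": "0.79"},
    "contrib/3": {"research problem": "image segmentation",
                  "method": "U-Net"},
}

def similar(target, candidates, threshold=0.2):
    """(a) Find contributions whose descriptions overlap with the target."""
    t = set(candidates[target].items())
    found = []
    for cid, desc in candidates.items():
        if cid == target:
            continue
        d = set(desc.items())
        overlap = len(t & d) / max(len(t | d), 1)  # Jaccard similarity
        if overlap >= threshold:
            found.append(cid)
    return found

# (b) Align semantically equivalent property labels across descriptions.
ALIGN = {"approach": "method"}

def aligned(desc):
    return {ALIGN.get(k, k): v for k, v in desc.items()}

def compare(ids):
    """(c) Build the comparison: one row per property, one column per contribution."""
    descs = {cid: aligned(contributions[cid]) for cid in ids}
    props = sorted({p for d in descs.values() for p in d})
    return {p: {cid: descs[cid].get(p, "-") for cid in ids} for p in props}

ids = ["contrib/1"] + similar("contrib/1", contributions)
table = compare(ids)
```

Here the alignment step is what lets "method" and "approach" land in the same comparison row, which is the crux of making such tables meaningful across differently worded descriptions.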

    The evaluation of ontologies: quality, reuse and social factors

    Finding a “good” or the “right” ontology is a growing challenge in the ontology domain, where one of the main aims is to share and reuse existing semantics and knowledge. Before reusing an ontology, knowledge engineers not only have to find a set of appropriate ontologies for their search query, but they should also be able to evaluate those ontologies according to different internal and external criteria. Therefore, ontology evaluation is at the heart of ontology selection and has received a considerable amount of attention in the literature. Despite the importance of ontology evaluation and selection and the widespread research on these topics, there are still many unanswered questions and challenges when it comes to evaluating and selecting ontologies for reuse. Most of the evaluation metrics and frameworks in the literature are based on a limited set of internal characteristics, e.g., the content and structure of ontologies, and ignore how ontologies are used and evaluated by communities. This thesis aimed to investigate the notion of quality and reusability in the ontology domain and to explore and identify the set of metrics that can affect the process of ontology evaluation and selection for reuse.
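The contrast the thesis draws, internal characteristics versus community signals, can be made concrete with a toy scoring sketch: structural metrics (class count, hierarchy depth) computed from the ontology itself, blended with an external reuse signal. The hierarchy, the metrics, and the weighting are invented for illustration and are not the thesis's actual evaluation framework.

```python
# Illustrative blend of internal metrics (size, depth of a toy class
# hierarchy) with an external "social" signal such as reuse count.
# All data and weights are invented for this sketch.

hierarchy = {                      # parent class -> direct subclasses
    "Thing": ["Agent", "Document"],
    "Agent": ["Person", "Organization"],
    "Document": [],
    "Person": [],
    "Organization": [],
}

def class_count(h):
    """Internal metric: number of classes in the ontology."""
    return len(h)

def max_depth(h, root="Thing"):
    """Internal metric: longest subclass chain from the root."""
    return 1 + max((max_depth(h, k) for k in h.get(root, [])), default=0)

def reuse_score(internal, reused_by, w=0.5):
    """Blend an internal score with how often the community reuses the ontology."""
    return w * internal + (1 - w) * min(reused_by / 10, 1.0)

depth = max_depth(hierarchy)                       # Thing -> Agent -> Person
score = reuse_score(internal=depth / 5, reused_by=7)
```

The point of the sketch is the second argument to `reuse_score`: two ontologies with identical structure can rank very differently once community uptake enters the score, which is exactly the gap the thesis highlights.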

    A semantic framework for ontology usage analysis

    The Semantic Web envisions a Web where information is accessible and processable by computers as well as humans. Ontologies are the cornerstones for realizing this vision of the Semantic Web: they capture domain knowledge by defining terms and the relationships between these terms, providing a formal representation of the domain with machine-understandable semantics. Ontologies are used for semantic annotation, data interoperability, and knowledge assimilation and dissemination. In the literature, different approaches have been proposed to build and evolve ontologies, but one more important concept needs to be considered in the ontology lifecycle: usage. Measuring the “usage” of ontologies will help us to effectively and efficiently make use of semantically annotated structured data published on the Web (formalized knowledge published on the Web), improve the state of ontology adoption and reusability, provide a usage-based feedback loop to the ontology maintenance process for pragmatic conceptual model updates, and source information accurately and automatically for use in other areas of the ontology lifecycle. Ontology Usage Analysis is the area which evaluates, measures, and analyses the use of ontologies on the Web. However, in spite of its importance, the literature offers no formal approach focused on measuring the use of ontologies on the Web. This is in contrast to the approaches proposed for other concepts of the ontology lifecycle, such as ontology development, ontology evaluation, and ontology evolution. To address this gap, this thesis is an effort to assess, analyse, and represent the use of ontologies on the Web. In order to address the problem and realize the abovementioned benefits, an Ontology Usage Analysis Framework (OUSAF) is presented. The OUSAF framework implements a methodological approach comprising identification, investigation, representation, and utilization phases. These phases provide a complete solution for usage analysis by allowing users to identify the key ontologies, and to investigate, represent, and utilize usage-analysis results. Various computation components with several methods, techniques, and metrics for each phase are presented and evaluated using Semantic Web data crawled from the Web. For the dissemination of ontology-usage-related information accessible to machines and humans, the U Ontology is presented to formalize the conceptual model of the ontology usage domain. The evaluation of the framework, solution components, methods, and formalized conceptual model is presented, indicating the usefulness of the overall proposed solution.
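The identification phase described above can be illustrated with a small sketch: given triples crawled from the Web, count how often each ontology's namespace appears in predicate positions to surface the key ontologies. The triples and the namespace-extraction heuristic are illustrative assumptions, not OUSAF's actual components.

```python
# Sketch of an "identification" step: rank ontologies by how often their
# namespaces occur as predicates in crawled triples. Data is illustrative.

from collections import Counter
from urllib.parse import urldefrag

triples = [
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://ex.org/bob"),
    ("http://ex.org/post1", "http://purl.org/dc/terms/creator", "http://ex.org/alice"),
]

def namespace(uri):
    """Strip the local name: keep everything up to the last '#' or '/'."""
    base, frag = urldefrag(uri)
    return base + "#" if frag else base.rsplit("/", 1)[0] + "/"

def usage_counts(triples):
    counts = Counter()
    for _, predicate, _ in triples:
        counts[namespace(predicate)] += 1
    return counts

counts = usage_counts(triples)
top_ontology = counts.most_common(1)[0][0]
```

A real pipeline would of course crawl at scale and distinguish classes from properties, but the frequency table is the natural input to the subsequent investigation and representation phases.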

    Pattern-based design applied to cultural heritage knowledge graphs

    Ontology Design Patterns (ODPs) have become an established and recognised practice for guaranteeing good-quality ontology engineering. There are several ODP repositories where ODPs are shared, as well as ontology design methodologies recommending their reuse. Performing rigorous testing is recommended as well, for supporting ontology maintenance and validating the resulting resource against its motivating requirements. Nevertheless, it is less than straightforward to find guidelines on how to apply such methodologies for developing domain-specific knowledge graphs. ArCo is the knowledge graph of Italian Cultural Heritage and has been developed using eXtreme Design (XD), an ODP- and test-driven methodology. During its development, XD has been adapted to the needs of the CH domain, e.g., by gathering requirements from an open, diverse community of consumers; a new ODP has been defined and many existing ones have been specialised to address specific CH requirements. This paper presents ArCo and describes how to apply XD to the development and validation of a CH knowledge graph, also detailing the (intellectual) process implemented for matching the encountered modelling problems to ODPs. Relevant contributions also include a novel web tool for supporting unit-testing of knowledge graphs, a rigorous evaluation of ArCo, and a discussion of methodological lessons learned during ArCo development.

    Semantic Management of Location-Based Services in Wireless Environments

    In recent years, interest in mobile computing has grown due to the incessant use of mobile devices (e.g., smartphones and tablets) and their ubiquity. The low cost of these devices, together with the large number of sensors and communication mechanisms they are equipped with, makes it possible to develop information systems that are useful to their users. Using one special type of sensor, positioning mechanisms, it is possible to develop Location-Based Services (LBS) that offer added value by considering the location of mobile-device users in order to provide them with personalised information. For example, numerous LBS have been presented, including services for finding taxis, detecting nearby friends, assisting in firefighting, obtaining photos and information about the surroundings, etc. However, current LBS are designed for specific scenarios and goals and are therefore based on predefined schemas for modelling the elements involved in those scenarios. Moreover, the context knowledge they handle is implicit, which is why they only work for a specific purpose. For example, a user arriving in a city today has to know (and understand) which LBS could provide information about specific means of transport in that city, and these services are generally not reusable in other cities. Some ad hoc solutions for offering LBS to users have been proposed in the literature, but there is no general, flexible solution that can be applied to many different scenarios. Developing such a general system by simply combining existing LBS is not straightforward, since it is a challenge to design a common framework that can manage knowledge obtained from data sent by heterogeneous objects (including textual, multimedia, and sensor data) and handle situations in which the system must adapt to contexts where knowledge changes dynamically and devices may use different communication technologies (fixed network, wireless, etc.). Our proposal in this thesis is SHERLOCK (System for Heterogeneous mobilE Requests by Leveraging Ontological and Contextual Knowledge), which presents a general, flexible architecture for offering users LBS that may be of interest to them. SHERLOCK builds on semantic and agent technologies: 1) it uses ontologies to model information about users, devices, services, and the environment, and a reasoner to manage these ontologies and infer knowledge that has not been made explicit; 2) it uses an agent-based architecture (with both static and mobile agents) that allows different SHERLOCK devices to exchange knowledge, thereby keeping their local ontologies up to date, and to process their users' information requests by finding what they need, wherever it is. The use of these two technologies makes SHERLOCK flexible in terms of both the services it offers the user (which are learned through interaction among devices) and the mechanisms for finding the information the user wants (which adapt to the underlying communication infrastructure).
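The role of the reasoner, inferring knowledge that has not been made explicit, can be illustrated with a minimal transitive-subclass inference: a request for "Transport" services is satisfied by a "NightBus" service even though no one asserted that directly. The class names and matching logic are invented for the sketch and are not SHERLOCK's actual ontology.

```python
# Toy sketch of reasoner-style inference: a "NightBus" service answers a
# "Transport" request because NightBus is transitively a subclass of
# Transport. All class names are invented for illustration.

subclass_of = {          # direct subclass assertions (child -> parent)
    "NightBus": "Bus",
    "Bus": "Transport",
    "Taxi": "Transport",
}

def ancestors(cls, assertions):
    """Infer the transitive closure of subclass-of for one class."""
    result = set()
    while cls in assertions:
        cls = assertions[cls]
        result.add(cls)
    return result

def matches(service_class, requested, assertions):
    """A service satisfies a request for its own class or any superclass."""
    return service_class == requested or requested in ancestors(service_class, assertions)

ok = matches("NightBus", "Transport", subclass_of)
```

A production system would delegate this to a description-logic reasoner over OWL ontologies; the sketch only shows why implicit knowledge matters for matching user requests to heterogeneous services.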

    The construction of a linguistic linked data framework for bilingual lexicographic resources

    Little-known lexicographic resources can be of tremendous value to users once digitised. By extending the digitisation efforts for a lexicographic resource, converting the human-readable digital object to a state that is also machine-readable, structured data can be created that is semantically interoperable, thereby enabling the lexicographic resource to access, and be accessed by, other semantically interoperable resources. The purpose of this study is to formulate a process for converting a lexicographic resource in print form to a machine-readable bilingual lexicographic resource applying linguistic linked data principles, using the English-Xhosa Dictionary for Nurses as a case study. This is accomplished by creating a linked data framework, in which data are expressed in the form of RDF triples and URIs, in a manner which allows for extensibility to a multilingual resource. Click languages with characters not typically represented by the Roman alphabet are also considered. The purpose of this linked data framework is to define each lexical entry as “historically dynamic”, instead of “ontologically static” (Rafferty, 2016:5). For a framework whose instances are in constant evolution, focus is thus given to the management of provenance and the linked data generation thereof. The output is an implementation framework which provides methodological guidelines for similar language resources in the interdisciplinary field of Library and Information Science.
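What "data expressed in the form of RDF triples and URIs" means for a bilingual entry can be sketched concretely: two lexical entries, one per language, linked by a translation relation. The OntoLex-Lemon-style predicate names, the URIs, and the translation property below are assumptions for illustration, not the study's actual schema.

```python
# Minimal sketch of one English-Xhosa entry as (subject, predicate, object)
# triples. Predicates are OntoLex-Lemon-inspired; the translation property
# and all URIs are illustrative assumptions.

BASE = "http://example.org/lexicon/"
ONTOLEX = "http://www.w3.org/ns/lemon/ontolex#"

def lexical_entry(entry_id, written_form, lang):
    """Triples typing an entry and recording its language-tagged written form."""
    s = BASE + entry_id
    return [
        (s, "rdf:type", ONTOLEX + "LexicalEntry"),
        (s, ONTOLEX + "writtenRep", f'"{written_form}"@{lang}'),
    ]

def translation(source_id, target_id):
    """Link two entries that express the same concept across languages."""
    return [(BASE + source_id, BASE + "translatesTo", BASE + target_id)]

graph = (lexical_entry("nurse-en", "nurse", "en")
         + lexical_entry("umongikazi-xh", "umongikazi", "xh")
         + translation("nurse-en", "umongikazi-xh"))
```

Because each entry is its own URI-identified resource, provenance statements can later be attached to individual entries as they evolve, which is what makes the "historically dynamic" treatment workable.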

    Ontology Pattern-Based Data Integration

    Data integration is concerned with providing unified access to data residing at multiple sources. Such unified access is realized by having a global schema and a set of mappings between the global schema and the local schemas of each data source, which specify how user queries at the global schema can be translated into queries at the local schemas. Data sources are typically developed and maintained independently, and are thus highly heterogeneous. This causes difficulties in integration because of the lack of interoperability in architecture, data format, and the syntax and semantics of the data. This dissertation is a study of how small, self-contained ontologies, called ontology design patterns, can be employed to provide semantic interoperability in a cross-repository data integration system. The idea of this so-called ontology pattern-based data integration is that a collection of ontology design patterns can act as a global schema that still contains sufficient semantics, but is also flexible and simple enough to be used by linked data providers. On the one side, this differs from existing ontology-based solutions, which are based on large, monolithic ontologies that provide very rich semantics but enforce overly restrictive ontological choices, and hence are shunned by many data providers. On the other side, this also differs from purely linked-data-based solutions, which offer simplicity and flexibility in data publishing, but too little in terms of semantic interoperability. We demonstrate the feasibility of this idea through the development of a large-scale data integration project involving seven ocean science data repositories from five institutions in the U.S. In addition, we make two contributions as part of this dissertation work, which also play crucial roles in the aforementioned data integration project. First, we develop a collection of more than a dozen ontology design patterns that capture the key notions of ocean science occurring in the participating data repositories. These patterns contain axiomatizations of the key notions and were developed with intensive involvement from domain experts. Modeling of the patterns followed a systematic workflow to ensure modularity, reusability, and flexibility of the whole pattern collection. Second, we propose so-called pattern views that allow data providers to publish their data in a very simple intermediate schema, and we show that pattern views can greatly assist data providers in publishing their data without requiring a thorough understanding of the axiomatization of the patterns.
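The pattern-view idea, a flat provider-friendly schema expanded into the richer pattern structure by fixed mappings, can be sketched as follows. The field names, pattern nodes, and mapping are invented for illustration and are not the dissertation's actual ocean-science patterns.

```python
# Hypothetical "pattern view": a flat record published by a data provider
# is expanded into per-pattern-node property dicts via a fixed mapping.
# All field and pattern names are invented for this sketch.

provider_record = {"cruise_id": "OC468-1", "vessel": "Oceanus",
                   "start": "2010-06-01"}

VIEW_TO_PATTERN = {        # flat field -> (pattern node, pattern property)
    "cruise_id": ("Cruise", "identifier"),
    "vessel":    ("Vessel", "name"),
    "start":     ("Trajectory", "beginsAt"),
}

def expand(record, mapping):
    """Expand a flat view record into the structured pattern representation."""
    pattern = {}
    for field, value in record.items():
        node, prop = mapping[field]
        pattern.setdefault(node, {})[prop] = value
    return pattern

expanded = expand(provider_record, VIEW_TO_PATTERN)
```

The provider only ever sees the flat record; the mapping, written once per repository, carries the burden of conforming to the pattern axiomatization, which is the division of labor the dissertation argues for.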

    AgroPortal: a vocabulary and ontology repository for agronomy

    Many vocabularies and ontologies are produced to represent and annotate agronomic data. However, those ontologies are spread out, in different formats, of different sizes, with different structures, and from overlapping domains. Therefore, there is a need for a common platform to receive and host them, align them, and enable their use in agro-informatics applications. By reusing the National Center for Biomedical Ontologies (NCBO) BioPortal technology, we have designed AgroPortal, an ontology repository for the agronomy domain. The AgroPortal project reuses the biomedical domain’s semantic tools and insights to serve agronomy, but also food, plant, and biodiversity sciences. We offer a portal that features ontology hosting, search, versioning, visualization, comments, and recommendation; enables semantic annotation; stores and exploits ontology alignments; and enables interoperation with the semantic web. AgroPortal specifically satisfies requirements of the agronomy community in terms of ontology formats (e.g., SKOS vocabularies and trait dictionaries) and supported features (offering detailed metadata and advanced annotation capabilities). In this paper, we present our platform’s content and features, including the additions to the original technology, as well as preliminary outputs of five driving agronomic use cases that participated in the design and orientation of the project to anchor it in the community. By building on the experience and existing technology acquired from the biomedical domain, we present in AgroPortal a robust and feature-rich repository of great value for the agronomic domain.
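The semantic-annotation feature mentioned above is exposed, in the NCBO BioPortal technology that AgroPortal reuses, as a REST endpoint that matches free text against ontology terms. The sketch below only builds such a request URL following BioPortal conventions; the exact AgroPortal host, path, and parameters are assumptions that should be checked against the portal's current API documentation, and the API key is a placeholder.

```python
# Sketch of building an annotator request in the NCBO BioPortal style that
# AgroPortal reuses. Host, path, and parameter names follow BioPortal
# conventions and are assumptions; verify against AgroPortal's API docs.

from urllib.parse import urlencode

def annotator_url(text, apikey, base="http://data.agroportal.lirmm.fr"):
    """Build the GET URL for annotating free text with ontology terms."""
    query = urlencode({"text": text, "apikey": apikey})
    return f"{base}/annotator?{query}"

url = annotator_url("drought tolerance in wheat", apikey="YOUR-API-KEY")
# Fetching this URL (e.g., with urllib.request) would return JSON listing
# the matched ontology classes and their positions in the input text.
```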

    SeMoM: a semantic middleware for IoT healthcare applications

    Nowadays, the adoption of the Internet of Things (IoT) has received considerable interest from both academia and industry. It provides enhancements in quality of life, business growth, and efficiency in multiple domains. However, the heterogeneity of the “Things” that can be connected in such environments makes interoperability among them a challenging problem. Moreover, the observations produced by these “Things” are made available with heterogeneous vocabularies and data formats. This heterogeneity prevents generic solutions from being adopted on a global scale and makes it difficult to share and reuse data for purposes other than those for which they were originally set up. In this thesis, we address these challenges in the context of healthcare applications, considering how to transform raw data into cognitive knowledge and ontology-based information shared between IoT system components. With respect to heterogeneity and integration challenges, our main contribution is an ontology-based IoT architecture allowing the deployment of semantic IoT applications. This approach allows the sharing of sensor observations, contextualization of data, and reusability of knowledge and processed information. Specific contributions include: * Design of the Cognitive Semantic Sensor Network ontology (CoSSN): CoSSN aims at overcoming the semantic interoperability challenges introduced by the variety of sensors potentially used. It also aims at describing expert knowledge related to a specific domain. * Design and implementation of SeMoM: SeMoM is a flexible IoT architecture built on top of the CoSSN ontology. It relies on a message-oriented middleware (MoM) following the publish/subscribe paradigm for loosely coupled communication between system components, which can exchange semantic observation data in a flexible way. From the applicative perspective, we focus on healthcare applications. Indeed, specific approaches and individual prototypes are the preeminent solutions in healthcare, which strengthens the need for an interoperable solution, especially for patients with multiple conditions. With respect to these challenges, we elaborated two case studies: 1) bedsore risk detection and 2) Activities of Daily Living (ADL) detection, as follows: * We developed extensions of CoSSN to describe the concepts of each domain, and we developed specific applications through SeMoM implementing expert knowledge rules and assessments of bedsores and human activities. * We implemented and evaluated the SeMoM framework in order to provide a proof of concept of our approach. Two experiments were conducted to this end. The first is based on the deployment of a system targeting the detection of ADL activities in a real smart platform (the Connected Health Lab). The other is based on ADLSim, a simulator of activities for ambient assisted living developed by the University of Oslo, which was used to run performance tests of our solution by generating a massive amount of data about the activities of a person monitored at home.
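The loosely coupled publish/subscribe exchange that SeMoM relies on can be sketched with a minimal in-process broker: a sensor component publishes an ontology-annotated observation on a topic, and any subscribed consumer receives it without the two knowing about each other. A real deployment would use an actual message-oriented middleware; the broker, topic, and observation fields below are invented for illustration.

```python
# Minimal in-process publish/subscribe sketch of the MoM-style exchange:
# producers and consumers are decoupled through topics. The CoSSN-style
# observation payload is illustrative, not the actual ontology.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every consumer subscribed to the topic."""
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("observations/pressure", received.append)

# A sensor publishes an ontology-annotated observation on its topic.
broker.publish("observations/pressure", {
    "type": "cossn:Observation",       # illustrative semantic typing
    "property": "SkinPressure",
    "value": 41.5,
    "unit": "mmHg",
})
```

Because consumers subscribe to topics rather than to specific devices, a bedsore-risk application and an ADL application can both consume the same observation stream without any change on the sensor side, which is the interoperability payoff the thesis targets.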

    Exploiting Ontology Recommendation Using Text Categorization Approach

    The Semantic Web is considered the backbone of Web 3.0, and ontologies are an integral part of the Semantic Web. Although an increase of ontologies in different domains is reported, owing to benefits that include handling data heterogeneity, automated information analysis, and reusability, finding an appropriate ontology matching user requirements remains a cumbersome task because of the time and effort required, the need for context awareness, and computational complexity. To overcome these issues, an ontology recommendation framework is proposed. The proposed framework employs text categorization and unsupervised learning techniques. The benefits of the proposed framework are twofold: 1) ontology organization according to the opinion of domain experts, and 2) ontology recommendation with respect to user requirements. Moreover, an evaluation model is also proposed to assess the effectiveness of the proposed framework in terms of ontology organization and recommendation. The main consequences of the proposed framework are: 1) the ontologies of a corpus can be organized effectively, 2) little time and effort are required to select an appropriate ontology, 3) computational complexity is limited to the use of unsupervised learning techniques, and 4) because context awareness is not required, the proposed framework can be effective for any corpus or online library of ontologies.
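The text-categorization core of such a recommender can be sketched with plain TF-IDF vectors and cosine similarity: each ontology is represented by its textual description, and the one closest to the user's requirement text is recommended. The corpus, weighting scheme, and single-match output are illustrative simplifications, not the paper's actual framework (which also organizes the corpus with unsupervised learning first).

```python
# Sketch of recommendation by text similarity: TF-IDF vectors over ontology
# descriptions, cosine similarity against the user's requirement text.
# The three-ontology corpus is invented for illustration.

import math
from collections import Counter

corpus = {
    "FOAF": "people social networks friend agent person",
    "GeoOnto": "spatial location geography place region",
    "BioOnto": "gene protein cell biology organism",
}

def tfidf(doc_tokens, docs_tokens):
    """Term frequency weighted by smoothed inverse document frequency."""
    n = len(docs_tokens)
    vec = {}
    for term, tf in Counter(doc_tokens).items():
        df = sum(1 for d in docs_tokens if term in d)
        vec[term] = tf * math.log((1 + n) / (1 + df))
    return vec

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(query):
    """Return the ontology whose description best matches the query."""
    docs = {name: text.split() for name, text in corpus.items()}
    all_docs = list(docs.values())
    qvec = tfidf(query.split(), all_docs)
    scores = {name: cosine(qvec, tfidf(tokens, all_docs))
              for name, tokens in docs.items()}
    return max(scores, key=scores.get)

best = recommend("an ontology describing person and friend relations")
```

Clustering the TF-IDF vectors (e.g., with k-means) before matching would reproduce the "organization" step the framework describes, narrowing recommendation to the most relevant cluster.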