39,045 research outputs found

    A New role of ontologies and advanced scientific visualization in big data analytics

    A main feature of the approach under discussion is accessing, and contextually and semantically searching, structured, semi-structured and unstructured information resources, and analyzing them against their ontologies in a uniform way within a text-free Big Data query implementation. To increase the semantic power of the analysis of query results, ontology-based implementations of multiplatform adaptive scientific visualization tools are demonstrated. The ontologies are used not for integrating heterogeneous resources in the traditional way, but for the parallel analysis of the resources and their related ontologies, achieving the effect of a virtual integration
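
    As a rough illustration of the "virtual integration" idea sketched in this abstract (the source connectors, ontology fragment and all names below are invented for illustration, not taken from the paper), the resources stay separate while an ontology ties the parallel analyses together:

        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical connectors for structured, semi-structured and
        # unstructured resources; each returns records matching the query.
        SOURCES = {
            "sql_warehouse": lambda q: [],   # placeholder: relational query
            "json_store":    lambda q: [],   # placeholder: document query
            "text_corpus":   lambda q: [],   # placeholder: full-text search
        }

        # Toy ontology: term -> related terms, used to enrich the analysis
        # of each source's results rather than to merge the sources.
        ONTOLOGY = {"turbine": ["rotor", "blade"], "rotor": ["shaft"]}

        def related_terms(term: str) -> list[str]:
            """Look up ontology neighbours of a query term."""
            return ONTOLOGY.get(term, [])

        def virtual_integration(query: str) -> dict:
            """Query every resource in parallel and pair each result set
            with the ontology context of the query: the sources are never
            physically merged, so the integration is only 'virtual'."""
            with ThreadPoolExecutor() as pool:
                futures = {name: pool.submit(fn, query)
                           for name, fn in SOURCES.items()}
                results = {name: f.result() for name, f in futures.items()}
            return {"query": query,
                    "ontology_context": related_terms(query),
                    "per_source_results": results}

        print(virtual_integration("turbine"))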

    Creating NoSQL Biological Databases with Ontologies for Query Relaxation

    The complexity of building biological databases is well known, and ontologies play an extremely important role in them. However, much of the emphasis on the role of ontologies in biological databases has been on database construction. In this paper, we explore a somewhat overlooked aspect of ontologies in biological databases, namely how they can be used to assist better database retrieval. In particular, we show how ontologies can be used to revise user-submitted queries for query relaxation. Since our research is conducted in today's "big data" era, our investigation is centered on NoSQL databases, which serve as representatives of big data. This paper contains two major parts: first, we describe our methodology for building two NoSQL application databases (MongoDB and AllegroGraph) using the GO ontology; then we discuss how to achieve query relaxation through the GO ontology. We report our experiments and show sample queries and results. Our research on query relaxation in NoSQL databases is complementary to existing work in big data and in biological databases and deserves further exploration
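
    A minimal sketch of the query-relaxation idea this abstract describes (not the paper's actual implementation): if a Mongo-style query over GO annotations returns too few hits, climb the GO is_a hierarchy and retry with a broader term. The collection layout, field names and the simplified is_a chain are illustrative assumptions:

        # Tiny, simplified excerpt of the GO is_a hierarchy: term -> broader term.
        GO_IS_A = {
            "GO:0004713": "GO:0004672",  # protein tyrosine kinase activity
            "GO:0004672": "GO:0016301",  # protein kinase activity
            "GO:0016301": "GO:0003824",  # kinase activity -> catalytic activity
        }

        def relax(query: dict) -> dict | None:
            """Replace the GO term in the query with a broader ancestor,
            widening the match; None when there is nothing left to climb."""
            parent = GO_IS_A.get(query["annotations.go_id"])
            if parent is None:
                return None
            return {**query, "annotations.go_id": parent}

        def search_with_relaxation(collection, query: dict, min_hits: int = 1):
            """Run the query; while it returns too few documents, relax it
            one ontology level at a time and retry."""
            while query is not None:
                hits = list(collection.find(query))
                if len(hits) >= min_hits:
                    return hits, query
                query = relax(query)
            return [], None

        class FakeCollection:
            """Stand-in for pymongo's Collection, just enough for a demo;
            with pymongo, MongoClient()["bio"]["gene_products"] would slot in."""
            def __init__(self, docs): self.docs = docs
            def find(self, query):
                key, val = next(iter(query.items()))
                return [d for d in self.docs if d.get(key) == val]

        coll = FakeCollection([{"gene": "ABL1", "annotations.go_id": "GO:0016301"}])
        hits, used = search_with_relaxation(coll, {"annotations.go_id": "GO:0004713"})
        print(used, hits)   # query relaxed up to GO:0016301 before matching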

    Applications of big knowledge summarization

    Advanced technologies have resulted in the generation of large amounts of data ("Big Data"). The Big Knowledge derived from Big Data can exceed humans' ability to comprehend it, which limits the effective and innovative use of a Big Knowledge repository. Biomedical ontologies, which play important roles in biomedical information systems, constitute one kind of Big Knowledge repository. They typically consist of domain knowledge assertions expressed as semantic connections between tens of thousands of concepts. Without some high-level visual representation of the Big Knowledge in biomedical ontologies, humans cannot grasp the big picture of those ontologies; such an orientation is required for their proper maintenance and effective use. This dissertation addresses the Big Knowledge challenge - how to enable humans to use Big Knowledge correctly and effectively (referred to as the Big Knowledge to Use (BK2U) problem) - with a focus on biomedical ontologies. In previous work, Abstraction Networks (AbNs) have been demonstrated to be successful for the summarization, visualization and quality assurance (QA) of biomedical ontologies. Building on that research, this dissertation introduces new AbNs of various granularities for Big Knowledge summarization and extends the applications of AbNs. The dissertation consists of three main parts. The first part introduces two advanced AbNs: the weighted aggregate partial-area taxonomy, with a parameter to flexibly control the summarization granularity, and the Ingredient Abstraction Network (IAbN) for the National Drug File - Reference Terminology (NDF-RT) Chemical Ingredients hierarchy, to which the previously developed AbNs are not applicable because that hierarchy has no outgoing relationships. The second part describes applications of the two advanced AbNs: a study utilizing the weighted aggregate partial-area taxonomy to identify major topics in SNOMED CT's Specimen hierarchy; a multi-layer interactive visualization system of adjustable granularity for ontology comprehension, based on the weighted aggregate partial-area taxonomy, demonstrated on the Neoplasm subhierarchy of the National Cancer Institute thesaurus (NCIt); and an application of the IAbN to drug-drug interaction (DDI) discovery. The third part reports eight family-based QA studies on NCIt's Neoplasm, Gene, and Biological Process hierarchies, SNOMED CT's Infectious disease hierarchy, the Chemical Entities of Biological Interest ontology, and the Chemical Ingredients hierarchy in NDF-RT. There is no one-size-fits-all QA method, and it is impractical to devise a separate QA method for each individual ontology; family-based QA is therefore an effective approach, i.e., one QA technique can be applicable to a whole family of structurally similar ontologies. The results of these studies demonstrate that complex concepts and uncommonly modeled concepts are more likely to have errors. Furthermore, the three studies on overlapping concepts in partial-area taxonomies reported in this dissertation, combined with three previous studies, establish overlapping concepts as a QA methodology for a whole family of 76 similar ontologies in BioPortal
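
    One simplified reading of the weighted aggregation described above (the taxonomy and concept counts below are invented, and this is not the dissertation's algorithm): nodes whose concept count falls below a threshold b are folded into their nearest surviving ancestor, so a single parameter controls how coarse the summary is:

        # node -> (parent, concept count); a toy stand-in for a partial-area
        # taxonomy, with counts invented for illustration.
        TAXONOMY = {
            "Specimen":        (None,             5),
            "Fluid specimen":  ("Specimen",      40),
            "Blood specimen":  ("Fluid specimen", 12),
            "Plasma specimen": ("Fluid specimen",  3),
            "Tissue specimen": ("Specimen",        2),
        }

        def aggregate(taxonomy: dict, b: int) -> dict:
            """Return node -> aggregated concept count, keeping only nodes
            whose accumulated count reaches b; lighter nodes merge into the
            nearest surviving ancestor. Roots are always kept."""
            weights = {n: c for n, (_, c) in taxonomy.items()}
            changed = True
            while changed:
                changed = False
                for node in list(weights):
                    parent, _ = taxonomy[node]
                    if parent is None or weights[node] >= b:
                        continue
                    # climb to the nearest ancestor that is still kept
                    while parent is not None and parent not in weights:
                        parent = taxonomy[parent][0]
                    if parent is not None:
                        weights[parent] += weights.pop(node)
                        changed = True
            return weights

        # Larger b -> coarser summary: the light nodes are absorbed upward.
        print(aggregate(TAXONOMY, b=10))
        # {'Specimen': 7, 'Fluid specimen': 43, 'Blood specimen': 12}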

    The Form of Organization for Small Business

    Matching and integrating ontologies has been a desirable technique in areas such as data fusion, knowledge integration, the Semantic Web and the development of advanced services in distributed systems. Unfortunately, the heterogeneity of ontologies poses big obstacles to the development of this technique. This licentiate thesis describes an approach to the problem of ontology integration using description logics and production rules, on both a syntactic and a semantic level. Concepts in ontologies are matched and integrated to generate ontology intersections. Context is extracted, and rules for handling heterogeneous ontology reasoning with contexts are developed. Ontologies are integrated by two processes. The first integration generates an ontology intersection from two OWL ontologies; the result is an independent ontology containing non-contradictory assertions based on the original ontologies. The second integration is carried out by rules that use extracted context, such as ontology content and ontology description data, e.g. time and ontology creator. The integration is designed for conceptual ontology integration; instance information is not considered, either in the integration process or in its results. An ontology reasoner is used in the integration process to check that neither of the two OWL ontologies is violated, and a rule engine handles conflicts according to production rules. The ontology reasoner checks the satisfiability of concepts with the help of anchors, i.e. synonyms and string-identical entities; production rules are applied to integrate the ontologies, under the constraint that the original ontologies must not be violated. The second integration process is carried out by production rules over the context data of the ontologies. Ontology reasoning in a repository is normally conducted within the boundary of each ontology; with context rules, however, reasoning is carried out across ontologies. The contents of an ontology provide context for its defined entities and are extracted, with the help of an ontology reasoner, to supply that context. Metadata of ontologies are criteria useful for describing them. Rules using context, also called context rules, are developed and built into the repository; new rules can also be added. The scientific contribution of the thesis is the suggested approach of applying semantics-based techniques to provide a complementary method for matching and integrating ontologies semantically. With the illustration of the ontology integration process, the context rules, and a few manually integrated ontology results, the approach shows the potential to help develop advanced knowledge-based services
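
    The anchor idea mentioned above (synonyms and string-identical entities) can be sketched as follows; the concept tables are invented, and a full system would follow this step with the description-logic satisfiability check the abstract describes before merging any assertions:

        # Anchors are pairs of concepts, one per ontology, whose normalized
        # names or synonyms overlap; they seed the ontology intersection.
        ONTOLOGY_A = {
            "Automobile": {"car", "motorcar"},
            "Lorry":      {"truck"},
        }
        ONTOLOGY_B = {
            "Car":   {"automobile", "auto"},
            "Truck": {"lorry"},
        }

        def labels(name: str, synonyms: set[str]) -> set[str]:
            """Normalized name plus synonyms, for string comparison."""
            return {name.lower()} | {s.lower() for s in synonyms}

        def find_anchors(onto_a: dict, onto_b: dict) -> list[tuple[str, str]]:
            """Pair concepts whose normalized label sets intersect."""
            anchors = []
            for a, syn_a in onto_a.items():
                for b, syn_b in onto_b.items():
                    if labels(a, syn_a) & labels(b, syn_b):
                        anchors.append((a, b))
            return anchors

        print(find_anchors(ONTOLOGY_A, ONTOLOGY_B))
        # [('Automobile', 'Car'), ('Lorry', 'Truck')]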

    Ontology based data warehousing for mining of heterogeneous and multidimensional data sources

    Heterogeneous and multidimensional big-data sources are prevalent in virtually all business environments, yet system and data analysts are unable to access them quickly. A robust and versatile data warehousing system is developed that integrates domain ontologies from multidimensional data sources. For example, petroleum digital ecosystems and digital oil field solutions, derived from big-data petroleum (information) systems, are in increasing demand in multibillion-dollar resource businesses worldwide. This work is recognized by the Industrial Electronics Society of the IEEE and has appeared in more than 50 international conference proceedings and journals

    Km4City Ontology Building vs Data Harvesting and Cleaning for Smart-city Services

    Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable, and a huge human effort would be needed to create integrated ontologies and a knowledge base for a smart city. A smart-city ontology is not yet standardized, and much research is needed to identify models that can easily support data reconciliation and the management of complexity, and that allow reasoning over the data. In this paper, a system is proposed for the ingestion and reconciliation of data on smart-city aspects such as the road graph, services available on the roads, and traffic sensors. The system manages a large volume of data coming from a variety of sources, considering both static and dynamic data. These data are mapped to a smart-city ontology called Km4City (Knowledge Model for City) and stored in an RDF store, where they are available to applications via SPARQL queries, enabling new services for users through specific applications of public administrations and enterprises. The paper presents the process adopted to produce the ontology, the big data architecture for feeding the knowledge base from open and private data, and the mechanisms adopted for data verification, reconciliation and validation. Some examples of possible uses of the resulting coherent big data knowledge base, accessible from the RDF store and related services, are also offered. The article also presents the work performed on reconciliation algorithms and their comparative assessment and selection
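
    A hedged sketch of how an application might consume a Km4City-style RDF store via SPARQL, run here against a tiny in-memory graph with rdflib so the example is self-contained; the km4c property names and the sample data are illustrative assumptions, and a real deployment would target the store's SPARQL endpoint instead:

        from rdflib import Graph

        # Invented sample triples in a Km4City-like shape: a service
        # located on a road, plus the road's name.
        TURTLE = """
        @prefix km4c: <http://www.disit.org/km4city/schema#> .
        @prefix ex:   <http://example.org/> .

        ex:pharmacy1 a km4c:Service ;
            km4c:hasName "Farmacia Centrale" ;
            km4c:onRoad ex:viaRoma .
        ex:viaRoma km4c:roadName "Via Roma" .
        """

        # Ask for the names of services on a given road.
        QUERY = """
        PREFIX km4c: <http://www.disit.org/km4city/schema#>
        SELECT ?name WHERE {
            ?service a km4c:Service ;
                     km4c:hasName ?name ;
                     km4c:onRoad ?road .
            ?road km4c:roadName "Via Roma" .
        }
        """

        graph = Graph()
        graph.parse(data=TURTLE, format="turtle")
        for row in graph.query(QUERY):
            print(row.name)   # -> Farmacia Centrale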