
    ODDI : a framework for semi-automatic data integration

    Recent work on Business Intelligence highlights the need for timely, trustworthy and sound data-access systems. Moreover, applying these systems in a flexible and dynamic environment calls for an approach based on automatic procedures that provide reliable results. A crucial factor for any automatic data integration system is the matching process. Different categories of matching operators carry different semantics, so combining them in a single algorithm is a non-trivial process that has to take a variety of options into account. This paper proposes a solution based on a categorization of matching operators that allows similar attributes to be grouped in a semantically rich form. In this way we define all the information needed to create a mapping. Mapping generation is then activated only on those sets of elements that can be queried without violating any integrity constraints on the data
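    As an illustration of the kind of operator combination the abstract describes, the sketch below combines a syntactic name-similarity operator with a structural type-compatibility operator into one weighted score. This is not the paper's actual algorithm: the operator definitions, type table, weights and attribute fields are all assumptions.

    ```python
    from difflib import SequenceMatcher

    # Hypothetical matching operators, grouped by category.
    def name_similarity(a: str, b: str) -> float:
        """Syntactic operator: edit-based similarity of attribute names."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def type_compatibility(t1: str, t2: str) -> float:
        """Structural operator: 1.0 for identical types, 0.5 for compatible ones."""
        compatible = {("int", "float"), ("float", "int"),
                      ("varchar", "text"), ("text", "varchar")}
        if t1 == t2:
            return 1.0
        return 0.5 if (t1, t2) in compatible else 0.0

    def combined_score(attr1: dict, attr2: dict, weights=(0.7, 0.3)) -> float:
        """Combine operators from different categories into a single score."""
        w_name, w_type = weights
        return (w_name * name_similarity(attr1["name"], attr2["name"])
                + w_type * type_compatibility(attr1["type"], attr2["type"]))

    a = {"name": "cust_name", "type": "varchar"}
    b = {"name": "customer_name", "type": "text"}
    score = combined_score(a, b)
    ```

    Grouping operators by category first, as the paper suggests, keeps each score interpretable before they are blended into a single matching decision.
    
    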

    OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    Background: The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing.
    Results: The following related ontologies have been developed for OpenTox: a) Toxicological ontology, listing the toxicological endpoints; b) Organs system and Effects ontology, addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology, representing a semi-automatic conversion of the ToxML schema; d) OpenTox ontology, representing the OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology; and f) OpenToxipedia, a community knowledge resource on toxicology terminology.
    OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation or to initiate the associated calculations and generate new RDF-based resources.
    The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and seamless integration of new algorithms and scientifically sound validation routines, and they provide a flexible framework that allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists).
    Availability: The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl; the ToxML–OWL conversion utility is an open-source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
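    The resolvable-URI pattern described above can be illustrated with a minimal sketch: a resource's URI is read out of its RDF representation and a property is extracted. The RDF/XML payload, the example compound URI, and the Dublin Core property used below are invented for illustration and are not actual OpenTox service output.

    ```python
    import xml.etree.ElementTree as ET

    # Illustrative RDF/XML for one resource; real OpenTox services return
    # richer RDF, but the resolvable-URI pattern is the same.
    sample_rdf = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.org/compound/42">
        <dc:title>Example compound</dc:title>
      </rdf:Description>
    </rdf:RDF>"""

    # ElementTree expands namespace prefixes to {uri}local form.
    RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    root = ET.fromstring(sample_rdf)
    desc = root.find(f"{RDF}Description")
    uri = desc.get(f"{RDF}about")        # the unique resolvable address
    title = desc.findtext(f"{DC}title")  # one property of the resource
    ```

    In the deployed framework, dereferencing such a URI over HTTP would return the RDF document; here the document is inlined so the parsing step stands alone.
    
    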

    Semi-automatic geometric digital twinning for existing buildings based on images and CAD drawings

    Despite emerging data-capturing technologies and advanced modelling systems, the process of geometric digital twin modelling for existing buildings still lacks a systematic and complete framework. The as-is Building Information Model (BIM) is one of the commonly used geometric digital twin modelling approaches. However, constructing an as-is BIM is time-consuming and needs improvement. To address this challenge, this paper develops a semi-automatic approach to establish a systematic, accurate and convenient digital twinning system based on images and CAD drawings. With this goal, the paper summarises state-of-the-art geometric digital twinning methods and elaborates the methodological framework of the semi-automatic geometric digital twinning approach. The framework consists of three modules. Building Framework Construction and Geometry Information Extraction (Module 1) defines the location of each structural component by recognising special symbols in a floor plan and then extracting data from CAD drawings using Optical Character Recognition (OCR) technology. Meaningful text information is further filtered based on predefined rules. To integrate the remaining building information, Building Information Complementary (Module 2) is developed based on a neuro-fuzzy system (NFS) and an image-processing procedure to supplement additional building components. Finally, Information Integration and IFC Creation (Module 3) integrates information from Modules 1 and 2 and creates an as-is Industry Foundation Classes (IFC) BIM based on the IFC schema. A case study using part of an office building and the results of its analysis are provided and discussed from the perspectives of applicability and accuracy. Future work and limitations are also addressed
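    The rule-based filtering of OCR output in Module 1 can be sketched as below. The label convention (e.g. "C1" for a column, "B12" for a beam) and the dimension pattern are assumptions for illustration, not the paper's actual predefined rules.

    ```python
    import re

    # Hypothetical filtering rules for text extracted by OCR from a floor plan:
    # keep structural-component labels and section-dimension strings,
    # discard scale notes, orientation marks and other clutter.
    LABEL = re.compile(r"^[CBW]\d+$")     # column/beam/wall labels (assumed convention)
    DIMENSION = re.compile(r"^\d+x\d+$")  # section dimensions, e.g. "300x600"

    def filter_ocr_tokens(tokens):
        """Return only the tokens that match a predefined rule."""
        return [t for t in tokens if LABEL.match(t) or DIMENSION.match(t)]

    raw = ["C1", "scale", "300x600", "B12", "1:100", "North"]
    meaningful = filter_ocr_tokens(raw)
    ```

    Anchoring the rules to drawing conventions keeps the extraction semi-automatic: the regexes encode domain knowledge once, and the OCR output is cleaned without manual review of every token.
    
    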

    Ontology mapping: the state of the art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights into the pragmatics of ontology mapping and elaborate a theoretical approach for defining ontology mapping

    Requirements for Information Extraction for Knowledge Management

    Knowledge Management (KM) systems inherently suffer from the knowledge acquisition bottleneck: the difficulty of modelling and formalizing knowledge relevant to specific domains. A potential solution to this problem is Information Extraction (IE) technology. However, IE was originally developed for database population, and there is a mismatch between what is required to perform KM successfully and what current IE technology provides. In this paper we begin to address this issue by outlining requirements for IE-based KM

    Knowledge-based reasoning and recommendation framework for intelligent decision making

    Copyright © 2018 John Wiley & Sons, Ltd. A physical-activity recommendation system promotes active lifestyles for its users. Real-world reasoning and recommendation systems face the issues of data and knowledge integration, knowledge acquisition, and accurate recommendation generation. The knowledge-based reasoning and recommendation framework (KRF) proposed here, which generates reliable recommendations and educational facts for users, addresses these issues. The KRF methodology focuses on integrating data with knowledge, rule-based reasoning, and conflict resolution. The integration issue is resolved using a semi-automatic mapping approach in which rule conditions are mapped to the data schema. The rule-based reasoning methodology uses explicit rules with a maximum-specificity conflict-resolution strategy to ensure that appropriate and correct recommendations are generated. The data used during the reasoning process are generated in real time from users' physical activities and personal profiles in order to personalize recommendations. The proposed KRF is part of a wellness and healthcare platform, Mining Minds, and has been tested in the Mining Minds integrated environment using a sedentary user-behaviour scenario. To evaluate the KRF methodology, a stand-alone, open-source application (Version 1.0) was released and tested on a dataset of 10 volunteers with 40 different types of sedentary behaviour. KRF performance was measured using average execution time and recommendation accuracy
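    The maximum-specificity conflict-resolution strategy can be sketched as follows: among all rules whose conditions hold against the current facts, the rule with the most conditions wins. The rules and facts below are invented for illustration and are not the actual KRF rule base.

    ```python
    # Illustrative rule base: the second rule is more specific than the first.
    rules = [
        {"conditions": {"activity": "sitting"},
         "recommendation": "Take a short walk."},
        {"conditions": {"activity": "sitting", "duration_min_over": 60},
         "recommendation": "Stand up and stretch for five minutes."},
    ]

    def fires(rule, facts):
        """A rule fires when every one of its conditions holds in the facts."""
        return all(facts.get(k) == v for k, v in rule["conditions"].items())

    def recommend(facts):
        """Rule-based reasoning with maximum-specificity conflict resolution."""
        fired = [r for r in rules if fires(r, facts)]
        if not fired:
            return None
        # Maximum specificity: prefer the rule with the most matched conditions.
        return max(fired, key=lambda r: len(r["conditions"]))["recommendation"]

    facts = {"activity": "sitting", "duration_min_over": 60}
    advice = recommend(facts)
    ```

    Both rules fire on these facts, but the two-condition rule is chosen, which is what keeps the more tailored recommendation from being shadowed by a generic one.
    
    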

    A Semantic Web of Know-How: Linked Data for Community-Centric Tasks

    Full text link
    This paper proposes a novel framework for representing community know-how on the Semantic Web. Procedural knowledge generated by web communities typically takes the form of natural language instructions or videos and is largely unstructured. The absence of semantic structure impedes the deployment of many useful applications, in particular the ability to discover and integrate know-how automatically. We discuss the characteristics of community know-how and argue that existing knowledge representation frameworks fail to represent it adequately. We present a novel framework for representing the semantic structure of community know-how and demonstrate the feasibility of our approach by providing a concrete implementation, which includes a method for automatically acquiring procedural knowledge for real-world tasks. Comment: 6th International Workshop on Web Intelligence & Communities (WIC14), Proceedings of the companion publication of the 23rd International Conference on World Wide Web (WWW 2014)