
    Integrating Distributed Sources of Information for Construction Cost Estimating using Semantic Web and Semantic Web Service technologies

    A construction project requires the collaboration of several organizations, such as owner, designer, contractor, and material supplier organizations. These organizations need to exchange information to enhance their teamwork, and understanding the information received from other organizations requires specialized human resources. Construction cost estimating is one of the processes that requires information from several sources, including a building information model (BIM) created by designers, estimating assembly and work item information maintained by contractors, and construction material cost data provided by material suppliers. Currently, it is not easy to integrate the information necessary for cost estimating over the Internet. This paper discusses a new approach to construction cost estimating that uses Semantic Web technology, which provides an infrastructure and a data modeling format for accessing, combining, and sharing information over the Internet in a machine-processable format. The estimating approach presented in this paper relies on BIM, estimating knowledge, and construction material cost data expressed in a web ontology language, and it makes the various sources of estimating data accessible as SPARQL Protocol and RDF Query Language (SPARQL) endpoints or Semantic Web Services. We present an estimating application that integrates distributed information provided by project designers, contractors, and material suppliers for preparing cost estimates. The purpose of this paper is not to fully automate the estimating process but to streamline it by reducing human involvement in repetitive cost estimating activities.
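    To make the integration concrete, the sketch below queries a material-supplier SPARQL endpoint for unit costs, in the spirit of the approach described above. It is a minimal sketch only: the endpoint URL, the cost: vocabulary, and the class and property names are invented for illustration, since the paper's actual ontologies and service addresses are not given in the abstract.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary, for illustration only.
endpoint = SPARQLWrapper("http://example.org/material-costs/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX cost: <http://example.org/ontology/cost#>
    SELECT ?material ?unitCost
    WHERE {
        ?material a cost:ConstructionMaterial ;
                  cost:unitCost ?unitCost .
    }
""")

# Each binding pairs a material resource with its current unit cost,
# ready to be joined with BIM quantities on the estimator's side.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["material"]["value"], row["unitCost"]["value"])
```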

    An Ontological Approach to Representing the Product Life Cycle

    The ability to access and share data is key to optimizing and streamlining any industrial production process. Unfortunately, the manufacturing industry is stymied by a lack of interoperability among the systems by which data are produced and managed, and this is true both within and across organizations. In this paper, we describe our work to address this problem through the creation of a suite of modular ontologies representing the product life cycle and its successive phases, from design to end of life. We call this suite the Product Life Cycle (PLC) Ontologies. The suite extends proximately from the Common Core Ontologies (CCO), used widely in defense and intelligence circles, and ultimately from the Basic Formal Ontology (BFO), which serves as the top-level ontology for the CCO and for some 300 further ontologies. The PLC Ontologies were developed together, but they have been factored to cover particular domains such as design, manufacturing processes, and tools. We argue that these ontologies, when used together with standard public domain alignment and browsing tools created within the context of the Semantic Web, may offer a low-cost approach to solving increasingly costly problems of data management in the manufacturing industry.
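    As a minimal illustration of how such a modular suite hangs together, the rdflib sketch below declares a design-phase class and subordinates it to BFO's 'process' class, mirroring how a domain ontology extends a top-level one. The plc: namespace and class name are hypothetical; the published PLC Ontologies define their own IRIs.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

BFO = Namespace("http://purl.obolibrary.org/obo/")   # OBO PURLs for BFO terms
PLC = Namespace("http://example.org/plc#")           # hypothetical namespace

g = Graph()
g.bind("plc", PLC)

# Declare a design-phase class and place it under BFO's 'process'
# (obo:BFO_0000015 in the released BFO OWL file).
g.add((PLC.DesignProcess, RDF.type, OWL.Class))
g.add((PLC.DesignProcess, RDFS.subClassOf, BFO.BFO_0000015))

print(g.serialize(format="turtle"))
```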

    Revision in networks of ontologies

    Networks of ontologies are made of a collection of logic theories, called ontologies, related by alignments. They arise naturally in distributed contexts in which theories are developed and maintained independently, such as the semantic web. In networks of ontologies, inconsistency can come from two different sources: local inconsistency in a particular ontology or alignment, and global inconsistency between them. Belief revision is well-defined for dealing with ontologies; we investigate how it can be applied to networks of ontologies. We formulate revision postulates for alignments and networks of ontologies based on an abstraction of existing semantics of networks of ontologies. We show that revision operators cannot be simply based on local revision operators on both ontologies and alignments. We adapt the partial meet revision framework to networks of ontologies and show that it indeed satisfies the revision postulates. Finally, we consider strategies based on network characteristics for designing concrete revision operators.
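    For orientation, the classical partial meet constructions from the AGM tradition, which the paper adapts to networks of ontologies, can be stated as below; here $K \perp \phi$ is the set of maximal subsets of $K$ that do not entail $\phi$, and $\gamma$ is a selection function choosing a non-empty subset of it. This is the standard single-theory formulation, not the paper's network-level operators.

```latex
% Classical AGM partial meet machinery over a single theory K;
% the paper lifts this style of construction to networks of ontologies.
\begin{align*}
  K \div \phi &= \bigcap \gamma(K \perp \phi)
    && \text{(partial meet contraction)}\\
  K \ast \phi &= \operatorname{Cn}\bigl((K \div \lnot\phi) \cup \{\phi\}\bigr)
    && \text{(revision via the Levi identity)}
\end{align*}
```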

    SNOMED CT standard ontology based on the ontology for general medical science

    Background: Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT. Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS). Results: The SCTO is implemented in OWL 2, to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2400 axioms, and 1555 annotations. It is publicly available through BioPortal at http://bioportal.bioontology.org/ontologies/SCTO/. Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
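    Since the SCTO is distributed as OWL 2, the kind of consistency checking mentioned above can be run with an off-the-shelf reasoner. The sketch below uses owlready2 and assumes the ontology file has been downloaded locally as scto.owl; the filename is an assumption, not part of the published resource.

```python
from owlready2 import get_ontology, sync_reasoner, default_world

# Assumes the SCTO OWL file has been saved locally as scto.owl
# (the abstract says the ontology is available via BioPortal).
onto = get_ontology("file://scto.owl").load()

# sync_reasoner() runs owlready2's bundled HermiT reasoner to classify
# the ontology; unsatisfiable classes are what a consistency check surfaces.
with onto:
    sync_reasoner()

print("Unsatisfiable classes:", list(default_world.inconsistent_classes()))
```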

    Ontologies relevant to behaviour change interventions: a method for their development.

    Background: Behaviour and behaviour change are integral to many aspects of wellbeing and sustainability. However, reporting behaviour change interventions accurately and synthesising evidence about effective interventions are hindered by the lack of a shared scientific terminology for describing intervention characteristics. Ontologies are standardised frameworks that provide controlled vocabularies to help unify and connect scientific fields. To date, there is no published guidance on the specific methods required to develop ontologies relevant to behaviour change. We report the creation and refinement of a method for developing the ontologies that make up the Behaviour Change Intervention Ontology (BCIO). Aims: (1) To describe the development method of the BCIO and explain its rationale; (2) To provide guidance on implementing the activities within the development method. Method and results: The method for developing ontologies relevant to behaviour change interventions was constructed by considering principles of good practice in ontology development and identifying the key activities required to follow those principles. The method's details were refined through application to the development of two ontologies. The resulting ontology development method involved: (1) defining the ontology's scope; (2) identifying key entities; (3) refining the ontology through an iterative process of literature annotation, discussion, and revision; (4) expert stakeholder review; (5) testing inter-rater reliability; (6) specifying relationships between entities; and (7) disseminating and maintaining the ontology. Guidance is provided for conducting the relevant activities for each step. Conclusions: We have developed a detailed method for creating ontologies relevant to behaviour change interventions, together with practical guidance for each step, reflecting principles of good practice in ontology development. The most novel aspects of the method are the use of formal mechanisms for literature annotation and expert stakeholder review to develop and improve the ontology content. We suggest the mnemonic SELAR3, representing the method's first six steps as Scope, Entities, Literature Annotation, Review, Reliability, Relationships.
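    As a concrete illustration of step (5), testing inter-rater reliability: two annotators code the same passages of intervention reports against ontology classes, and their agreement is summarised, here with Cohen's kappa. The labels and data below are invented; the BCIO work specifies its own reliability procedure.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label]
                   for label in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented annotations: the ontology class each rater assigned
# to the same five passages of an intervention report.
a = ["goal_setting", "feedback", "feedback", "prompt", "goal_setting"]
b = ["goal_setting", "feedback", "prompt",   "prompt", "goal_setting"]
print(round(cohens_kappa(a, b), 3))  # 0.706
```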

    Drag it together with Groupie: making RDF data authoring easy and fun for anyone

    One of the foremost challenges towards realizing a “Read-write Web of Data” [3] is making it possible for everyday computer users to easily find, manipulate, create, and publish data back to the Web so that it can be made available for others to use. However, many aspects of Linked Data make authoring and manipulation difficult for “normal” (i.e., non-coder) end-users. First, data can be high-dimensional, having arbitrarily many properties per “instance” and being interlinked to arbitrarily many other instances in many different ways. Second, collections of Linked Data tend to be vastly more heterogeneous than those in typical structured databases, where instances are kept in uniform collections (e.g., database tables). Third, while a graph representation is highly flexible, reducing all structures to a graph leads to verbosity: even simple structures can appear complex. Finally, many of the concepts involved in Linked Data authoring, for example, the terms used to define ontologies, are highly abstract and foreign to regular citizen-users. To counter this complexity we have devised a drag-and-drop direct manipulation interface that makes authoring Linked Data easy, fun, and accessible to a wide audience. Groupie allows users to author data simply by dragging blobs representing entities into other entities to compose relationships, establishing one relational link at a time. Since the underlying representation is RDF, Groupie facilitates the inclusion of references to entities and properties defined elsewhere on the Web through integration with popular Linked Data indexing services. Finally, to make it easy for new users to build upon others’ work, Groupie provides a communal space where all data sets created by users can be shared, cloned, and modified, allowing individual users to help each other model complex domains, thereby leveraging collective intelligence.
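    At the data level, each drag-and-drop in an interface like Groupie amounts to asserting one RDF triple. The rdflib sketch below shows the effect of a single drag; the ex: namespace and resource names are invented for illustration and are not Groupie's actual vocabulary.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/groupie/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# One drag of the 'alice' blob onto the 'semanticWebProject' blob, with
# the property memberOf selected, establishes one relational link:
g.add((EX.alice, EX.memberOf, EX.semanticWebProject))

print(g.serialize(format="turtle"))
```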