
    Ontology as Product-Service System: Lessons Learned from GO, BFO and DOLCE

    This paper defends a view of the Gene Ontology (GO) and of Basic Formal Ontology (BFO) as examples of what the manufacturing industry calls product-service systems. This means that they are products (the ontologies) bundled with a range of ontology services such as updates, training, help desk, and permanent identifiers. The paper argues that GO and BFO contrast in this respect with DOLCE, which approximates more closely to a scientific theory or a scientific publication. The paper provides a detailed overview of ontology services and concludes with a discussion of some implications of the product-service system approach for the understanding of the nature of applied ontology. Ontology developer communities are compared in this respect with developers of scientific theories and of standards (such as those of the W3C). For each of these we can ask: what kinds of products do they develop, and what kinds of services do they provide for the users of these products?

    The Industrial Ontologies Foundry (IOF) perspectives

    In recent years there have been a number of promising technical and institutional developments regarding the use of ontologies in industry. At the same time, however, most industrial ontology development work remains within the realm of academic research and is without significant uptake in commercial applications. In biomedicine, by contrast, ontologies have made significant inroads as valuable tools for achieving interoperability between data systems whose contents derive from widely heterogeneous sources. In this position paper, we present a set of principles learned from the successful Open Biomedical Ontologies (OBO) Foundry initiative to guide the design and development of the Industrial Ontologies Foundry (IOF), a counterpart to the OBO Foundry for the manufacturing industry. We also illustrate the potential utility of these principles by sketching the conceptual design of a framework for sustainable IOF development.

    A black art: Ontology, data, and the Tower of Babel problem

    Computational ontologies are a new type of emerging scientific media (Smith, 2016) that process large quantities of heterogeneous data about portions of reality. Applied computational ontologies are used for semantically integrating (Heiler, 1995; Pileggi & Fernandez-Llatas, 2012) divergent data to represent reality, and in so doing they alter conceptions of materiality and produce new realities based on levels of informational granularity and abstraction (Floridi, 2011), resulting in a new type of informational ontology (Iliadis, 2013), the critical analysis of which requires new methods and frameworks. Currently, there is a lack of literature addressing the theoretical, social, and critical dimensions of such informational ontologies, applied computational ontologies, and the interdisciplinary communities of practice (Brown & Duguid, 1991; Wenger, 1998) that produce them. This dissertation fills a lacuna in communicative work in an emerging subfield of Science and Technology Studies (Latour & Woolgar, 1979) known as Critical Data Studies (boyd & Crawford, 2012; Dalton & Thatcher, 2014; Kitchin & Lauriault, 2014) by adopting a critical framework to analyze the systems of thought that inform applied computational ontology, while offering insight into its realism-based methods and philosophical frameworks to gauge their ethical import.

    Since the early 1990s, computational ontologies have been used to organize massive amounts of heterogeneous data by individuating reality into computable parts, attributes, and relations. This dissertation provides a theory of computational ontologies as technologies of individuation (Simondon, 2005) that translate disparate data to produce informational cohesion. By technologies of individuation I mean engineered artifacts whose purpose is to partition portions of reality into computable informational objects. I argue that data are metastable entities and that computational ontologies restrain heterogeneous data via a process of translation to produce semantic interoperability. In this way, I show that computational ontologies effectively re-ontologize (Floridi, 2013) and produce reality, and thus that they have ethical consequences, specifically in terms of their application to social reality and social ontology (Searle, 2006). I use the Basic Formal Ontology (Arp, Smith, & Spear, 2015), the world's most widely used upper-level ontology, as a case study and analyze its methods and ensuing ethical issues concerning its social application in the Military Ontology, before recommending an ethical framework.

    “Ontology” is a term that is used in philosophy and computer science in related but different ways: philosophical ontology typically concerns metaphysics, while computational ontology typically concerns databases. This dissertation provides a critical history and theory of ontology and of the interdisciplinary teams of researchers that came to adopt methods from philosophical ontology to build, persuade, and reason with applied computational ontology. Following a critical communication approach, I define applied computational ontology construction as a solution to a communication problem among scientists who seek to create semantic interoperability among data, and I argue that applied ontology is philosophical, informational in nature, and communicatively constituted (McPhee & Zaug, 2000). The primary aim is to explain how philosophy informs applied computational ontology while showing how such ontologies became instantiated in material organizations, how to study them, and what their ethical implications are.

    Contributions for the exploitation of Semantic Technologies in Industry 4.0

    This research promotes the use of semantic technologies in the Industry 4.0 environment through three contributions focused on topics in smart manufacturing: enriched descriptions of components, data visualisation and analysis, and the implementation of Industry 4.0 in SMEs. The first contribution is an ontology called ExtruOnt, which contains semantic descriptions of a type of manufacturing machine (the extruder). The ontology describes the machine's components, their spatial connections, their characteristics, their three-dimensional representations and, finally, the sensors used to capture the data. The second contribution is a visual query system in which the ExtruOnt ontology and a 2D representation of the extruder are used to help domain experts visualise and extract knowledge about the manufacturing process quickly and easily. The third contribution is a methodology for the implementation of Industry 4.0 in SMEs, oriented to the customer life cycle and empowered by the use of semantic technologies and 3D rendering technologies. The contributions have been developed, applied, and validated in a real manufacturing scenario.
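
    To make concrete the kind of description an ontology such as ExtruOnt provides, the sketch below builds a toy component-sensor graph with Python's rdflib. The namespace, class names (Barrel, TemperatureSensor), and properties (connectedTo, observes, hasUnit) are illustrative assumptions for this sketch, not the actual ExtruOnt vocabulary.

```python
# A minimal sketch of describing an extruder component, a spatial
# connection, and a sensor as RDF triples. All names are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/extruont#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Illustrative classes: a machine component and a sensor type
g.add((EX.Barrel, RDF.type, OWL.Class))
g.add((EX.TemperatureSensor, RDF.type, OWL.Class))

# An individual barrel, spatially connected to a die and observed by a sensor
g.add((EX.barrel1, RDF.type, EX.Barrel))
g.add((EX.barrel1, EX.connectedTo, EX.die1))         # spatial connection
g.add((EX.sensor1, RDF.type, EX.TemperatureSensor))
g.add((EX.sensor1, EX.observes, EX.barrel1))         # sensor placement
g.add((EX.sensor1, EX.hasUnit, Literal("Celsius")))  # measurement unit

print(g.serialize(format="turtle"))
```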

    A Knowledge Graph Based Integration Approach for Industry 4.0

    The fourth industrial revolution, Industry 4.0 (I40), aims at creating smart factories employing, among other technologies, Cyber-Physical Systems (CPS), the Internet of Things (IoT), and Artificial Intelligence (AI). Realizing smart factories according to the I40 vision requires intelligent human-to-machine and machine-to-machine communication. To achieve this communication, CPS along with their data need to be described, and interoperability conflicts arising from various representations need to be resolved. For establishing interoperability, industry communities have created standards and standardization frameworks. Standards describe the main properties of entities, systems, and processes, as well as interactions among them. Standardization frameworks classify, align, and integrate industrial standards according to their purposes and features. Despite being published by official international organizations, different standards may contain divergent definitions for similar entities. Further, when utilizing the same standard for the design of a CPS, different views can generate interoperability conflicts. Albeit expressive, standardization frameworks may represent divergent categorizations of the same standard. To support effective and efficient communication in smart factories, these interoperability conflicts need to be resolved: data need to be semantically integrated and existing conflicts conciliated. This problem has been extensively studied in the literature, and the results obtained can be applied to general integration problems. However, current approaches fail to consider the specific interoperability conflicts that occur between entities in I40 scenarios.

    In this thesis, we tackle the problem of semantic data integration in I40 scenarios. A knowledge graph-based approach allowing for the integration of I40 entities while considering their semantics is presented. To achieve this integration, challenges need to be addressed on different conceptual levels: firstly, defining mappings between standards and standardization frameworks; secondly, representing knowledge of entities in I40 scenarios described by standards; thirdly, integrating perspectives of CPS design while solving semantic heterogeneity issues; and finally, determining real industry applications for the presented approach.

    We first devise a knowledge-driven approach allowing for the integration of standards and standardization frameworks into an Industry 4.0 knowledge graph (I40KG). The standards ontology is used for representing the main properties of standards and standardization frameworks, as well as the relationships among them. The I40KG makes it possible to integrate standards and standardization frameworks while solving specific semantic heterogeneity conflicts in the domain. Further, we semantically describe standards in knowledge graphs. To this end, standards of core importance for I40 scenarios are considered, i.e., the Reference Architectural Model for I40 (RAMI4.0), AutomationML, and the Supply Chain Operation Reference Model (SCOR). In addition, different perspectives of entities describing CPS are integrated into the knowledge graphs. To evaluate the proposed methods, we rely on empirical evaluations as well as on the development of concrete use cases. The attained results provide evidence that a knowledge graph approach enables the effective data integration of entities in I40 scenarios while solving semantic interoperability conflicts, thus empowering communication in smart factories.
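
    The core idea, standards and standardization frameworks as nodes with mappings between them as queryable edges, can be illustrated with a small rdflib sketch. The namespace, the classifiedBy and relatedTo properties, and the classifications below are assumptions made for this sketch, not the published I40KG schema.

```python
# A minimal sketch of an Industry 4.0 knowledge graph: standards become
# typed nodes, cross-standard mappings become edges, and SPARQL surfaces
# implicit relations. All names here are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

STO = Namespace("http://example.org/sto#")  # hypothetical standards ontology

g = Graph()
g.bind("sto", STO)

# Standards mentioned in the thesis abstract, as typed nodes
for std in ("RAMI4_0", "AutomationML", "SCOR"):
    g.add((STO[std], RDF.type, STO.Standard))

# Hypothetical framework classifications and a cross-standard relation
g.add((STO.RAMI4_0, STO.classifiedBy, STO.CommunicationLayer))
g.add((STO.AutomationML, STO.classifiedBy, STO.CommunicationLayer))
g.add((STO.AutomationML, STO.relatedTo, STO.SCOR))

# Find pairs of standards that share a framework classification
q = """
SELECT ?a ?b WHERE {
  ?a sto:classifiedBy ?layer .
  ?b sto:classifiedBy ?layer .
  FILTER (?a != ?b)
}
"""
for row in g.query(q, initNs={"sto": STO}):
    print(row.a, "shares a classification with", row.b)
```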

    Towards semantics-driven modelling and simulation of context-aware manufacturing systems

    Systems modelling and simulation are two important facets for thoroughly and effectively analysing manufacturing processes. The ever-growing complexity of the latter, the increasing amount of knowledge, and the use of Semantic Web techniques that attach meaning to data have led researchers to explore and combine methodologies, exploiting their best features, with the purpose of supporting manufacturing systems modelling and simulation applications. In the past two decades, the use of ontologies has proven to be highly effective for context modelling and knowledge management. Nevertheless, ontologies are not meant for model simulation. The latter, instead, can be achieved by using a well-known workflow-oriented mathematical modelling language such as Petri Nets (PN), which bring modelling and analytical features suitable for creating a digital copy of an industrial system (also known as a "digital twin").

    The theoretical framework presented in this dissertation aims to exploit W3C standards, such as the Semantic Web Rule Language (SWRL) and the Web Ontology Language (OWL), to transform each piece of knowledge regarding a manufacturing system into Petri Net modelling primitives. In so doing, it supports the semantics-driven instantiation, analysis, and simulation of what we call semantically-enriched PN-based manufacturing system digital twins. The approach proposed by this exploratory research is therefore based on the best features introduced by state-of-the-art developments in W3C standards for Linked Data, such as OWL and SWRL, together with a multipurpose graphical and mathematical modelling tool known as Petri Nets. The former are used for gathering, classifying, and properly storing industrial data, and therefore enhance our PN-based digital copy of an industrial system with advanced reasoning features. This makes both the system modelling and analysis phases more effective and, above all, paves the way towards a completely new field, where semantically-enriched PN-based manufacturing system digital twins represent one of the drivers of the digital transformation already in place in all companies facing the industrial revolution.

    As a result, it has been possible to outline a list of indications that will help future efforts in the application of complex digital-twin support solutions based on semantically-enriched manufacturing information systems. Through the application cases, five key topics have been tackled, namely: (i) semantic enrichment of industrial data using the most recent ontological models, in order to enhance its value and enable new uses; (ii) context-awareness, or context-adaptiveness, aiming to enable the system to capture and use information about the context of operations; (iii) reusability, a core concept through which we want to emphasize the importance of reusing existing assets in some form within the industrial modelling process, such as industrial process knowledge, process data, system modelling primitives, and the like; (iv) the ultimate goal of semantic interoperability, which can be accomplished by adding metadata (data about the data) that link each data element to a controlled, shared vocabulary; and finally, (v) the impact on modelling and simulation applications, which shows how we could automate the translation of industrial knowledge into a digital manufacturing system and empower it with quantitative and qualitative analytical techniques.
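
    The translation idea can be made tangible with a small sketch, under the assumption that each ontology-described process step maps to one Petri net transition with an input and an output place. The PetriNet class and the step list below are illustrative, not the dissertation's actual mapping rules.

```python
# A minimal sketch: ordered process steps (as might be extracted from an
# OWL/SWRL description) are compiled into a linear Petri net and executed.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    places: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    arcs: set = field(default_factory=set)       # (source, target) pairs
    marking: dict = field(default_factory=dict)  # place -> token count

    def fire(self, t):
        """Fire transition t if every input place holds a token."""
        inputs = {s for (s, tgt) in self.arcs if tgt == t}
        outputs = {tgt for (s, tgt) in self.arcs if s == t}
        if all(self.marking.get(p, 0) > 0 for p in inputs):
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1
            return True
        return False

# Hypothetical knowledge extracted from the ontology: ordered process steps
steps = ["extrude", "cool", "cut"]

net = PetriNet()
prev = "raw_material"
net.places.add(prev)
net.marking[prev] = 1  # one token: a part entering the line
for step in steps:
    out = f"after_{step}"
    net.places.add(out)
    net.transitions.add(step)
    net.arcs.add((prev, step))  # input place -> transition
    net.arcs.add((step, out))   # transition -> output place
    prev = out

for step in steps:
    print(step, "fired:", net.fire(step))
print("final marking:", net.marking)
```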

    An integration framework for managing rich organisational process knowledge

    The problem we have addressed in this dissertation is that of designing a pragmatic framework for integrating the synthesis and management of organisational process knowledge which is based on domain-independent AI planning and plan representations. Our solution has focused on a set of framework components which provide methods, tools and representations to accomplish this task.

    In the framework we address a lifecycle of this knowledge which begins with a methodological approach to acquiring information about the process domain. We show that this initial domain specification can be translated into a common constraint-based model of activity (based on the work of Tate, 1996c and 1996d) which can then be operationalised for use in an AI planner. This model of activity is ontologically underpinned and may be expressed with a flexible and extensible language based on a sorted first-order logic. The model combines perspectives covering both the space of behaviour as well as the space of decisions. Synthesised or modified processes/plans can be translated to and from the common representation in order to support knowledge sharing, visualisation and mixed-initiative interaction.

    This work united past and present Edinburgh research on planning and infused it with perspectives from design rationale, requirements engineering, and process knowledge sharing. The implementation has been applied to a portfolio of scenarios which include process examples from business, manufacturing, construction and military operations. An archive of this work is available at: http://www.aiai.ed.ac.uk/~oplan/cpf
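
    A toy sketch, loosely in the spirit of the constraint-based model of activity cited above, may help: a plan is represented as a set of activity nodes plus ordering constraints that can be checked for consistency. The class and field names here are assumptions made for illustration, not the framework's actual representation.

```python
# A minimal sketch of a constraint-based plan model: activities with
# preconditions and effects, plus ordering constraints checked for cycles.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    preconditions: list = field(default_factory=list)
    effects: list = field(default_factory=list)

@dataclass
class PlanModel:
    nodes: dict = field(default_factory=dict)    # name -> Activity
    orderings: set = field(default_factory=set)  # (before, after) pairs

    def add(self, act: Activity):
        self.nodes[act.name] = act

    def order(self, before: str, after: str):
        self.orderings.add((before, after))

    def consistent(self) -> bool:
        """Check that the ordering constraints are acyclic (topological test)."""
        remaining = set(self.nodes)
        edges = set(self.orderings)
        while remaining:
            # nodes with no incoming ordering edge can be scheduled next
            free = {n for n in remaining if not any(a == n for (_, a) in edges)}
            if not free:
                return False  # a cycle means the orderings are inconsistent
            remaining -= free
            edges = {(b, a) for (b, a) in edges if b not in free}
        return True

plan = PlanModel()
plan.add(Activity("procure_materials", effects=["materials_on_site"]))
plan.add(Activity("assemble", preconditions=["materials_on_site"]))
plan.order("procure_materials", "assemble")
print("orderings consistent:", plan.consistent())
```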

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.