6 research outputs found

    A Mapping Approach to Convert MTPs into a Capability and Skill Ontology

    Being able to quickly integrate new equipment and functions into an existing plant is a major goal for both discrete and process manufacturing. Currently, however, these two industry domains use different approaches to achieve this goal. While the Module Type Package (MTP) is increasingly being adopted in practical applications of process manufacturing, so-called skill-based manufacturing approaches are favored in discrete manufacturing. The two approaches are incompatible because their models feature different contents and use different technologies. This contribution provides a comparison of the MTP with a skill-based approach as well as an automated mapping that can be used to transfer the contents of an MTP into a skill ontology. Through this mapping, an MTP can be semantically lifted in order to apply functions such as querying or reasoning. Furthermore, machines that were previously described using two incompatible models can now be used in one production process.
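The core idea of the abstract above — translating an MTP-style service description into triples of a capability/skill ontology — can be sketched in a few lines. This is a minimal illustration, not the paper's actual mapping rules: the MTP structure, the `cask:` vocabulary, and the module name are all invented for the example.

```python
# A minimal MTP-like description: a module offering services with procedures.
mtp = {
    "module": "DosingModule1",
    "services": [
        {"name": "Dose", "procedures": ["ContinuousDosing", "BatchDosing"]},
        {"name": "Stir", "procedures": ["FixedSpeedStirring"]},
    ],
}

def mtp_to_skill_triples(mtp):
    """Translate an MTP-like dict into (subject, predicate, object) triples
    of a hypothetical capability/skill vocabulary ("cask:")."""
    triples = []
    module = mtp["module"]
    triples.append((module, "rdf:type", "cask:Machine"))
    for svc in mtp["services"]:
        skill = f"{module}_{svc['name']}"
        triples.append((skill, "rdf:type", "cask:Skill"))
        triples.append((module, "cask:providesSkill", skill))
        for proc in svc["procedures"]:
            triples.append((skill, "cask:hasVariant", proc))
    return triples

triples = mtp_to_skill_triples(mtp)
for t in triples:
    print(t)
```

Once the contents are expressed as triples like these, standard graph queries and reasoning can be applied — which is what "semantic lifting" buys over the original XML-based package.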

    Ontology-based solutions for interoperability among product lifecycle management systems: A systematic literature review

    During recent years, globalization has had an impact on the competitive capacity of industries, forcing them to integrate their productive processes with other, geographically distributed facilities. This requires the information systems that support such processes to interoperate. Significant attention has been paid to the development of ontology-based solutions, which are meant to tackle issues ranging from inconsistency to semantic interoperability and knowledge reusability. This paper looks into how the available technology, models, and ontology-based solutions might interact within the manufacturing industry environment to achieve semantic interoperability among industrial information systems. Through a systematic literature review, this paper aims to identify the most relevant elements to consider in the development of an ontology-based solution and how such solutions are being deployed in industry. The research analysed 54 studies selected in alignment with the specific requirements of our research questions. The most relevant results show that ontology-based solutions can be set up using OWL as the ontology language, Protégé as the ontology modeling tool, Jena as the application programming interface to interact with the built ontology, and different standards from the International Organization for Standardization Technical Committee 184, Subcommittee 4 or 5, to obtain the foundational concepts, axioms, and relationships needed to develop the knowledge base. We believe that the findings of this study make an important contribution for practitioners and researchers, as they provide useful information about the different projects and choices involved in undertaking projects in the field of industrial ontology application.
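The stack the review identifies (OWL ontology, Jena as the programmatic interface) boils down to loading triples and answering pattern queries against them. The sketch below illustrates that role with a minimal in-memory pattern matcher in plain Python; the `iso:`/`ex:` terms are invented stand-ins, not actual ISO TC 184 vocabulary.

```python
# Minimal in-memory triple store illustrating what an ontology API such as
# Jena provides: a loaded set of triples plus pattern-based retrieval.
triples = {
    ("iso:Resource", "rdfs:subClassOf", "iso:Thing"),
    ("iso:Equipment", "rdfs:subClassOf", "iso:Resource"),
    ("ex:Lathe42", "rdf:type", "iso:Equipment"),
}

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All subclass axioms asserted in the store:
print(match((None, "rdfs:subClassOf", None), triples))
```

A real deployment would of course use the ontology language's semantics (subclass reasoning, consistency checking) rather than raw pattern matching, which is exactly what tools like Jena add on top.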

    Methods for Semantic Interoperability in AutomationML-based Engineering

    Industrial engineering is an interdisciplinary activity that involves human experts from various technical backgrounds working with different engineering tools. In the era of digitization, the engineering process generates a vast amount of data. To store and exchange such data, dedicated international standards have been developed, including the XML-based data format AutomationML (AML). While AML provides a harmonized syntax among engineering tools, the semantics of engineering data remains highly heterogeneous. More specifically, the AML models of the same domain or entity can vary dramatically among different tools, which gives rise to the so-called semantic interoperability problem. In practice, manual implementation is often required for correct data interpretation, and such implementations are usually limited in reusability. Efforts have been made to tackle the semantic interoperability problem. One mainstream research direction has focused on the semantic lifting of engineering data using Semantic Web technologies. However, current results in this field lack support for building complex domain knowledge, which requires a profound understanding of the domain and sufficient skill in ontology building. This thesis contributes to this research field in two aspects. First, machine learning algorithms are developed for deriving complex ontological concepts from engineering data. The induced concepts encode the relations between primitive ones and bridge the semantic gap between engineering tools. Second, to involve domain experts more tightly in the process of ontology building, this thesis proposes the AML concept model (ACM) for representing ontological concepts in a native AML syntax, i.e., providing an AML front-end for the formal ontological semantics. ACM supports a bidirectional information flow between the user and the learner, based on which the interactive machine learning framework AMLLEARNER is developed.
Another rapidly growing research field is devoted to developing methods and systems for facilitating data access and exchange based on database theories and techniques. In particular, the so-called Query By Example (QBE) paradigm allows the user to construct queries using data examples. This thesis adopts the idea of QBE in AML-based engineering by introducing the AML Query Template (AQT). The design of AQT focuses on a native AML syntax, which allows constructing queries with conventional AML tools. This thesis studies the theoretical foundation of AQT and presents algorithms for the automated generation of query programs. A comprehensive requirement analysis shows that the proposed approach can solve the problem of semantic interoperability in AutomationML-based engineering to a great extent.
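The Query-By-Example idea behind AQT — a user supplies an example object and the system derives a query matching everything like it — can be sketched as follows. The flat attribute records below are invented for illustration; actual AutomationML objects are hierarchical XML, not dicts.

```python
# Engineering objects as flat attribute dicts (a stand-in for AML elements).
objects = [
    {"name": "Valve1", "role": "Actuator", "interface": "Profinet"},
    {"name": "Valve2", "role": "Actuator", "interface": "Modbus"},
    {"name": "Temp1",  "role": "Sensor",   "interface": "Profinet"},
]

def query_from_example(example):
    """Build a query predicate from a user-given example: attributes the
    example specifies must match; everything else is a wildcard."""
    def matches(obj):
        return all(obj.get(k) == v for k, v in example.items())
    return matches

# "Find everything that looks like this": an actuator, any interface.
q = query_from_example({"role": "Actuator"})
print([o["name"] for o in objects if q(o)])  # → ['Valve1', 'Valve2']
```

The appeal of the template approach is the same as here: the query is written in the data's own syntax, so an engineer can author it in a conventional modeling tool instead of learning a query language.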

    A Knowledge Graph Based Integration Approach for Industry 4.0

    The fourth industrial revolution, Industry 4.0 (I40), aims at creating smart factories employing, among others, Cyber-Physical Systems (CPS), the Internet of Things (IoT), and Artificial Intelligence (AI). Realizing smart factories according to the I40 vision requires intelligent human-to-machine and machine-to-machine communication. To achieve this communication, CPS along with their data need to be described, and interoperability conflicts arising from various representations need to be resolved. For establishing interoperability, industry communities have created standards and standardization frameworks. Standards describe the main properties of entities, systems, and processes, as well as the interactions among them. Standardization frameworks classify, align, and integrate industrial standards according to their purposes and features. Despite being published by official international organizations, different standards may contain divergent definitions for similar entities. Further, when utilizing the same standard for the design of a CPS, different views can generate interoperability conflicts. Albeit expressive, standardization frameworks may also represent divergent categorizations of the same standard. To support effective and efficient communication in smart factories, these interoperability conflicts need to be resolved: data need to be semantically integrated and existing conflicts conciliated. This problem has been extensively studied in the literature, and the obtained results can be applied to general integration problems. However, current approaches fail to consider the specific interoperability conflicts that occur between entities in I40 scenarios. In this thesis, we tackle the problem of semantic data integration in I40 scenarios. A knowledge-graph-based approach allowing for the integration of entities in I40 while considering their semantics is presented.
To achieve this integration, challenges need to be addressed on different conceptual levels: firstly, defining mappings between standards and standardization frameworks; secondly, representing knowledge of entities in I40 scenarios described by standards; thirdly, integrating perspectives of CPS design while solving semantic heterogeneity issues; and finally, determining real industry applications for the presented approach. We first devise a knowledge-driven approach allowing for the integration of standards and standardization frameworks into an Industry 4.0 knowledge graph (I40KG). The standards ontology is used for representing the main properties of standards and standardization frameworks, as well as the relationships among them. The I40KG permits the integration of standards and standardization frameworks while solving specific semantic heterogeneity conflicts in the domain. Further, we semantically describe standards in knowledge graphs. To this end, standards of core importance for I40 scenarios are considered, i.e., the Reference Architectural Model for I40 (RAMI4.0), AutomationML, and the Supply Chain Operation Reference Model (SCOR). In addition, different perspectives of entities describing CPS are integrated into the knowledge graphs. To evaluate the proposed methods, we rely on empirical evaluations as well as on the development of concrete use cases. The attained results provide evidence that a knowledge graph approach enables the effective data integration of entities in I40 scenarios while solving semantic interoperability conflicts, thus empowering communication in smart factories.
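The kind of conflict the I40KG resolves — two standards naming the same entity differently — can be illustrated with a toy merge. RAMI4.0 and AutomationML are real standards, but the triples and the `rami:`/`aml:` terms below are invented; the mapping plays the role an `owl:sameAs` link plays in an actual knowledge graph.

```python
# Two standards describe the same entity under different names.
rami_triples = [("rami:FieldDevice", "rdf:type", "std:Concept")]
aml_triples  = [("aml:Device",       "rdf:type", "std:Concept")]
mappings     = [("rami:FieldDevice", "owl:sameAs", "aml:Device")]

def integrate(*graphs):
    """Merge triple lists, rewriting mapped IRIs onto one canonical name so
    the naming conflict between the standards disappears."""
    canon = {o: s for s, p, o in mappings}   # right side maps to left side
    merged = set()
    for g in graphs:
        for s, p, o in g:
            merged.add((canon.get(s, s), p, canon.get(o, o)))
    return merged

kg = integrate(rami_triples, aml_triples)
print(kg)
```

After the merge, queries see a single `rami:FieldDevice` node regardless of which standard contributed a statement — the toy version of "solving semantic heterogeneity conflicts" at graph level.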

    Dealing with inconsistent and incomplete data in a semantic technology setting

    Semantic and traditional databases are vulnerable to Inconsistent or Incomplete Data (IID). A data set stored in a traditional or semantic database is queried to retrieve records in a tabular format. Such retrieved records can consist of many rows, where each row contains an object and the associated fields (columns). However, a large set of records retrieved from a noisy data set may be wrongly analysed: a data analyst may, for example, treat inconsistent data as consistent or incomplete data as complete if the inconsistency or incompleteness goes unnoticed. Analysis of a large data set can thus be undermined by the presence of IID, and reliance is therefore placed on the data analyst to identify and visualise the IID in the data set. These IID issues are heightened under the open world assumption, as is evident in semantic or Resource Description Framework (RDF) databases. Unlike the closed world assumption of traditional databases, where data are assumed to be complete (an assumption with issues of its own), under the open world assumption data may be unknown and IID has to be tolerated at the outset. Formal Concept Analysis (FCA) can be used to deal with IID in such databases, because FCA is a mathematical method that uses a lattice structure to reveal the associations among objects and attributes in a data set. The existing FCA approaches that can be used in dealing with IID in RDF databases include fault tolerance, Dau's approach, and the CUBIST approaches. The new FCA approaches include association rules and the semi-automated and automated methods in FcaBedrock; these new approaches were developed in the course of this study. To underpin this work, a series of empirical studies was carried out based on the single case study methodology. The case study, namely the Edinburgh Mouse Atlas Gene Expression Database (EMAGE), provided the real-life context according to that methodology.
The existing and the new FCA approaches were used in identifying and visualising the IID in the EMAGE RDF data set. The empirical studies revealed that the existing approaches used in dealing with IID in EMAGE are tedious and do not allow the IID to be easily visualised in the database. They also revealed that existing FCA approaches for dealing with IID do not exclusively visualise the IID in a data set. This is unlike the new FCA approaches, notably the semi-automated and automated FcaBedrock, which can separate out and thus exclusively visualise IID in objects associated with the many-valued attributes that characterise such data sets. The exclusive visualisation of IID enables the data analyst to identify the IID in the investigated data set holistically, thereby avoiding mistaken conclusions. The aim was to discover how effective each FCA approach is in identifying and visualising IID, answering the research question: "How can FCA tools and techniques be used in identifying and visualising IID in RDF data?" The automated FcaBedrock approach emerged as the best means of visually identifying IID in an RDF data set. The CUBIST approaches and the semi-automated approach were ranked 2nd and 3rd, respectively, whilst Dau's approach ranked 4th. Whilst the subject of IID in a semantic technology setting could be explored further, it can be concluded that the automated FcaBedrock approach best identifies and visualises the IID in an RDF, and thus semantic, data set.
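FCA's machinery rests on two derivation operators over an object/attribute context: the attributes common to a set of objects, and the objects possessing a set of attributes. The sketch below computes them for a tiny, invented gene-expression-flavoured context (not actual EMAGE data) and shows how an incomplete record stands out from a concept's extent.

```python
# A small formal context: objects mapped to their attribute sets.
context = {
    "gene_a": {"expressed", "stage_10"},
    "gene_b": {"expressed", "stage_10", "verified"},
    "gene_c": {"stage_10"},              # lacks "expressed": incomplete?
}

def common_attributes(objects):
    """A' : the attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def matching_objects(attributes):
    """B' : the objects possessing every attribute in the set."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# The formal concept generated by {"expressed"}: extent and closed intent.
extent = matching_objects({"expressed"})
intent = common_attributes(extent)
print(extent, intent)
```

Here `gene_c` falls outside the extent of the "expressed" concept even though it shares `stage_10` with the others — exactly the kind of gap a concept lattice makes visible, which is what the visualisation approaches above exploit at scale.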

    From Data Modeling to Knowledge Engineering in Space System Design

    The technologies currently employed for modeling complex systems, such as aircraft, spacecraft, or infrastructures, are sufficient for system description, but they do not allow deriving knowledge about the modeled systems. This work provides the means to describe space systems in a way that allows automating activities such as deriving knowledge about critical parts of the system's design, evaluating test success, and identifying single points of failure.
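One of the derivable facts the abstract names — single points of failure — makes a compact worked example. Given a machine-readable dependency model, a component is a single point of failure if removing it cuts some consumer off from its supply. The spacecraft component graph below is invented for illustration, not taken from the work.

```python
# Directed dependency edges: supplier -> consumers.
edges = {
    "battery": ["pcdu"],
    "pcdu":    ["obc", "radio"],
    "obc":     ["radio"],
    "radio":   [],
}

def reachable(source, removed):
    """All components reachable from `source` when `removed` is taken out."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        if node == removed or node in seen:
            continue
        seen.add(node)
        stack.extend(edges.get(node, []))
    return seen

def single_points_of_failure(source, sinks):
    """Components (other than source and sinks) whose removal disconnects
    some sink from the source."""
    full = reachable(source, removed=None)
    return sorted(c for c in full - {source} - set(sinks)
                  if any(s not in reachable(source, removed=c) for s in sinks))

print(single_points_of_failure("battery", sinks=["radio"]))  # → ['pcdu']
```

The radio survives the loss of the on-board computer (power still flows via the PCDU), but not the loss of the PCDU itself — a conclusion a description-only model holds implicitly and a knowledge-engineering approach can compute.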