
    A Knowledge Graph for Industry 4.0

    One of the most crucial tasks for today’s knowledge workers is to obtain and retain a thorough overview of the latest state of the art. Especially in dynamic and evolving domains, the number of relevant sources is constantly increasing, updating and superseding previous methods and approaches. For instance, the digital transformation of manufacturing systems, called Industry 4.0, currently faces an overwhelming number of standardization efforts and reference initiatives, resulting in a complex information environment. We propose a structured dataset in the form of a semantically annotated knowledge graph for Industry 4.0-related standards, norms and reference frameworks. The graph provides a Linked Data-conformant collection of annotated, classified reference guidelines, supporting newcomers and experts alike in understanding how to implement Industry 4.0 systems. We illustrate the suitability of the graph for various use cases, present its existing applications and maintenance process, and evaluate its quality.
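    As a rough illustration of how such a semantically annotated standards graph could be queried, the sketch below builds a tiny RDF graph with rdflib and retrieves standards by concern via SPARQL. The vocabulary (i40:Standard, i40:addressesConcern) and the namespace URI are hypothetical placeholders, not the graph's actual schema.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace and vocabulary for a standards knowledge graph.
I40 = Namespace("https://example.org/i40#")

g = Graph()
g.bind("i40", I40)

# Two illustrative entries: a reference framework and a standard annotated with a concern.
g.add((I40.RAMI40, RDF.type, I40.ReferenceFramework))
g.add((I40.RAMI40, RDFS.label, Literal("Reference Architecture Model Industry 4.0")))
g.add((I40.IEC62443, RDF.type, I40.Standard))
g.add((I40.IEC62443, RDFS.label, Literal("IEC 62443 (industrial network security)")))
g.add((I40.IEC62443, I40.addressesConcern, I40.Security))

# Retrieve all standards annotated with the Security concern.
query = """
PREFIX i40: <https://example.org/i40#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
  ?s a i40:Standard ;
     i40:addressesConcern i40:Security ;
     rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.label)
```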

    Data ecosystems for the sustainability transformation : a study commissioned by Huawei Technologies Deutschland GmbH

    In the coming years, we must set a course that will allow us to protect our climate, reduce resource consumption, and preserve biodiversity. A profound ecological system change is on the horizon in all central areas of action of the economy and society, the so-called transformation arenas. Digitalisation is a prerequisite for success in this change and will impact these arenas at multiple levels: digital technologies and applications make it possible to improve current procedures, processes, and structures (Improve) and help us take the first steps towards new business models and frameworks (Convert). At the same time, digitalisation must also become effective for a more far-reaching restructuring of the economy and value creation, as well as for an ecological reorientation of society and lifestyles (Transform). The ability to obtain, link, and use data is a basic prerequisite for tapping into the potential of digitalisation for the sustainability transformation. However, data is not a homogeneous raw material: data only gains value when we know the context in which it was collected and when we can use it for a specific purpose. The discussion on which structures and prerequisites are necessary for the system-changing use of data has only just begun. This study serves as a starting point for this discussion by describing the opportunities and prerequisites for a data-based sustainability transformation. It focuses on environmental data and data from plants, machines, infrastructure, and IoT products. The task is to make greater use of this data for systemic solutions (system innovations) within the transformation arenas, where different stakeholders work together to initiate the transformation of infrastructures, value chains, and business models.

    A Knowledge Graph Based Approach to Social Science Surveys

    Recent success of knowledge graphs has spurred interest in applying them in open science, for example in intelligent survey systems for scientists. However, efforts to understand the quality of candidate survey questions provided by these methods have been limited. Indeed, existing methods do not consider the kind of on-the-fly content planning that is possible in face-to-face surveys and hence do not guarantee that the selection of subsequent questions is based on responses to previous questions in a survey. To address this limitation, we propose a dynamic and informative solution for an intelligent survey system that is based on knowledge graphs. To illustrate our proposal, we look into social science surveys, focusing on ordering the questions of a questionnaire component by their level of acceptance, along with conditional triggers that further customise participants' experience. Our main findings are: (i) evaluation of the proposed approach shows that the dynamic component can lower the number of questions asked per variable, thus allowing more informative data to be collected in a survey of equivalent length; and (ii) a primary advantage of the proposed approach is that it enables grouping of participants according to their responses, so that participants are not only served appropriate follow-up questions, but their responses to these questions can also be analysed in the context of an initial categorisation. We believe that the proposed approach can easily be applied to other social science surveys based on grouping definitions in their contexts. The knowledge-graph-based intelligent survey approach proposed in our work allows online questionnaires to approach face-to-face interaction in their level of informativeness and responsiveness, as well as duplicating certain advantages of interview-based data collection.
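    A minimal sketch of the conditional-trigger idea described above: each question maps specific answers to follow-up questions, so the survey path adapts to previous responses. The question texts, identifiers and the dictionary-based schema are illustrative assumptions, not the paper's knowledge-graph model.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Illustrative question model: each question maps specific answers to the
# follow-up question they trigger (hypothetical schema).
@dataclass
class Question:
    qid: str
    text: str
    triggers: Dict[str, str] = field(default_factory=dict)  # answer -> next question id
    default_next: Optional[str] = None

QUESTIONS = {
    "q1": Question("q1", "Do you use public transport?", {"yes": "q2"}, "q3"),
    "q2": Question("q2", "How many days per week?", default_next="q3"),
    "q3": Question("q3", "Do you own a car?"),
}

def run_survey(answer_fn, start="q1"):
    """Walk the question graph, picking follow-ups from conditional triggers."""
    responses = {}
    qid = start
    while qid is not None:
        q = QUESTIONS[qid]
        answer = answer_fn(q.text)
        responses[q.qid] = answer
        qid = q.triggers.get(answer, q.default_next)
    return responses

# A scripted participant: answering "yes" triggers q2 before q3.
scripted = iter(["yes", "3", "no"])
print(run_survey(lambda _prompt: next(scripted)))
```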

    Matching Weak Informative Ontologies

    Most existing ontology matching methods use literal information to discover alignments. However, some literal information in ontologies may be opaque, and some ontologies may not have sufficient literal information. In this paper, such ontologies are named weak informative ontologies (WIOs), and it is challenging for existing methods to match them. On the one hand, string-based and linguistic matching methods cannot work well for WIOs. On the other hand, some matching methods use external resources to improve their performance, but collecting and processing external resources is time-consuming. To address this issue, this paper proposes a practical method for matching WIOs that employs ontology structure information to discover alignments. First, semantic subgraphs are extracted from the ontology graph to capture the precise meanings of ontology elements. Then, a new similarity propagation model is designed for matching WIOs. To avoid meaningless propagation, the similarity propagation is constrained by the semantic subgraphs and other conditions; consequently, the similarity propagation model ensures a balance between efficiency and quality during matching. Finally, the similarity propagation model uses a few credible alignments as seeds to find more alignments, and several useful strategies are adopted to improve performance. This matching method for WIOs has been implemented in the ontology matching system Lily. Experimental results on public OAEI benchmark datasets demonstrate that Lily significantly outperforms most state-of-the-art systems in both WIO matching tasks and general ontology matching tasks. In particular, Lily increases recall by a large margin while still obtaining high precision.
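    To make the structure-based idea concrete, here is a heavily simplified similarity-propagation sketch: a few seed alignments spread scores to neighbouring node pairs of two toy ontologies. This is generic similarity flooding over adjacency structure under assumed toy data, not Lily's actual model, and it omits the semantic-subgraph constraints described above.

```python
import itertools
from collections import defaultdict

# Toy ontologies as adjacency lists: node -> neighbours (structure only, no labels).
ONTO_A = {"a1": ["a2", "a3"], "a2": ["a1"], "a3": ["a1"]}
ONTO_B = {"b1": ["b2", "b3"], "b2": ["b1"], "b3": ["b1"]}

def propagate(seeds, iterations=10, damping=0.5):
    """Spread similarity from seed alignments to neighbouring node pairs."""
    sim = defaultdict(float, {pair: 1.0 for pair in seeds})
    for _ in range(iterations):
        nxt = defaultdict(float)
        for (a, b), s in sim.items():
            nxt[(a, b)] += s
            # each matched pair passes part of its score to neighbouring pairs
            for na, nb in itertools.product(ONTO_A[a], ONTO_B[b]):
                nxt[(na, nb)] += damping * s / (len(ONTO_A[a]) * len(ONTO_B[b]))
        # normalise so scores stay in [0, 1]
        top = max(nxt.values())
        sim = defaultdict(float, {p: v / top for p, v in nxt.items()})
    return dict(sim)

scores = propagate(seeds=[("a1", "b1")])
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])
```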

    Academia/Industry DynAmics (AIDA): A Knowledge Graph within the scholarly domain and its applications

    Scholarly knowledge graphs are a form of knowledge representation that aims to capture and organize the information and knowledge contained in scholarly publications, such as research papers, books, patents, and datasets. They can provide a comprehensive and structured view of the scholarly domain, covering aspects such as authors, affiliations, research topics, methods, results, citations, and impact, and they enable applications and services that facilitate and enhance scholarly communication, such as information retrieval, data analysis, recommendation systems, semantic search, and knowledge discovery. However, constructing and maintaining scholarly knowledge graphs is a challenging task that requires dealing with large-scale, heterogeneous, and dynamic data sources. Moreover, extracting and integrating the relevant information and knowledge from unstructured or semi-structured text is not trivial, as it involves natural language processing, machine learning, ontology engineering, and semantic web technologies. Furthermore, ensuring the quality and validity of scholarly knowledge graphs is essential for their usability and reliability.
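    For illustration only, the sketch below models a few scholarly entities (paper, author, affiliation, topic, citation) as RDF triples with rdflib and runs a simple SPARQL query over them. The sch: vocabulary is a made-up placeholder and does not reflect AIDA's actual schema.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical scholarly vocabulary; AIDA's real schema may differ.
SCH = Namespace("https://example.org/scholarly#")
g = Graph()
g.bind("sch", SCH)

g.add((SCH.paper_42, RDF.type, SCH.Paper))
g.add((SCH.paper_42, RDFS.label, Literal("A study on knowledge graphs")))
g.add((SCH.paper_42, SCH.hasAuthor, SCH.author_7))
g.add((SCH.author_7, SCH.affiliatedWith, SCH.org_3))
g.add((SCH.paper_42, SCH.hasTopic, SCH.KnowledgeGraphs))
g.add((SCH.paper_42, SCH.cites, SCH.paper_17))

# Papers on a given topic whose authors are affiliated with org_3.
for row in g.query("""
    PREFIX sch: <https://example.org/scholarly#>
    SELECT ?paper WHERE {
        ?paper sch:hasTopic sch:KnowledgeGraphs ;
               sch:hasAuthor ?a .
        ?a sch:affiliatedWith sch:org_3 .
    }
"""):
    print(row.paper)
```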

    A Semantic Interoperability Model Based on the IEEE 1451 Family of Standards Applied to the Industry 4.0

    The Internet of Things (IoT) has been growing recently. It is a concept for connecting billions of smart devices through the Internet in different scenarios. One area being developed within the IoT is industrial automation, which covers Machine-to-Machine (M2M) and industrial communication with automated processes, giving rise to the Industrial Internet of Things (IIoT) concept. Within the IIoT, the concept of Industry 4.0 (I4.0) is developing. It represents the fourth industrial revolution and addresses the use of Internet technologies to improve the production efficiency of intelligent services in smart factories. I4.0 combines objects from the physical world and the digital world to offer dedicated functionality and flexibility inside and outside an I4.0 network. I4.0 is composed mainly of Cyber-Physical Systems (CPS). A CPS is the integration of the physical world with its digital counterpart, i.e., the Digital Twin (DT). It is responsible for realising intelligent cross-linked applications, which operate in a self-organised and decentralised manner and are used by smart factories for value creation. Implementing CPS in manufacturing leads to the concept of the Cyber-Physical Production System (CPPS). A CPPS is the application of Industry 4.0 and CPS to manufacturing and production, crossing all levels of production between autonomous and cooperative elements and sub-systems. It connects the virtual space with the physical world, allowing smart factories to become more intelligent, resulting in better and smarter production conditions and increasing productivity, production efficiency, and product quality. The big issue is connecting smart devices that use different standards and protocols: about 40% of the benefits of the IoT cannot be achieved without interoperability. This thesis focuses on promoting the interoperability of smart devices (sensors and actuators) inside the IIoT under the I4.0 context. IEEE 1451 is a family of standards developed to manage transducers. It reaches the syntactic level of interoperability inside Industry 4.0; however, Industry 4.0 requires a semantic level of communication so that data is not exchanged ambiguously. This thesis proposes a new semantic layer that allows the IEEE 1451 family of standards to serve as a complete framework for communication inside Industry 4.0, providing an interoperable network interface for users and applications to collect and share data from the industrial field. This thesis was developed at the Measurement and Instrumentation Laboratory (IML) at the University of Beira Interior and supported by the Portuguese project INDTECH 4.0 – Novas tecnologias para fabricação (new technologies for manufacturing), whose general objective is the design and development of innovative technologies in the context of Industry 4.0/Factories of the Future (FoF), under the number POCI-01-0247-FEDER-026653.
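    As a rough sketch of the semantic-layer idea, the code below lifts flat, TEDS-like transducer metadata into RDF triples so that different applications can interpret sensor types and units unambiguously. The field names, the iiot: vocabulary and the choice of a QUDT unit IRI are illustrative assumptions, not the thesis's actual mapping or the exact IEEE 1451 TEDS layout.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical TEDS-like metadata read from a smart transducer; field names are
# illustrative, not the exact IEEE 1451 TEDS structure.
teds = {"id": "temp-001", "kind": "TemperatureSensor", "unit": "DEG_C",
        "manufacturer": "ExampleCorp"}

IIOT = Namespace("https://example.org/iiot#")         # assumed application vocabulary
QUDT_UNIT = Namespace("http://qudt.org/vocab/unit/")  # public unit vocabulary (IRI shown for illustration)

def to_semantic(teds_record):
    """Lift flat transducer metadata into RDF so applications share one meaning."""
    g = Graph()
    g.bind("iiot", IIOT)
    sensor = IIOT[teds_record["id"]]
    g.add((sensor, RDF.type, IIOT[teds_record["kind"]]))
    g.add((sensor, IIOT.measurementUnit, QUDT_UNIT[teds_record["unit"]]))
    g.add((sensor, IIOT.manufacturer, Literal(teds_record["manufacturer"])))
    return g

print(to_semantic(teds).serialize(format="turtle"))
```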

    Entities with quantities : extraction, search, and ranking

    Quantities are more than numeric values. They denote measures of the world’s entities such as heights of buildings, running times of athletes, energy efficiency of car models or energy production of power plants, all expressed as numbers with associated units. Entity-centric search and question answering (QA) are well supported by modern search engines. However, they do not work well when queries involve quantity filters, such as searching for athletes who ran 200m under 20 seconds or companies with quarterly revenue above $2 billion. State-of-the-art systems fail to understand the quantities, including the condition (less than, above, etc.), the unit of interest (seconds, dollars, etc.), and the context of the quantity (200m race, quarterly revenue, etc.). QA systems based on structured knowledge bases (KBs) also fail, as quantities are poorly covered by state-of-the-art KBs. In this dissertation, we develop new methods to advance the state of the art in quantity knowledge extraction and search. Our main contributions are the following:
    • First, we present Qsearch [Ho et al., 2019, Ho et al., 2020], a system that handles advanced queries with quantity filters by using cues present both in the query and in the text sources. Qsearch comprises two main contributions: a deep neural network model designed to extract quantity-centric tuples from text sources, and a novel query-matching model for finding and ranking matching tuples.
    • Second, to also harness heterogeneous tables, we present QuTE [Ho et al., 2021a, Ho et al., 2021b], a system for extracting quantity information from web sources, in particular ad-hoc web tables in HTML pages. QuTE contributes a method for linking quantity and entity columns that exploits external text sources. For question answering, we contextualise the extracted entity-quantity pairs with informative cues from the table and present a new method for consolidating and re-ranking answer candidates via inter-fact consistency.
    • Third, we present QL [Ho et al., 2022], a recall-oriented method for enriching knowledge bases (KBs) with quantity facts. Modern KBs such as Wikidata or YAGO cover many entities and their relevant information but often miss important quantity properties. QL is query-driven and based on iterative learning, with two main contributions for improving KB coverage: a method for query expansion to capture a larger pool of fact candidates, and a technique for self-consistency that takes the value distributions of quantities into account.
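    A minimal sketch of what a quantity filter involves (value, unit, comparison condition and context), in the spirit of queries such as "athletes who ran 200m under 20 seconds". The dataclass, the unit table and the example facts are illustrative assumptions, not the Qsearch/QuTE/QL implementations.

```python
from dataclasses import dataclass

# Illustrative quantity representation; not the dissertation's data model.
@dataclass
class Quantity:
    value: float
    unit: str           # e.g. "s", "ms", "min"
    context: str        # e.g. "200m race", "quarterly revenue"

# Toy unit normalisation table (assumed): convert to a base unit before comparing.
TO_BASE = {"s": 1.0, "ms": 0.001, "min": 60.0}

def satisfies(q: Quantity, condition: str, threshold: float, unit: str) -> bool:
    """Check a quantity against a filter such as ('<', 20, 's')."""
    lhs = q.value * TO_BASE[q.unit]
    rhs = threshold * TO_BASE[unit]
    return {"<": lhs < rhs, ">": lhs > rhs, "=": abs(lhs - rhs) < 1e-9}[condition]

facts = [("Athlete A", Quantity(19.72, "s", "200m race")),
         ("Athlete B", Quantity(20.31, "s", "200m race"))]
print([name for name, q in facts
       if q.context == "200m race" and satisfies(q, "<", 20, "s")])
```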

    Datenökosysteme für die Nachhaltigkeitstransformation : eine Studie im Auftrag von Huawei Technologies Deutschland GmbH

    In the coming years, the course must be set for climate protection, for reducing resource consumption, and for preserving biodiversity. A profound ecological system change lies ahead in all central areas of action of the economy and society, the so-called transformation arenas. Digitalisation is a prerequisite for the success of this change and acts on several levels: digital technologies and applications make it possible to improve current procedures, processes, and structures (Improve) or to take the first steps towards a new orientation of business models or framework conditions (Convert). At the same time, however, digitalisation must also become effective for a more far-reaching restructuring of the economy and value creation as well as for the ecological reorientation of society and lifestyles (Transform). The ability to obtain, link, and use data is a basic prerequisite for tapping the potential of digitalisation for the sustainability transformation. However, data is not a homogeneous raw material: data only gains value when the context in which it was collected is known and when it can be made usable for the intended purpose. The discussion about which structures and prerequisites are required for the system-changing use of data has only just begun. This study makes a first contribution to this discussion and describes the possibilities and prerequisites for a data-based sustainability transformation. The focus is on environmental data and data from plants, machines, infrastructures, and products in the Internet of Things. The task is to use this data more than before for systemic solution approaches (system innovations) in the respective transformation arenas, in which different stakeholders work together to jointly initiate the transformation of infrastructures, value chains, and business models.

    A Framework for Interoperability Between Models with Hybrid Tools

    Complex system development and maintenance face the challenge of dealing with different types of models, due to language affordances, preferences, model sizes, and so forth, and involve interaction between users with different levels of proficiency. Current conceptual data modelling tools do not fully support these modes of working: the interaction between multiple models in multiple languages must be clearly specified to ensure the models keep their intended semantics, and this is lacking in extant tools. The key objective is to devise a mechanism to support semantic interoperability in hybrid tools for multi-modal modelling in a plurality of paradigms, all within one system. We propose FaCIL, a framework for such hybrid modelling tools. We design and realise FaCIL, which maps UML, ER and ORM2 into a common metamodel with rules; the metamodel provides the central point of management among the models and links to the formalisation and to logic-based automated reasoning. FaCIL supports representing models in different formats while preserving their semantics, and several editing workflows are supported within the framework. It has a clear separation of concerns for typical conceptual modelling activities in an interoperable and extensible way. FaCIL structures and facilitates the interaction between visual and textual conceptual models, their formal specifications, and abstractions, as well as tracking and propagating updates across all representations. FaCIL is compared against the requirements, implemented in crowd 2.0, and assessed with a use case. The proof-of-concept implementation in the web-based modelling tool crowd 2.0 demonstrates its viability; the framework meets the requirements and fully supports the use case.
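    To illustrate the common-metamodel idea in miniature, the sketch below defines one neutral model structure and renders it in UML-like and ER-like textual forms. The classes and rendering rules are drastically simplified placeholders, not FaCIL's metamodel or crowd 2.0's implementation.

```python
from dataclasses import dataclass, field
from typing import List

# A tiny "common metamodel": one neutral representation rendered to two notations.
@dataclass
class Attribute:
    name: str
    datatype: str

@dataclass
class EntityType:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class Relationship:
    name: str
    source: str
    target: str

@dataclass
class Model:
    entities: List[EntityType]
    relationships: List[Relationship]

    def to_uml(self) -> str:
        """Render as a UML-class-diagram-like textual form."""
        lines = [f"class {e.name} {{ " +
                 "; ".join(f"{a.name}: {a.datatype}" for a in e.attributes) + " }"
                 for e in self.entities]
        lines += [f"{r.source} --> {r.target} : {r.name}" for r in self.relationships]
        return "\n".join(lines)

    def to_er(self) -> str:
        """Render the same model as an ER-like textual form."""
        lines = [f"ENTITY {e.name} (" +
                 ", ".join(a.name for a in e.attributes) + ")" for e in self.entities]
        lines += [f"RELATIONSHIP {r.name} ({r.source}, {r.target})"
                  for r in self.relationships]
        return "\n".join(lines)

m = Model(
    entities=[EntityType("Author", [Attribute("name", "String")]),
              EntityType("Paper", [Attribute("title", "String")])],
    relationships=[Relationship("writes", "Author", "Paper")],
)
print(m.to_uml())
print(m.to_er())
```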