
    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, which have become the lifeblood of the global economy. These data may come from heterogeneous equipment, components, sensors, systems and applications in many varieties (diversity of sources), velocities (high rate of change) and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage and filter data, the real value lies in the analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. Two distinct paradigms can be discerned in the practice of analytics: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained. However, these models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employs machine learning (ML) to learn a model directly from the data with minimal human intervention. However, such models are tuned to their training data and context, making them difficult to adapt. Industries that want to create value from data must master these paradigms in combination. There is thus a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows extracting actionable insights from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism that captures the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    • A new ontology language, Semantically defined Analytical Language (SAL), on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method to generate semantic workflows using our SAL language, which helps in authoring, reusing and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, this thesis is among the first works to introduce and investigate the use of semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics over semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights.
    Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches in most application scenarios.
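    To make the second contribution concrete, here is a minimal sketch of the kind of rule SAL targets, written in DatalogMTL-style notation; the abstract does not give concrete SAL syntax, so the analytical atom in the second rule is hypothetical:

        \text{HighLoad}(m) \leftarrow \boxminus_{[0,\,1\mathrm{h}]}\ \text{Running}(m)
        \text{Overheating}(m) \leftarrow \text{Motor}(m) \land \mathrm{avg}_{[0,\,5\mathrm{min}]}\bigl(\text{temperature}(m)\bigr) > 90

    The first rule is plain DatalogMTL: the metric box operator requires Running(m) to have held throughout the past hour. The second rule treats a windowed aggregate as a first-class atom in the rule body, which is the kind of extension SAL adds on top of DatalogMTL.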

    Towards decentralised job shop scheduling as a web service

    This paper aims to investigate the fundamental requirements for a cloud-based scheduling service for manufacturing, notably manufacturer priority for the scheduling service, resolution of schedule conflicts, and error-proof data entry. A flow chart of an inference-based system for manufacturing scheduling is proposed, and a prototype was designed using semantic web technologies. An adapted version of the Muth and Thompson 10 × 10 scheduling problem (MT10) was used as a case study, and two manufacturing companies represented our use cases. Levelled manufacturer operation plans were generated using Microsoft Project. Semantic rules were proposed for constraint calculation, scheduling and verification, and the Pellet semantic reasoner was used to apply those rules to the case study. The results include two main findings. First, our system effectively detected conflicts when subjected to four types of disturbances. Second, the suggested conflict resolutions were effective when implemented, albeit not efficient. Consequently, our two hypotheses were accepted, which gives merit to future work intended to develop scheduling as a web service. Future work will include three phases: (1) migration of our system to a graph database server, (2) a multi-agent system to automate conflict resolution and data entry, and (3) an optimisation mechanism for manufacturer prioritisation to scheduling services.
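    As a rough illustration of the kind of schedule-conflict check the proposed verification rules encode (the paper itself expresses this as semantic rules evaluated by Pellet; the Python below is only a procedural sketch with hypothetical data):

        from collections import defaultdict

        # Hypothetical operation records: (operation id, machine, start hour, duration)
        operations = [
            ("op1", "M1", 0, 4),
            ("op2", "M1", 3, 2),   # overlaps op1 on M1 -> conflict
            ("op3", "M2", 1, 5),
        ]

        def find_conflicts(ops):
            """Return consecutive operations booked on the same machine whose
            time intervals overlap -- the condition a verification rule would flag."""
            by_machine = defaultdict(list)
            for op_id, machine, start, duration in ops:
                by_machine[machine].append((start, start + duration, op_id))
            conflicts = []
            for machine, intervals in by_machine.items():
                intervals.sort()
                for (s1, e1, a), (s2, e2, b) in zip(intervals, intervals[1:]):
                    if s2 < e1:  # next operation starts before the previous one ends
                        conflicts.append((machine, a, b))
            return conflicts

        print(find_conflicts(operations))  # [('M1', 'op1', 'op2')]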

    Semantic-guided predictive modeling and relational learning within industrial knowledge graphs

    The ubiquitous availability of data in today’s manufacturing environments, mainly driven by the extended usage of software and built-in sensing capabilities in automation systems, enables companies to embrace more advanced predictive modeling and analysis in order to optimize processes and the usage of equipment. While the potential insight gained from such analysis is high, it often remains untapped, since integrating and analyzing data silos from different production domains requires high manual effort and is therefore not economical. Addressing these challenges, digital representations of production equipment, so-called digital twins, have emerged, leading the way to semantic interoperability across systems in different domains. From a data modeling point of view, digital twins can be seen as industrial knowledge graphs, which serve as the semantic backbone of manufacturing software systems and data analytics. Because the prevalent, historically grown and scattered manufacturing software landscape comprises numerous proprietary information models, data sources are highly heterogeneous. There is therefore an increasing need for semi-automatic support in data modeling, enabling end-user engineers to model their domain and maintain a unified semantic knowledge graph across the company. Once data modeling and integration are done, further challenges arise, since there has been little research on how knowledge graphs can contribute to the simplification and abstraction of statistical analysis and predictive modeling, especially in manufacturing. In this thesis, new approaches for modeling and maintaining industrial knowledge graphs, with a focus on the application of statistical models, are presented. First, concerning data modeling, we discuss requirements from several existing standard information models and analytic use cases in the manufacturing and automation-system domains, and derive a fragment of the OWL 2 language that is expressive enough to cover the required semantics for a broad range of use cases. The prototypical implementation enables domain end-users, i.e. engineers, to extend the base ontology model with intuitive semantics. Furthermore, it supports efficient reasoning and constraint checking via translation to rule-based representations. Based on these models, we propose an architecture for the end-user-facilitated application of statistical models using ontological concepts and ontology-based data access paradigms. In addition, we present an approach for domain-knowledge-driven preparation of predictive models in terms of feature selection, and show how schema-level reasoning in the OWL 2 language can be employed for this task within knowledge graphs of industrial automation systems. A production cycle-time prediction model in an example application scenario serves as a proof of concept and demonstrates that axiomatized domain knowledge about features can give competitive performance compared to purely data-driven approaches. In the case of high-dimensional data with small sample sizes, we show that graph kernels of domain ontologies can provide additional information on the degree of variable dependence. Furthermore, a special application of feature selection in graph-structured data is presented, and we develop a method that allows domain constraints derived from meta-paths in knowledge graphs to be incorporated into a branch-and-bound pattern enumeration algorithm.
    Lastly, we discuss the maintenance of facts in large-scale industrial knowledge graphs, focusing on latent variable models for the automated population and completion of missing facts. State-of-the-art approaches cannot deal with time-series data in the form of events, which naturally occur in industrial applications. We therefore present an extension of knowledge graph embedding learning that works in conjunction with data in the form of event logs. Finally, we design several use-case scenarios of missing information and evaluate our embedding approach on data coming from a real-world factory environment. We draw the conclusion that industrial knowledge graphs are a powerful tool that can be used by end-users in the manufacturing domain for data modeling and model validation. They are especially suitable for the facilitated application of statistical models in conjunction with background domain knowledge, by providing information about features upfront. Furthermore, relational learning approaches showed great potential to semi-automatically infer missing facts and to provide recommendations to production operators on how to keep stored facts in sync with the real world.
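    The abstract does not name the embedding model; as a minimal sketch of the relational-learning idea, assuming a TransE-style model (a true fact (h, r, t) satisfies E[h] + R[r] ≈ E[t]), the following Python snippet scores candidate facts so missing triples can be ranked. All entity and relation names are illustrative, and the vectors here are random stand-ins for trained embeddings:

        import numpy as np

        rng = np.random.default_rng(0)
        dim = 32

        # Illustrative entities and relations of an industrial knowledge graph.
        entities = ["motor7", "lineA", "overTempEvent"]
        relations = ["partOf", "raisedEvent"]
        # Untrained random vectors; in practice these are learned, e.g. with a margin loss.
        E = {e: rng.normal(size=dim) for e in entities}
        R = {r: rng.normal(size=dim) for r in relations}

        def score(h, r, t):
            """TransE plausibility: higher (less negative) means more plausible."""
            return -np.linalg.norm(E[h] + R[r] - E[t])

        # Rank candidate tails for the completion query (motor7, raisedEvent, ?).
        candidates = ["lineA", "overTempEvent"]
        ranked = sorted(candidates, key=lambda t: score("motor7", "raisedEvent", t), reverse=True)
        print(ranked)  # with trained embeddings, the true tail would rank first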

    The benefits of digitalisation in steam turbine asset management

    Steam turbines are considered long-lived and require little attention during normal operation. Cost optimizations, driven by the infrequent demand for turbine expertise, together with a retiring workforce, have resulted in an increasing shortage of know-how. Digitalization could substitute for unavailable turbine resources, but such projects and investments have been challenging to initiate and incentivize. The objective of this thesis was to map the benefits of digitalized steam turbine asset management, the kinds of challenges digitalization could mitigate, and how its implementation could be facilitated. The research confirmed that turbine operating companies lack the domain know-how and resources required by some current systems and demands. Prolonged overhauls and deficiencies in asset management, such as insufficient documentation and data utilization, were observed to be the other main challenges. Increased downtime and unoptimized practices and systems reduce efficiency, usability, reliability and availability. Advanced diagnostics in condition monitoring systems could increase availability and reliability by enabling optimized condition-based maintenance, and could facilitate shorter overhauls by reducing unforeseen findings. Solutions and services that allow faster fact-finding in anomalies would increase availability as well. Asset management systems with more connectivity, centralization, user-friendliness and AI would reduce downtime by enhancing planning, documentation and spare-part management. Such systems could also increase usability and the overall efficiency of operations and maintenance. The main hindrances to digitalization are the imbalance between costs and perceived added value, and insufficient focus on the usability of asset management systems. The development of advanced solutions is disincentivized under current business models. Long-term contracts could enable the implementation of best practices, reduce risks and incentivize higher-quality services. Partnership business models facilitate mutual benefits better than short-term and stand-alone services.

    A framework development to predict remaining useful life of a gas turbine mechanical component

    Power-by-the-hour is a performance-based offering for delivering outstanding service to operators of civil aviation aircraft. Operators need to guarantee minimal downtime, reduced service costs and value for money, which requires innovative advanced technology for predictive maintenance. Predictability, availability and reliability of the engine offer better service for operators, and the need to estimate an expected component failure before it occurs requires a proactive approach to predicting the remaining useful life of components within an assembly. This research offers a framework for component remaining-useful-life prediction using assembly-level data. The thesis presents a critical analysis of the literature, identifying the Weibull method, statistical techniques and data-driven methodologies relating to remaining useful life prediction that are used in this research. The AS-IS practice captures relevant information based on an investigation conducted in the aerospace industry. The analysis of maintenance cycles examines high-level events for engine availability, and further communication with industry showcases a through-life performance timeline visualisation. The overhaul sequence and activities are presented to gain insights from the timeline visualisation. The thesis covers the framework's development and its application to a gas turbine single-stage assembly, to repair and replacement of components in a single-stage assembly, and to a multiple-stage assembly. The framework is demonstrated on aerospace engines and power-generation engines. The framework enables and supports domain experts to quickly respond to, and prepare for, maintenance and the on-time delivery of spare parts. The results of the framework show the probability of failure based on a pair of error values using the corresponding scale and shape parameters. The probability of failure is transformed into the remaining useful life, depicting a typical Weibull distribution. The resulting Weibull curves, developed for three scenarios of the case, show that there are component renewals; therefore, the remaining useful life of the components is established. The framework is validated and verified through a case study with three scenarios and through expert judgement.
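    As a rough sketch of the Weibull mechanics behind such a framework (the parameter values, current age and risk threshold below are hypothetical, not taken from the thesis):

        import numpy as np
        from scipy.stats import weibull_min

        # Hypothetical fitted Weibull parameters: shape ("Shape") and scale ("Scale") in hours.
        beta, eta = 2.3, 12000.0
        dist = weibull_min(c=beta, scale=eta)

        age = 8000.0  # current operating hours of the component

        def prob_failure_within(dt, age):
            """Conditional probability of failure in the next dt hours,
            given survival to the current age: 1 - R(age+dt)/R(age)."""
            return 1.0 - dist.sf(age + dt) / dist.sf(age)

        def remaining_useful_life(age, risk=0.10):
            """Hours until the conditional failure probability reaches `risk`.
            Solves R(age + RUL) = (1 - risk) * R(age) by inverting the survival function."""
            target = (1.0 - risk) * dist.sf(age)
            return dist.isf(target) - age

        print(prob_failure_within(1000.0, age))  # failure risk over the next 1000 h
        print(remaining_useful_life(age))        # hours until 10% conditional failure risk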

    Towards the next generation of smart grids: semantic and holonic multi-agent management of distributed energy resources

    The energy landscape is experiencing accelerating change; centralized energy systems are being decarbonized and are transitioning towards distributed energy systems, facilitated by advances in power system management and in information and communication technologies. This paper elaborates on these generations of energy systems by critically reviewing relevant authoritative literature. This includes a discussion of modern concepts such as ‘smart grid’, ‘microgrid’, ‘virtual power plant’ and ‘multi-energy system’, the relationships between them, and the trends towards distributed intelligence and interoperability. Each of these emerging urban energy concepts holds merit when applied within a centralized grid paradigm, but very little research applies these approaches within the emerging energy landscape typified by a high penetration of distributed energy resources, prosumers (consumers and producers), interoperability, and big data. Given the ongoing boom in these fields, this will lead to new challenges and opportunities as the status quo of energy systems changes dramatically. We argue that a new generation of holonic energy systems is required to orchestrate the interplay between these dense, diverse and distributed energy components. The paper therefore contributes a description of holonic energy systems and of the research required towards sustainability and resilience in the imminent energy landscape. This approach promotes the systemic features of autonomy, belonging, connectivity, diversity and emergence, and balances global and local system objectives through adaptive control topologies and demand-responsive energy management. Future research avenues are identified to support this transition regarding interoperability, secure distributed control and a system-of-systems approach.

    Current state and requirements in components and energy systems databases

    With the objective of developing a suitable database for the Design4Energy (D4E) workspace, the requirement identification for the component and energy system database started from an analysis of existing database solutions. The classification, evaluation and analysis of the state of the art in BIM- and energy-efficiency-oriented databases have informed the requirement identification as well as the approach, concept and functionality design in T3.2. This document then identifies the major stakeholders related to the envisioned platform and project outputs. Taking into account the project objectives and the interests of the analysed stakeholders, this report presents the requirements for simulation outputs, which could help end users or architects understand the energy performance of their ongoing design; IT requirements in architecture, data structure and interface; and operation and maintenance issues. As another main focus of this document, the components and energy systems database (DB) is described in detail. It defines and recommends parameters for different building components such as walls, roofs, floors, windows and doors, lighting systems, renewable energy systems, and HVAC components such as heat pumps, boilers, energy storage and distribution. During the research on database requirements, interviews, questionnaires, literature review, internal discussions with partners and energy experts, and investigation of simulation software and BIM technologies were the main data sources. The key information presented within this document can be summarised as follows:
    · Objectives and vision of the component and energy system database.
    · Analysis of existing database solutions. By classifying current practices into three categories (construction material databases, component databases and others such as building type databases), different technologies and platforms are analysed.
    · Identification and analysis of the major stakeholders related to the D4E scope.
    · Questionnaire design and the collected results.
    · Database requirements for system architecture, interoperability, data structure, user interface and user management.
    · Database requirements for the simulation outputs, specifying the data of interest that could help end users understand their ongoing building design.
    · Database requirements for operation and maintenance related issues.
    · Database requirements for building components, including the envelope (walls, covers/roofs, floors), windows and doors. The recommended parameters are given in table format.
    · Database requirements for energy systems, focusing on subcategories such as lighting systems, renewable energy, heat pumps, boilers, and energy storage and distribution; in each subcategory, requirements for specific technologies are described. An introduction to the strengths and weaknesses of the latest and most popular technologies is also included in the appendices.

    Computer Science & Technology Series : XXI Argentine Congress of Computer Science. Selected papers

    CACIC’15 was the 21st Congress in the CACIC series. It was organized by the School of Technology at UNNOBA (North-West of Buenos Aires National University) in Junín, Buenos Aires. The Congress included 13 Workshops with 131 accepted papers, 4 Conferences, 2 invited tutorials, different meetings related to Computer Science Education (Professors, PhD students, Curricula) and an International School with 6 courses. CACIC 2015 was organized following the traditional Congress format, with 13 Workshops covering a diversity of dimensions of Computer Science Research. Each topic was supervised by a committee of 3-5 chairs from different Universities. The call for papers attracted a total of 202 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 495 review reports that involved about 191 different reviewers. A total of 131 full papers, involving 404 authors and 75 Universities, were accepted, and 24 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI)